Self Learning GPTs: Using Feedback to Improve Your Application 

LangChain · 62K subscribers
15K views

Published: 29 Oct 2024

Comments: 22
@RobertoDuransh 7 months ago
Okay, this is exciting! Awesome work!!!
@kierkegaardrulez 7 months ago
Can you rescind an instruction? E.g. you later decide you don't want emojis. If you say "stop using emojis", will it add a new instruction, or will it know that it can remove the previous instruction to achieve the same effect?
@theyvesloy 7 months ago
I don't think that's working :(
@jzam5426 7 months ago
Thanks for sharing!! So many questions! Is it really fine-tuning the model on the fly? How can one implement this using the API?
@kalahaval 7 months ago
That's awesome stuff. Curious to learn how this is done under the hood.
@infocyde2024 7 months ago
Yeah, is this dataset an embedding? Or is there some sort of automated fine-tuning going on here?
@crotonium 7 months ago
The description mentions that it can "then automatically use that feedback to improve over time. It does this by creating few-shot examples from that feedback and incorporating those into the prompt."
@insitegd7483 7 months ago
@@crotonium I think that could be solved easily by retrieving the top-k most recent messages from a vector database, or the last examples from a SQL/NoSQL database, but I am not sure.
@kirby145x 7 months ago
My guess is it takes the top user-scored items and then relays those when related text is given to the agent: "Use these examples for a response that is desirable", etc.
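The mechanism the video description hints at (building few-shot examples from rated feedback and injecting the best-scored ones into the prompt) can be sketched roughly like this. This is a minimal, hypothetical illustration, not the actual LangChain implementation; all names are made up:

```python
# Hypothetical sketch: turn thumbs-up/thumbs-down feedback into
# few-shot examples that get prepended to future prompts.
from dataclasses import dataclass


@dataclass
class FeedbackExample:
    question: str
    answer: str
    score: int  # e.g. +1 for thumbs-up, -1 for thumbs-down


# Store of past interactions with user feedback attached.
feedback_store: list[FeedbackExample] = []


def record_feedback(question: str, answer: str, score: int) -> None:
    """Save a rated interaction for later reuse as a few-shot example."""
    feedback_store.append(FeedbackExample(question, answer, score))


def build_prompt(new_question: str, k: int = 3) -> str:
    """Prepend the k highest-scored positive interactions as few-shot examples."""
    best = sorted(
        (ex for ex in feedback_store if ex.score > 0),
        key=lambda ex: ex.score,
        reverse=True,
    )[:k]
    shots = "\n\n".join(f"Q: {ex.question}\nA: {ex.answer}" for ex in best)
    return f"{shots}\n\nQ: {new_question}\nA:"


record_feedback("What is LangChain?", "A framework for LLM apps.", +1)
record_feedback("What is RAG?", "Retrieval-augmented generation.", +1)
record_feedback("What is RAG?", "No idea.", -1)

prompt = build_prompt("What is LangSmith?")
```

Note this also answers the token-count question below: because approved examples are carried in the prompt rather than baked into the weights, the prompt grows with the number of retained examples, which is why capping at top-k (or retrieving only similar examples) matters.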
@darwingli1772 7 months ago
Thanks for the video. May I know if the token count will increase as the user provides more feedback? I am not sure, but I assume the feedback from the user is also fed into the prompt to change the behavior of the LLM?
@pavelhimself 7 months ago
This is amazing!
@Rahulkp220 7 months ago
Powerful.
@saeeds851 7 months ago
Awesome!
@artur50 7 months ago
Can I do it completely offline, on-prem?
@johnfinch437 7 months ago
Love the video, but you need to amp up the audio level; it's extremely quiet.
@ezdoezit9635 7 months ago
Remember in Terminator 2: Judgment Day, the T-800 (Arnold Schwarzenegger) gives John Connor a thumbs up! 👍👍👍
@dfrnascimento 1 month ago
Where is the source code for it? 😞
@broomva 7 months ago
Is that frontend open source?
@abdullahhabib7603 7 months ago
Yes
@broomva 7 months ago
@@abdullahhabib7603 Where can it be found?
@maxi-g 7 months ago
What is it though? Is it the LangServe playground?
@broomva 7 months ago
@@abdullahhabib7603 Where can it be found? Could you please share the repo link?