
Q: How to create an Instruction Dataset for Fine-tuning my LLM? 

code_your_own_AI
Subscribers: 39K
Views: 17K
Published: 8 Sep 2024

Comments: 19
@RobbbbM-qk3ei 10 months ago
I love your channel. I can’t code but need to understand this stuff. You’re helping me piece together all of this.
@vivienmeally a year ago
Hello and thank you for the fantastic video! As a beginner, I find the topics you cover to be especially helpful. I also have a topic suggestion for a future video: Could you explain how to translate an LLM into another language or teach it a new language? For example, it would be great to know how to translate LLama2 into German. If you could elaborate on that with a practical example, that would be awesome! Thanks in advance.
@VaibhavPatil-rx7pc a year ago
Hey, excellent detailed information for beginners.
@VaibhavPatil-rx7pc a year ago
Cool tricks, Bard vs ChatGPT, I like it.
@dragosdima8758 a year ago
Hi, I have a question. How can it make sense to do QA fine-tuning (on specific domain data) without having fine-tuned on the domain data beforehand? In my understanding, the first fine-tuning lets the LLM learn new facts about the domain, and the QA fine-tune then teaches it how to answer based on that domain (of course, it will still learn a little about the domain itself). I am really looking forward to your answer, thanks.
@AJ-ek7lp a year ago
You can do both in one step. As in the example here, the QA pairs both teach the LLM how to respond and add domain specificity. However, with this approach the domain understanding is limited to the scope of the task, i.e. the model would perform badly on a non-QA type of prompt about the domain.
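For illustration, here is a minimal sketch (not from the video) of what such a one-step QA instruction dataset can look like, assuming the common instruction/input/output JSONL layout; the QA pairs and file name below are placeholders:

```python
import json

# Illustrative domain QA pairs; in practice they are written by domain
# experts or generated from domain documents and then reviewed by humans.
qa_pairs = [
    {
        "instruction": "What does catastrophic forgetting mean in fine-tuning?",
        "input": "",
        "output": "It is the loss of previously learned pre-trained knowledge "
                  "when a model's weights are updated on a narrow new task.",
    },
    {
        "instruction": "Why combine domain facts and QA format in one dataset?",
        "input": "",
        "output": "Each QA pair teaches the response format and injects a "
                  "domain fact at the same time, so one fine-tuning pass "
                  "covers both objectives.",
    },
]

# One JSON object per line (JSONL), the layout many fine-tuning scripts expect.
with open("instruct_dataset.jsonl", "w", encoding="utf-8") as f:
    for pair in qa_pairs:
        f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```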
@travel_with_ankit 10 months ago
Hey, did you get the answer? I have the same query.
@gileneusz a year ago
this is pure gold
@cedwin4 a year ago
Thanks for the video. I have a question. When a model is fine-tuned, its weights are overwritten and the knowledge learned in the pre-training phase is lost due to catastrophic forgetting. I tried to use a domain-specific pre-trained language model, but after fine-tuning it acted based only on what it learned during fine-tuning. The pre-trained knowledge seems to be gone. How do I handle this issue?
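One common mitigation, sketched here as an editorial aside rather than the channel's answer, is parameter-efficient fine-tuning such as LoRA: the pre-trained weights are frozen, so they cannot be overwritten, and only small adapter matrices are trained. The base model name and hyperparameters below are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # illustrative base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA: the frozen pre-trained weights stay intact; only low-rank adapter
# matrices on the attention projections receive gradient updates.
config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because the base weights never change, the pre-trained knowledge is preserved by construction, and the adapter can even be detached again after training.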
a year ago
What is the best way to train an AI for a specific language?
@mukeshkund4465 a year ago
Thanks for your video. I have a specific question for you. I am working for an organisation that deals with research, and they have 50k research documents. The objective is to build a generative Q&A system to answer specific questions about those docs. The constraint is that we can't expose our data at any time. So my question is: how can we leverage an LLM for this specific domain-related task? Any help/suggestion will be highly appreciated.
@code4AI a year ago
Sorry, this RU-vid channel cannot provide specific contract information for commercial entities, given current legal restrictions on my side. Please contact a legally established organisation of your choice, with adequate insurance and legal counsel.
@talhayousuf4599 a year ago
Yes, you are dealing with an extractive QA task here. I used Haystack for this purpose a while ago. Have a look at their docs; they have a tutorial for this.
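Since the data must never leave the organisation, the key ingredient is a fully local retrieval step. Below is a minimal sketch of that idea, assuming the sentence-transformers package rather than Haystack's own API; the embedding model name and documents are placeholders:

```python
from sentence_transformers import SentenceTransformer, util

# Runs entirely locally; no document text leaves the machine.
model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

documents = [
    "Study A: effects of drought stress on wheat yield ...",
    "Study B: satellite monitoring of soil moisture ...",
]  # in practice: text chunks extracted from the 50k research documents

doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "How does drought affect wheat yield?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document chunk.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(documents[best])  # context to feed a locally hosted LLM for answering
```

The retrieved chunks are then passed as context to a locally hosted generative model, so both retrieval and generation stay in-house.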
@ahsaniftikhar1037 a year ago
How can I create a dataset to fine-tune an LLM for a Python library? Thank you.
@code4AI a year ago
Hmmm ... simplest solution: a library is a sequence of code. Copy the code into the prompt of a CODE LLM (with multiple ICL steps, if necessary, given a specific token length) and there you have it: in-context learning happened, no fine-tuning dataset needed.
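A minimal sketch of what that looks like in practice (the directory name, question, and prompt wording below are placeholders): the library's source is pasted into the prompt as context, so the model can answer from it without any fine-tuning:

```python
from pathlib import Path

# Read the library's source files to use as in-context material.
# A large library may need chunking to fit the model's context window.
library_code = "\n\n".join(
    p.read_text(encoding="utf-8") for p in Path("mylib").glob("*.py")
)

question = "How do I open a connection with mylib and run a query?"

# In-context learning: the code itself is the 'training data' for this prompt.
prompt = (
    "You are an expert on the following Python library.\n\n"
    "=== LIBRARY SOURCE ===\n"
    f"{library_code}\n"
    "=== END SOURCE ===\n\n"
    f"Question: {question}\nAnswer:"
)
print(prompt[:500])  # `prompt` would now be sent to any code-capable LLM
```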
@YTBohne 11 months ago
The more interesting question is how I fine-tune my LLM when I've only got visual PowerPoints and so on...
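Getting training text out of slide decks is usually the first hurdle. Here is a minimal sketch as an editorial aside, assuming the python-pptx package; the file name is a placeholder. It pulls the text out of a .pptx file as raw material for an instruction dataset; purely visual slides (images, diagrams) would additionally need OCR or a vision model.

```python
from pptx import Presentation  # pip install python-pptx

def extract_slide_text(path: str) -> list[str]:
    """Collect the text of every text-bearing shape on every slide."""
    prs = Presentation(path)
    slides = []
    for slide in prs.slides:
        texts = [
            shape.text
            for shape in slide.shapes
            if shape.has_text_frame and shape.text.strip()
        ]
        slides.append("\n".join(texts))
    return slides

for i, text in enumerate(extract_slide_text("deck.pptx")):  # illustrative file
    print(f"--- slide {i + 1} ---\n{text}\n")
```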
@raregear 11 months ago
Why do those Bard results at 16:00 get me embarrassed 🤣
@kevon217 7 months ago
Bard being bardy
@mauritsmosterd5691 10 months ago
Don't got any rice :(