Fine-tuning Tiny LLM on Your Data | Sentiment Analysis with TinyLlama and LoRA on a Single GPU 

Venelin Valkov
26K subscribers
14K views
Published: 27 Aug 2024

Comments: 15
@venelin_valkov 7 months ago
Full text tutorial (requires MLExpert Pro): www.mlexpert.io/bootcamp/fine-tuning-tiny-llm-on-custom-dataset
@ansea1234 7 months ago
Thank you very much for this wonderful video. Among other things, the details you give are really useful!
@geniusxbyofejiroagbaduta8665 7 months ago
Thanks for this tutorial
@ziddiengineer 7 months ago
Can you send the notebook for this tutorial?
@unclecode 3 months ago
Such a timely tutorial! I'm working on some SLMs and need insights on fine-tuning parameters. Your video is a huge help, thanks for that! I couldn't find the Colab for this project in the repository; any chance the Colab is available? By the way, I'm one of your MLExpert members.
@venelin_valkov 3 months ago
Here is the Colab link: colab.research.google.com/github/curiousily/AI-Bootcamp/blob/master/08.llm-fine-tuning.ipynb, from the GitHub repo: github.com/curiousily/AI-Bootcamp. Thank you for watching and subscribing!
@Iiochilios1756 5 months ago
Please explain one interesting moment: first you add a special token and then enlarge the embedding matrix to account for the new token. At that point the new embedding is initialized with random values. Later you apply LoRA to the target modules, and the embedding layer is absent from that list. My questions: 1) When will the new embedding you just added actually be trained? The original model is frozen; only the LoRA layers will be trained by the trainer. 2) Why not add ###Title, ###Text, and ###Prediction as special tokens and let them be part of the text?
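
A minimal sketch of the pattern the question describes, assuming a TinyLlama checkpoint and the peft library; the token name and target modules below are illustrative, not necessarily the exact setup from the video. Listing the embedding layer and output head in modules_to_save is one way to make a newly added, randomly initialized embedding row trainable while the rest of the base model stays frozen:

# Sketch (assumed model name, token, and target modules): training a newly
# added special token's embedding under LoRA via modules_to_save.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Add the special token and grow the embedding matrix to cover it;
# the new row starts out randomly initialized.
tokenizer.add_special_tokens({"additional_special_tokens": ["<|sentiment|>"]})
model.resize_token_embeddings(len(tokenizer))

# LoRA adapters are attached only to target_modules, but modules_to_save
# marks embed_tokens and lm_head as fully trainable, so the fresh embedding
# row is updated even though the base weights stay frozen.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    modules_to_save=["embed_tokens", "lm_head"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

Without modules_to_save (or an adapter on the embedding layer itself), the new row would indeed stay at its random initialization, which is what the question points at.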
@devtest202 5 months ago
Hi, thanks!! A question about a model for which I have more than 2,000 PDFs. Do you recommend improving the handling of vector databases? When do you recommend fine-tuning, and when do you recommend a vector database?
@xugefu 6 months ago
Thanks!
@researchforumonline 6 months ago
Thanks
@mohamedkeddache4202 6 months ago
Thanks for the video. But... is that a language model? I don't know a lot about AI, but it looks like multi-class classification. An LLM is supposed to be like ChatGPT, right?
@temiwale88 5 months ago
Hello. Thank you for this work! I don't see the Jupyter notebook in the GitHub repo.
@trollerninja4356 4 months ago
Getting NameError: DataCollatorForCompletionOnlyLM not found. I also checked the docs and didn't find any class named DataCollatorForCompletionOnlyLM.
@kekuramusa 2 months ago
from trl import DataCollatorForCompletionOnlyLM
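
A minimal usage sketch; the response_template string is an assumption based on the ###Prediction marker mentioned earlier in the thread, and the class ships with the trl package in reasonably recent releases, so an outdated install can also raise that NameError:

# Sketch: compute loss only on the completion after the response template.
# The template string is an assumed prompt format, not necessarily the video's.
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM

tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

collator = DataCollatorForCompletionOnlyLM(
    response_template="### Prediction:",
    tokenizer=tokenizer,
)

# Pass it to SFTTrainer as data_collator=collator; tokens before the response
# template get label -100 and are ignored by the loss.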
@onoff5604 7 months ago
Kind of hard to tell if this is a close match to my needs... when I can't see anything at all...