
Fine-Tuning with ReFT: Create an Emoji LLM for Medical Diagnosis 

AI Anytime

Hey everyone! In this tutorial, I'll walk you through an exciting new fine-tuning method called ReFT (representation fine-tuning) using the powerful pyreft library. 🌟
Discover how you can fine-tune pretrained language models with fewer parameters and achieve potentially better performance. We'll dive into the process of creating an emoji LLM (large language model) tailored for medical diagnosis questions and their corresponding emoji responses.
Here's what you'll learn:
1. How to set up and use pyreft to fine-tune any HuggingFace pretrained language model (a minimal sketch follows this list).
2. Configuring ReFT hyperparameters via easy-to-use configs.
3. Sharing your fine-tuned models effortlessly on HuggingFace.
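To make those three steps concrete, here is a minimal sketch modeled on the pyreft README. The model name, the emoji training pairs, the output paths, and the Hub repo name are placeholders, not the exact values used in the video:

import torch
import transformers
import pyreft

# 1. Load any HuggingFace pretrained causal LM (placeholder model name).
model_name = "meta-llama/Llama-2-7b-chat-hf"
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="cuda")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    model_name, model_max_length=2048, padding_side="right", use_fast=False)
tokenizer.pad_token = tokenizer.unk_token

# 2. Configure ReFT: a rank-4 LoReFT intervention on one layer's
#    residual-stream ("block_output") representation.
reft_config = pyreft.ReftConfig(representations={
    "layer": 8, "component": "block_output", "low_rank_dimension": 4,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=4)})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.print_trainable_parameters()  # typically a tiny fraction of the model

# Train on (question -> emoji) pairs; these two pairs are illustrative only.
prompt_template = "<s>[INST] %s [/INST]"
training_examples = [
    ["I have a fever and a sore throat.", "🤒🌡️🦠"],
    ["My ankle is swollen after a fall.", "🦶🧊🩹"],
]
data_module = pyreft.make_last_position_supervised_data_module(
    tokenizer, model,
    [prompt_template % q for q, a in training_examples],
    [a for q, a in training_examples])
trainer = pyreft.ReftTrainerForCausalLM(
    model=reft_model, tokenizer=tokenizer,
    args=transformers.TrainingArguments(
        num_train_epochs=100, per_device_train_batch_size=2,
        learning_rate=4e-3, logging_steps=20, output_dir="./reft_out"),
    **data_module)
trainer.train()

# 3. Share: only the small intervention weights are saved and pushed.
reft_model.set_device("cpu")
reft_model.save(save_directory="./reft_emoji_llm",
                save_to_hf_hub=True, hf_repo_name="your-username/reft-emoji-llm")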
This video is perfect for anyone looking to boost fine-tuning efficiency, reduce costs, and explore the interpretability that comes from intervening on hidden representations rather than weights.
Don't forget to like, comment, and subscribe to stay updated with the latest in AI and fine-tuning techniques. Hit the bell icon to get notified of new videos.
GitHub: github.com/AIA...
PyReFT Library: github.com/sta...
Join this channel to get access to perks:
/ @aianytime
To further support the channel, you can contribute via the following methods:
Bitcoin Address: 32zhmo5T9jvu8gJDGW3LTuKBM1KPMHoCsW
UPI: sonu1000raw@ybl
#ai #finetuning #llm

Published: 29 Sep 2024

Comments: 5
@jaganbaburajamanickam7502 (2 months ago)
In your code "reft_config = pyreft.ReftConfig(representations={"layer": 8, "component": "block_output", "low_rank_dimension": 4, "intervention": pyreft.LoreftIntervention(embed_dim=model.config.hidden_size, low_rank_dimension=4)})", how do we choose the layer and low_rank_dimension values? It would be nice if you could describe the arguments in more detail and give some suggestions/examples of what to adjust when the output is not as expected.
@AIAnytime (2 months ago)
Thanks for the tip. Please watch this video of mine for a better understanding of hyperparameters: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-y-9G41zELIY.html
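As a rough, hypothetical rule of thumb (not something pyreft or the video prescribes): the layer index must lie within the model's depth, middle layers are a common first pick, and small ranks such as 2, 4, or 8 keep the trainable parameter count tiny. A sketch:

import transformers
import pyreft

# Illustrative defaults only; sweep and evaluate on held-out data to tune them.
model = transformers.AutoModelForCausalLM.from_pretrained("gpt2")
num_layers = model.config.num_hidden_layers  # valid "layer" values: 0 .. num_layers - 1
rank = 4                                     # higher rank = more capacity, more parameters

reft_config = pyreft.ReftConfig(representations={
    "layer": num_layers // 2,                # middle layers are a common starting point
    "component": "block_output",
    "low_rank_dimension": rank,
    "intervention": pyreft.LoreftIntervention(
        embed_dim=model.config.hidden_size, low_rank_dimension=rank)})
reft_model = pyreft.get_reft_model(model, reft_config)
reft_model.print_trainable_parameters()      # sanity-check the parameter budget

# If the output is not as expected, try layers at 1/4, 1/2, and 3/4 depth and
# ranks in {2, 4, 8}, reloading the base model for each run and comparing the
# results on a held-out set.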
@charitasri8162 (4 months ago)
Similar to this, can I take a dataset with text and label columns, where the text column contains patient symptoms, medical history, and medical report results, and the label column contains the disease? Then, given the user's symptoms, medical history, and medical reports as input, the output would be the medical diagnosis and treatment? Can you please help me?
@thisurawz (4 months ago)
Does ReFT perform better than LoRA? I mean mainly in terms of accuracy. Moreover, which performs best among LoRA, DoRA, and ReFT when comparing accuracy?
@muhammedajmalg6426 (4 months ago)
Thanks for sharing!