
LoRA Land: How We Trained 25 Fine-Tuned Mistral-7b Models that Outperform GPT-4 

Predibase
1.5K subscribers
6K views

Science

Published: 29 Sep 2024

Comments: 8
@JulianHarris · 6 months ago
Have you guys looked at the next generation of quantisation, e.g. ternary/1.58-bit quantisation? It's a different technique from conventional quantisation because the matrices only contain 0, 1, and -1, and you eliminate matrix multiplication almost entirely. The intuition is that the combination may not bring quite as many benefits, but it might be interesting to see how it performs on CPU architectures, for instance.
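
For readers curious about the mechanics behind this question: with weights constrained to {-1, 0, +1}, a matrix-vector product reduces to additions and subtractions, with no multiplies over the weight matrix. A minimal NumPy sketch, assuming the absmean per-tensor scaling described for BitNet b1.58 (toy code, not an optimized kernel):

```python
import numpy as np

def quantize_ternary(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Round-to-nearest ternary quantization with a per-tensor scale
    (absmean scaling, as in the BitNet b1.58 paper)."""
    scale = np.abs(w).mean() + 1e-8
    w_t = np.clip(np.round(w / scale), -1, 1).astype(np.int8)
    return w_t, scale

def ternary_matvec(w_t: np.ndarray, scale: float, x: np.ndarray) -> np.ndarray:
    """y = scale * (sum of x where w=+1  minus  sum of x where w=-1).
    No multiplications against the weight matrix are needed."""
    pos = (w_t == 1)
    neg = (w_t == -1)
    return scale * (np.where(pos, x, 0.0).sum(axis=1)
                    - np.where(neg, x, 0.0).sum(axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
x = rng.normal(size=8)
W_t, s = quantize_ternary(W)
print(ternary_matvec(W_t, s, x))  # approximates W @ x
print(W @ x)
```
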
@ofir952 · 7 months ago
Thanks! How did you manage to remove the surrounding text of the LLM response?
@pieromolino_pb · 7 months ago
It's a side effect of fine-tuning on outputs that contain only the JSON, without any other text.
@ofir952 · 7 months ago
So we cannot achieve this without fine-tuning? Llama 2 keeps adding it all the time 🥲 @pieromolino_pb
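
Short of fine-tuning, a common workaround for this is to post-process the completion and extract the first well-formed JSON object (constrained or grammar-based decoding is the more robust option where the serving stack supports it). A minimal sketch; the raw_response string below is an invented example:

```python
import json

def extract_first_json(text: str) -> dict:
    """Scan for the first balanced {...} block that parses as JSON."""
    start = text.find("{")
    while start != -1:
        depth = 0
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # not valid JSON; look for the next "{"
        start = text.find("{", start + 1)
    raise ValueError("no JSON object found in response")

raw_response = ('Sure! Here is the JSON you asked for:\n'
                '{"person": ["Such"]}\nHope this helps!')
print(extract_first_json(raw_response))  # {'person': ['Such']}
```
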
@jeffg4686 · 6 months ago
Nice!
@tankieslayer6927 · 7 months ago
FINE-TUNED MODEL RESPONSE (Named Entity Recognition, CoNLL++): {"person": ["Such"], "organization": ["Yorkshire"], "location": [], "miscellaneous": []}. Yeah, I am not impressed with the result of this fine-tuning.
@pieromolino_pb · 7 months ago
The input text is: "By the close Yorkshire had turned that into a 37-run advantage but off-spinner Such had scuttled their hopes, taking four for 24 in 48 balls and leaving them hanging on 119 for five and praying for rain." Yorkshire in this case is a sports team, so "organization" is correct, and Such is a player, so both of the model's predictions are indeed correct. I'd suggest trying to understand better what is going on next time.
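
To make that comparison concrete: checking a prediction like the one above means comparing the entity sets per type against the dataset's labels. A small sketch; the gold dict encodes the labels this reply implies for the sentence:

```python
predicted = {"person": ["Such"], "organization": ["Yorkshire"],
             "location": [], "miscellaneous": []}
gold = {"person": ["Such"], "organization": ["Yorkshire"],
        "location": [], "miscellaneous": []}

# Report exact matches, misses, and spurious entities per type.
for entity_type in gold:
    p, g = set(predicted.get(entity_type, [])), set(gold[entity_type])
    print(f"{entity_type}: exact match" if p == g
          else f"{entity_type}: missed {g - p}, spurious {p - g}")
```
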
@The_Real_Goodboy_Link · 19 days ago
Found the real solution, @tankieslayer6927, click on your icon on the top-right screen here, then settings, advanced settings, delete channel. Then go over to Google and do similarly for your account there. Problem solved!