If done the proper way, the model can learn your style of writing (if you have that kind of data), tonality for example. Better alignment of the model as well.
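A minimal sketch of what that data could look like, assuming hypothetical samples and a made-up file name: writing samples in your own voice packed into chat-style JSONL records, which is a common input format for instruction fine-tuning (the exact schema depends on your training framework).

```python
# Minimal sketch: turn your own writing samples into chat-style
# fine-tuning records so the model can pick up your tonality.
# The samples and file name below are hypothetical.
import json

samples = [
    {"prompt": "Summarize today's standup.",
     "response": "Quick one today: shipped the fix, demo moved to Friday."},
    {"prompt": "Decline the meeting invite.",
     "response": "Can't make it, I'm heads-down on the release. Notes welcome!"},
]

with open("style_dataset.jsonl", "w") as f:
    for s in samples:
        record = {"messages": [
            {"role": "user", "content": s["prompt"]},
            {"role": "assistant", "content": s["response"]},
        ]}
        f.write(json.dumps(record) + "\n")
```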
Llama 3 is so good out of the box that I saw tons of people complaining their fine-tuning actually made performance decay :/ garbage in, garbage out, the data just aren't good enough :/
Thank you, sir, for providing this valuable information. I have one question about a video I am watching: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Iyzvka711pc.htmlsi=Ajxw5XK5rWOHy3if (pet care chatbot). In that video you said that in the next video we will deploy the chatbot on AWS, but I cannot find that video. If possible, can you guide me, or if anyone knows it, please share the link.
I would like to ask, sir: if I want to adapt Llama-3, which was trained on English datasets, to Spanish, do I have to train it on full unstructured Spanish data (Wikipedia, books, etc.), or is fine-tuning on Spanish question-answer pairs enough? And if fine-tuning is enough, what about using QLoRA as the fine-tuning method? The main goal of training the model on Spanish is to solve domain-specific QnA. Thank you, I hope you can help me.
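For the QLoRA part of the question, a minimal setup sketch with Hugging Face transformers, peft, and bitsandbytes: load the base model quantized to 4-bit and attach LoRA adapters, then train on the Spanish QA pairs with your usual SFT loop on top. The model ID, LoRA rank, and target modules below are illustrative assumptions, not a recommended recipe.

```python
# Minimal QLoRA setup sketch: 4-bit base model + LoRA adapters.
# Hyperparameters and target modules are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize base weights to 4-bit
    bnb_4bit_quant_type="nf4",               # NF4 quantization used by QLoRA
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                    # adapter rank (assumption)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()           # only the adapters train

# From here, feed your Spanish question-answer pairs to a standard
# supervised fine-tuning loop (e.g. a Trainer) over this peft model.
```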
You may want to take a look at TowerInstruct 12B, though it depends on which language you are looking for (in my experience, stay on an 8-bit model, because I saw a drop in translation quality with quantization below 8-bit).
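For reference, a minimal sketch of loading a model in 8-bit with transformers and bitsandbytes, per the quantization advice above; the checkpoint name is an assumption, so verify the exact TowerInstruct variant on the Hub before using it.

```python
# Minimal sketch: load a translation model in 8-bit (not lower),
# following the quantization advice above.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Unbabel/TowerInstruct-13B-v0.1"  # assumed checkpoint; verify on the Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

prompt = "Translate from English to Spanish:\nEnglish: The cat sat on the mat.\nSpanish:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```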