Unlock Ollama's Modelfile | How to Upgrade your Model's Brain using the Modelfile 

Prompt Engineer
Subscribe · 12K subscribers
17K views

In this video, we analyse Ollama's Modelfile and how we can change the brain of the models in Ollama.
A Modelfile is the blueprint for creating and sharing models with Ollama.
A Modelfile can include the following instructions: FROM, PARAMETER, TEMPLATE, SYSTEM, ADAPTER, LICENSE and MESSAGE.
Link: github.com/ollama/ollama/blob...
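For reference, a minimal Modelfile combining several of these instructions might look like this (the base model, parameter values, and prompt text are illustrative, following the style of the Ollama docs):

```
# Start from a base model already known to Ollama
FROM llama3

# Sampling parameters
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# System prompt that steers every conversation
SYSTEM You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.

# Seed the conversation history with example messages
MESSAGE user Hello, who are you?
MESSAGE assistant It's-a me, Mario!
```

You would then build and run it with `ollama create mario -f ./Modelfile` followed by `ollama run mario`.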
Let’s do this!
Join the AI Revolution!
#ollama #modelfile #milestone #AGI #openai #autogen #windows #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #webui #python #llm #largelanguagemodels
CHANNEL LINKS:
🕵️‍♀️ Join my Patreon: / promptengineer975
☕ Buy me a coffee: ko-fi.com/promptengineer
📞 Get on a Call with me - at $125 Calendly: calendly.com/prompt-engineer4...
❤️ Subscribe: / @promptengineer48
💀 GitHub Profile: github.com/PromptEngineer48
🔖 Twitter Profile: / prompt48
TIME STAMPS:
0:00 Intro
0:30 Download Ollama
1:15 Startup Ollama
4:10 Introducing the Modelfile
5:15 Modelfile in Depth
7:46 System in Modelfile
8:20 Construct Custom Model from Modelfile
9:18 Test the new Custom Model
10:43 Messages in Modelfile
12:57 Conclusion & Next Video
🎁Subscribe to my channel: / @promptengineer48
If you have any questions, comments or suggestions, feel free to comment below.
🔔 Don't forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!

Science

Published: 12 Jul 2024

Comments: 40
@drmetroyt · 4 months ago
Thanks for taking up the request ... 😊
@PromptEngineer48 · 4 months ago
🤗 Welcome
@renierdelacruz4652 · 4 months ago
Great video, thanks very much.
@PromptEngineer48 · 4 months ago
You are welcome!
@TokyoNeko8 · 4 months ago
I use the web UI, and I feel it's much easier to manage the Modelfiles, with the obvious history tracking of the chat, etc.
@saramirabi1485 · 11 days ago
I have a question: is it possible to fine-tune Llama 3 in Ollama?
@fkxfkx · 4 months ago
Great 👍
@PromptEngineer48 · 4 months ago
Thank you! Cheers!
@user-ms2ss4kg3m · 2 months ago
Great, thanks!
@PromptEngineer48 · 2 months ago
You are welcome!
@enesnesnese · 1 month ago
Thanks for the clear explanation. But can we also do this for the llama3 model built on the Ollama image in Docker? I assume that containers do not have access to our local files.
@PromptEngineer48 · 1 month ago
Yes, you can.
@enesnesnese · 1 month ago
@PromptEngineer48 How? Should I create a file named Modelfile in the container, or locally? I am confused.
@PromptEngineer48 · 1 month ago
@enesnesnese You should create the Modelfile locally; you can then run the model created from this Modelfile in the container.
@enesnesnese · 1 month ago
@PromptEngineer48 Got it. Thanks.
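As a sketch of the Docker workflow discussed in this thread (the container name `ollama` and model names are assumptions; adjust them to your setup): write the Modelfile on the host, then copy it into the container and build the model there.

```shell
# Write a minimal Modelfile on the host (contents are illustrative)
cat > Modelfile <<'EOF'
FROM llama3
SYSTEM You are a concise assistant.
EOF

# With a running container named "ollama", copy the file in and
# build/run the model inside the container (uncomment to run):
# docker cp Modelfile ollama:/tmp/Modelfile
# docker exec ollama ollama create concise-llama3 -f /tmp/Modelfile
# docker exec -it ollama ollama run concise-llama3
```

Alternatively, if the host directory containing the Modelfile is bind-mounted into the container, the `docker cp` step is unnecessary.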
@JavierCamacho · 2 months ago
Stupid question: does this create a new model file, or just an instruction file for the base model to follow?
@PromptEngineer48 · 2 months ago
A new model file.
@JavierCamacho · 2 months ago
@PromptEngineer48 So the size on disk gets duplicated...? I mean, 4 GB for llama3 plus an extra 4 GB for whatever copy we make?
@PromptEngineer48 · 2 months ago
@JavierCamacho No, the old one is not used, just the new one.
@JavierCamacho · 2 months ago
@PromptEngineer48 Thanks.
@khalidkifayat · 4 months ago
Nice one. My question was: how do I use mistral_prompt for production purposes, or send it to a client?
@PromptEngineer48 · 4 months ago
Yes. You can push this to your Ollama login under your models. Then anyone will be able to pull the model with something like `ollama pull promptengineer48/mistral_prompt`. I will show the process in the next video on Ollama for sure.
@khalidkifayat · 4 months ago
@PromptEngineer48 Appreciated, mate.
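A sketch of that sharing flow with the Ollama CLI (the `promptengineer48` namespace comes from the reply above; pushing requires an ollama.com account with your machine's public key registered):

```
$ ollama cp mistral_prompt promptengineer48/mistral_prompt
$ ollama push promptengineer48/mistral_prompt
# a client can then fetch it with:
$ ollama pull promptengineer48/mistral_prompt
```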
@michaelroberts1120 · 3 months ago
What exactly does this do that koboldcpp or SillyTavern does not already do in a much simpler way?
@PromptEngineer48 · 3 months ago
Basically, if I can get the models running on Ollama, we open another door of integration.
@user-wr4yl7tx3w · 3 months ago
Do you have a video showing how to use CrewAI and Ollama together?
@PromptEngineer48 · 3 months ago
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-GKr5URJvNDQ.html
@UTubeGuyJK · 3 months ago
How does Modelfile not have a file extension? This keeps me up at night, not understanding how that works :)
@PromptEngineer48 · 3 months ago
I will find the reason and give you a night's sleep.
@robertranjan · 3 months ago
❯ ollama run mistral
>>> must a computer file name have an extension?
A computer file name does not strictly have to have an extension, but it is a common convention in many computing systems, including popular operating systems like Windows and macOS. An extension provides additional information about the type or format of the data contained within the file. For instance, a file named "example" with no extension would still be considered a valid file, but the system might not recognize it as a text file and may not open it with the default text editor. In contrast, if the same file is saved with the ".txt" extension, the system is more likely to open it using the appropriate text editor.

One popular file like `Modelfile` without an extension is `Dockerfile`. I think developers named it after that one...
@EngineerAAJ · 4 months ago
Is it possible to prepare a model with RAG and then save it as a new model?
@PromptEngineer48 · 4 months ago
To prepare a model for RAG, we would need to fine-tune the model separately using other tools, get the .bin or GGUF file, and then convert it for Ollama integration.
@EngineerAAJ · 4 months ago
@PromptEngineer48 Thanks, I will try to take a deeper look into that, but something tells me I won't have enough memory for that :(
@PromptEngineer48 · 4 months ago
Try on RunPod.
@autoboto · 4 months ago
This is great info. One thing I have wanted to do is migrate all my local models to another drive. On Win11 I was using Linux Ollama under WSL2; then I installed Windows Ollama and lost the reference to the local models. I'd rather not download the models again. In addition, it would be nice to be able to migrate models to another SSD and have Ollama reference the alternate model path. OLLAMA_MODELS on Windows works, but only for downloading new models. When I copied models from the original WSL2 location to the new location, Ollama would not recognize them in the `list` command. Curious if anyone has needed to relocate a large number of models to a new location and had Ollama able to reference that new model location.
@PromptEngineer48 · 4 months ago
Got it.
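A rough sketch of the migration idea above (paths are illustrative, and Ollama must be stopped before moving the store and restarted afterwards; `OLLAMA_MODELS` is the documented variable for the model store location):

```
# Move the existing store to the new drive first, then point Ollama at it:
$ setx OLLAMA_MODELS "D:\ollama\models"          # Windows (new processes only)
$ export OLLAMA_MODELS=/mnt/ssd/ollama/models    # Linux / WSL2
# After restarting Ollama, `ollama list` should read from the new path.
```

Note that models downloaded by a WSL2 install and a native Windows install live in separate stores, which may explain the lost references described above.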
@romanmed9035 · 2 months ago
How do I find out when the model was actually updated? When was it filled with data, and how outdated is it?
@PromptEngineer48 · 2 months ago
You will have to put a different name for the model...
@romanmed9035 · 2 months ago
@PromptEngineer48 Thank you, but I asked how to find out how current the data is when I download someone else's model, not when I make my own.
@PromptEngineer48 · 2 months ago
If you run the `ollama list` command in cmd, you will see the list of all models on your own system.