
Master Ollama's File Layout in Minutes! 

Matt Williams
40K subscribers
6K views

Published: Oct 25, 2024

Comments: 29
@william-1776 · 2 days ago
Great content, Matt. Keep up the good work!
@ShaneHolloman · 2 days ago
Pure gold, Matt!
@ErroneousTheory · 2 days ago
Your videos are a gem. Thank you. Just a suggestion for a topic: resource management. I don't understand how, in a multi-tier system with dedicated servers, there is such a difference in the memory allocated by Ollama when operating from curl, Open WebUI, or membpt/letta. How can I tune what the client reserves on the Ollama server?
@technovangelist · 2 days ago
Hmmm, there isn't anything different that Ollama does. Maybe WebUI and that other thing do something.
@technovangelist · 2 days ago
What is membpt/letta?
@erichey6394 · 2 days ago
Any chance this difference can be accounted for by the inclusion (or lack of) of conversation context history?
@technovangelist · 2 days ago
Well, there is that, or the clients have adjusted the model's context size, without telling you, to the maximum the model supports.
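One way to rule that out is to pin the context size in the request itself. A minimal sketch against Ollama's REST API (num_ctx is a documented request option; the model name and prompt are placeholders):

    # Request a fixed 2048-token context window instead of whatever the client picks
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Why is the sky blue?",
      "options": { "num_ctx": 2048 }
    }'

If memory use stays flat across clients once num_ctx is pinned, the difference was the context size each client requested.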
@Igbon5 · 2 days ago
It is all interesting, even though I struggle to understand some things. I hope your channel does well going forward.
@tibssy1982 · 2 days ago
I just have an old Quadro M4000, but Ollama works fine. I'm so happy with it.
@saoter · 2 days ago
Hi, thanks for all the great videos! I have an unusual issue and hope you can help. I use Ollama for both daily tasks and larger projects. For the bigger models, I’ve moved the files to an external drive due to their size and set the environment path (on Windows), which works well. However, for my daily tasks, it’s inconvenient to always have the external drive connected, especially for using basic models like LLaMA 3.2. Is there a way to set up two model locations so it can read from both when available, or default to the laptop when the external drive isn’t connected? Thanks in advance! 🥛
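One possible workaround, sketched in bash to show the idea (a PowerShell equivalent would be needed on Windows, and both paths below are placeholders): Ollama reads a single models directory from the OLLAMA_MODELS environment variable at startup, so a small launcher can pick whichever location is available. Note this selects one directory per run; as far as I know, Ollama cannot read from two locations at once.

    #!/usr/bin/env bash
    # Launch Ollama with the external models folder when the drive is mounted,
    # falling back to the internal default otherwise.
    EXTERNAL="/mnt/external/ollama-models"   # placeholder: external drive
    INTERNAL="$HOME/.ollama/models"          # default internal location

    if [ -d "$EXTERNAL" ]; then
      export OLLAMA_MODELS="$EXTERNAL"
    else
      export OLLAMA_MODELS="$INTERNAL"
    fi

    ollama serve   # the server reads OLLAMA_MODELS once, at startup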
@КравчукІгор-т2э · 2 days ago
Thank you so much, Matt! As always, everything is relevant, clear, and interesting! I have several questions for you: 1. How do I know what information the model was trained on? What skills does it contain? I have a weak computer, so I use small models; if I know what information was put into the model, I will understand whether I should use it for my purposes. 2. Is there any way to remove unnecessary information from the model, so that I can train it on my own? I am grateful in advance for your professional answers. From the bottom of my heart I wish you 1,000,000 subscribers as soon as possible, success, and prosperity. You are the best!!!
@technovangelist · 20 hours ago
Usually the card describing the model says where the data it was trained on comes from. Removing info from a model is very hard and computationally very expensive.
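For models already pulled locally, the ollama show command surfaces much of that model-card information directly (these flags exist in current Ollama releases; the model name is a placeholder):

    # Summary: architecture, parameter count, context length, license
    ollama show llama3.2

    # Narrower views of the same metadata
    ollama show llama3.2 --license
    ollama show llama3.2 --template
    ollama show llama3.2 --modelfile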
@КравчукІгор-т2э · 19 hours ago
@technovangelist Thanks!
@anthony-fi · 2 days ago
😊 Hi Matt, hope you find time to do a chat stream at some point.
@tomwawer5714 · 2 days ago
Matt, thanks for the great content. I have a 2016 laptop with an i7, 32 GB RAM, and a 6 GB 1070 Ti; I can run 13b and 27b models easily. It's a great platform! Please do a crash course on templates.
@technovangelist · 1 day ago
I was planning to do that. Thanks
@emaayan · 1 day ago
Thanks, Matt. This seems strangely related to the questions I was asking on Discord about what is called "a model" vs. what the GGUF file is, because it can be somewhat confusing to see the catalog of models on Ollama and then a catalog of models on Hugging Face. I'm trying to grasp the notion of a model, which makes it look like it's code even though it's not, and how it relates to the model template, which is not the same as the system prompt. I understand that the model template is somehow used in the creation of a model, but is the template language itself a standard that all tools besides Ollama can understand?
@technovangelist · 20 hours ago
All models require a template to use. The general idea is the same, but some tools use Jinja templates to express it, while Ollama uses Go templates.
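For illustration, here is roughly what a ChatML-style chat template looks like in an Ollama Modelfile, written in Go template syntax; .System and .Prompt are variables Ollama fills in at inference time, and the exact special tokens vary from model to model:

    TEMPLATE """{{ if .System }}<|im_start|>system
    {{ .System }}<|im_end|>
    {{ end }}<|im_start|>user
    {{ .Prompt }}<|im_end|>
    <|im_start|>assistant
    """

A Jinja version of the same template, as Hugging Face tooling uses, would express the conditional as {% if %}...{% endif %} instead.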
@fabriai · 2 days ago
Thanks a lot for the course, Matt. I have a 2020 iMac with an AMD Radeon which doesn’t work with cuda. In your experience, is there a way to use an external graphics card that works with Ollama?
@technovangelist · 2 days ago
An Intel Mac won't be able to access the GPU.
@fabriai · 2 days ago
@technovangelist OK, thanks a lot for taking the time, sir. Time to change my Mac. Would you share which specs are relevant for working with Ollama? Thank you!
@technovangelist · 1 day ago
I think any Apple silicon Mac is amazing. Getting the most memory and disk you can afford is important. With 64 GB of RAM I can run up to a 70b model, though I rarely do. Depending on your workload I would pick at least 1 TB; I have 4 TB and it's great, though I spend a lot of time offloading stuff. The new Macs should be out soon, but a used M1 or M2 is great too.
@fabriai · 1 day ago
@technovangelist Once again, Matt, thanks for taking the time. I appreciate it a lot, sir. You're the best.
@autoboto · 1 day ago
I wish there were a model move command. The internal model folder can use up my SSD's free space, some of my models are huge, and I don't want to re-download them. It would be nice to have a model move command to offload models I'm not using onto an external SSD, then conveniently copy them back to the internal SSD model folder when needed. I made a Python script that just moves the "big" files, but it's not 100% reliable, and a few times I was forced to re-download a model to get it working again. An official Ollama model-mover tool would be most useful: it could keep all the dependency files organized and working for the target model when moving between SSD model folders.
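Until an official mover exists, one common workaround is the symlink approach: relocate the whole models directory and leave a link behind, which keeps blobs/ and manifests/ together so nothing dangles. This is a sketch, not an Ollama feature; paths assume the default Linux/macOS layout, and in most setups Ollama follows the symlink transparently. Stop the Ollama server before moving files.

    #!/usr/bin/env bash
    set -euo pipefail

    SRC="$HOME/.ollama/models"         # default location holding blobs/ and manifests/
    DST="/mnt/external/ollama-models"  # placeholder: external SSD

    mkdir -p "$DST"
    rsync -a "$SRC/" "$DST/"   # copy everything, preserving the layout
    rm -rf "$SRC"
    ln -s "$DST" "$SRC"        # Ollama resolves the symlink when it reads models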
@JNET_Reloaded · 2 days ago
A better lesson is the swap file! The bigger your swap file, the bigger the models you can load; even on a little RPi 5 I can run massive models. Yes, it's slower, but free :D This is for Linux, maybe Mac as well; I don't have Apple stuff!

    function swap() {
        # Set the default size to 20 GB
        local default_swap_size=20

        # Check if an argument is supplied
        if [ -z "$1" ]; then
            read -p "Enter Swap Size (GB) [default: $default_swap_size]: " sizeofswap
            # If no input is provided, use the default size
            sizeofswap=${sizeofswap:-$default_swap_size}
            echo "Setting New Swap Size To $sizeofswap GB"
        else
            echo "Setting New Swap Size To $1 GB"
            sizeofswap=$1
        fi

        sudo swapoff /swapfile
        sudo rm /swapfile
        sudo fallocate -l "${sizeofswap}G" /swapfile
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        echo "New Swap Size Of $sizeofswap GB"
        free -h
    }
@technovangelist · 2 days ago
That's too bad. Apple gets so much more right... ;-)
@kepenge · 2 days ago
Why the need, in every single video, to emphasize being part of the Ollama founding team?
@technovangelist · 2 days ago
For most, this is the first video of mine they have seen. Why should I be listened to about Ollama? It's about credibility. And I can see that watch time has generally gone up since adding it.
@technovangelist · 2 days ago
That's also why I add some pics from the group, to give the regulars something to see.