
Getting Started on Ollama 

Matt Williams
25K subscribers
38K views

Here is everything you need to know about getting started with Ollama. It's not hard, but sometimes the first steps can be daunting.
Be sure to sign up to my monthly newsletter at technovangelist.com/newsletter
And if interested in supporting me, sign up for my patreon at / technovangelist
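
For readers who want to try this while watching, here is a minimal sketch of talking to a local Ollama server over its REST API. It assumes Ollama is installed, the server (or desktop app) is running on the default port 11434, and a model has already been pulled with ollama pull mistral.

```python
# Minimal sketch: one-shot generation against a local Ollama server.
# Assumes the default port 11434 and that "mistral" has been pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": "Why is the sky blue?", "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])  # the full completion, since streaming is off
```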

Science

Published: 24 Mar 2024

Comments: 110
@MonsieurGinger 3 days ago
I just did all of this yesterday and still watched your video from start to finish. Very clear and concise. I look forward to your other videos.
@milorad9301 3 months ago
Thank you, Matt! Please create more videos like this; they're really clear and simple.
@sdaiwepm 28 days ago
Thank you for such a helpful explanation. I wish more tech explainers and presenters were this clear and structured.
@zerotheory941 3 months ago
If you can make a video about CrewAI explaining it as simply as you did here, you'd be my hero.
@edwardrhodes4403 3 months ago
And also AutoGen, and other agents like Devika, and how to integrate them.
@Filipe9171 2 days ago
This is gold. Thank you, Mr Williams!
@talktoeric 1 day ago
I really like this channel! The presentation is great and understandable. It is easy to follow along. Thanks.
@hiltonwong5419 14 days ago
I have watched a few of your videos. I love the way you explain things simply and clearly. Keep it up. Thank you for the work you do for all of us.
@technovangelist 14 days ago
Thanks so much
@sidnewby7111 25 days ago
I'm so happy there's someone in the mix who actually has a career in AI/dev. Seriously, really enjoying your content. Don't listen to any of these jerks.
@wirreswuergen 1 month ago
Thank you, Matt! Your videos are awesome and already helped me a lot :)
@vicnent75 1 month ago
Thank you for your work, Matt.
@richardurwin 1 month ago
Thank you for the video
@YotamGuttman 14 days ago
Fascinating. Thank you for these videos!
@liammcmullen4497 3 months ago
Great overview, Matt, you're a star!
@enmingwang6332 20 days ago
What a great tutorial, clear, concise and informative!!!
@technovangelist 15 days ago
Glad you liked it.
@Mike-vj8do 1 month ago
AMAZING!
@incrastic6437 2 months ago
Excellent introduction. Thanks for the help
@qewolf 1 month ago
Very cool, thank you 🙏
@hotbird3 1 month ago
You're a very smart person 👍👊
@JoaoKruschewsky 2 months ago
Hello from Brazil. I really liked your content! Thanks.
@continuouslearner 13 days ago
It would have been good to cover what Ollama is and what problems it solves, for about 30 seconds to a minute, before going into hardware requirements etc.
@Drkayb 3 months ago
Excellent summary, thanks a lot.
@JeppeGybergyoutube 1 month ago
Nice video
@bens4446 2 months ago
Thanks! Just downloaded Ollama and was feeling a bit lost. Would really appreciate some guidance on integrating speech recognition and text to speech into the chatbot. But just about anything you say will probably be useful. Please keep 'em coming!
@juanjesusligero391 3 months ago
Thank you so much for your tutorials! :D I would like to suggest an idea for a future video that I would be really interested in watching: a more detailed exploration of the various models (such as the instruct/base/etc. ones you've mentioned). Again, thank you very much! You rock! ^^/
@nholmes86 1 month ago
I successfully ran Ollama with Llama 3 on an M1 Mac with 8GB; it runs better when you close other apps.
@flexchamp 6 days ago
10 out of 10!
@SergiySev 3 months ago
Great video, thank you for the Ollama introduction! Is there a way to add my own data to the model, or shrink the model to a particular topic? For example TailwindCSS: there is source code, docs, and a library for the project; is there any way to train the model to generate layouts and components based on the provided data?
@ec_gadgets 3 months ago
You explained it perfectly, thank you
@technovangelist 2 months ago
Glad it was helpful!
@ftlbaby 1 month ago
Thanks for this! I just set up Ollama with wizard-vicuna-uncensored:30b-q8_0. Do you know what's different in the fp16 models?
@user-wr4yl7tx3w 3 months ago
Great content
@emil8367 3 months ago
Thanks for sharing. Prune is something I had missed, but it's very useful: downloading large files and losing them after each restart was very annoying. I see Ollama didn't document it well, or maybe I overlooked it.
@K600K300 3 months ago
thank you
@ValentinPletzer 3 months ago
Thanks. I really learned a lot by watching your videos. I recently ran into an issue when writing a new template model for few-shot learning. Most of the time it responds as expected, but sometimes it responds to my prompt and then also inserts its own command by adding [INST] some other prompt … and answers that too. I probably made some mistake but I cannot figure it out. That's why I would love to see you make a video on templates (if this isn't too much to ask).
@jahbini 3 months ago
I second that request!
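
On the template question above: the usual place to pin this down is the model's template in a Modelfile, where the [INST] markers are supplied by Ollama rather than left for the model to imitate, and stop parameters cut generation off if the model starts emitting its own [INST] blocks. A hedged sketch, assuming the llama2/mistral-style chat format; the model and file names are illustrative, and the right template depends on the base model.

```python
# Hedged sketch: create a custom model with an explicit chat template.
# "mistral" and "my-fewshot" are illustrative names, not from the video.
import subprocess
from pathlib import Path

modelfile = '''FROM mistral
TEMPLATE """[INST] {{ .System }} {{ .Prompt }} [/INST]"""
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
'''

Path("Modelfile").write_text(modelfile)
subprocess.run(["ollama", "create", "my-fewshot", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "my-fewshot", "Why is the sky blue?"], check=True)
```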
@sebington-ai 3 months ago
Hi Matt, do you know what determines the length of a model's answer? How does the model 'know' when to stop? Is it hard coded into the model or is it controlled by Ollama? Thanks
@dmbrv 3 months ago
thanks
@samsquamsh78 3 months ago
I like your videos, always spot on and pedagogical! Why did you leave ollama?
@technovangelist 3 months ago
If we find ourselves in the same room I’ll talk about it there.
@nicosilva4750 3 months ago
Do the models return Markdown, like lists? Extended Markdown, like tables and LaTeX? I have written my own desktop client that I put on all our machines to use OpenAI and their API (cheaper than $20 * 5/month). So I would like to have a network server in my home to run a local model. Can I set it up there and have everyone use it, or would there be performance issues? What about simultaneous usage?
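
On the home-server part of that question: Ollama listens only on localhost by default, but if the server machine starts it with OLLAMA_HOST set to 0.0.0.0, other machines on the network can share it through the same HTTP API. A hedged client-side sketch; the address and model are illustrative, and simultaneous requests are accepted although, depending on version and hardware, they may be processed one at a time.

```python
# Hedged sketch of a client for a shared Ollama server on the LAN.
# 192.168.1.50 is an illustrative address of the machine running Ollama.
import requests

OLLAMA_URL = "http://192.168.1.50:11434/api/chat"

def ask(question: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama2",
            "messages": [{"role": "user", "content": question}],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# Models generally do answer in Markdown, including lists and tables.
print(ask("Give me a Markdown table of the planets and their diameters."))
```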
@abhijeetkumar8044 2 months ago
Please create videos on how to fine tune these models 🙏
@RobCowie 1 month ago
Does it phone "home" at all? Is the model I use locally, assuming the machine is connected to the Internet, shared publicly at all, and is it secure?
@technovangelist 1 month ago
It doesn’t reach out anywhere unless you write a program to have it do something like that.
@Delchursing 2 months ago
Great video. The costs are a bit unclear to me. Would a local ollama/llm be free to use?
@technovangelist 2 months ago
What costs? You have to own a computer with a gpu. That’s it
@mshonle 3 months ago
Here’s a video request: can you do one on LMSys’s SGLang? Particularly using constrained decoding?
@thepassionatecoder5404 2 months ago
Do I need to know math, statistics, etc., apart from programming?
@PoGGiE06 2 months ago
Thanks Matt, why does everyone use Mistral rather than Mixtral?
@technovangelist 2 months ago
Too slow
@tecnopadre 3 months ago
Sometimes your level is so high, other times too simple. Cheers.
@anshulsingh8326 28 days ago
Subbed ❤️ If only you taught maths too.
@bens4446 2 months ago
FYI: my llama2 install is working reasonably fast without a GPU, just a Ryzen 5600G CPU, which has some rudimentary graphics capacity built into it.
@Thymed 8 days ago
More than rudimentary, but I still get your point: much less than modern GPUs or recent APUs.
@blackwinegum 1 month ago
I just don't get any sort of CLI when I install Ollama; the app just shows "view logs" and "Quit Ollama".
@technovangelist 1 month ago
so when you run ollama at the command line you don't see anything?
@blackwinegum 1 month ago
@technovangelist I think I've figured it out; I think my firewall was blocking something. Thanks for replying.
@AwesomeCanadianHomes 2 months ago
I have a feeling Duncan Trussell is a subscriber : )
@mrrohitjadhav470 3 months ago
It would have been great to learn how to install models not in the Ollama library, with a specific type of low-VRAM and GGUF variant.
@technovangelist 3 months ago
check out ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-0ou51l-MLCo.html
@mrrohitjadhav470 3 months ago
@technovangelist Thanks a lot ❤
@thiagoassisfernandes 3 months ago
Arch and Nix are systemd distros.
@technovangelist 3 months ago
Doh!
@stebansb 1 month ago
Great content; a Telegram group would be great!
@technovangelist 1 month ago
Telegram??? I think I used it once at an Idan Raichel concert but never since. What’s special about a telegram group?
@stebansb 1 month ago
@technovangelist Compared with the other option, Discord, I feel Telegram is simpler, with a cleaner user interface, yet very powerful; it's popular with business and a slightly more mature cohort. Discord is slower and more complex, and popular among gamers. Either way, it would be cool to have something to build a community beyond RU-vid.
@user-du2jz9wx6k 3 months ago
Ollama runs very well on an M3 Max.
@userou-ig1ze 3 months ago
I wish there were an easier way to fill in template text and to parse PDFs. I've seen the 'function calling' videos, but somehow it still eludes me how to get this done as easily as possible (e.g. sending a PDF over the API in a curl request from another machine, and renaming it sensibly, according to content).
@technovangelist 3 months ago
The biggest problem there is the pdf. You can’t easily get to the contents of the pdf. The text. It’s often jumbled up. PDF is the worst format you can use if you want the text and to do something with it. That’s also one of the benefits of pdf. It obfuscates the source text so folks can’t do anything with the text.
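
A hedged sketch of the usual workaround: extract whatever text the PDF will give up with a PDF library first (pypdf here; extraction quality varies a lot by document, as noted above), then hand only the text to Ollama, for example to suggest a filename from the content. The file name and truncation limit are illustrative.

```python
# Hedged sketch: PDF text extraction first, then a prompt to Ollama.
import requests
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("report.pdf")  # illustrative file name
text = "\n".join(page.extract_text() or "" for page in reader.pages)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        # Truncated so the excerpt fits comfortably in the context window.
        "prompt": "Suggest a short, descriptive filename for this document:\n\n"
        + text[:4000],
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```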
@userou-ig1ze 3 months ago
@technovangelist Thanks for the reply! I used pdf2text, but it was not exactly perfectly successful. I wonder how Ollama frontends (e.g. webgui or webui) solve this for their RAG? It gives me hope that there is a good way of doing it 🎉
@makesnosense6304 3 months ago
9:40 To get the same result you just need the same input, seed (and other parameters), no? The reason it's different every time is that the seed is random for every request, right? The seed used (and the other parameters) make the result different because the generation takes a different path through the model weights.
@technovangelist 3 months ago
Using the same seed and temp doesn’t always guarantee the same result
@makesnosense6304 3 months ago
@technovangelist Ah, because temp is a percentage randomness scale of sorts.
@makesnosense6304 3 months ago
@technovangelist What if temp is 0? Or 1?
@technovangelist 3 months ago
It’s not guaranteed
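
For anyone experimenting with this: the seed and temperature under discussion are set per request through the API's options field. A minimal sketch; even pinned like this, identical output is likely but, as noted above, not guaranteed.

```python
# Minimal sketch: pinning the sampling parameters through the API.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "mistral",
        "prompt": "Name three primary colors.",
        "stream": False,
        # Fixed seed plus temperature 0 makes output mostly reproducible.
        "options": {"seed": 42, "temperature": 0},
    },
    timeout=300,
)
print(resp.json()["response"])
```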
@viniciussilvano4177 21 days ago
Please add compatibility with more GPUs. I have an RX 580; my processor is crying, hehe.
@technovangelist 21 days ago
That’s a request for AMD to add support to those older lower end cards I think.
@viniciussilvano4177 21 days ago
@technovangelist Is there any way I can use a library that allows me to do this, or is it actually something that depends on AMD? I'm really impressed with what using Ollama as an API has added to my projects. I would like to find some way to speed up processing without having to spend money, at least for now.
@technovangelist 20 days ago
But AMD support requires a certain level of the drivers, which AMD only has working for newer cards. I think the only option is to buy a more recent card; the 580 is about seven years old.
@alexsnow2993 2 months ago
Hello! My video card is an RX 580. Is there a way to make it work?
@alexsnow2993 2 months ago
Using the RX 580, will it be slow, or not work at all?
@technovangelist 2 months ago
I don’t see it on the compatibility list. github.com/ollama/ollama/blob/main/docs/gpu.md
@technovangelist 2 months ago
It just won't work at all. I think Ollama requires the newer AMD drivers, and AMD didn't make them backwards compatible with older cards.
@alexsnow2993 2 months ago
Thanks for the info! I can't get another video card at the moment, and using the CPU is a no-go. Is there any version, or any other AI out there, that can be configured locally?
@technovangelist 2 months ago
Everything I know of is going to need a decent recent gpu.
@sergey_a 3 months ago
Thanks for the informative video. Some examples should be displayed in the video rather than spoken; for example, showing how to use environment variables.
@technovangelist 3 months ago
There are a number of videos pointed out throughout the video that provide all the examples
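
For reference, a hedged sketch of the environment variables in question: OLLAMA_HOST and OLLAMA_MODELS are documented server settings, and the values below are illustrative. The shell equivalent is OLLAMA_HOST=0.0.0.0:11434 ollama serve.

```python
# Hedged sketch: configuring the Ollama server via environment variables.
import os
import subprocess

env = os.environ.copy()
env["OLLAMA_HOST"] = "0.0.0.0:11434"          # listen on all interfaces
env["OLLAMA_MODELS"] = "/data/ollama-models"  # illustrative storage path

# The server reads these variables at startup.
subprocess.run(["ollama", "serve"], env=env, check=True)
```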
@briannezhad1804 2 months ago
Can Ollama be used in prod on a Linux server?
@technovangelist 2 months ago
Absolutely. Lots of folks are doing just that.
@briannezhad1804 2 months ago
@technovangelist Wow, that is amazing. I would appreciate it if you could provide documents that would guide me in deploying a model for production use and "Function Calling." Ollama is an excellent tool for a startup to keep costs down and avoid OpenAI usage costs.
@briannezhad1804 2 months ago
It also gives us flexibility to keep our data in-house.
@briannezhad1804 2 months ago
@technovangelist This is awesome! Do you have any references for deploying successfully into prod? We are trying to avoid OpenAI and are looking for open-source AI models with "function calling."
@technovangelist 2 months ago
All the docs are in the GitHub repo, but it's a pretty simple app without many dependencies. I don't know of any guidance though.
@axeljohannes3464 2 months ago
Wait, do I need to download anything or not? You say "Now the model should be downloaded, so you can run it with ollama run mistral." Why would it be downloaded? I just installed the Ollama software. Does it download all the models automatically? This seems very unclear.
@technovangelist 2 months ago
I think you must have skipped around a bit. I very clearly said to install and then run ollama pull to download the model. Then while downloading talked about what’s going on. Then the model is downloaded and you can run it. When you downloaded the model you only downloaded that model. Why download anything? Because you want to run it.
@axeljohannes3464 1 month ago
@technovangelist Thanks! I got it to work.
@axeljohannes3464 1 month ago
I think what confused me is the term "pull" and what it actually meant. So when you got to the point of talking about downloading, I was like, "Hey, no one said anything about downloading anything."
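
The flow being described, as a short sketch: ollama pull downloads a model's layers into the local cache once, and ollama run then uses the cached copy (running a model that has not been pulled triggers the download first).

```python
# Sketch of the pull-then-run flow: download once, then use the cached model.
import subprocess

subprocess.run(["ollama", "pull", "mistral"], check=True)           # download
subprocess.run(["ollama", "run", "mistral", "Hello!"], check=True)  # one prompt
```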
@florentflote 3 months ago
@robert_kotula 2 months ago
Booted up Ollama with the llama2 model and my M1 MBP just froze 💀
@technovangelist 2 months ago
That is bizarre… as in, you would be the first person that has happened to. Running macOS, I assume. Installed using the installer? What else was running? So you installed, then opened a terminal and ran ollama run llama2, and then nothing? Probably easiest to solve on the Discord.
@robert_kotula 2 months ago
@technovangelist I'll join the Discord channel and try to troubleshoot. I've had a couple of tabs open in Safari and one tab in Firefox Developer Edition, nothing else. Will need to dig into the performance stats on the laptop.
@mcawesome4150 2 months ago
you should have more views and subscribers
@technovangelist 2 months ago
Thanks. Both are accelerating quickly. But feel free to share. I like to say I am working on my first million subscribers. Only 985,000 short.
@jyashi1 3 months ago
First
@technovangelist 3 months ago
First what.
@technovangelist 3 months ago
About 6 hours late to be first comment
@rude_people_die_young 3 months ago
My model file refuses to create an awkward silence at the end of its output 😡