
Ollama - Libraries, Vision and Updates 

Sam Witteveen
67K subscribers
26K views

Science

Published: Oct 3, 2024

Comments: 49
@lucioussmoothy · 7 months ago
Thanks for pulling this together. Really like the /show /save capabilities. Suggests new ways of creating and updating model files.
@motbus3 · 7 months ago
Wow. I am impressed to find a genuinely useful AI-related channel. I mean, you show things running with your code, you state real problems you find, and you discuss your own results. Please continue with that 🙏 and thank you very much!
@mukkeshmckenzie7386 · 7 months ago
If they had an option to load multiple models at the same time (if there's enough RAM/VRAM), it would be cool. The current workaround is to dockerize an Ollama instance and run multiple of them on the same GPU.
@samwitteveenai · 7 months ago
Good tip!
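A minimal sketch of the dockerized workaround described above, assuming the official ollama/ollama image and the NVIDIA container toolkit; the container names, volumes, ports, and models are illustrative:

# Two Ollama containers sharing one GPU, each exposed on its own host port
docker run -d --gpus=all -v ollama_a:/root/.ollama -p 11434:11434 --name ollama_a ollama/ollama
docker run -d --gpus=all -v ollama_b:/root/.ollama -p 11435:11434 --name ollama_b ollama/ollama

# Load a different model into each instance
docker exec -it ollama_a ollama run mistral
docker exec -it ollama_b ollama run llama2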
@RaspberryPi-gs7ts · 2 months ago
I love this introduction-to-Ollama series! A lot!
@Leonid.Shamis · 7 months ago
Thank you for another very informative video. It would indeed be cool to hear more about using Ollama and local LLMs with AutoGen and for a fully local RAG system.
@changchoi4820 · 7 months ago
Wow, so cool how local LLMs are progressing! So many ideas I can't handle them all, haha.
@acekorneya1 · 7 months ago
Would be awesome to get some tutorial videos on how you make those automated tools. It would be great to know how to do things like that.
@dllsmartphone3214 · 7 months ago
Ollama is the best, bro. I use it with a web UI; it's amazing.
@mr.daniish · 7 months ago
The logs feature is a game changer!
@theh1ve · 7 months ago
Great canter through the recent updates. I have to say I am a fan of Ollama and have switched to using it almost exclusively in projects now, not least as it's easier for others on my team to pick up. Really short learning curve to get up and running with local LLMs.
@samwitteveenai · 7 months ago
Totally how I feel about it. It is simple, to the point, and the code is open source. I have got my team using it and everyone picks it up quickly.
@attilavass6935 · 7 months ago
Please create a video about hosting an LLM server with Ollama on Google Colab (free T4), available via API. That might be a cost-efficient way of hosting "local" models.
@stephenthumb2912 · 7 months ago
Essentially this is based on llama.cpp embedded in Go, but strangely it cannot handle concurrency. Love Ollama and use it a lot, but to run it in a production setting you basically have to spin up multiple Ollama servers, each of which can take a queue. In other words, a load-balancer setup with nginx or something.
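A rough sketch of that load-balancer setup, assuming the OLLAMA_HOST variable is honored for the bind address (true in recent builds); the ports and nginx config are illustrative, not a tested production recipe:

# Start two Ollama servers on separate ports
OLLAMA_HOST=127.0.0.1:11434 ollama serve &
OLLAMA_HOST=127.0.0.1:11435 ollama serve &

# nginx then round-robins requests across them (e.g. in /etc/nginx/conf.d/ollama.conf):
upstream ollama_pool {
    server 127.0.0.1:11434;
    server 127.0.0.1:11435;
}
server {
    listen 80;
    location / {
        proxy_pass http://ollama_pool;
    }
}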
@supernewuser · 7 months ago
I just noticed some of these updates yesterday, and they let me simplify some bits of my stack and remove litellm. It's actually kind of scary how quickly all of this stuff is becoming commodity parts.
@samwitteveenai · 7 months ago
Totally agree, things are moving so quickly.
@ojasvisingh786 · 7 months ago
👏👏
@kenchang3456 · 7 months ago
I just saw on Matt Williams' channel that Ollama now runs on Windows natively. Just thought I'd mention it to you.
@samwitteveenai · 7 months ago
Yeah, I saw it has been in beta. I don't use Windows, but glad it's out.
@Zale370 · 7 months ago
Great video! Can you please cover stanfordnlp's dspy? Amazing library!
@samwitteveenai · 7 months ago
Yeah, I have been working on a few ideas for this. Anything in particular you wanted me to build with it?
@Karl-Asger · 7 months ago
@samwitteveenai I'll throw in a suggestion: using DSPy for an LLM agent with tool usage! IMO DSPy seems really powerful for bootstrapping examples for optimal answers. Let's say we have an LLM agent that serves five or six different main purposes, with one or two functions for each. If we could use DSPy to optimize the pipeline for each of those purposes, it would be amazing.
@Zale370 · 7 months ago
@samwitteveenai I'd love to see an app that uses dspy with langchain and maybe pinecone.
@aiexplainai2 · 7 months ago
Great video as always! Would you consider covering Lepton AI? It looks like a great way to host an LLM on a local machine.
@equious8413 · 7 months ago
I serve a model with Ollama and I hooked it up to a Discord bot :D
@redbaron3555 · 4 months ago
👏🏻👍🏻
@ShikharDadhich · 7 months ago
Ollama is awesome, however there are some minor issues with it:
1. Single-threaded, so it cannot run on a server serving a single URL to a team. It's a big issue; I don't want everyone on my team to install Ollama on their machine.
2. With streaming responses it's not easy to create a client app, as the response format is not the same as OpenAI's.
3. CORS issues, so you need a wrapper around the APIs, which means you have to install Ollama and an API wrapper on every machine.
@samwitteveenai · 7 months ago
Great points!
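On point 3, recent Ollama versions read an OLLAMA_ORIGINS environment variable to set the allowed CORS origins, which may remove the need for a per-machine wrapper; the origin below is a placeholder:

# Allow a specific web origin to call the Ollama API directly
OLLAMA_ORIGINS="https://myapp.example.com" ollama serve
# Or, more permissively, allow any origin
OLLAMA_ORIGINS="*" ollama serve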
@IronMechanic7110 · 7 months ago
Can Ollama work without an internet connection when I'm using a local LLM?
@samwitteveenai · 7 months ago
Yes, it doesn't need an internet connection once you have downloaded the model locally.
@squiddymute · 7 months ago
Can you actually stop Ollama (Linux) somehow? Or does it run forever and ever in the background?
@notankeshverma · 7 months ago
sudo systemctl stop ollama if you are using systemd.
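Expanding on that answer, assuming the standard Linux installer, which registers Ollama as a systemd service:

sudo systemctl stop ollama      # stop the running server
sudo systemctl disable ollama   # keep it from starting again at boot
systemctl status ollama         # confirm it is inactive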
@guanjwcn · 7 months ago
Does this mean it can run on Windows now? Its website has been saying the Windows version is coming soon.
@samwitteveenai · 7 months ago
Pretty sure they are still working on it and getting close.
@matikaevur6299 · 7 months ago
Heh, run ollama run llama-pro:text "what are you" about 10 times and confirm that I'm not going crazy; it's the model. That thing is outputting its fine-tuning data verbatim, AFAIK.
@miladmirmoghtadaei5038 · 7 months ago
I just don't get how it doesn't need an API for the OpenAI models.
@samwitteveenai · 7 months ago
It's not running the OpenAI models; it is using a mirror of their API to run local models.
@miladmirmoghtadaei5038 · 7 months ago
@samwitteveenai Thanks man. I guess I have to test it to find out.
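To make the "mirror" concrete: Ollama serves an OpenAI-compatible endpoint on its local port, so an OpenAI-style request can simply be pointed at localhost instead of api.openai.com, with no API key involved. A minimal sketch (the model name is illustrative):

curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'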
@sirusThu · 7 months ago
I always thought that it was a pig.
@Trendish_channel · 7 months ago
Command line??? Are you kidding?? This is super inconvenient and confusing, and NOT for regular people! Not even halfway close to LM Studio.
@MarceloSevergnini · 7 months ago
Maybe if you actually take the time to check for yourself, you'll notice that there is a web interface available; you just need to point it at your Ollama instance. It's exactly the same as ChatGPT, actually even better 🙃
@redbaron3555 · 4 months ago
Learn CLI and stop whining.
@thampasaurusrex3716 · 7 months ago
Which is better, llama.cpp or Ollama?
@Joe-yi5nv · 7 months ago
I'm pretty sure Ollama is built on top of llama.cpp.
@mshonle · 7 months ago
Does Ollama support the same grammar specification for restricting output that llama.cpp does? That's a great feature which I've used in a project recently to force JSON output.
@blender_wiki · 7 months ago
@mshonle If you need constrained grammars, I suggest you use LocalAI. It is very easy to implement locally.
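On the grammar question: as far as I know, Ollama does not expose llama.cpp's GBNF grammar files directly, but its native API does have a JSON mode via the format parameter, which covers the force-JSON use case. A minimal sketch (model and prompt are illustrative):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "List three primary colors as a JSON array. Respond using JSON.",
  "format": "json",
  "stream": false
}'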