
Using Ollama to Run Local LLMs on the Raspberry Pi 5 

Ian Wootten
4.3K subscribers
49K views

My favourite local LLM tool, Ollama, is simple to set up and works on a Raspberry Pi 5. I check it out and compare it to some benchmarks from more powerful machines.
00:00 Introduction
00:41 Installation
02:12 Model Runs
09:01 Conclusion
Ollama: ollama.ai
Blog: www.ianwootten.co.uk/2024/01/...
Support My Work:
Check out my website: www.ianwootten.co.uk
Follow me on twitter: / iwootten
Subscribe to my newsletter: newsletter.ianwootten.co.uk
Buy me a cuppa: ko-fi.com/iwootten
Learn how devs make money from Side Projects: niftydigits.gumroad.com/l/sid...
Gear:
RPi 5 from Pimoroni on Amazon: amzn.to/4aoalOd
As an affiliate I earn on qualifying purchases at no extra cost to you.

Science

Published: 25 Jun 2024

Comments: 91
@metacob
@metacob 2 months ago
I just got a RPi 5 and ran the new Llama 3 (ollama run llama3). I was not expecting it to be this fast for something that is on the level of GPT-3.5 (or above). On a Raspberry Pi. Wow.
@brando2818
@brando2818 1 month ago
I just received my pi, and I'm about to do the same thing. Are you doing anything else on it?
@nilutpolsrobolab
@nilutpolsrobolab 2 months ago
Such a calm tutorial but so informative💙
@KDG860
@KDG860 3 months ago
Thank u for sharing this. I am blown away.
@sweetbb125
@sweetbb125 1 month ago
I've tried running Ollama on my Raspberry Pi 5, as well as an Intel Celeron based computer, and also an old Intel i7 based computer, and it worked everywhere. It is really beyond impressive, thank you for this video showing me how to do it!
@markr9640
@markr9640 4 months ago
Really useful stuff on your videos. Subscribed 👍
@isuckatthat
@isuckatthat 5 months ago
I've been testing llamacpp on it and it works great as well. Although I've had to use my air purifier as a fan to keep it from overheating, even with the aftermarket cooling fan/heatsink on it.
@whitneydesignlabs8738
@whitneydesignlabs8738 5 months ago
Thanks, Ian. Can confirm. It works and is plausible. I am getting about 8-10 minutes for multi-modal image processing with Llava. I find the tiny models to be too dodgy for good responses, and have currently settled on Llama2-uncensored as my go-to LLM for the moment. Response times are acceptable, but I'm looking for better performance. (BTW my Pi 5 is using an NVMe drive and a HAT from Pineberry)
@IanWootten
@IanWootten 5 months ago
Nice, I'd like to compare to see how much faster an NVMe would run these models.
@whitneydesignlabs8738
@whitneydesignlabs8738 5 months ago
If you want to do a test, let me know. I could run the same model and query as you, and we could compare notes. My guess is that processing time has more to do with CPU and RAM, but I'm not 100% sure. Having said that, a large (1TB+) NVMe makes storing models on the Pi convenient. Also boot times are rather expeditious. When the Pi 5 was announced, I knew right away that I wanted to add an NVMe via the PCI Express connector. Worth the money, IMO. @@IanWootten
@SocialNetwooky
@SocialNetwooky 5 months ago
As I just said on the Discord server: you might be able to squeeze a (very) tiny bit of performance by not loading the WM and just interacting with ollama via SSH. But great that it works as well with tinyllama! Phi-based models might work well too! Dolphin-Phi is a 2.7B model.
@BradleyPitts666
@BradleyPitts666 4 months ago
I don't follow? What VM? ssh into what?
@SocialNetwooky
@SocialNetwooky 4 months ago
@@BradleyPitts666 WM ... window manager.
@SocialNetwooky
@SocialNetwooky 4 months ago
@BradleyPitts666 Meh ... YouTube isn't showing my previous (phone-written) answer again, so I can't see or edit it, so this might be a near identical answer to the other one, sorry. I blame YouTube :P The edit is that I disabled even more services and got marginally faster answers. So: WM is the window manager. It uses resources (processor time and memory) while it runs; not a lot, but it's not marginal. Disabling the WM with 'sudo systemctl disable lightdm' and rebooting is beneficial for this particular use case. Technically, just calling 'systemctl stop lightdm' would work too, but by disabling and rebooting you make sure any services lightdm started really aren't running in the background. You can then use ollama on the command line. If you want to use it from your main system without hooking the RPi to a monitor and plugging a keyboard into it, you can enable sshd (the SSH daemon, which isn't enabled by default in the Pi OS image afaik) and then SSH in and use ollama there (THAT uses a marginal amount of memory though). I also disabled bluetooth, sound.target and graphical.target, snapd (though I only stop that one, as I need it for nvim), and pipewire and pipewire-pulse (those two are disabled using 'systemctl --user disable pipewire.socket' and 'systemctl --user disable pipewire-pulse.socket'). Without any models loaded, at idle, I only have 154MB of memory used. With that configuration, tinyllama on the question 'why is the sky blue' gives me 13.02 t/s on my RPi 5, so nearly a third faster than with all the unneeded services running.
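For anyone wanting to reproduce that trimming, here is a rough sketch of the commands described above, assuming a stock Raspberry Pi OS image with the lightdm desktop (service names can differ on other images):

    # Disable the desktop/window manager and reboot so nothing it started lingers
    sudo systemctl disable lightdm
    sudo reboot

    # Optional extras mentioned above: Bluetooth and the PipeWire audio sockets
    sudo systemctl disable bluetooth
    systemctl --user disable pipewire.socket
    systemctl --user disable pipewire-pulse.socket

    # Enable SSH so the Pi can run headless, then run ollama over an SSH session
    sudo systemctl enable --now ssh
    ollama run tinyllama "Why is the sky blue?"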
@m41ek
@m41ek 5 months ago
Thanks for the video! What's your camera, please?
@MarkSze
@MarkSze 5 months ago
Might be worth trying the quantised versions of llama2
@BillYovino
@BillYovino 4 months ago
Thanks for this. So far I've tested TinyLlama, Llama2, and Gemma:2b with the question "Who's on first?" (a baseball reference from a classic Abbott and Costello comedy skit). TinyLlama and Llama2 understood that it was a baseball reference, but had some bizarre ideas on how baseball works. Gemma:2b didn't understand the question, but when asked "What is a designated hitter?" came up with an equally incorrect answer.
@IanWootten
@IanWootten 4 months ago
Nice. I love your HAL replica. Was that done with a Raspberry Pi?
@BillYovino
@BillYovino 4 months ago
@@IanWootten Yes, a 3B+. I'm working on a JARVIS that uses the ChatGPT API and I'm interested in performing the AI function locally. That's why I'm looking into Ollama.
@donmitchinson3611
@donmitchinson3611 1 month ago
Thanks for the video and testing. I was wondering if you have tried setting num_threads=3. I can't find the video where I saw this, but I think they set it before calling ollama, like an environment variable. It's supposed to run faster. I'm just building an RPi 5 test station now.
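For what it's worth, Ollama exposes a num_thread setting as a model parameter rather than an environment variable, which is probably what that video was setting. A minimal sketch using a Modelfile; the tinyllama base model, the variant name and the 3-thread value are just placeholders, and any speed-up on a Pi 5 is untested here:

    # Build a model variant pinned to 3 CPU threads
    # (num_thread is a documented Ollama Modelfile parameter)
    printf 'FROM tinyllama\nPARAMETER num_thread 3\n' > Modelfile
    ollama create tinyllama-3t -f Modelfile
    ollama run tinyllama-3t "Why is the sky blue?"

    # Or, inside an interactive `ollama run` session:
    #   /set parameter num_thread 3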
@daveys
@daveys 3 months ago
The Pi 5 is pretty good when you consider the cost, and what you can do with it. I picked one up recently for Python coding, and it runs Jupyter Notebook beautifully on my 4k screen. I might give the GPIO a whirl at some point in the near future.
@Vhbaske
@Vhbaske 2 months ago
In the USA, Digilent also has many Raspberry Pi 5s available!
@Augmented_AI
@Augmented_AI 4 months ago
How do we run this in Python, for voice-to-text and text-to-speech in a voice assistant?
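Not covered in the video, but for context: Ollama runs a local HTTP server (port 11434 by default), so a Python voice-assistant script can POST to it and feed the reply into a TTS engine. A minimal sketch of that API call; the model name and prompt are placeholders:

    # Query the local Ollama server over HTTP - this is the same endpoint
    # the official Python/JavaScript client libraries wrap
    curl http://localhost:11434/api/generate -d '{
      "model": "tinyllama",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'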
@AlwaysCensored-xp1be
@AlwaysCensored-xp1be 3 months ago
Been having fun running different LLMs. The small ones are fast, the 7B ones are slow. I have a Pi 5 8GB. The small LLMs should run on a Pi 4? Tinyllama has trouble adding 2+2. They also seem monotropic, spitting out random, vaguely related answers. I need more Pi 5s so I can network a bunch with a different LLM on each.
@davidkisielewski605
@davidkisielewski605 4 months ago
Hi! I have the M.2 NVMe HAT and I am waiting for my Coral accelerator. Does anyone else run with the accelerator, and how much does it speed things up? I know what they say it does, but I am interested in real-world figures. I'll post when it arrives from Blighty.
@dinoscheidt
@dinoscheidt 5 months ago
Would love to know if the Google Coral board would provide a substantial improvement, if Ollama can even utilise it. Also, how it would compare to a Jetson Nano. Nonetheless: thank you very much for posting this. Chirps to the birds ❤️
@IanWootten
@IanWootten 5 months ago
That would be great to try out if I could get my hands on one.
@dibu28
@dibu28 2 months ago
Also try MS Phi-2 for Python, and Gemma-2b.
@BenAulbrook
@BenAulbrook 5 months ago
I finally got my Pi 5 yesterday and already have ollama working with a couple of models. But I'd like to provide text-to-speech for the output on the screen, and I'm having a hard time wrapping my brain around how it works: allowing the Ollama output from the terminal to turn into audible speech. There are so many resources to pick from, and just getting the code/scripts working is a challenge. I wish it was easy to install an external package and have the internal functions just "work" without having to move files and scripts around; it becomes confusing sometimes.
@Wolkebuch99
@Wolkebuch99 5 months ago
Well, how about a pi-cluster where one node runs ollama and one runs a screen reader ssh'd into the ollama node? Could add another layer and have another node running NLP for the screen reader node, or a series of nodes connected to animatronics and sensors.
@davidkisielewski605
@davidkisielewski605 4 months ago
You can run Whisper alongside your model, from what I read. TTS and STT.
@Lp-ze1tg
@Lp-ze1tg 3 months ago
Was this Pi 5 using a microSD card or external storage? How big a storage size is suitable?
@IanWootten
@IanWootten 3 months ago
Just using the microSD. I'd imagine speeds would be a fair bit better from USB or NVMe.
@1091tube
@1091tube 4 months ago
Could the compute process be distributed, like grid compute, across 4 Raspberry Pis?
@IanWootten
@IanWootten 3 months ago
Not really - a model file is downloaded to the machine using Ollama and brought into memory.
@technocorpus1
@technocorpus1 2 months ago
Awesome! I want to try this now! Can someone tell me if it is necessary to install the model on an external SSD?
@IanWootten
@IanWootten 2 months ago
Not necessary, but it may be faster. For all the experiments here I was just using a microSD.
@technocorpus1
@technocorpus1 2 months ago
@@IanWootten That's just amazing to me. I have a Pi 3, but am planning on upgrading to a Pi 5. After I saw your video, I downloaded ollama onto my Windows PC. It only has 4 GB RAM, but I was still able to run several models!
@juanmesid
@juanmesid 5 months ago
You're from the Discord server! Keep going.
@IanWootten
@IanWootten 5 months ago
You mean the ollama one? I'm on there from time to time.
@jdray
@jdray 5 months ago
@@IanWootten Just posted this video there. Glad to know you're part of the community.
@Bigjuergo
@Bigjuergo 5 months ago
Can you connect it with speech recognition and make TTS output with a pretrained voice model (*.index and *.pth files)?
@IanWootten
@IanWootten 5 months ago
You probably could, but it wouldn't give a quick enough response for something like a conversation.
@whitneydesignlabs8738
@whitneydesignlabs8738 5 months ago
I am working on something similar, but using a Pi 4 for STT & TTS (and animatronics) and a dedicated Pi 5 for running the LLM with Ollama like Ian demonstrates. They are on the same network and use MQTT as the communication protocol. This is for a robotics project. @@IanWootten
@isuckatthat
@isuckatthat 5 months ago
I've been trying to do this, but it's impossibly hard to get TTS set up.
@donniealfonso7100
@donniealfonso7100 5 months ago
@@isuckatthat Yes, not easy. I was trying to implement speech with Google WaveNet using the YouTube Data Slayer example. I put the key reference in the Pi's user .profile as an export. The script runs okay now, creating the MP3 files, but no speech, so I pretty much gave up as I have other fish to fry.
@nmstoker
@nmstoker 5 months ago
@@isuckatthat Have you tried espeak? It would give robotic quality output but uses very little processing and works fine on a Pi.
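As a rough illustration of that suggestion, assuming espeak is installed from the Raspberry Pi OS repositories (output quality is robotic but the load is tiny):

    # Speak a one-shot Ollama response aloud with espeak
    sudo apt install espeak
    ollama run tinyllama "Why is the sky blue?" | espeak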
@fontende
@fontende 4 months ago
Maybe better to try Mozilla's ingenious single-file LLM container project, llamafile. I was able to run LLaVA in llamafile on my 2011 laptop (some ancient GPU) with Windows 8; it's also an image-scanning LLM. Ollama, I've tested, can't run on Windows 8.
@nmstoker
@nmstoker 5 months ago
Great video, but it's not a good idea to encourage use of those all-in-one curl commands. Best to download the shell script and ideally look over it before you run it; even if you don't check it first, at least you have the file if something goes wrong.
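For example, something along these lines instead of piping curl straight into sh (the script URL here is the one implied by the video description's ollama.ai link; check it against the current Ollama docs):

    # Fetch the installer, read it, then run it yourself
    curl -fsSL https://ollama.ai/install.sh -o install.sh
    less install.sh
    sh install.sh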
@IanWootten
@IanWootten 5 months ago
Yes, I've mentioned this in my other videos and in my blog post on this too.
@nmstoker
@nmstoker 5 months ago
@@IanWootten Ah, sorry, hadn't seen that. Anyway, thanks again for the video! I've subscribed to your channel as it looks great 🙂
@markmonroe4154
@markmonroe4154 5 months ago
This is a good start - I bet the Raspberry Pi makers have a Pi 6 in the works with a better GPU to really drive these LLMs.
@IanWootten
@IanWootten 5 months ago
No doubt they will. But the Pi 4 was released 4 years ago, so you might have to wait a while.
@madmax2069
@madmax2069 4 months ago
That's wishful thinking. You might as well try to figure out how to run an ADLINK Pocket AI on a Pi 5.
@GuillermoTs
@GuillermoTs 3 months ago
Is it possible to run on a Raspberry Pi 3?
@IanWootten
@IanWootten 3 months ago
Maybe one of the smaller models, but it'll run a lot slower than here.
@anonymously-rex-cole
@anonymously-rex-cole 3 months ago
Is that realtime? Is that how fast it replies?
@IanWootten
@IanWootten 3 months ago
All the text model responses are in realtime. I've only made edits when using llava since there was a 5 min delay between hitting enter and it responding...
@user-vl4vo2vz4f
@user-vl4vo2vz4f 4 months ago
Please try adding a Coral module to the Pi and see the difference.
@madmax2069
@madmax2069 4 months ago
A Coral module is not suited for this. It lacks the available RAM to really help an LLM run. What you really need is an external GPU, something like one of those ADLINK Pocket AI GPUs to hook up to the system, but it only has 4GB of VRAM.
@galdakaMusic
@galdakaMusic 9 days ago
What about redoing this video with the new RPi AI HAT? Thanks
@IanWootten
@IanWootten 9 days ago
Could do, but I don't think Ollama would be able to leverage it, plus it's not out yet.
@chetana9802
@chetana9802 5 months ago
Now let's try it on a cluster or an Ampere Altra?
@IanWootten
@IanWootten 5 months ago
Happy to give it a try if there's one going spare!
@NicolasSilvaVasault
@NicolasSilvaVasault 1 month ago
That's super impressive even if it takes quite a while to respond. It's a RASPBERRY PI!
@IanWootten
@IanWootten 1 month ago
EXACTLY!
@TreeLuvBurdpu
@TreeLuvBurdpu 4 months ago
What if you put a compute module on it or something?
@IanWootten
@IanWootten 4 months ago
A compute module is a RPi in a slightly different form. So I think it would behave the same.
@AlexanderGriaznov
@AlexanderGriaznov 2 months ago
Am I the only one who noticed tinyllama's response to "why is the sky blue?" was shitty? What the heck, rust causing the blue colour of the sky?
@IanWootten
@IanWootten 2 months ago
Others have mentioned it in the comments too. It is a much smaller model, but there are many others to choose from (albeit possibly slower).
@allurbase
@allurbase 5 months ago
How big was the image? Maybe that affected the response time? Very cool, although I'm not convinced by tinyllama or the speed for a 7B model, but it's still crazy we are getting close. You should try something with more power like a Jetson Nano. Thanks!!
@IanWootten
@IanWootten 5 months ago
Less than 400KB. Might try a Jetson Nano if I get my hands on one.
@zachhoy
@zachhoy 5 months ago
I'm curious why run it on a Pi instead of a proper PC?
@IanWootten
@IanWootten 5 months ago
To satisfy my curiosity - to see whether it's technically possible on such a low powered, cheap machine.
@zachhoy
@zachhoy 5 months ago
Thanks for the genuine response :D Yes, I can see that drive now. @@IanWootten
@TreeLuvBurdpu
@TreeLuvBurdpu 4 months ago
There are lots of videos of people running it on their PC, but if you use it all the time it will hog your PC all the time. There are several reasons you might want a dedicated host.
@marsrocket
@marsrocket 3 months ago
What's the point of running an LLM locally if the responses are going to be nonsense? That blue sky response was ridiculous.
@IanWootten
@IanWootten 3 months ago
The response for that one model/prompt may have been, but there are plenty of others to choose from.
@Tarbard
@Tarbard 5 months ago
I liked tinydolphin better than tinyllama.
@IanWootten
@IanWootten 5 months ago
Not tried it out yet.
@pengain4
@pengain4 2 months ago
I dunno. It seems cheaper to buy an actual second-hand GPU to run Ollama on than to buy an RPi. [Partially] a joke. :)
@IanWootten
@IanWootten 2 months ago
Possibly if you already have a machine. This might work out if you don't. Power consumption is next to nothing on the Pi too.
@blender_wiki
@blender_wiki 5 months ago
Too expensive for what it is. Interesting proof of concept, but absolutely useless and inefficient in a production context.
@arkaprovobhattacharjee8691
@arkaprovobhattacharjee8691 5 months ago
This is so exciting! Can you pair this with a Coral TPU and then check the inference speed? I was wondering if that's possible.
@madmax2069
@madmax2069 4 months ago
The Coral TPU isn't suited for this; it lacks the available RAM to do any good with an LLM. What you'd need is one of those ADLINK Pocket AI GPUs, but it only has 4GB of VRAM.
@arkaprovobhattacharjee8691
@arkaprovobhattacharjee8691 4 months ago
@@madmax2069 makes sense.
@BradleyPitts666
@BradleyPitts666 4 months ago
I have CPU usage at 380% when llama2 is responding. Has anyone else tested this?