
Build your own LLM AI on a Raspberry Pi 

WiseCat
2.1K views

You don't need really expensive hardware to run your own LLM. In today's tutorial, I will be walking you through the process of setting up and running Ollama, an open-source tool for running large language models, with Open-WebUI on a Raspberry Pi (these things are under $100 - seriously). By the end of this video, you will have a fully functional Ollama installation that can be accessed through Open-WebUI from your home LAN, or even through a WiFi network created by the Raspberry Pi itself.
Here's what we'll cover in today's tutorial:
Prerequisites - We'll go over the necessary hardware and software requirements for running Ollama on a Raspberry Pi.
Setting up your Raspberry Pi - The initial setup of your Pi, making sure it's ready for the installation process.
Installing Docker and setting up the WiFi using the Ansible playbook - This step is really easy with Ansible, though you can run the commands manually if you don't have it (see the first sketch after this list).
Downloading and building the Ollama Docker image - This is a single command that makes use of the docker-compose.yaml file in the GitHub repository (second sketch below).
Next, I'll show you how to access it through a web browser, download your first model (tinyllama), and start using the AI (third sketch below).
Accessing Ollama via Open-WebUI - Finally, we'll demonstrate how to access your running Ollama instance using Open-WebUI over the built-in "Pillama-WiFi" network (no internet connection is actually required, though if you connect to the Pi via "Pillama-WiFi" from a phone, you may need to turn off your mobile carrier's data).
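To give you a feel for the Ansible step, here's a rough sketch, run from the machine you control the Pi from. The inventory and playbook file names below are placeholders, not the actual names from the Pillama repository - use the files that ship with the repo:

```bash
# Install Ansible on your workstation (Debian/Ubuntu shown; adjust for your OS)
sudo apt update && sudo apt install -y ansible

# Run the playbook against the Pi over SSH.
# inventory.ini and playbook.yaml are placeholder names.
ansible-playbook -i inventory.ini playbook.yaml --ask-become-pass
```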
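The build step itself is then a one-liner from the cloned repository, where the docker-compose.yaml lives (assuming a Docker install with the compose plugin; on older setups, substitute the standalone docker-compose command):

```bash
# Clone the Pillama repository and enter it
git clone https://github.com/adamjenkins/pillama.git
cd pillama

# Build the images and start the containers in the background
docker compose up -d --build
```

Once the containers are up, Open-WebUI is reachable from a browser on your LAN; the exact port is whatever docker-compose.yaml publishes (commonly 3000 or 8080 - check the file).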
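You can download tinyllama straight from the Open-WebUI model settings, or pull it on the command line. A minimal sketch, assuming the Ollama container is named ollama (check your compose file for the real name):

```bash
# Pull the model inside the running container
docker exec -it ollama ollama pull tinyllama

# Quick sanity check from the Pi itself
docker exec -it ollama ollama run tinyllama "Hello from my Raspberry Pi!"
```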
Please keep in mind that this tutorial assumes some basic familiarity with Linux command line interfaces and SSH. If you need more detailed explanations of any of these concepts, please feel free to leave a comment below, or check out my previous videos on the subject.
Ready to get started? Let's dive right into it! Don't forget to like, share, and subscribe for more exciting content in the future. If you have any questions or encounter any issues during this tutorial, please let us know in the comments below.
Happy learning, and see you in the next video!
(This description was largely written by the very LLM I built - I tweaked it a bit, but if it sounds more YouTube-y, now you know why...)
Links
Ollama ollama.com/
Ansible www.ansible.com/
(Installation guide) docs.ansible.com/ansible/late...
Raspberry Pi Imager www.raspberrypi.com/software/
Pillama GitHub repository github.com/adamjenkins/pillama
The SSH Key Authentication Video (needed to run the Ansible playbook) • SSH Key Authentication...
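For reference, key-based SSH setup (needed before Ansible can reach the Pi) boils down to something like the following; the user and hostname are placeholders for your Pi's login, and the video above has the full walkthrough:

```bash
# Generate a key pair on your workstation (skip if you already have one)
ssh-keygen -t ed25519

# Copy the public key to the Pi (replace user/host with your Pi's login)
ssh-copy-id pi@raspberrypi.local
```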

Published: 10 Jun 2024

Comments: 10
@slevinhyde3212 · 12 days ago
Great video, lots of fun here! Do you think the Pi AI kit could run a bigger model?
@techtonictim · 20 days ago
Great video 👌 full of useful information... thanks 🙏
@Wise-Cat · 15 days ago
Glad it was helpful! Thank you
@rachitrastogi1422 · 20 days ago
I have created my own LLM. How can I deploy it on Google Cloud and use it on a Raspberry Pi? Please tell me.
@Wise-Cat · 15 days ago
Deploying an LLM to a cloud is easy using the docker-compose.yaml file in the Pillama repository, though getting it to work well with the cloud's infrastructure will be very case-by-case, depending on how your particular cloud is set up. Using cloud-based GPUs and the like will yield better results, so I'd suggest looking through your cloud infrastructure provider's documentation. Sorry, I don't have a one-size-fits-all answer for this question.
@galdakaMusic · 26 days ago
tinyllama and Coral or Hat AI (Hailo 8L)??
@Wise-Cat · 25 days ago
I would love to try that out someday; I don't currently have that hardware though. I did see some very interesting videos about it on Jeff Geerling's channel.
@galdakaMusic · 25 days ago
Thanks
@Kyle-Jade · 27 days ago
That is so painfully slow, doesn't look worth it
@Wise-Cat · 27 days ago
It depends on what your goal is. If you want a blazing fast AI, yeah it's not worth it. On the other hand, if you want to learn more about how these things work and perhaps how you can serve your own AI later on more impressive hardware (or on platforms like AWS and/or Azure) then it's totally worth it. Or you could do it to win a beer bet 😉 In our case, we did it simply to show it CAN be done. To demonstrate to people that AI is not beyond them and to thus empower people. This could be the start of a journey for people who otherwise might be too self-doubting to take their first step. That makes it worth it to me. Oh, and it's cute and fun too...