
Self-Hosted LLM Chatbot with Ollama and Open WebUI (No GPU Required) 

Easy Self Host

Explore the power of self-hosted language models with us on Easy Self Host! In this video, we demonstrate how to run Ollama with Open WebUI, creating a private server-based environment similar to ChatGPT. We'll guide you through setting up Ollama and Open WebUI using Docker Compose, delve into the configuration specifics, and show how these tools provide enhanced privacy and control over your data. Whether you're using a modest setup or more powerful hardware, see the performance firsthand. Don't miss out on our insights on potential applications beyond chat, like note summarization in Memos. Subscribe for more self-hosted solutions and find the configuration files on our GitHub, linked below!
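
For reference, here is a minimal Docker Compose sketch of the kind of setup the video walks through. The service names, volume names, and the host port are illustrative assumptions; the actual file used in the video is linked below.

```yaml
# Minimal sketch of an Ollama + Open WebUI stack (CPU-only).
# Service names, volume names, and the host port are assumptions;
# the Compose file actually used in the video is linked below.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama-data:/root/.ollama            # downloaded models persist here
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434  # reach Ollama over the Compose network
    volumes:
      - open-webui-data:/app/backend/data    # chats, users, settings
    ports:
      - "3000:8080"                          # UI at http://<server>:3000
    depends_on:
      - ollama
    restart: unless-stopped

volumes:
  ollama-data:
  open-webui-data:
```

After `docker compose up -d`, a model can be pulled from inside the container, for example `docker compose exec ollama ollama pull llama3`.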
00:04 Introduction
01:07 Tutorial to run Ollama and Open WebUI (Docker Compose)
03:18 Running Docker Compose on the Server
03:38 Start Chatting on Open WebUI
06:01 Integration with Memos (experimental)
🔗 Links:
Docker Compose file for this video: github.com/easyselfhost/self-...
Ollama: ollama.com
Open WebUI: openwebui.com/
My hack on Memos to support LLM: github.com/usememos/memos/com...
Memos video: • Self host your own Not...

Science

Published: May 6, 2024

Comments: 24
@steve-maheshsingh7553 · 2 months ago
This is great! Thanks for making it easy. Easy self-hosting!
@goodcitizen4587 · 3 months ago
Very cool! Please consider more videos where you integrate this into other apps like Home Assistant and others. Thanks!
@easyselfhost · 2 months ago
Yeah, I'm brainstorming on how these local models will work in these apps.
@MsPlams · 1 month ago
Keep it going!!! Great channel ❤
@Edu2pc · 3 months ago
Excellent! Especially using them together with the notes.
@MultiMmsh · 2 months ago
Can you explain the usage of a SoftEther VPN site-to-site configuration, local to VPS?
@nexuslux · 2 months ago
Would you consider doing self-hosting for transcription to notes in Memos? 😊 Like faster-whisper and pyannote (or a local alternative) to meeting minutes, to summarized notes?
@easyselfhost · 2 months ago
I actually run a self-built Whisper API server on my server for some transcription work, like generating captions for my videos. It works pretty well. But to achieve what you describe, there's more work to be done to bring those together.
@AFiB1999 · 2 months ago
Nice tutorial. Could you please tell me how I can add or find the tutorial or Docker version that has the LLM? Thanks
@easyselfhost · 2 months ago
For Memos? I'll start a discussion on their repo. If they agree to have this feature, I can probably do the work. I have a branch on my fork that has the feature, but it's mainly for demo purposes and probably has some bugs. You can find that branch in the video description.
@AFiB1999 · 2 months ago
@easyselfhost Thank you
@naveenpandey9016 · 2 months ago
I have created a chatbot over some PDF documents using Mistral, but when I hit it with multiple queries at the same time, it takes more than 2 to 5 minutes to respond. Can I deploy Mistral with this method, without a GPU, and get faster responses?
@easyselfhost · 1 month ago
Yeah, you can deploy Mistral without a GPU with Ollama. The performance will depend on how fast your CPU and memory are.
@easyselfhost · 2 months ago
Posted a discussion to see if the Memos community wants this feature: github.com/orgs/usememos/discussions/3333
@manprinsen8150 · 2 months ago
Great video! Any links on how to utilize a GPU?
@easyselfhost · 2 months ago
github.com/ollama/ollama/blob/main/docs/gpu.md. If you are using Docker Compose, you might also want to check docs.docker.com/compose/gpu-support/
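
For context, GPU access in Compose is granted with a device reservation. A rough sketch, assuming an NVIDIA card with the NVIDIA Container Toolkit installed on the host (see the two links above for the authoritative details):

```yaml
# Sketch: ollama service with an NVIDIA GPU reservation.
# Assumes the NVIDIA Container Toolkit is installed on the host.
services:
  ollama:
    image: ollama/ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1              # or: all
              capabilities: [gpu]
```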
@loremipsumamet2477 · 2 months ago
Setting up Ollama with a GPU is the hardest part, especially if you have 4 GB of VRAM like me 🥲, so only a limited set of models runs smoothly.
@tanyongsheng4561 · 1 month ago
Hi, may we know what the hardware requirements are to self-host Ollama and Open WebUI with Llama 3?
@easyselfhost · 1 month ago
Here is a great summary: github.com/open-webui/open-webui/discussions/736#discussioncomment-8474297
@y0._. · 2 months ago
Is it possible to set a folder where we place our own PDFs, so the chatbot can answer based on them?
@easyselfhost · 2 months ago
I don't know of any tool that can do this yet. You can upload docs to Open WebUI, though. I wish there were some LLM integration in projects like Paperless.
@y0._. · 2 months ago
@easyselfhost I've researched a bit; it seems to be called "RAG". I think I'm getting close 🤔
@MsPlams · 1 month ago
Can you please do a video on Immich?
@easyselfhost · 1 month ago
There is one already: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-h_inF-ypMls.html