
Getting Started with Ollama and Web UI 

Dan Vega
60K subscribers
19K views

Published: Sep 8, 2024

Comments: 35
@hfislwpa · 1 month ago
2 videos in 1 day? Woah! Thanks
@user-zk1zm6sm2u · 1 month ago
Interesting tutorial with Web UI and Ollama, Thanks!!!
@AleksandarT10 · 1 month ago
Great one Dan! Keep us updated on the AI stuff!
@bause6182 · 1 month ago
Ollama should integrate a feature like Artifacts that allows you to test your HTML/CSS code in a mini webview
@user-ym6tb5xb2v · 24 days ago
How can I connect my local ollama3 with the Web UI? My Web UI couldn't find the locally running ollama3
@MURD3R3D · 7 days ago
same problem
@MURD3R3D · 7 days ago
From the home page of your Web UI (localhost:3000 in your browser), click on your account name in the lower left, then click Settings, then "Models". You can pull llama3.1 by typing it in the "pull" box and clicking the download button. When it completes, close the Web UI and reopen it. Then I had the option to select 3.1 8B from the models list.
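(For anyone who prefers to skip the UI: Ollama also exposes a small REST API, by default on port 11434, and its /api/pull endpoint downloads a model the same way the "pull" box does. A minimal sketch, assuming the default port; the model name is just the example from this thread.)

```python
# Sketch: pull llama3.1 through Ollama's REST API instead of the Web UI
# settings page. Assumes Ollama is running on its default port 11434.
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:11434/api/pull",
    data=json.dumps({"name": "llama3.1"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# /api/pull streams newline-delimited JSON progress objects while it downloads
with urllib.request.urlopen(req) as resp:
    for line in resp:
        print(json.loads(line).get("status", ""))
```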
@user-ym6tb5xb2v · 6 days ago
@MURD3R3D I found that happens due to Docker networking.
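(Context on that Docker networking point: inside the Open WebUI container, "localhost" refers to the container itself, not the host where Ollama is listening, so the Web UI can't see it. A minimal probe sketch, assuming the usual default ports; the candidate URLs are assumptions, not something from the video.)

```python
# Probe the usual base URLs and see which one actually reaches Ollama.
# From inside the Open WebUI container, "localhost" is the container itself,
# so host.docker.internal (or the host's LAN IP) is typically needed instead.
import urllib.request

CANDIDATES = [
    "http://localhost:11434/",             # works on the host itself
    "http://host.docker.internal:11434/",  # common fix from inside a container
]

for base in CANDIDATES:
    try:
        with urllib.request.urlopen(base, timeout=3) as resp:
            # Ollama's root endpoint answers with "Ollama is running"
            print(base, "->", resp.read().decode().strip())
    except OSError as err:
        print(base, "-> unreachable:", err)
```

If only the second URL works, pointing Open WebUI's OLLAMA_BASE_URL at it (or running the container with --network=host on Linux) is the usual remedy.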
@vrynstudios · 29 days ago
A perfect tutorial.
@lwjunior2 · 1 month ago
This is great. Thank you
@je2587 · 21 days ago
Love your terminal, which tools do you use to customize it?
@borntobomb · 1 month ago
Note for 405B: We are releasing multiple versions of the 405B model to accommodate its large size and facilitate multiple deployment options:
- MP16 (Model Parallel 16) is the full version of BF16 weights. These weights can only be served on multiple nodes using pipelined parallel inference. At minimum it would need 2 nodes of 8 GPUs to serve.
- MP8 (Model Parallel 8) is also the full version of BF16 weights, but can be served on a single node with 8 GPUs by using dynamic FP8 (Floating Point 8) quantization. We are providing reference code for it. You can download these weights and experiment with different quantization techniques outside of what we are providing.
- FP8 (Floating Point 8) is a quantized version of the weights. These weights can be served on a single node with 8 GPUs by using static FP8 quantization. We have provided reference code for it as well.
The 405B model requires significant storage and computational resources, occupying approximately 750GB of disk storage space and necessitating two nodes on MP16 for inferencing.
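(A quick back-of-the-envelope sketch to sanity-check those figures; the arithmetic is mine, not from the comment. Weight memory is roughly parameter count times bytes per weight, which is why BF16 needs two 8-GPU nodes while FP8 fits on one.)

```python
# Rough weight-memory arithmetic for the 405B figures quoted above.
params = 405e9  # 405 billion parameters

bf16_gb = params * 2 / 1e9  # BF16 stores 2 bytes per weight
fp8_gb  = params * 1 / 1e9  # FP8 stores 1 byte per weight

print(f"BF16 weights: ~{bf16_gb:.0f} GB")  # ~810 GB: beyond 8x80GB GPUs, hence 2 nodes (MP16)
print(f"FP8 weights:  ~{fp8_gb:.0f} GB")   # ~405 GB: fits a single node of 8x80GB GPUs
```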
@user-br4gt7xu2j · 29 days ago
And what about 70B? How could it be served? Could some version of Llama 3.1 be used by a simple 16-core laptop with an integrated GPU and 32GB of RAM?
@isaac10231 · 5 days ago
When you say "we", do you work for Meta?
@chameleon_bp · 1 month ago
Dan, what are the specs of your local machine?
@zo7lef · 1 month ago
Would you make a video on how to integrate Llama 3 into a WordPress website, making a chatbot or copilot?
@trapez_yt · 25 days ago
Hey, could you make a video on how to edit the login page? I want to make the login page to my liking.
@mochammadrevaldi1790 · 6 days ago
In Ollama, is there an admin dashboard for tuning the model, sir?
@expire5050 · 18 days ago
Finally set up Open WebUI thanks to you. I'd approached it, seen "Docker", and left it on my todo list for weeks/months. I'm running gemma2 2b on my GTX 1060 with 6GB VRAM. Any suggestions on good models for my size?
@NikolaiMhishi · 1 month ago
Bro you the G
@khalildureidy · 1 month ago
Big thanks from Palestine
@ilkou · 1 month ago
❤💚🖤
@elhadjibrahimabalde1234 · 22 days ago
be safe
@kashifmanzoor7949 · 22 days ago
Stay strong
@vikas-jz3tv · 14 days ago
How can we tune a model with custom data?
@DrMacabre · 24 days ago
Hello, any idea how to set keep_alive when running the Windows exe?
@stoicguac9030 · 1 month ago
Is WebUI a replacement for aider?
@elhadjibrahimabalde1234 · 22 days ago
Hello. After installing Open WebUI, I am unable to find Ollama under 'Select a model'. Is this due to a specific configuration? For information, my system is running Ubuntu 24.04.
@vactum0 · 28 days ago
My Ollama running the same model is dead slow, on a laptop with an 11th gen i5, no GPU, and 26GB RAM. Is it because of no dedicated GPU?
@user-km8rs4tj5w · 1 month ago
Thank you, I tried it but it is very slow, running it on a laptop with 16GB RAM!
@jaroslavsedlacek7077 · 1 month ago
Is there an integration for Open WebUI + Spring AI?
@shuangg · 1 month ago
6 months behind everyone else.