
How to Run Ollama Docker FastAPI: Step-by-Step Tutorial for Beginners 

Bitfumes - AI & LLMs
133K subscribers
1.7K views

Published: 12 Sep 2024

Comments: 18
@Bitfumes · 1 month ago
Please subscribe to the Bitfumes channel to level up your coding skills. Do follow us on other social platforms:
Hindi Channel → www.youtube.com/@code-jugaad
LinkedIn → www.linkedin.com/in/sarthaksavvy/
Instagram → instagram.com/sarthaksavvy/
X → x.com/sarthaksavvy
Telegram → t.me/+BwLR8bKD6iw5MWU1
Github → github.com/sarthaksavvy
Newsletter → bitfumes.com/newsletters
@ano2028 · 1 month ago
Thanks a lot! The whole tutorial is really easy to follow. I had been trying to dockerize my FastAPI container and Ollama container and get them to interact with each other for the last two days; your video helped me a lot.
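For anyone else stuck on the same container-to-container issue, here is a minimal Compose sketch that puts both services on one network. The service names, build context, and environment variable are illustrative assumptions, not necessarily what the video uses:

```yaml
# docker-compose.yml -- minimal sketch, names are illustrative
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama     # persist downloaded models across restarts

  api:
    build: .                     # assumes your FastAPI Dockerfile is in this directory
    ports:
      - "8000:8000"
    environment:
      # reach Ollama by its Compose service name, not localhost
      OLLAMA_BASE_URL: http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama:
```

With Compose, both containers share a default network, so the FastAPI container can resolve the hostname `ollama` directly.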
@Bitfumes · 1 month ago
Wow, that's nice to know, and thanks for this amazing comment. Please subscribe to my newsletter: bitfumes.com/newsletters
@NikolaosPapathanasiou · 1 month ago
Hey, nice video man! Since Ollama is running in a Docker container, is it using the GPU?
@Bitfumes · 1 month ago
Not in my case, it uses the CPU. But you need to specify the runtime if you want to use a GPU in Docker, so yes, you can.
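For reference, the Ollama image can use an NVIDIA GPU when the NVIDIA Container Toolkit is installed on the host; the documented pattern is roughly:

```sh
# NVIDIA GPUs only: requires the NVIDIA Container Toolkit on the host.
# (Docker on Apple Silicon cannot pass the GPU through to containers.)
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama ollama/ollama
```

Without `--gpus=all` (or an equivalent runtime setting), the container falls back to CPU inference.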
@mochammadrevaldi1790 · 1 month ago
Very helpful, thanks man!
@Bitfumes · 1 month ago
Cool, cool. Please subscribe to my newsletter: bitfumes.com/newsletters
@shreyarajpal4212 · 4 days ago
So I can directly host this as a website then, right?
@Bitfumes · 4 days ago
Yes and no. You obviously can, but it's not recommended as-is. You can, though, use AWS ECS to set up Docker and then deploy the same application.
@karthikb.s.k.4486 · 1 month ago
Nice. May I know how you are getting suggestions in VS Code? When you type a Docker command, the suggestions appear in VS Code. What setting enables this? Please let me know.
@Bitfumes · 1 month ago
Thanks bhai. BTW, I am using GitHub Copilot, so maybe that's why I get suggestions.
@mat15rodrig · 4 days ago
Thanks for the video!! Do you know how to resolve this error? ERROR:root:Error during query processing: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/chat (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))
@Bitfumes · 4 days ago
Make sure your Ollama is running properly, and you must use 0.0.0.0 for the Docker host.
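To spell that out: inside the FastAPI container, `localhost` is the container itself, so the request never reaches the Ollama container. A minimal client-side sketch, assuming the `ollama` service name and `OLLAMA_BASE_URL` variable from the Compose sketch above (both assumed names) and an assumed model tag:

```python
import os
import requests

# Inside Docker, target the Ollama container by service name, not localhost.
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")

def chat(prompt: str) -> str:
    # POST /api/chat is Ollama's chat endpoint; "llama3" is an assumed model tag.
    resp = requests.post(
        f"{OLLAMA_BASE_URL}/api/chat",
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]
```

(The 0.0.0.0 advice applies on the server side: setting OLLAMA_HOST=0.0.0.0 makes Ollama listen on all interfaces rather than loopback only, which containerized clients need.)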
@iainmclean6095 · 14 days ago
Just so you know, this does not work on Apple Silicon.
@Bitfumes · 14 days ago
How much RAM do you have in your Mac?
@iainmclean6095 · 14 days ago
@@Bitfumes 128 GB, M3 Max
@iainmclean6095 · 14 days ago
@@Bitfumes I have 128 GB of RAM and an M3 Max. I think the error is related to Docker and Ollama running on Apple Silicon.