
Ollama + Phi3 + Python - run large language models locally like a pro! 

DevXplaining

Large Language Models are popular these days. In this video I'll cover what Ollama is and how you can use it to pull and run local LLM models such as Phi3, Mistral, or Llama. We'll also cover the setup, do a bit of model customization, and finish with a little Python programming. I'll be doing a follow-up video on Ollama + RAG if there is enough interest.
Let me know how you like this video, and any questions are always welcome! And of course, click the like-button if you like. And if you like a lot, please subscribe to my channel for more, and click that bell icon to get notifications of new content.
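The model customization mentioned above is done in Ollama through a Modelfile. A minimal sketch (the parameter value and system prompt here are illustrative, not from the video):

```
FROM phi3
PARAMETER temperature 0.7
SYSTEM You are a concise coding assistant.
```

You would then build and run the customized model with `ollama create my-phi3 -f Modelfile` followed by `ollama run my-phi3` (after pulling the base model with `ollama pull phi3`).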
Links from the video:
- ollama.com/
- github.com/ollama/ollama
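The Python part can be sketched against Ollama's local REST API, which listens on port 11434 by default. This is a minimal standard-library example, assuming an Ollama server is running locally with the phi3 model already pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for single-turn generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False requests one complete JSON object instead of
    newline-delimited streaming chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the model's reply."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("phi3", "Explain Ollama in one sentence.")` returns the generated text; the official `ollama` Python package wraps this same API if you prefer not to hand-roll requests.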

Published: 19 May 2024

Comments: 1
@suraj_bini (2 months ago):
Documents Q&A mini project with Phi3 mini