
Ollama - Run LLMs Locally - Gemma, LLAMA 3 | Getting Started | Local LLMs 

Siddhardhan
123K subscribers
925 views
Published: 29 Aug 2024

Comments: 8
@devanshgupta2749 2 months ago
Bro, please complete the deep learning course ASAP! It will be very helpful.
@user-Rania-n7m 2 months ago
Excellent video, brother! ❤ How can I create a bot that takes PDFs as input and answers our questions from those PDFs, but locally, as you did in this video?
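For local PDF question answering with Ollama, a minimal sketch in Python looks like the following. It assumes the ollama and pypdf packages, a running Ollama server, and locally pulled nomic-embed-text and gemma models; the file name and chunking strategy are illustrative, not from the video.

# Minimal local PDF Q&A sketch: embed PDF chunks with a local embedding model,
# retrieve the chunk most similar to the question, and ask a local chat model.
import ollama                # assumes the `ollama` Python package and a running Ollama server
from pypdf import PdfReader  # assumes `pypdf` is installed

def load_chunks(pdf_path, chunk_size=1000):
    """Read a PDF and split its text into fixed-size character chunks."""
    text = "".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def embed(text):
    """Embed text with a local embedding model (nomic-embed-text, as an example)."""
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(pdf_path, question, chat_model="gemma"):
    """Answer a question using only the most relevant chunk of the PDF."""
    chunks = load_chunks(pdf_path)
    chunk_vecs = [embed(c) for c in chunks]
    q_vec = embed(question)
    best = max(range(len(chunks)), key=lambda i: cosine(chunk_vecs[i], q_vec))
    prompt = f"Answer using only this context:\n{chunks[best]}\n\nQuestion: {question}"
    resp = ollama.chat(model=chat_model, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

if __name__ == "__main__":
    # "document.pdf" is a hypothetical file name.
    print(answer("document.pdf", "What is this document about?"))

This keeps everything on the local machine; swapping in a vector store would only change how the chunk embeddings are stored and searched.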
@share1231000 1 month ago
Thanks for all your great training videos. However, I don't understand the purpose of the first three lines in the messages, the ones with the system, user and assistant roles. Are these lines required for this Gemma LLM? Thanks
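On the roles question: the system/user/assistant lines are the standard chat-message format accepted by Ollama's chat endpoint rather than something Gemma specifically requires; the system message sets instructions, and earlier user/assistant messages supply conversation history or a small worked example. A single user message is enough for a one-off question. A minimal sketch in Python, assuming the ollama package and a locally pulled gemma model:

import ollama  # assumes the `ollama` Python package and a running Ollama server

# - "system": optional instructions that steer how the model answers
# - "user" / "assistant": prior turns of the conversation (or a few-shot example)
# - the final "user" message is the question being asked now
response = ollama.chat(
    model="gemma",  # any locally pulled chat model works here
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is Ollama?"},
        {"role": "assistant", "content": "Ollama is a tool for running LLMs locally."},
        {"role": "user", "content": "Which models does it support?"},
    ],
)
print(response["message"]["content"])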
@naveennaveen-rx5uw 2 months ago
Bro, for how many years have you been learning in this field?
@sujayjagadale6876 1 month ago
I am getting slow responses from Ollama with the model llama3:instruct. How can I improve response speed? It takes around 2-3 minutes to return a response.
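On the slow-response question: total generation time is mostly governed by hardware and model size (a smaller or more heavily quantized model is what actually shortens it), but streaming at least shows tokens as they are produced instead of after a multi-minute wait. A minimal sketch, assuming the ollama Python package:

import ollama  # assumes the `ollama` Python package and a running Ollama server

# Stream the reply chunk by chunk so output starts appearing immediately,
# instead of waiting for the entire response to finish generating.
stream = ollama.chat(
    model="llama3:instruct",
    messages=[{"role": "user", "content": "Explain what Ollama does in two sentences."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()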
@aysham3664 2 months ago
Why did you change to Linux OS?
@Siddhardhan 2 months ago
It's much better for working with Python; my PC is more efficient now. Also, most deployment servers in companies run Linux, so it's better to get used to it.
@mohitgoswami1168 2 months ago
I need your help; where can I contact you?