
Get up and running with local ChatGPT/gLLMs with Ollama in R 

Johannes B. Gruber
1.9K views · Published 23 Oct 2024

Comments: 5
@danielsaldivia2570 · 6 months ago
Many thanks for this tutorial, Johannes!
@wesleylohoi · 4 months ago
Hi Johannes, what a fantastic video! I am also an R lover, and I have a question after watching your video: is it possible to tune the LLM in R as well, for example by loading extra information into it, like academic papers, reports, etc.? Cheers, Wesley
@JBGruber · 4 months ago
I started working on this. What you are talking about is called RAG (retrieval-augmented generation), and it's definitely possible in R! I will publish a tutorial on it as soon as it's ready.
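
To sketch the idea (an illustration only, not the forthcoming tutorial): a minimal RAG flow in R might look like the following, assuming the rollama package, a running Ollama server, and that embed_text() returns one numeric embedding row per input text.

library(rollama)

# Toy document collection standing in for papers/reports (hypothetical)
docs <- c(
  "Ollama serves local LLMs over an HTTP API.",
  "The rollama package talks to that API from R.",
  "RAG retrieves relevant text and adds it to the prompt."
)

# Embed the documents and the question
doc_emb   <- as.matrix(embed_text(docs))
question  <- "How does R talk to Ollama?"
query_emb <- as.numeric(as.matrix(embed_text(question)))

# Rank documents by cosine similarity to the question
cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
sims   <- apply(doc_emb, 1, cosine, b = query_emb)
best   <- docs[which.max(sims)]

# Generate an answer grounded in the retrieved context
query(paste0("Context: ", best, "\n\nQuestion: ", question))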
@CanDoSo_org · 7 months ago
Hi Johannes, can we deploy it locally without an Nvidia GPU, say on a MacBook Pro?
@JBGruber · 7 months ago
Yes! At 14:50 I show how you can disable the GPU dependency. It will be much slower, though, as I also explain.
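
As a rough illustration of the CPU-only route (a sketch, not necessarily the exact setup shown in the video): Ollama's documented CPU-only Docker invocation simply omits the GPU flag, and the R side stays the same.

# Start Ollama without GPU access (Ollama's documented CPU-only command):
#   docker run -d -v ollama:/root/.ollama -p 11434:11434 ollama/ollama
# The GPU variant adds --gpus=all; leaving it off disables the GPU dependency.

library(rollama)

# rollama defaults to the local server; set it explicitly for clarity
options(rollama_server = "http://localhost:11434")

ping_ollama()                    # check that the server is reachable
query("Why is the sky blue?")    # works on CPU, just noticeably slower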