
Using Local Large Language Models in Semantic Kernel 

Will Velida
4.6K subscribers
1.4K views

Published: 24 Aug 2024
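The video covers pointing Semantic Kernel at a locally hosted model. A minimal sketch of the idea, assuming an Ollama server on its default port (11434) exposing the `/api/chat` endpoint; the model name and prompt are illustrative, and the request body is only constructed here, not actually sent:

```python
import json

# Assumed default address of a local Ollama server (illustrative, not from the video).
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, user_message: str) -> dict:
    """Construct the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # ask for a single complete response, not a token stream
    }

request = build_chat_request("phi3", "Summarise what Semantic Kernel does.")
print(json.dumps(request, indent=2))
```

From here, a Semantic Kernel connector (or any OpenAI-compatible client) would POST this body to the local endpoint instead of a hosted API, which is the whole point of running the model locally.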

Comments: 10
@justinyuen1807 · 1 month ago
Would love to see how this works as well with the Ollama Embeddings API + Semantic Kernel Memory. ❤
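The embeddings-plus-memory idea asked about here boils down to: embed each stored text, embed the query, and rank stored texts by cosine similarity. A sketch of that flow, where `embed()` is a hypothetical offline stand-in for a call to Ollama's embeddings endpoint (deterministic hash-derived vectors, so the example runs without a server):

```python
import hashlib
import math

def embed(text: str, dim: int = 8) -> list[float]:
    # Stand-in for an Ollama embeddings call: deterministic pseudo-embeddings
    # derived from a SHA-256 digest, so this sketch runs offline.
    digest = hashlib.sha256(text.lower().encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A toy "memory" store: each text mapped to its embedding.
memory = {t: embed(t) for t in ["Semantic Kernel is an SDK",
                                "Ollama runs models locally"]}

def search(query: str) -> str:
    """Return the stored text most similar to the query."""
    q = embed(query)
    return max(memory, key=lambda t: cosine(q, memory[t]))
```

With a real embedding model the vectors capture meaning, so semantically related queries rank the right memory highest; the hash-based stand-in only preserves the mechanics, not the semantics.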
@skaus123 · 1 month ago
Do you think that, performance-wise, Ollama is better than LM Studio? LM Studio has a nice UI, but it looks like it's further from the metal.
@CecilPhillip · 1 month ago
Curious to hear if anyone has been able to get local models working with automatic function calling
@florimmaxhuni4718 · 1 month ago
Same, would like to see function calling with local LLMs.
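Automatic function calling, asked about in the two comments above, generally means: send tool descriptions with the request, parse a tool call out of the model's reply, invoke the matching local function, and feed the result back. A sketch of the dispatch step, with a hand-written JSON reply standing in for a real local model (which may or may not emit tool calls reliably; function and field names here are illustrative):

```python
import json

# Local functions the model is allowed to call (hypothetical stub).
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(model_reply: str) -> str:
    """Parse a tool call from the model's JSON reply and invoke the named function."""
    call = json.loads(model_reply)
    func = TOOLS[call["name"]]          # look up the requested local function
    return func(**call["arguments"])    # invoke it with the model-supplied arguments

# Simulated reply from a local model that supports tool calling.
reply = '{"name": "get_weather", "arguments": {"city": "Melbourne"}}'
result = dispatch(reply)
```

The hard part with local models is the first step: smaller models often fail to produce well-formed tool-call JSON consistently, which is likely why the commenters are asking.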
@vivekkaushik9508 · 1 month ago
Ayyy it's Cecil from Microsoft. Didn't expect you here. What a small world.
@CecilPhillip · 1 month ago
@vivekkaushik9508 Big fan of the channel. Also left Microsoft a while ago 🙂
@vivekkaushik9508 · 1 month ago
@CecilPhillip 😲 Sorry, I didn't know. I hope everything is well.
@CecilPhillip · 1 month ago
@vivekkaushik9508 Nothing to be sorry about. It's all good. Still a supporter of a lot of the work going on there.
@TheDemoded · 1 month ago
What hardware did you use?