
Use Ollama to test Phi 3 on your local PC 

Bright Ideas Agency
4.3K subscribers
1.3K views

Want to learn more about Copilot for Microsoft 365 extensibility? Register for Nick's new course "Get to know Copilot for Microsoft 365 Extensibility" being held in June and July. Find out more here - bit.ly/3UwP8w1
Need an AI plan for your business? Check out Nick's book, "Who's in the Copilot's Seat?" - bit.ly/49XbRWs
And if you need more personalized help or advice, check out new engagement options that are available too - bit.ly/4bfk5dA
Microsoft's new small language model (SLM), Phi 3, is so small you can run it locally on your device. So, in this video, let's learn how to do that, and why you might want a small language model running locally. We'll install Ollama, download the Phi 3 model, run it through its paces, and talk about why you might want this option.
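The workflow described above can be sketched with Ollama's CLI. The commands below assume Ollama is already installed and its background service is running; the prompt text is purely illustrative:

```shell
# Download the Phi 3 model weights from the Ollama registry
ollama pull phi3

# Confirm the model is available locally
ollama list

# Start an interactive chat session with Phi 3 (type /bye to exit)
ollama run phi3

# Or pass a single prompt non-interactively
ollama run phi3 "Explain in two sentences what a small language model is."
```

Because everything runs locally, no prompt or response leaves your machine once the model has been downloaded.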
Sign up to receive Digital Spotlight, the Bright Ideas Agency monthly mailing mailchi.mp/7f5ac2e809a5/brigh...
Check out our website at www.brightideasagency.com
Connect with Nick on LinkedIn at / nickdc or the Bright Ideas Agency page at / brightideasagency
Links:
Ollama - ollama.com/
Phi 3 announcement - azure.microsoft.com/en-us/blo...
Chapters:
0:00 Introduction
1:33 How to get Phi 3
3:02 Testing Phi 3
3:45 What is the point of using this locally?
Apart from publicly accessible information, all user data or other related information shared in this video is created for demonstration purposes only. User accounts, passwords, or other data used as part of any demonstrations shown in this video have been created specifically for that purpose and are not any individual or company's private information or data.
Bright Ideas Agency is an Ohio Limited Liability Company. The content of this video is for informational and entertainment purposes only. No guarantees are given in connection with the information shared and you should seek independent technical or business advice before implementing anything you see in this video. If you want to hire us for your project, visit www.brightideasagency.com and get in touch.

Science

Published: 8 Jul 2024

Comments: 11
@DavidWhite-ci2tc · 2 months ago
Hi Nick, I have experimented with local access to LLMs. I have a PC with a GPU and have set up LM Studio and Docker to run models. My aim was to ingest my own documents and enable summarisation etc., with reasonable success once I worked out how to get it to read my own docs in Dropbox. I'd appreciate more videos on LLMs on PC 🙏
@brightideasagency · 2 months ago
Thanks for watching and for your feedback on this. This is certainly an area I find interesting.
@chillzwinter · 2 months ago
Great, so now we can install fully trained, uncensored AI on our local machines and ask it any illegal question, such as how to build a weapon of mass destruction. Isn't this exactly what AI developers said they would protect us from being able to do?
@brightideasagency · 2 months ago
This is an interesting point of view. This is certainly not the first open source model, but I think the basic point, that once you have this running locally the makers no longer have any control, is a sound one. I want to explore the implications of using this a little more before commenting specifically on it.
@vulcan4d · 2 months ago
This runs surprisingly well locally
@brightideasagency · 2 months ago
Thanks for watching. I agree.
@debarghyamaity9808 · 2 months ago
Hi Nick, great video! Just had a question: what are the specs of your local PC? Does it have a GPU? I'm asking to understand the throughput of this model using CPU only.
@brightideasagency · 2 months ago
Thanks for watching. The demo for this video was run on a 12600 with a 6700xt GPU, but in testing it was the CPU rather than GPU that seemed to be under load during use. I intend to play around with this setup a little more to understand it a bit better.
@DavidWhite-ci2tc · 2 months ago
I ran Phi 3 on a non-GPU machine to see if it would work OK, and it was fine
@debarghyamaity9808 · 2 months ago
@@DavidWhite-ci2tc what was the token generation rate? You can check that using the --verbose flag
@DavidWhite-ci2tc · 2 months ago
@@debarghyamaity9808 as I run Phi 3 in Command Prompt I could not get the verbose flag to work; I asked the model what the generation rate was and it provided a Python script, sorry, not very helpful 😊
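For readers following the thread above: the `--verbose` flag is passed to `ollama run` itself, not typed inside the chat session, which may be why it appeared not to work. A sketch of the usage (the exact statistics lines vary by Ollama version):

```shell
# Run a single prompt and print timing statistics after the response.
# --verbose makes Ollama report stats such as total duration,
# prompt eval rate, and eval rate (tokens per second).
ollama run phi3 --verbose "Write a haiku about local AI."
```

The "eval rate" line is the generation throughput the commenter was asking about, and comparing it between CPU-only and GPU machines is a quick way to see what hardware the model is actually using.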