
Try Microsoft's Phi-2 in Colab Notebook 

AI Anytime
32K subscribers
7K views

Published: Oct 30, 2024

Comments: 26
@TomBuitelaar-v1b · 9 months ago
Very insightful test. Inspired by this, I also asked for the list of words from OpenHermes 2.5 7B and Dolphin 2.6 Mixtral 8x7B, with the same results. Brings these smaller LLMs back to earth. ChatGPT 3.5 was OK. Thanks
@Jeganbaskaran · 10 months ago
Thanks for the detailed, informative video. Recently many people have been talking about the benchmark results of new LLMs, but it would be worth discussing real architectural design problems. Many videos explain the basics, which is genuinely informative, but an enterprise won't be playing with 2 PDFs and 1 GB of data; it will be way beyond that. It would be good if you covered those areas as well. Thanks again.
@malleswararaomaguluri6344 · 9 months ago
Hi, how can we train Phi-2 on our own documents, locally and offline? Please make a video on this.
@ChandanKH-kb3xc · 9 months ago
Yes, please make a video on it
@ramp2011 · 10 months ago
Great video. Is it possible to fine-tune this model? Thanks
@henkhbit5748 · 10 months ago
Bad results indeed. Thanks for the test. 👍
@lif-cc · 10 months ago
Thanks! I wonder how the code is autocompleted?
@AIAnytime · 10 months ago
Google Colab now has generative AI capabilities built into the notebook. It generates code.
@lif-cc · 10 months ago
Great! @AIAnytime
@eeelllfff.-. · 10 months ago
What should I do to get just the correct answer, without the model generating its own example questions?
@757beko · 10 months ago
You can create your own prompt template using LangChain's PromptTemplate. Check the article "Microsoft PHI-2 + Hugging Face + Langchain = Super Tiny Chatbot" (you can lower the temperature and max length initially for faster testing).
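A minimal sketch of what the comment above suggests: a reusable prompt template in the "Instruct: … Output:" format that the Phi-2 model card recommends, plus conservative generation settings for faster testing. Plain `str.format` is used instead of LangChain's `PromptTemplate` to keep the sketch dependency-free; the template wording and all parameter values are assumptions, not the commenter's exact setup.

```python
# Assumed prompt format for Phi-2's QA style; LangChain's PromptTemplate
# wraps the same idea with input_variables=["question"].
TEMPLATE = "Instruct: {question}\nOutput:"

def build_prompt(question: str) -> str:
    """Fill in the template so the model answers directly."""
    return TEMPLATE.format(question=question)

# Settings one might pass to a transformers text-generation pipeline
# (values are illustrative: short output and low temperature = faster tests):
gen_kwargs = {"max_new_tokens": 64, "temperature": 0.2, "do_sample": True}

prompt = build_prompt("Give me a list of 5 words, each 5 letters long.")
print(prompt)
```

The same template can then be fed to whatever generation backend you use; only the `{question}` slot changes between runs.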
@onoff5604 · 10 months ago
Many thanks for the video and Colab, good Sir! Note and question: I put in prompt = """Give me a list of 13 words, each of which are 5 letters long.""" and got a decent answer at the start, but then strange output after that. Then I tried prompt = """If five people each give you one box, how many boxes do you have?""" and got back a fragment of a Python function which looked decent for giving the answer, but then a series of random other Python functions. Question: why is the initially correct (in my case) output followed by nonsense?
@erichocean8746 · 9 months ago
Because the model is not "instruct"-trained, it doesn't know when to stop.
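A common workaround for the behavior described above: since the base model keeps generating past a correct answer, cut the raw completion at the first stop marker yourself. This is a hedged sketch; the choice of stop strings (the "Instruct:"/"Question:" headers Phi-2 tends to hallucinate next) is an assumption.

```python
def truncate_at_stop(completion, stop_strings=("\nInstruct:", "\nQuestion:")):
    """Return the completion up to the earliest stop marker, if any."""
    cut = len(completion)
    for s in stop_strings:
        idx = completion.find(s)
        if idx != -1:
            cut = min(cut, idx)  # keep only text before the first marker
    return completion[:cut].rstrip()

# Illustrative raw output: a good answer followed by a hallucinated new turn.
raw = "apple, bread, chair\nInstruct: Write a poem\nOutput: Roses are red..."
print(truncate_at_stop(raw))  # -> apple, bread, chair
```

Hugging Face `generate()` also accepts an `eos_token_id`, but with base models a post-hoc string cut like this is often the simplest fix.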
@anonion747 · 10 months ago
Can we run this locally? The question might be dumb because I'm new to this AI stuff and learning the basics.
@AIAnytime · 10 months ago
Yes, you can. What's the config of your machine?
@anonion747 · 10 months ago
@AIAnytime Intel 12500H, 16 GB RAM; the GPU is Iris Xe (ASUS Vivobook S15)
@danpizzytm4157 · 8 months ago
You definitely can, lol
@danpizzytm4157 · 8 months ago
Sorry if I'm late, but I think you can try koboldcpp with a GGUF Phi-2 from TheBloke
@SonGoku-pc7jl · 10 months ago
Thanks!
@user4-j1w · 10 months ago
Thanks, bro
@AIAnytime · 10 months ago
No problem
@mj_cta · 8 months ago
I was hopeful about this model when it was released, but the base model is not even worth downloading.
@gamefun2525 · 4 months ago
These SLMs are good for nothing but extremely basic tasks.