
Demo: Rapid prototyping with Gemma and Llama.cpp 

Google for Developers
2.4M subscribers
65K views

Learn how to run Gemma locally on your laptop using Llama.cpp and quantized models.
Watch more videos of Gemma Developer Day 2024 → goo.gle/440EAIV
Subscribe to Google for Developers → goo.gle/developers
#Gemma #GemmaDeveloperDay
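As a concrete sketch of what "running Gemma locally with quantized models" looks like (none of this is from the video itself: the GGUF filename is hypothetical, and the commands are left commented because they need a multi-gigabyte model file on disk):

```shell
# Hypothetical 4-bit Gemma build; any quantized .gguf file works in its place.
MODEL=gemma-7b-it.Q4_K_M.gguf

# Interactive chat (the llama.cpp binary was `main` in early 2024; newer
# releases ship it as `llama-cli`):
#   ./main -m "$MODEL" --ctx-size 2048 -p "Write a haiku about laptops."

# Or expose a local HTTP server (now `llama-server`):
#   ./server -m "$MODEL" --port 8080
```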

Science

Published: 31 Mar 2024
Comments: 57
@ayoubachak2154 · 3 months ago
I've used Gemma for a benchmark in a research project I'm working on, where I compared human results against AI. Gemma was the closest after BLOOM 176B, followed by models like Mistral Instruct 7B and Llama 34B. Even the 2B version did pretty well. Great work, team 👏🏻
@polish4932 · 2 months ago
Hi mate, if you'd like to compare different models on the same question, you can do so on Wordware. Highly recommend it! ;)
@ayoubachak2154 · 2 months ago
@polish4932 thank you
@banzai316 · 3 months ago
Very cool, thank you! I like this format with demos. We are developers!
@ser1ification · 3 months ago
Thanks for the demo!
@kevinkawchak · 1 month ago
Thank you for the discussion.
@arpitkumar4525 · 3 months ago
Really cool and simple to understand.
@flynnmc9748 · 3 months ago
This is a fantastic format for a talk, insightful and engaging for a viewer!!!
@GoogleDevelopers · 3 months ago
Glad you enjoyed this video! 😎
@user-eh7uo8hw2v · 3 months ago
0:21 🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉
@judevector · 3 months ago
Wow, this is so cool 😎, developers changing the world.
@forrestfeng1098 · 27 days ago
Like it, very good sharing.
@zencephalon · 3 months ago
Good demo, nice tooling suggestions out of this.
@cho7official55 · 3 months ago
Cool demo, I'll try it.
@thesimplicitylifestyle · 1 month ago
I was looking for this! Thanks! 😎🤖
@TheOrator_Ese · 2 months ago
Very nice 👌 cool demo.
@arpitkumar4525 · 3 months ago
What are the minimum system requirements for running a model locally?
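A rough answer to the question above (not from the video): RAM for a local GGUF model is approximately parameter count times bits per weight, plus some runtime overhead. A back-of-the-envelope sketch in Python, treating "7B" as the nominal parameter count and ~4.5 effective bits/weight for a Q4_K_M quantization (both assumptions):

```python
def gguf_ram_estimate_gb(params_billion, bits_per_weight, overhead_gb=1.0):
    """Rough RAM to load a quantized model: weight bytes plus runtime/KV-cache slack."""
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes -> GB
    return weights_gb + overhead_gb

# Gemma 7B at Q4_K_M (~4.5 effective bits/weight, an assumption):
print(round(gguf_ram_estimate_gb(7, 4.5), 1))   # prints 4.9
# The same model unquantized at fp16:
print(round(gguf_ram_estimate_gb(7, 16), 1))    # prints 15.0
```

Which is why quantized builds fit on ordinary laptops while fp16 weights need ~16 GB, matching the RAM figures other commenters mention.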
@takudzwamakusha5941 · 3 months ago
This is so cool.
@tonydevelopingstuff · 2 months ago
Very nice!!!!
@zoomatic293 · 3 months ago
This is so cool :)
@voidan · 3 months ago
How do you connect LM Studio to llama.cpp? You used a preset, which was probably custom.
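For what it's worth (not detailed in the video): both LM Studio and llama.cpp's server example expose an OpenAI-style HTTP API on localhost, so a client doesn't care which is behind it. A stdlib-only Python sketch; the port 1234 default and the model name are assumptions:

```python
import json
from urllib import request

def chat_request(prompt, model="gemma-7b-it", temperature=0.2):
    """Build an OpenAI-style chat payload; both LM Studio and llama.cpp's
    server example accept this shape."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt, base="http://localhost:1234/v1"):
    """POST the payload to a local server (1234 is LM Studio's default port)."""
    body = json.dumps(chat_request(prompt)).encode()
    req = request.Request(base + "/chat/completions", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage, once a model is loaded and the local server is running:
#   print(ask("Explain quantization in one sentence."))
```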
@johnkost2514 · 3 months ago
Wrapped in the llamafile runtime it is an even better single file .. oh yes!
@KuldeepSingh-in6js · 3 months ago
Cool demo.
@Daniel-zl7wf · 3 months ago
At 9:03, Gemma shows some solid satire.
@ChrisTrotter-oj9du · 2 months ago
Good, thank you.
@digisignD · 3 months ago
Cool. Will definitely use this soon.
@parisneto · 3 months ago
The CODE would be awesome, as well as knowing the SPEC of the notebook, since it's easy to spend sub-$1k or $5k+ at the Apple Store depending on so many factors…
@MacGuffin1 · 3 months ago
Great choice of demo app!!
@monamibob · 2 months ago
Very interesting demo! What kind of extra work would be required to run this without LM Studio? Does llama.cpp contain the necessary functions to load models as a server you can query?
@airhead2741 · 3 months ago
Is this meant to be super accessible? If I have an APU on a laptop, with no GPU or NPU, can I expect it to run fairly well? Also, any considerations for a lighter yet usable model?
@erickcarrasco1938 · 3 months ago
I tried that on an old APU: very slow generation, but the same result.
@user-vq8on7dh1y · 3 months ago
Nah, Gemma is just a parrot. It was released for fine-tuning, i.e., research purposes.
@A032798 · 3 months ago
How about a Windows environment? Is LM Studio/Ollama a better choice?
@MyEthan1998 · 2 months ago
If anyone hits a "network error: self signed certificate" error on Mac: close the app and, from a terminal, run NODE_TLS_REJECT_UNAUTHORIZED=0 open -a "LM Studio". That reopens the app and the error should go away. I have no idea where else to put this info sooooo...
@bonadio60 · 3 months ago
Very nice, but what are your computer's specs? Memory and chip?
@darthvader4899 · 3 months ago
Probably an M3 Max with 128 GB.
@JJN631 · 3 months ago
Gemma 7B can run on an RTX 4060 laptop.
@some1rational · 2 months ago
Has anyone else tried this? I followed it exactly, with LM Studio and the exact model and prompt, but I consistently get atrocious outputs; the Gemma model just produces gibberish or incorrectly formatted JSON. I wish there were more details on the presets used.
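One common cause of "incorrectly formatted JSON" from small local models is extra prose or markdown fences wrapped around the object; a defensive parse often recovers it. A small sketch (the sample output string below is made up):

```python
import json
import re

def extract_json(text):
    """Pull the first JSON object out of model output, tolerating prose and fences."""
    text = re.sub(r"```(?:json)?", "", text)            # drop markdown code fences
    match = re.search(r"\{.*\}", text, flags=re.DOTALL)  # grab the outermost braces
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

raw = 'Sure! Here is the data:\n```json\n{"name": "Gemma", "size": "7B"}\n```'
print(extract_json(raw))  # {'name': 'Gemma', 'size': '7B'}
```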
@indylawi5021 · 3 months ago
Very cool demo 👍. Any chance we can get the source code? 😀
@awakenwithoutcoffee · 3 months ago
Where can we learn to set this up?
@dtmdota6181 · 3 months ago
Anyone notice the RAM usage of 16.68 GB? What was that?
@tandaramandaraba · 3 months ago
wow
@svenkoesling · 3 months ago
Just my two cents: 1. No explanation of how to connect LM Studio to llama.cpp. 2. Newest hardware required; at least it doesn't work on my M1 with eight performance cores and 32 GB RAM.
@andreawijayakusuma6008 · 10 days ago
Does Gemma need a GPU? I want to try this model, but I don't want to use a GPU.
@deeplearningpartnership · 3 months ago
Awesome.
@yubrshen · 3 months ago
What are the required hardware specs?
@learnwithdmitri · 3 months ago
Damn, it's using 15 GB of RAM. I have an 8 GB M1, I don't think it will work for me…
@lorenzo9196 · 3 months ago
You can download a quantized version: 8, or maybe 4-5 bits.
@learnwithdmitri · 3 months ago
@lorenzo9196 okay, I will try.
@nayzawminnaing2562 · 3 months ago
That's a lot of RAM to run this, for me.
@devagarwal3250 · 3 months ago
Please provide the source code as well.
@AIPeter-dd9hr · 3 months ago
A game using LM Studio, interesting.
@emmanuelokorafor1705 · 3 months ago
It's cool now, but what if each application starts deploying local models? It would turn our phones into what data centers were meant for, thereby reducing costs for large corporations: trading a few megabytes for faster and more expensive chips.
@cmoncmona959 · 3 months ago
Please elaborate. What were data centers meant for, aside from hardware to run inference on worldwide requests? If it's done locally, surely it's better for redundant tasks. Also, the data centers use a lot of megabytes and expensive chips.
@Killputin777 · 2 months ago
Never trust Google products.
@savire.ergheiz · 3 months ago
Just focus on your existing products, Google. Which are a mess 😂
@f00kwhiteblackracismwarsh07 · 3 months ago
Google seems to be trying out too many new things. To me that's a turn-off and a red flag. Everyone is different 🙂