
Run Google Gemma 2B & 7B Locally on the CPU & GPU

Nono Martínez Alonso

How to run Google Gemma 2B- and 7B-parameter models locally on the CPU and the GPU on Apple Silicon Macs. Use Gemma Instruct models with the Hugging Face CLI, PyTorch, and Hugging Face's Transformers and Accelerate Python packages.
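For reference, a minimal sketch of the workflow the video covers, assuming you've accepted Gemma's terms on Hugging Face, logged in with the CLI, and installed torch, transformers, and accelerate (the prompt and token count are illustrative):

    # Run Gemma 2B Instruct with Hugging Face Transformers (CPU by default).
    from transformers import AutoTokenizer, AutoModelForCausalLM

    model_id = "google/gemma-2b-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # On Apple Silicon (macOS Sonoma or later), you can try the GPU instead:
    # model = model.to("mps")

    prompt = "Write a haiku about machine intelligence."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))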
↓ MY RECORDING GEAR ↓
- ZV-E10 (Camera) geni.us/UA6awhW
- ZOOM H6essential (Audio Recorder) geni.us/RqjX2Q
- SHURE SM7B (Microphone) geni.us/KXnYL
- ATOMOS NINJA geni.us/atomos-ninja-gs
- SIGMA LENS 16mm F1.4 geni.us/sigma-16mm-f14-lens
==================
🎥 VIDEO CHAPTERS
==================
00:00 Introduction
01:05 Find Models in Hugging Face
01:28 Terms
01:57 Install the Hugging Face CLI
02:21 Login
02:55 Download Models
03:51 Download a Single File
04:50 Download a Single File as a Symlink
05:25 Download All Files
06:32 Hugging Face Cache
07:00 Recap
07:29 Using Gemma
08:02 Python Environment
08:47 Run Gemma 2B on the CPU
12:13 Run Gemma 7B on the CPU
13:07 CPU Usage and Generating Code
17:24 List Apple Silicon GPU Devices with PyTorch
18:59 Run Gemma on Apple Silicon GPUs
23:52 Recap
24:25 Outro
==================
💬 Join the conversation on Discord / discord
🧠 Machine Intelligence Playlist: • 🧠 Machine Intelligence
🔴 Live Playlist: • 🧠 Live Streams
🕸 Web Development Playlist: • 🚀 Web
🍃 Getting Simple: gettingsimple.com
🎙 Podcast: gettingsimple.com/podcast
🗣 Ask Questions: gettingsimple.com/ask
💬 Discord: / discord
👨🏻‍🎨 Sketches: sketch.nono.ma
✍🏻 Blog: nono.ma
🐦 Twitter: / nonoesp
📸 Instagram: / nonoesp

Science

Published: Jul 11, 2024

Comments: 34
4 months ago
Code › github.com/nonoesp/live/tree/main/0113/google-gemma
Thanks for watching! Subscribe to this Luma calendar for future live events! lu.ma/nono
@tigery1016 4 months ago
Thank you so much! I was able to run gemma-2b-it. Great model. Love how Google is releasing this as open source, rather than closed source (unlike ClosedAI's ChatGPT).
4 months ago
Nice! Happy to hear you were able to run Gemma. =)
@tigery1016 4 months ago
I'm still on Monterey, so the GPU doesn't work. Yeah, can't wait to update to Sonoma and use the full power of the M1 Pro.
@nadiiaheckman4213 2 months ago
Awesome stuff! Thank you, Nono.
2 months ago
Thank you, Nadiia! Glad you found this useful. =)
@bao5806 3 months ago
"Why is it that when I run 2B it's very slow on my Mac Air M2, usually taking over 5 minutes to generate a response? But on Ollama, it's very smooth?"🤨
3 months ago
Hey! It's likely because they're running the models with C++ (llama.cpp or gemma.cpp) instead of running them with Python. It's much faster, and I have yet to try gemma.cpp myself. Let us know if you experiment with this! Nono
@nhinged 3 months ago
@ Can you link gemma.cpp? I haven't Googled it yet, but a link would be nice.
3 months ago
github.com/google/gemma.cpp
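In case it helps, a rough sketch of building and running it, based on the gemma.cpp README at the time (the tokenizer and weight file names depend on the variant you download, so treat them as placeholders):

    # Build gemma.cpp (needs CMake and a C++ toolchain).
    git clone https://github.com/google/gemma.cpp
    cd gemma.cpp
    cmake -B build
    make -C build -j8 gemma

    # Run the 2B Instruct weights (file names vary by variant).
    ./build/gemma --tokenizer tokenizer.spm --compressed_weights 2b-it-sfp.sbs --model 2b-it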
@cloudby-priyank 4 months ago
Hi, I am getting this error:

Traceback (most recent call last):
  File "C:\Users\Priyank Pawar\gemma\.env\Scripts\run_cpu.py", line 2, in <module>
    from transformers import AutoTokenizer, AutoModelForCausalLM
ModuleNotFoundError: No module named 'transformers'

I installed transformers but am still getting the error.
4 months ago
Hey, Priyank! Did you try exiting the Python environment and activating it again after installing transformers?
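In case it helps, a rough sketch of that fix on Windows (the paths are taken from your traceback and may differ on your machine):

    :: Activate the environment, install the packages inside it, then run the script.
    .env\Scripts\activate
    pip install transformers accelerate
    python run_cpu.py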
@LudoviKush 4 months ago
Great Content!
4 months ago
Hi Ludovico! Thanks so much for letting me know. I'm glad you found the content useful. =) Cheers! Nono
@ferhateryuksel4888 3 months ago
Hi, I'm trying to make an order-delivery chatbot. I built one with GPT by giving it APIs, but I think it will cost too much. That's why I want to train a model. What do you suggest?
3 months ago
Hey! I would recommend you try all the open LLMs available at the moment and assess which one works better for you in terms of the cost of running it locally, speed of inference, and performance. Ollama is a great resource because you can try many of them in one app. Gemma is a great option, but you should also look at Llama 2, Mistral, Falcon, and other open models. I hope this helps! Nono
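As a starting point, trying a model in Ollama is a one-liner per model, assuming Ollama is installed and these model tags are still available:

    ollama run gemma:2b   # Gemma 2B Instruct
    ollama run llama2     # Llama 2 7B
    ollama run mistral    # Mistral 7B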
@ferhateryuksel4888 3 months ago
Thanks a lot @
@alkiviadispananakakis4697 4 months ago
I have followed everything. I get this error when trying to run on the GPU: RuntimeError: User specified an unsupported autocast device_type 'mps'. I have confirmed that mps is available and have reinstalled everything.
4 months ago
Hey! If you've confirmed mps is available, you must be running on Apple Silicon, right? If you are, and you've set up the Python environment as explained, can you share what machine and configuration you're using? I've only tested this on an M3 Max MacBook Pro.
4 months ago
Other people have mentioned the GPU not being available in macOS versions prior to Sonoma. Are you on the latest update?
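For anyone debugging this, a quick check of the MPS backend from Python:

    import torch

    # Both should print True on Apple Silicon with a recent macOS and PyTorch.
    print(torch.backends.mps.is_available())  # can MPS be used right now?
    print(torch.backends.mps.is_built())      # was PyTorch built with MPS support?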
@alkiviadispananakakis4697 4 months ago
@ Hello! Thank you for your response. Yes, I have an M1 Max with Sonoma 14.3.1. I also tried all the models in case there was an issue with the number of parameters.
@drjonnyt 4 months ago
@alkiviadispananakakis4697 I had the same issue on an M2 Pro. I fixed it by downgrading transformers to 4.38.1. Now my only problem is that it's unbelievably slow to run!
4 months ago
Nice! The only thing that may be faster is running gemma.cpp.
@maxyan2572 4 months ago
What is your machine?
4 months ago
Hi, Maxyan! I'm using a MacBook Pro M3 Max (14-inch, 2023) with 1TB SSD, 64GB Unified Memory, and 16 cores (12 performance and 4 efficiency). Nono
@maxyan2572 4 months ago
😍 Thanks! That is very detailed.
@AamirShroff 2 months ago
I am trying to run on the CPU. I am getting this error:

Gemma's activation function should be approximate GeLU and not exact GeLU. Changing the activation function to `gelu_pytorch_tanh`. If you want to use the legacy `gelu`, edit the `model.config` to set `hidden_activation=gelu` instead of `hidden_act`. See github.com/huggingface/transformers/pull/29402 for more details.
Loading checkpoint shards: 0%| | 0/2 [00:00
2 months ago
Hey, Aamir! What machine are you running on?
@AamirShroff 2 months ago
@ I am using an Intel Core Ultra 5 (14th gen).
2 months ago
I've only run Gemma on Apple Silicon, so I can't guide you too much. Hmm.
@jobinjose3917 4 months ago
Can you provide the code where symlinks are not used? I just downloaded it as a repo. How do I add that repo in the code? I have copied the folder into the environment and just added it like: tokenizer = AutoTokenizer.from_pretrained("./gemma-7b-it")
4 months ago
No matter whether you symlink or download the files to a folder, you should be able to load the files in the same way. To download the files without symlinks, you can add the flag --local-dir LOCAL_PATH_HERE and not use the --local-dir-use-symlinks flag. Note that even when you don't symlink, the large files, i.e., the model weights, may still be symlinked because they are often huge. I hope that helps! =) Nono
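For example, a sketch of downloading the 7B Instruct model into a local folder without symlinks (the folder name is just an example):

    huggingface-cli download google/gemma-7b-it --local-dir ./gemma-7b-it --local-dir-use-symlinks False

After that, tokenizer = AutoTokenizer.from_pretrained("./gemma-7b-it") should work as you wrote.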