
FREE Local LLMs on Apple Silicon | FAST! 

Alex Ziskind

Step-by-step setup guide for a totally local LLM with a ChatGPT-like UI, backend and frontend, and a Docker option.
Temperature/fan on your Mac: www.tunabellysoftware.com/tgp... (affiliate link)
Run Windows on a Mac: prf.hn/click/camref:1100libNI (affiliate)
Use COUPON: ZISKIND10
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
* 💻🔄 Renewed MacBook Air M1 Deal: amzn.to/45K1Gmk
* 🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
* 🛠️🚀 My nvme ssd: amzn.to/3YLEySo
* 📦🎮 My gear: www.amazon.com/shop/alexziskind
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
* 🛠️ Host the PERFECT Prompt - • Hosting the PERFECT Pr...
* 🛠️ Set up Conda on Mac - • python environment set...
* 🛠️ Set up Node on Mac - • Install Node and NVM o...
* 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
* 💰 This is what spending more on a MacBook Pro gets you - • Spend MORE on a MacBoo...
* 🛠️ Developer productivity Playlist - • Developer Productivity
🔗 AI for Coding Playlist: 📚 - • AI
Repo
github.com/open-webui/open-webui
Docs
docs.openwebui.com/
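Manual (Non-Docker) Route
A rough sketch of the route the video walks through, assuming Homebrew, Node, and a Python environment are already set up; package and script names follow the repo's README and may change between releases:
brew install ollama          # or download Ollama from ollama.com
ollama serve &               # local API on port 11434
ollama pull llama3           # grab a model to chat with
git clone https://github.com/open-webui/open-webui.git && cd open-webui
npm install && npm run build                 # build the Svelte frontend
cd backend && pip install -r requirements.txt
bash start.sh                # UI comes up on http://localhost:8080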
Docker Single Command
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
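The command above relies on host networking, which on Docker Desktop for Mac only appeared in 4.29+ as a beta feature. A more conventional macOS variant (a sketch based on the Open WebUI docs) publishes the port and points the container at the host's Ollama instead:
docker run -d -p 3000:8080 -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://host.docker.internal:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
Then open http://localhost:3000 in the browser.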
- - - - - - - - -
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: / @azisk
- - - - - - - - -
Join this channel to get access to perks:
/ @azisk
- - - - - - - - -
📱 ALEX ON X: / digitalix
#machinelearning #llm #softwaredevelopment

Science

Published: 9 May 2024

Comments: 251
@AC-cg6mf 11 days ago
I really like that you showed the non-Docker install first. I think too many rely on Docker black boxes. I prefer this. Thanks!
@philipo1541 6 days ago
Docker containers are not black boxes. You can get into them and change stuff!
@camsand6109 12 days ago
This channel is the gift that keeps on giving.
@JosepCrespoSantacreu 12 days ago
Another great video, Alex, I really enjoy your videos. And I really appreciate your perfect diction in English, which makes it easy to follow your explanations even for those who do not have English as their first language.
@asnifuashifj91274 13 days ago
Great video Alex! Yes, please make videos on image generation!
@ChrisHaupt 13 days ago
Very interesting, will definitely be trying this when I get a little downtime!
@7764803 12 days ago
Thanks Alex for videos like this 👍 I would like to see an image generation follow-up video 😍
@gustavohalperin2871 9 days ago
Great video!! And yes, please add a video explaining how to add the image generator.
@ReginaldoKono 5 days ago
Yes Alex, it would help us even more if we could learn from you how to add an image generator as well. Thank you for your time and collaboration. Your channel is a must-have subscription nowadays.
@aldousroy 13 days ago
Awesome thing, waiting for more videos on the way
@Ginto_O 11 days ago
Thank you, got it to work without Docker
@kaorunguyen7782 6 days ago
Alex, I love this video very much. Thank you!
@AaronHiltonSPD 13 days ago
Amazing tutorial. Great stuff!
@AZisk 13 days ago
Thank you! Cheers!
@iv4sik 9 days ago
If you're trying Docker, make sure it is version 4.29+, as the host network driver (for Mac) arrived there as a beta feature
@mrdave5500 12 days ago
Woot woot! Great stuff. Nice easy tutorial and I now have a 'smarter' Mac. Thanks :)
@matteobottazzi6847 12 days ago
A video on how you could incorporate these LLMs in your applications would be super interesting! Let's say that in your application you have a set of PDFs or HTML files that provide documentation on your product. If you let these LLMs analyse that documentation, the user could get very useful information just by asking, instead of searching through all of the documentation files!
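For anyone who wants to prototype that idea outside the chat UI: Ollama also exposes a plain HTTP API on port 11434, the same endpoint the Docker command above points at. A minimal sketch that stuffs one document into the prompt (file name and question are placeholders; jq is only used to build the JSON safely):
DOC=$(cat product-docs.txt)
jq -n --arg doc "$DOC" '{model:"llama3", stream:false, prompt:("Using only this documentation:\n\($doc)\n\nQuestion: how do I configure feature X?")}' | curl -s http://localhost:11434/api/generate -d @-
For anything bigger than a quick test, a proper RAG pipeline with chunking and embeddings scales better than pasting whole files into the prompt.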
@FelipeViaud 12 days ago
+1
@brunosanmartin1065 13 days ago
These videos are so exciting for me; this channel is the number one on YouTube. That's why I subscribe and gladly pay for YouTube Premium. A hug, Alex!
@AZisk 13 days ago
thanks for saying! means a lot
@RealtyWebDesigners 13 days ago
Now we need 1TB MEMORY DRIVES (like the Amiga used to have 'fast RAM')
@MrMrvotie 9 days ago
@@AZisk Is there any chance you could include a PC GPU relative performance equivalence for each new Apple Silicon chip that you review?
@ilkayayas 7 days ago
Nice. Image generation and integrating the new ChatGPT into this would be great.
@loveenjain 10 days ago
Excellent video. Giving it a try tonight on my M3 Max 14-inch model to see the results; will probably share them...
@sungm2n 11 days ago
Amazing stuff. Thank you
@erenyeager655 12 days ago
One thing for sure... I'll be implementing this on my menu bar for easy access :D
@johnsummers7389 13 days ago
Great video Alex. Thanks.
@AZisk 13 days ago
Glad you liked it!
@DaveEtchells 13 days ago
I was gonna spring for a maxed M3 Max MBP, but saw rumors that the M4 Max will have more AI-related chops, so just picked up a maxed M1 Max to tide me over 😁 Really excited about setting all this up; finding this vid was very timely, thanks!
@moranmono 13 days ago
Great video. Awesome 👏
@WokeSoros 2 days ago
I was able to get this running by tracking down your Conda video. I have some web dev and Linux experience, so it wasn't a huge chore, but certainly not easy going in relatively blind. Great tutorial though. Much thanks.
@guyguy467 13 days ago
Thanks! Very nice video
@AZisk 13 days ago
Wow! Thank you!
@youssefragab2109 12 days ago
This is really cool, love the channel and the videos Alex! Just curious, how is this different from an app like LM Studio? Keep up the good work!
@jorgeluengo9774 12 days ago
By the way, I just joined your channel. I really enjoyed these videos, very helpful, thanks!
@AZisk 12 days ago
awesome. welcome!
@vadim487 12 days ago
Alex, you are awesome!
@dibyajit9429 13 days ago
I've just started my career as a Data Scientist, and I found this video to be awesome! 🤩🥳 Could you please consider making a video on image generation (with Llama 3) in a private PC environment?🥺🥺
@bvlmari6989 12 days ago
Amazing video omg, incredible tutorial man
@AZisk 12 days ago
Glad you liked it!
@mendodsoregonbackroads6632 6 days ago
Yes, I'm interested in an image generation video. I'm running llama3 in Bash, haven't had time to set up a front end yet. Cool video.
@OrionM42 11 days ago
Thanks for the video.😊😊
@sikarinkaewjutaniti4920 10 days ago
Thanks for sharing good stuff with us. Nice one
@BenjaminEggerstedt 12 days ago
This was interesting, thanks
@AzrealNimer 12 days ago
I would love to see the image generation tutorial 😁
@gligoran 12 days ago
Amazing video! I'd just recommend Volta over nvm.
@shapelessed 13 days ago
YO! Finally hearing of a big Svelte project! Like really, it's so much quicker and easier to ship with Svelte than with the others, why am I only seeing this now?
@AZisk 13 days ago
Svelte for the win!
@precisionchoker 13 days ago
Well... Apple, Brave, the New York Times and IKEA, among other big names, all use Svelte
@shapelessed 13 days ago
@@precisionchoker But they don't acknowledge that too much..
@soulofangel1990 13 days ago
Yes, we do.
@Dominickleiner 6 days ago
Instant sub, great content, thank you!
@AZisk 6 days ago
Welcome aboard!
@willmartin4715 12 days ago
I believe my laptop has 80 tensor cores, for starters. This looks like a really good shift for a Friday night! Thanks.
@davidgoncalvesalvarez 13 days ago
My M1 Mac 16GB be real frightened on the side rn.
@blackandcold 13 days ago
I ran 7B variants no problem on my now-sold M1 Air 16GB
@ivomeadows 13 days ago
Got a MacBook with the same specs. Tried to run a 15B StarCoder2 quantized to Q5_K_M in LM Studio on it, max GPU layers, getting around 12-13 tokens per sec. Not great, but manageable.
@RobertMcGovernTarasis 13 days ago
Don't be, unless you are using other things that are super heavy as well. Llama3 8B(?) takes up about 4.7GB of RAM, and with Apple Silicon's efficient use of the NVMe and swap you'll be fine. (I prefer using LM Studio now over Ollama as it has a CLI and web UI built in, no need for Docker/OrbStack, but Ollama on its own without a web UI works too)
@martinseal1987 13 days ago
😂
@DanielHarrisCodes 12 days ago
Great video. What format are the LLM models downloaded in? Looking into how I can use the ones downloaded with Ollama with other technologies like .NET
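Ollama keeps downloaded weights as GGUF blobs (by default under ~/.ollama/models), but from another stack such as .NET the simpler path is usually its local HTTP API rather than loading the files directly. A minimal sketch of the chat endpoint, callable from any HTTP client:
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Hello from another app" }],
  "stream": false
}'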
@keithdow8327 13 days ago
Thanks!
@AZisk 13 days ago
Wow 🤩 thanks so much!
@akhimohamed 12 days ago
As a game dev, this is so good to have. Btw, I'm gonna try this on Parallels on my M1 Pro
@Lucas-fl8ug 8 days ago
You mean in Windows through Parallels? Why would that be useful?
@RealtyWebDesigners 13 days ago
BTW - One of the BEST programmer channels!
@LucaCilfoneLC 11 days ago
Yes! Image generation, please!
@haralc 11 days ago
Oh, you got distracted! You're a true developer!
@Meet7 9 days ago
thanks alex
@filipjofce 8 days ago
So cool, and it's free (if we don't count the 4 grand spent on the machine). I'd love to see the image generation
@agnemedia624 13 days ago
Thanks 👍🏻
@erwintan9848 11 days ago
Is it fast on a Mac M1 Pro too? How much storage is used for the whole installation? Your video is awesome!
@Raptor235 9 days ago
Great video Alex, is there any way to have an LLM execute local shell scripts to perform tasks?
@innocent7048 13 days ago
Here you have a super like - and a cup of coffee 🙂
@AZisk 13 days ago
Yay, thank you! I haven't been to Denmark in a while - beautiful country.
@yianghan751 11 days ago
Alex, excellent video! Can my MacBook Air M2 with 16GB RAM host these AI engines smoothly?
@XinYue-ki3uw 11 days ago
I like this tutorial, it is computer-dummy friendly~
@ashesofasker 6 days ago
Great video! So are you saying that we can get ChatGPT-like quality, just faster, more private and for free, by running local LLMs on our personal machines? Like, do you feel that this replaces ChatGPT?
@ontime8109 10 days ago
thanks!
@swapwarick 13 days ago
I am running Llama and CodeGemma on my laptop for local file intelligence. It's slow, but damn, it reads all my PDFs and gives a perfect overview
@devinou-programmationtechn9979 13 days ago
Do you do it through Ollama and Open WebUI? I'm curious how you can send files to be processed by LLMs
@ShakeAndBakeGuy 13 days ago
@@devinou-programmationtechn9979 GPT4All works fairly well with attachments. But I personally use Obsidian as a RAG to process markdown files and PDFs. There are tons of plugins like Text Generator and Smart Connections that can work with Ollama, LM Studio, etc.
@TheXabl0 12 days ago
Can you describe this "perfect overview"? Just curious what you mean by that
@swapwarick 12 days ago
Yes, running Open WebUI for the Llama and CodeGemma LLMs on a Windows machine. Running Open WebUI on localhost gives a text area where you can upload a file. The upload takes time. Once it is done, you can ask questions like "give me an overview of this document", "tell me all the important points of this document", etc.
@TheChindoboi 7 days ago
Gemma doesn't seem to work well on Apple silicon
@toddbristol707 11 days ago
Great channel! I just built something similar with LM Studio and a Flask-based web UI. I'm going to try this method now. Btw, what was the 'code .' command you ran? Are you using Visual Studio Code? Thanks again!
@AZisk 5 days ago
Thanks! And thanks for joining. I did the Flask thing a few videos ago, but it's just another thing to maintain. I find this web UI a lot more feature-rich and better looking. And yes, the 'code .' command just opens the current folder in VS Code
@thevirtualdenis3502 7 days ago
Thanks! Is a MacBook Air enough for that?
@pixelplay1098 13 days ago
Amazing stuff as usual. Now make a tutorial on Automatic 1111
@MohammedAraby 4 days ago
We'd be happy to see a tutorial for Automatic 1111 ❤
@jakubpeciak429 10 days ago
Hi Alex, I would like to see the image generation video
@113bast 13 days ago
Please show image generation
@99cya 13 days ago
Hey Alex, would you say Apple is in a very good position when it comes to AI and the required hardware? So far Apple has been really quiet and lots of people don't think Apple can have an edge here. What's your thought in general here?
@OlegShulyakov 8 days ago
When will there be a video on running an LLM on an iPhone or iPad? Like using LLMFarm
@rafaelcordoba13 13 days ago
Can you train these local LLMs with your own code files? For example, adding all files from a project as context so the AI suggests things based on your current code structure and classes.
@dmitrykomarov6152 6 days ago
Yep, you can then make a RAG with the LLMs you prefer. Will be making my own RAG with llama3 this weekend.
@AlexLaslau 9 days ago
Would an MBP M1 Pro with 16GB of RAM be enough to run this?
@jorgeluengo9774 12 days ago
Thank you Alex, amazing video. I followed all the steps and enjoyed the process and the results with my M3 Max. I wonder if there is a GPT we can use from the laptop that can also search online, since the knowledge cutoff of these models seems to be over a year ago or more. For example, when I ask what the Terraform provider version for AWS or another platform is, the answer is old and there is a potential for deprecated code in the responses. What do you recommend in this case? Not sure if you already have a video for that lol.
@AZisk 12 days ago
that's a great question. you'll need to use a framework like Flowise or LangChain to accomplish this I believe, but I don't know much about them - it's on my list of things to learn
@jorgeluengo9774 12 days ago
@@AZisk Makes sense. I will do some research and see what I can find out to test, but I will look forward to when you share a video on this type of model orchestration; it will be fantastic.
@Mikoaj-ie6gt 11 days ago
very interesting
@MW-mn1el 13 days ago
I use Ollama with the Continue plugin in VSCode, and the Chatbox GUI when it's not code related. Works well on both Mac and Linux with a Ryzen 7000 CPU. On Linux it runs in a podman (docker) container. But the best experience is on the MacBook Pro; Apple silicon and unified memory make it speedy.
12 days ago
Is MPS available on Docker for Apple Silicon already?
@AdityaSinghEEE 12 days ago
Can't believe I found this video today, because I just started searching for local LLMs yesterday, and today I found the complete guide. Great video Alex :)
@scorn7931 7 days ago
You live in the Matrix. Wake up
@Megabeboo 12 days ago
How do I find out about the hardware requirements like RAM, disk space, GPU?
@gayanperera7273 12 days ago
Thanks @Alex. By the way, is there a reason it can only use the GPU? Any reason it's not taking advantage of the NPU?
@bisarothub1644 1 day ago
Great video. But I think Jan AI is a lot easier to configure and set up for Mac users
@faysal1991 4 days ago
Let's do some image generation please, it would be super helpful
@rickymassi 13 days ago
Why not do a deployment with Electron, so you have a desktop application? Btw I love this thing!!!
@jehad4455 13 days ago
Mr. Alex Ziskind, could you clarify whether training deep learning models on the GPU of the Apple Silicon M3 Pro might reduce its lifespan? Thank you.
@cookiebinary 10 days ago
Tried llama3 on an 8GB RAM M1 :D ... I guess I was too optimistic
@zorawarsingh11 10 days ago
Yes, do images please 🙏🏻
@tyron2854 12 days ago
What about a new M4 iPad Pro video?
@abdorizak 13 days ago
Alex, why does my M1 Mac get hot after using it for like 10 minutes?
@ykimleong 12 days ago
Hi, please please, if possible, show how to generate images through the Ollama web UI
@sergey_c 10 days ago
It would also be great to have a short description of each model and a rating of its popularity or specialization. Otherwise you end up installing some unknown models on your Mac)
@WilliamShrek 7 days ago
Yes yes please make a video generation video!!!
@TheMrApocalips 12 days ago
Can you make a stock trading "AI" using these tools on Apple or Snapdragon/similar?
@nimahakimi597 13 days ago
My bro. 9k views, 1k likes. Beautiful ❤
@RealtyWebDesigners 13 days ago
Best channel for REAL computer users - others are just "look how fast I can compile" videos about making videos.
@bekagelashvili2904 21 hours ago
Easy question: if I am not a developer, what benefit do I get from installing an LLM on my Apple silicon? What's the difference between the free and paid versions of AI models?
@truenetgmx 13 days ago
Now benchmark it vs a Mac Air :) Also wondering how much these are useful tools and not just toys
@thetabletopskirmisher 10 days ago
What advantage does this have over LM Studio, which you can install directly as an app instead of using the Terminal? (Genuine question)
@MeinDeutschkurs 13 days ago
Without a mug, I could not do all of this. Nice to see that you also need a mug while doing all of these tasks. 🎉 #adictedTOmugs
@AZisk 13 days ago
So true!
@MeinDeutschkurs 13 days ago
@@AZisk 😆🤗 yeah!
@alexbanh 11 days ago
How does the MBP performance compare to Intel x Nvidia when running these local LLMs?
@justintie 11 days ago
The question is: are open-source LLMs just as good as, say, ChatGPT or Gemini?
@TheBiffsterLife 10 days ago
Will the M4 chips be many times faster still?
@UC1C0GDMTjasAdhELHZ6lZNg 10 days ago
Just install LM Studio
@historiasinh9614 6 days ago
Which model is good for programming in JavaScript on an Apple Silicon 16GB?
@manoharmeka999 13 days ago
What system configuration is required for a smooth run?
@AZisk 13 days ago
the sky is the limit here.
@shelby3486 12 days ago
@@AZisk hilarious 😂😂😂😂
@aaronsayeb6566 6 days ago
Do you know if any LLM would run on a base model M1 MacBook Air (8GB memory)?
@alexanderekeberg4343 12 days ago
Should I upgrade from a MacBook Pro 2020 (Intel Core i5 8th gen quad-core 1.4GHz) to a 15-inch MacBook Air M3 for coding?
@safahmehdavi8264 13 days ago
Hey Alex, why don't you teach us how to program? Start a series in Python, C++ or Swift....
@icsu6530 12 days ago
Just read the documentation and you are good to go