
Building My Ultimate Machine Learning Rig from Scratch! | 2024 ML Software Setup Guide 

Sourish Kundu
2.5K subscribers
9K views

Join me on an exhilarating journey as I dive into the world of Machine Learning by building my very own ML rig from scratch! This video is your all-in-one guide to assembling a powerful machine learning computer, explaining every component's role in ML tasks, and setting up the essential software to get you up and running in the world of artificial intelligence.
🔧 What's Inside:
Component Selection: Dive deep into the heart of my machine as I explain why I chose each piece of hardware for my ML rig, covering the CPU, GPU, RAM, and more. Understand the critical role these components play in machine learning projects and how they can accelerate your computations.
Parts list w/ cost breakdown: docs.google.com/spreadsheets/...
Building Process: Watch as I put together all the pieces, sharing tips and tricks for assembling your machine learning computer. Whether you're a seasoned builder or a first-timer, there's something for everyone to learn.
Software Installation: The right tools can make or break your ML projects. I'll walk you through installing and configuring key machine learning software and libraries such as PyTorch, TensorFlow, CUDA, local Large Language Models (LLMs), Stable Diffusion, and more. Get insights into each tool's unique strengths and find out how to leverage them for your projects.
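The software stack above boils down to a handful of one-line installers on Ubuntu Server 22.04. Here's a hedged sketch of the core sequence; the driver version and model name are examples, not necessarily the exact choices from the video, and each project's official docs remain the source of truth:

```shell
# NVIDIA driver -- the "-server" flavor suits a headless box
sudo apt update && sudo apt install -y nvidia-driver-535-server

# Tailscale, to reach the machine from anywhere on your tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Ollama, the quickest way to serve local LLMs
curl -fsSL https://ollama.com/install.sh | sh
ollama pull codellama

# code-server: VS Code in the browser
curl -fsSL https://code-server.dev/install.sh | sh
sudo systemctl enable --now code-server@$USER
```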
📚 Why This Matters:
Building a dedicated machine learning rig can be a game-changer for hobbyists, students, and professionals alike. By customizing your setup, you unlock new potentials in processing speed, efficiency, and the ability to handle complex models and datasets. This video aims to demystify the process and empower you to take your first step into a larger world of AI and ML.
💡 Perfect For:
AI/ML enthusiasts eager to build their specialized hardware.
Anyone interested in the hardware aspect of machine learning.
Viewers looking to install and use major ML software and libraries.
📌 Timestamps:
0:00 - Intro
0:25 - Parts
3:04 - The Build
4:48 - Ubuntu Server 22.04 LTS
7:25 - Tailscale
8:10 - NVIDIA Drivers
8:54 - Ollama
10:01 - Code Server
11:30 - Anaconda
12:07 - CodeLlama
13:47 - PyTorch
14:37 - Stable Diffusion
15:41 - Docker w/ GPUs
16:37 - Isaac Gym
18:23 - TensorFlow & CUDA Toolkit
21:10 - Conclusion
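After working through the steps above, a few one-liners confirm each layer is wired up. A sketch, assuming the installs succeeded and PyTorch/TensorFlow live in your active environment:

```shell
nvidia-smi     # driver level: the GPU should be listed
python3 -c "import torch; print(torch.cuda.is_available())"   # PyTorch sees CUDA
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
ollama list    # models pulled so far
```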
🔗Links & Resources:
Graphics Drivers Issue: askubuntu.com/questions/14663...
Docker Issue: stackoverflow.com/questions/7...
Isaac Gym:
catalog.ngc.nvidia.com/orgs/n... docs.omniverse.nvidia.com/isa...
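For the Docker-with-GPUs step, the usual stumbling block is the container runtime rather than Docker itself. A hedged sketch of the NVIDIA Container Toolkit setup (package and image tags may have changed since recording):

```shell
# Install the toolkit and teach Docker about the NVIDIA runtime
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Sanity check: run nvidia-smi from inside a CUDA base image
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```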
#MachineLearning #TechBuild #AISoftwareSetup #DIYComputerAssembly #MLRig #ArtificialIntelligence

Published: 6 Jun 2024

Comments: 117
@aravjain · 1 day ago
This feels like a step by step tutorial, great job! I’m building my RTX 4070 Ti machine learning PC soon, can’t wait!
@Hamisaah · 1 month ago
You put so much effort and knowledge into this video! I watched all the way and it was interesting how you demonstrated the whole build from scratch. Keep it up!
@sourishk07 · 1 month ago
Thank you so much for watching! Excited to make more ML videos 🙏
@paelnever · 1 month ago
@@sourishk07 Better to use llama.cpp instead of Ollama: it's faster and has more options, including model switching and running multiple models simultaneously.
@sourishk07 · 1 month ago
Thanks for the recommendation! I'll definitely take a look. I like Ollama because of how simple it is to get up and running and that's why I chose to showcase it in the video.
@paelnever · 1 month ago
@@sourishk07 You don't seem like the kind of person who likes "simple" things. Anyway, if you want to run llama.cpp in a simple way, you can do that too.
@sourishk07 · 2 days ago
@paelnever I just played around with it and it seems really promising! Definitely want to spend more time looking into it. I appreciate the rec
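For anyone curious about the llama.cpp route discussed in this thread, a minimal quick-start sketch; the CUDA build flag and model filename are assumptions, so check the project README for current options:

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# Serve any GGUF model over an OpenAI-compatible HTTP API
./build/bin/llama-server -m models/your-model.Q4_K_M.gguf --port 8080
```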
@Eric_McBrearty · 1 month ago
This was a great video. I had to pause it like 10 times to make bookmarks to all of the resources you dove into. Then I saved it to Reader and summarized it with ShortForm. Great stuff. You went into just enough detail to cover the whole project and still keep the video moving along.
@sourishk07 · 1 month ago
That was a balance I was trying really hard to navigate, so I'm glad the video was useful for you! Hope you have as much fun setting up the software as I did!
@Chak29 · 8 days ago
I echo the other comments; this is such a great video. You can see the effort put in, and you present your knowledge really well. Keep it up :)
@sourishk07 · 3 days ago
Wow, thank you!
@DailyProg · 1 month ago
I found your channel today and binged all the content. Please please please keep this up
@sourishk07 · 1 month ago
Wow I'm glad you found my channel this valuable! Don't worry, I have many more videos coming up! Stay tuned :)
@jordachekirsten9803 · 8 days ago
Great, clear, and thorough content. I look forward to seeing more! 🤓
@sourishk07 · 3 days ago
Awesome, thank you!
@JEM871 · 1 month ago
Great video! Thanks for sharing
@sourishk07 · 1 month ago
Thanks for watching! Stay tuned for more content like this!
@jefferyosei101 · 1 month ago
This is such a good video. Thank you, can't wait to see your channel grow so big, you're awesome and oh we share the same process of doing things 😅
@sourishk07 · 1 month ago
I really appreciate those kind words! Tell me more about how our processes overlap!
@akashtriz · 1 month ago
Hi @sourishk07, I had considered the same config as yours but then changed my mind due to: 1. The unstable 14900K performance caused by the MoBo feeding the i9 insanely high power. Please do make sure you enforce Intel's thermal limitations in the ASUS MoBo BIOS settings. 😊 2. Instead of the NR200P I opted for the AP201 case so that a 360 mm AIO can be used for the CPU. 3. I went for a used 3090, as much of my focus will be on using the A100 or AMD MI300X in the cloud. ROCm has made huge progress; noteworthy are the efforts George Hotz is taking to make ROCm more understandable for the ML community. Overall congratulations buddy, hope you succeed at your goals.
@sourishk07 · 1 month ago
Hi! Thanks for watching the video and sharing your setup. You bring up completely valid points. 1. I personally haven't had any issues with 14900K stability. I didn't turn on AI overclocking in the BIOS and just left settings at stock (except XMP for the RAM). I'm probably more wary of any sort of overclocking now that that news has come out, though, lol. 2. The reason I opted for the smaller case was that I wanted to try building in an SFF for the first time. The good thing is that cooling hasn't really been impaired, although a larger radiator never hurts. 3. I should've considered a used 3090 as well, but because I wanted to do some computer graphics work too, I opted for the newer architecture. And while the advancements in ROCm do seem promising, I'm not sure anything will ever take me away from NVIDIA's vast software suite for ML/AI. But maybe one day, we'll see!
@halchen1439 · 21 days ago
This is so cool, I'm definitely gonna try this when I get my hands on some extra hardware. Amazing video. I can also imagine this must be pretty awesome if you're some sort of scientist/student at a university that needs a number-crunching machine, since you're not limited to being at your place or some PC lab.
@sourishk07 · 20 days ago
Yes, I think it's a fun project for everyone to try out! I learned a lot about the hardware and the different software tools.
@benhurwitz1617 · 1 month ago
This is actually sick
@sourishk07 · 1 month ago
Thank you so much!
@benoitheroux6839 · 1 month ago
Nice video, well done! This is promising content! Can't wait to see you try some Devin-like stuff or test other ways to use LLMs.
@sourishk07 · 1 month ago
Thank you so much for watching! It'll be really cool to be able to run more advanced LLMs as they continue to grow in capabilities! Excited to share my future videos
@mufeedco · 1 month ago
This video is truly exceptional.
@sourishk07 · 1 month ago
I'm really glad you think so! Thanks for watching
@archansen8084 · 1 month ago
Super cool video!
@sourishk07 · 1 month ago
Glad you liked it!
@sohamkundu9685 · 1 month ago
Great video!!
@sourishk07 · 1 month ago
Thank you Soham!
@deltax7159 · 1 month ago
Can't wait to build my first ML machine!
@sourishk07 · 1 month ago
Good luck! I'm really excited for you!
@Zelleer · 1 month ago
Cool vid! Not sure about pulling hot air from inside the case through the rad to cool the CPU, though. But it's really a great video for anyone interested in setting up their own AI server!
@sourishk07 · 1 month ago
Hi! That’s a good point, but from my testing, the max difference in temperature is only about 5 degrees Celsius. Also, keeping the GPU cooler is more important. And because the only place in the case for the rad is at the top, I don’t want to have it be intake, because heat rises and the fans would suck in dust. Thanks for watching though and I really appreciate you sharing your thoughts! Let me know if there were any other concerns that you had. Always open to feedback 🙏
@ericksencionrestituyo1802 · 1 month ago
Great work, kudos!
@sourishk07 · 1 month ago
Thanks a lot! I appreciate the comment!
@sergeziehi4816 · 1 month ago
This is days of work, compiled freely into one single video. Thanks for that priceless information!
@cephas2009 · 1 month ago
Relax, it's 2 hours of hard work max.
@sourishk07 · 1 month ago
Don't worry, I had a lot of fun making this video! Thanks for watching and I hope you're able to set up your own ML server too!
@manojkoll · 20 hours ago
Hi Sourish, the video was very helpful. I found the following config on Amazon; how would you rate it? I plan to run some Ollama models and a few custom projects leveraging smaller-size LLMs: Cooler Master NR2 Pro Mini-ITX Gaming PC, i7-14700F, NVIDIA GeForce RTX 4060 Ti, 32GB DDR5 6000MHz, 1TB M.2 NVMe SSD.
@alirezahekmati7632 · 28 days ago
GOLD!
@sourishk07 · 27 days ago
Thank you so much!
@alirezahekmati7632 · 21 days ago
@@sourishk07 It would be great if you created a part 2 about how to install WSL2 on Windows for deep learning with the NVIDIA WSL drivers.
@sourishk07 · 19 days ago
@@alirezahekmati7632 From my understanding, the WSL2 drivers come shipped with the NVIDIA drivers for Windows. I didn't have to do any additional setup: I just launched WSL2 and nvidia-smi worked flawlessly.
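To confirm that on your own machine, a couple of checks from inside the WSL2 guest (this assumes a current Windows NVIDIA driver; no separate Linux driver should be installed inside WSL):

```shell
# Inside the WSL2 Ubuntu guest:
nvidia-smi            # provided by the Windows driver's WSL shim
ls /usr/lib/wsl/lib   # libcuda.so and friends, mounted in from Windows
```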
@punk3900 · 25 days ago
Is this system good for inference? Will Llama 70B run on it? I wonder whether system RAM really compensates for the VRAM.
@sourishk07 · 25 days ago
Hello! That's a good question. Unfortunately, 70B models struggle to run. Llama 13B works pretty well. For my next server, I definitely want to prioritize more VRAM.
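A rough way to see why: model weights alone cost parameters times bytes per weight, plus overhead for the KV cache and activations. A back-of-envelope sketch (the 1.2x overhead factor is a loose assumption, not a measured figure):

```shell
# vram_gb PARAMS_IN_BILLIONS BITS_PER_WEIGHT -> rough VRAM need in GB
vram_gb() { awk -v p="$1" -v b="$2" 'BEGIN { printf "%.1f\n", p * b / 8 * 1.2 }'; }

vram_gb 13 4    # 4-bit 13B: ~7.8 GB, fits a 16 GB card
vram_gb 70 4    # 4-bit 70B: ~42.0 GB, spills far past 16 GB
```

Layers that don't fit can be offloaded to system RAM, but token throughput drops sharply once the GPU is no longer doing most of the work.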
@punk3900 · 25 days ago
Hi, what is your experience with this rig? Isn't the temperature a problem given how tight the case is?
@sourishk07 · 25 days ago
Temperature has not been an issue, even with this case size.
@raze0ver · 1 month ago
I'm just gonna build a more budget PC than yours for ML this weekend, with a 5900X + 4060 Ti 16GB (not a great card, but enough VRAM). I'll go through your video and follow the steps to set everything up. Hopefully it all goes as smoothly as it did for you! Thanks dude!
@sourishk07 · 1 month ago
Thanks for watching and good luck with your build! I think for my next server build I want to use GPUs with more VRAM, but 16 GB should serve you fine for a budget build
@raze0ver · 1 month ago
@@sourishk07 Do you think those pro cards, such as the A4000 or higher, are really necessary for casual ML given their price tags?
@sourishk07 · 1 month ago
@@raze0ver No, probably not. Since those cards are originally targeted at enterprise, they're overpriced. What I should've done is gone for a used 3090 because that's the best bang for your buck when it comes to VRAM or a 4090 if you can afford it.
@RazaButt94 · 1 month ago
With this as a secondary machine, I wonder what his main gaming machine is!
@sourishk07 · 1 month ago
LOL you'll be surprised at this: my main gaming machine is an Intel 12700K and a 3080 12 GB. ML comes before gaming 🙏
@abhiseckdev · 1 month ago
Absolutely love this! Building a machine learning rig from scratch is no small feat, and your detailed guide, from hardware selection to software setup, makes it accessible for anyone looking to dive into ML.
@sourishk07 · 1 month ago
Thank you so much!!! I appreciate the support 🙏
@danielgarciam6527 · 1 month ago
Great video! What's the name of the font you are using in your terminal?
@sourishk07 · 1 month ago
Thank you for watching! The font is titled "CaskaydiaCove Nerd Font," which is just Cascadia Code with icons added, such as the Ubuntu and git logos.
@Param3021 · 2 days ago
@@sourishk07 Ooh, I've literally been looking for this font for a long time. I'll install it today and use it.
@sourishk07 · 2 days ago
@@Param3021 Glad to hear it! Hope you enjoy! It works really well with Powerlevel10k
@hpjs-animation · 1 month ago
Thanks for this! I see you went with 96 GB of system RAM and a 4080 with 16 GB of VRAM. Curious whether 16 vs 24 GB of VRAM (e.g. in a 4090) could make a difference for AI/ML, and especially LLM, apps? I realize a 4090 would have set you back another $1000, though. And is more system RAM helpful? What I'm reading is that GPU VRAM is more important.
@sourishk07 · 1 month ago
Thanks for the question! Yes, VRAM is king when it comes to ML/AI; always prioritize it. More system memory will never hurt, especially with massive datasets, but I didn't want to spring for the 4090 because of its price tag. However, on FB Marketplace I've seen RTX 3090s with 24 GB of VRAM for as low as $500, which is an option I should've considered while choosing my parts.
@federicobartolozzi680 · 1 month ago
@@sourishk07 Imagine two of them with NVLink and the cracked version of the P2P driver. Too bad you didn't see it earlier; it would have been a great combo. 😢
@xxxNERIxxx1994 · 1 month ago
@@sourishk07 The RTX 3090 is a MONSTER! FP16 models loaded with 32k context running at 60 tokens/s are the future :D Great video :)
@sourishk07 · 2 days ago
@federicobartolozzi680 @xxxNERIxxx1994 Stay tuned for a surprise upcoming video!
@electronicstv5884 · 17 days ago
This server is a dream 😄
@sourishk07 · 17 days ago
Haha stay tuned for a more upgraded one soon!
@novantha1 · 1 month ago
I'm not sure I like the idea of an AIO or water cooling in a server context. If it springs a leak, I think you're a lot less likely to be regularly maintaining or keeping an eye on a server that should, by definition, be out of sight.

I'd also argue that the choice of CPU is kind of weird; I would personally have stepped down to something like a 13600K on a good sale, or a 5900X. They're plenty fast for ML tasks, which are predominantly GPU-bound, and you could have thrown the extra money from the CPU (and the cooler!) into a stronger GPU. The exact price difference depends on the context, but I could see it being enough to do something a bit different.

I also think an RTX 4080 Super is a really weird choice of GPU. It sounds kind of reasonable if you're just taking a quick glance at new GPUs, but the price-to-performance ratio is wack. It's in this weird territory where it's priced at a premium but doesn't have 24GB of VRAM; I would almost say that if you're spending that kind of money, you may as well have gone for a 4090 if you need Lovelace-specific features like lower-precision compute. Otherwise, I'd argue a used 3090 would have made significantly more sense, and you could possibly have gotten two of them if you'd min-maxed your build; a system with 48GB of VRAM absolutely leaves you with a lot more options than a system with 16GB. You could have power-limited them, too, if that was a concern.

If you were really willing to go nuts, I've seen MI100s go for pretty decent prices for a headless server, and if you're doing "real" ML work where you're writing the scripts yourself, ROCm isn't that bad on supported hardware nowadays. That would give you 32GB of VRAM (HBM, no less) in a single GPU, which isn't bad at all. Personally, I went with an RTX 4000 SFF due to power efficiency concerns, though.
@sourishk07 · 1 month ago
Thank you so much for all of that feedback! Honestly, I agree with all of it; a couple of other people have commented similar things. But in my specific use case, my "server" is right next to my desk, so maintenance should be pretty easy. And I've never really had any issues with AIOs in the 7 years I've been using them. Sure, a leak is possible, but I guess I'm willing to take that risk. I might eventually switch this computer to be my main video editing machine and convert my current computer into the server, because it has two PCIe slots. This was my first time building a computer from scratch solely for ML, so I appreciate the recommendations!
@AvatarSD · 1 month ago
As an embedded engineer, I'm using the 'Continue' extension directly with my OpenAI API, especially GPT-4 Turbo, for auto-completion. It seems my knowledge is not enough for this world... 😟 Hello from Kyiv 💙💛
@sourishk07 · 1 month ago
Hello to you in Kyiv! I completely understand the feeling. With the field of ML/AI changing at such a rapid pace, it's hard to keep up sometimes! I struggle with this often too.
@JayG-hn9kf · 1 month ago
Great video! I never got the Continue extension working in code-server. Is there a step that I may have missed?
@sourishk07 · 1 month ago
Thanks for watching! And regarding the Continue extension, what is the issue you're running into?
@JayG-hn9kf · 1 month ago
@@sourishk07 Thank you for offering support 🙂 I followed your steps exactly; however, I don't get the Continue text zone to ask questions, nor even the drop-down list to choose the LLM or set it up. I tried both the Continue release and pre-release, but neither worked. Could the fact that I have Ubuntu Server running as a VM under Proxmox with GPU passthrough have an impact?
@sourishk07 · 1 month ago
I don't believe the virtualization should affect anything. When you go to install the Continue extension, what version are you seeing? Is it v0.8.25?
@marknivenczy1896 · 23 days ago
I've tried twice to post help with this, but YouTube does not like me adding a URL. Anyway, I found I needed to run code-server under HTTPS in order for Continue to run. If you open code-server under HTTP, it will issue an error (lower right) that certain webviews, the clipboard, and other features may not operate as expected. This affects Continue. You can find the fix by searching for: "Full tutorial on setting up code-server using SSL - Linux". This uses Tailscale, which Mr. Kundu has already recommended.
@sourishk07 · 2 days ago
Thanks for sharing this insight! I probably should've specified that I set up SSL with Tailscale behind the scenes to avoid that annoying pop-up message. I apologize for not being clearer!
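The HTTPS setup discussed in this thread can be done with Tailscale's built-in certificates. A hedged sketch; the machine and tailnet names are placeholders, the config keys come from code-server's config.yaml, and MagicDNS plus HTTPS certificates must be enabled in the Tailscale admin console:

```shell
# Issue a certificate for this machine's tailnet hostname
sudo tailscale cert myserver.my-tailnet.ts.net

# Then point code-server at it in ~/.config/code-server/config.yaml:
#   bind-addr: 0.0.0.0:8443
#   cert: /path/to/myserver.my-tailnet.ts.net.crt
#   cert-key: /path/to/myserver.my-tailnet.ts.net.key
sudo systemctl restart code-server@$USER
```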
@notSoAverageCat · 1 month ago
What is the total cost of the hardware?
@sourishk07 · 1 month ago
I forgot to mention this in the video, but the final cost was $2.8k pre-tax. Check out the link in the description for a Google Sheet with a complete price breakdown.
@kawag2780 · 1 month ago
You could have started the video with the budget you were targeting. When recommending systems to other people, knowing how much the person can spend heavily dictates the parts they can choose. Here are some questions I thought of while watching: Why choose a 4080 over a 3090? Why choose a gaming motherboard, or one in a Mini-ITX form factor? Why choose a "K" SKU for a production-focused workload? There's missed commentary there. I know you have tagged some of your other videos, but it would have been better to point out that you already have a NAS tutorial; linking that video when introducing the 1TB SSD would have been helpful. And finally, why is the audio not synced up with the video? It's very jarring when that happens. Other than that, it was cool to see the various programs you can use. However, the latter part feels tacked on, because it's hard to gauge how the hardware you chose affects the software you chose to showcase.
@sourishk07 · 1 month ago
Wow, thank you so much for your in-depth feedback! I sincerely appreciate you watching the video and sharing your thoughts. I apologize that the video didn't clarify some of the hardware choices and budget considerations up front. In retrospect, you're absolutely right, and I'll make sure to include such details in future content.

I chose the 4080 Super because it has the newest architecture, and I was able to get it at a discount. The extra VRAM of the 3090 would've helped with larger models like LLMs and Stable Diffusion, but for a lot of my personal projects, such as training a simple RL agent or some work with computer graphics, the extra performance of the 4080 Super will serve me better. Again, something I should've added to the video. As for the "K" SKU, I got the CPU on sale at Best Buy for about $120 off, and the motherboard has an "AI overclocking" feature, which I thought would be kind of on brand for the video, lol. I didn't really get a chance to touch on it in the video or benchmark any performance gains the feature might've given me. Regarding the SFF build, I chose the form factor simply because I have a pretty small apartment and not much space. These are things I'm sure viewers might've been interested to hear about, and I appreciate you asking about them.

I also agree with your point about my NAS video! I'll keep that in mind the next time I mention a previous video of mine. And regarding the audio, everything seems fine on my end; I've played the video multiple times on my desktop, phone, and iPad. Hopefully it was just a one-off issue. Also, I suppose the software I installed isn't really dependent on this specific hardware; rather, it's the suite of tools I would install on any machine where I plan on doing ML projects.

Thank you once again for such constructive feedback. I'm curious: what topics or details would you like to see in future videos? Your input helps me create more tailored and informative content.
@aadilzikre · 14 days ago
What is the total cost of this setup?
@sourishk07 · 14 days ago
Hi! The total cost was about $2.8k, although for some parts, like the motherboard, I probably should've gone cheaper. I have a full list of the parts in the description.
@aadilzikre · 12 days ago
@@sourishk07 Thank you! I did not notice the sheet in the description. Very Helpful!
@T___Brown · 1 month ago
I didn't hear what the total cost was.
@sourishk07 · 1 month ago
Thanks for the comment. While focusing on the small details of the video, I completely forgot some of the important information, haha. The cost pre-tax was $2.8k, although components like the motherboard don't have to be as expensive as what I paid. I was interested in the AI overclocking feature but never got around to properly benchmarking it. Anyway, I've updated the description with a Google Sheet containing a complete cost breakdown.
@T___Brown · 1 month ago
@@sourishk07 Thanks! This was a very good video.
@kamertonaudiophileplayer847 · 1 month ago
It's kind of atypical for a PC build to target something other than gaming.
@sourishk07 · 1 month ago
Yes, you're definitely right! The motherboard was definitely meant only for gamers 😂
@bitcode_ · 1 month ago
If it doesn't have 4 H100s that I cannot afford, I don't want it 😂
@sourishk07 · 1 month ago
LOL maybe if I get a sponsorship, it’ll happen 😂😂😂 That’s always been a dream of mine
@bitcode_ · 1 month ago
@@sourishk07 I subbed to see that one day 🙌
@sourishk07 · 1 month ago
Haha I appreciate it! Looking forward to sharing that video with you eventually
@soumyajitganguly2593 · 1 month ago
Who builds an ML system with a 4080? 16GB is really not enough! Either go 4090 or 3090.
@sourishk07 · 1 month ago
Yeah you’re right. Thanks for the comment! Next time I build a server, I’ll keep this in mind!
@xyzrt1246 · 1 month ago
Has anyone built this yet based on his recommendation?
@sourishk07 · 1 month ago
Hi! On the hardware side, I didn't really mean to recommend this specific set of parts; I just wanted to share my experience! The software, however, consists of tools I use on a day-to-day basis and cannot recommend enough!
@marknivenczy1896 · 23 days ago
I built a similar rig, but with a Silverstone RM44 rack-mount case and a Noctua NH-D12L with an extra fan for cooling instead of the water unit. Fitting the GPU in the case required a 90-degree angled extension cable from MODDIY (type B). I used an ASUS ProArt Z790 motherboard. All of the software recommendations were great.
@sourishk07 · 22 days ago
@@marknivenczy1896 I'm glad you enjoyed the software recommendations!
@user-pf3yv5wd3i · 1 month ago
Great video, truly loved it ❤, but you should hide your ID 😭😭
@sourishk07 · 1 month ago
Thanks for watching! And do you mean my email ID?
@user-pf3yv5wd3i · 1 month ago
@@sourishk07 Sorry, I meant to type IP, but I think it's a local IP, so no worries ❤️
@user-pf3yv5wd3i · 1 month ago
@@sourishk07 Thanks for your high-quality content.
@sourishk07 · 1 month ago
I appreciate it! And yeah, all the IPs in the video were my Tailscale IPs which are only accessible to me, so unless my Tailscale account gets hacked, I have nothing to worry about.
@punk3900 · 25 days ago
this is por**graphy
@sourishk07 · 25 days ago
LMAO
@robertthallium6883 · 22 days ago
Why do you move your head so much?
@sourishk07 · 22 days ago
LMAO idk man...