Ready to get a job in IT? Start studying RIGHT NOW with ITPro: go.acilearning.com/networkchuck (30% off FOREVER) *affiliate link

Discover how to set up your own powerful, private AI server with NetworkChuck. This step-by-step tutorial covers installing Ollama, deploying a feature-rich web UI, and integrating Stable Diffusion for image generation. Learn to customize AI models, manage user access, and even add AI capabilities to your note-taking app. Whether you're a tech enthusiast or looking to enhance your workflow, this video provides the knowledge to harness the power of AI on your local machine. Join NetworkChuck on this exciting journey into the world of private AI servers.

📓Guide and Commands: ntck.co/ep_401
⌨My new keyboard: Keychron Q6 Max: geni.us/0SGY

🖥My Computer Build🖥
---------------------------------------------------
➡Lian Li Case: geni.us/B9dtwB7
➡Motherboard - ASUS X670E-CREATOR PROART WIFI: geni.us/SLonv
➡CPU - AMD Ryzen 9 7950X3D Raphael AM5 4.2GHz 16-Core: geni.us/UZOZ5
➡Power Supply - Corsair AX1600i 1600 Watt 80 Plus Titanium: geni.us/O1toG
➡CPU AIO - Lian Li Galahad II LCD-SL Infinity 360mm Water Cooling Kit: geni.us/uBgF
➡Storage - Samsung 990 PRO 2TB: geni.us/hQ5c
➡RAM - G.Skill Trident Z5 Neo RGB 64GB (2 x 32GB): geni.us/D2sUN
➡GPU - MSI GeForce RTX 4090 SUPRIM LIQUID X 24G Hybrid Cooling 24GB: geni.us/G5BZ

🔥Join the NetworkChuck Academy!: ntck.co/NCAcademy

**Sponsored by ITProTv from ACI Learning
Awesome video and super easy to follow along. Quick tip: if you forget to run a command as sudo, just type sudo !! and it will run your last command as sudo.
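For anyone curious, the `!!` trick is bash history expansion: the shell replaces `!!` with your previous command line before running it. It's on by default in interactive shells; a small sketch of the mechanism (the `set` line is only needed in non-interactive scripts):

```shell
# bash history expansion: "!!" expands to the previous command line.
# Interactive shells have this on by default; scripts need these options.
set -o history -o histexpand
apt list --upgradable     # suppose this had needed root...
sudo !!                   # bash expands this to: sudo apt list --upgradable
```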
Chuck, I saw the video yesterday on Ollama and I tried it today. I am blown away at how good llama3 is and how fast it is. Running on my i7 Linux laptop with an Nvidia GPU and it is incredible. Thanks again for your wonderful videos. Keep it up!
I'm 62 years old and a computer techie. I'm no super genius, but I'm really happy to have been able to run a local AI on my PC. Private AI is the way to go for sure. I signed up for your free academy for now; there's enough in there to keep me learning/busy for a while yet! :)
@@projectptube I would be happy with an AI that could actually write fairly entry-level code instead of churning out garbage code that:
1) won't compile, and efforts to have the AI integrated into the development environment correct issues make it worse with each iteration
2) doesn't actually meet requirements (regardless of how many iterations are made to fine-tune the output, by which YOU are training the AI)
3) is poorly structured (leading to maintainability problems)
4) lacks proper error handling (leading to problems with stability and data integrity)
5) fails to follow any kind of consistent naming convention (code quality/maintainability issues)
6) randomly includes variables whose type is determined on first assignment
7) creates classes where local data types do not correspond to the columns defined in database tables:
7.a) string data types do not enforce the defined length limits
7.b) numeric variables are of inconsistent types
7.c) the data access layer doesn't handle null values, always storing 0 for numeric data types or zero-length strings for (n)varchar fields
8) thrashes database connections (a problem that connection pooling implemented in the client stack doesn't reliably solve)
9) introduces security vulnerabilities.
I could go on, but why bother? The current state of AI for software development is to have companies and sole developers pay to use it while the AI is trained on the better-written source code the developers end up producing. A packet sniffer will detect that not only is the corrected AI-generated code being shared, but also proprietary code which has not been authorized for such use.
@@shannonbreaux8442 the Ollama GitHub has a plugin on how to do this. Also, Ollama has a Python library, so you can write your own Python scripts to interact with Ollama.
This video was an absolute gem, thank you so much. I've been struggling with setting up local AI and the majority of videos I've watched have resulted in me having to try and learn concepts while also deciphering a very heavy accent from the narrator, which made it so much harder for me to focus. This was clear, to the point, and covered everything I wanted. Thank you!
Just use LM Studio. You will get just that. It also recommends models and tells you whether they can run on your machine, and the models get downloaded automatically from Hugging Face.
Just wanna say huge thanks to you! Your video inspired me to give another try on my way to local LLMs, and I was literally blown away by how fast my RTX 2060 could actually generate with Llama3 and Ollama. A year ago I tried local Pygmalion, and when I saw literally one word per 2 seconds I decided "Nah, local AI is only for happy guys with a 4090 on board". Once again, thank you, you made my life better!
Hello, Chuck! I tried this on my OLD Dell 660s, upgraded to its max, which I have to date: Intel Core i7-3770 running at 3.40GHz, 16GB RAM, Windows 11, and a 1TB SSD... Followed your tutorial and didn't expect it to work on my system! "I have NO GPU!" It runs SUPER SLOW, but it works! Installed the llama3 model, gonna try some more!!! LOVE your videos! Greetings from Puerto Rico!!! 😁
Nice project, but in my opinion it's totally useless to run AI on your own server. It's on 24/7, uses tons of energy, and is not used that often. This is typically something that is better off in the cloud, if not for that reason, then for training the models and neural networks. Tesla wouldn't be able to exist if they had gone this route.
I want to play with this as well. I wound up with a Best Buy open-box i5-12400, 32GB of RAM, and an open-box Nvidia 4060 OC 8GB. So I'm in for about $600 all together. I wanted to start as cheap as I could and be power efficient at the same time, at least to start with. Hopefully I'll start playing with it in the next couple of weeks. One thing I'm curious about though: I wonder how secure these are. Are they really secure, or is it one of those "not too many of them today so nobody is bothering to hack them, yet" situations?
@@Jalan-Api I had to use the ollama serve command on my computer for it to work on WSL, but the Windows preview works without using the ollama serve command.
Man!!! My boss showed me the last local AI video of yours, introducing me to your channel. Now I feel any video you’re making on similar topics I need to see them! Make more videos on this, exploring what all we can do, in workplaces. This is so interesting and cool! Thanks man!
Hey guys 👋🏽, I've installed this on my PC successfully and it's running on my PC (host machine), but when I access it from another PC (connected on the same network) it's not opening. Could you guys please help me?
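A hedged checklist for this kind of "works on the host, not on the LAN" problem, assuming the Docker setup from the video (the image name and ports below match Open WebUI's published defaults; your firewall tooling may differ):

```shell
# Make sure the container publishes its port, so it listens beyond localhost:
docker run -d -p 3000:8080 --name open-webui ghcr.io/open-webui/open-webui:main
# Open that port in the host firewall (Ubuntu/ufw example):
sudo ufw allow 3000/tcp
# Then, from the other PC, browse to http://<host-LAN-IP>:3000
# Note: under WSL2 you additionally need Windows-side port forwarding
# (netsh interface portproxy), since WSL2 sits behind NAT by default.
```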
I love these plain simple straight on explanation videos. A suggestion or addition to this would be: - how to add or restrict the knowledge base. For example: - corporate data, pdf's, tables, pictures, statistics etc and how to purely add this info as knowledge. - Ask the AI questions and so that it only searches the corporate data and doesn't get blurred with other data. - let the AI do analysis on the data and pull conclusions on it. This would be a perfect addition.
"- how to add or restrict the knowledge base." Well, he shows exactly that by showing you the system prompt he gives. You can kinda do whatever you want there, like banning words etc. Looking into Ollama, you can also train your model on specific data, which can help for your specific use cases. There is a lot of documentation and video on that topic on YT if you want. But that's more relevant to AI training than the "easy and fast setup" which was the scope of this video.
Easy mode:
1. Microcenter RTX 3090 Ti x2 (24GB VRAM x2), OR get Tesla K80s (cheaper).
2. A motherboard that supports either x16 x2 or x8 x2.
3. At least 64GB of system RAM (GGUF models run on CPU/RAM/GPU combined).
4. An 850-1,000 watt power supply.
Congrats. You have a computer that almost rivals a system with an RTX A6000 ($5,000) card.
I'm building a cheap home server for cloud gaming, for 4 VMs: Dell T7810 (200 euro), 2x Xeon E5-2697v3 (50 euro), 64GB ECC 2400MHz in quad channel (70 euro), Nvidia Tesla P100 16GB (160 euro), plus an added Tesla M40 12GB and a second 1000W PSU. I hope Llama will use the 2 different GPUs. Now the server will be for cloud gaming and AI, so cool :)
@@randallrulo2109 something you need to know about the K80s is that they don't use a normal PCIe power cable; they use an 8-pin CPU plug. You can get an adapter to convert two 8-pin PCIe connectors to one 8-pin CPU connector.
@@ToucheFarming It's also a pain to get working on some workstations like Dell or HP without ReBAR. I'd skip the Teslas, TBH. I've been fooling with 2 P40s for 2 months. Really not worth the trouble they caused me. It's a good option if you have no money but plenty of time on your hands and really want to be a masochist trying to keep them cool enough, etc. I ended up getting the 3090s and am much happier. Yeah, I lose ECC, but whoopty doo, I'd rather just not be waiting on replies from the model, and not run with compression that's already messing with accuracy. 2x 3090s just end up making more sense for the time/money ratio. I did get the Teslas to work on a Dell 5820, but you have to change the vBIOS mode of the GPU with nvflash to graphics mode instead of compute. You lose a lot of performance doing it this way, though; it cuts it in half. But it will work. That was a week of research to figure out. I gave up on the Teslas and the Dell after finally pulling this off and having to get a Windows machine to change the vBIOS anyway, and just got 2 3090s in a cheap gaming board. Works so, so much better. Looking back, I wish I had not wasted my time. I hope I save someone else some time by sharing my experience with the Tesla cards.
If you did this alone, be proud of that. Don't lessen your achievement. There are enough people out there who will do that as it is; don't help them by doing it to yourself.
Chuck, THIS has got to be the most significant video I've seen in ages. Thank you for sharing this information. I LOVE the idea that we can now have this power under our own control. I will definitely have to do this when I can gather up enough money to build my own Terry (if I'm going to do it I want to do it right).
With the example where you asked two models a question: you can clearly see the "2/2" in the bottom left part of the answer, so you have two answers available (one from each model, I guess). :)
I only watched like 4 minutes of your video and I wanted to try asap. Not only did I get it up and running in like an hour but I also configured it to be accessed anywhere in the world I want. Thank you for sparking this fun little piece of technology I can utilize in my own home. This is actually much more useful than I thought because I can have my mother utilize this in her everyday life since I’m all grown up now and out of the house.
One caveat: using Windows WSL, access from the outside is not possible without a lot of hoop-jumping. Though "--network=host" will sync up Docker on Ubuntu in WSL2, there is a whole lot more hoop-jumping required to get WSL2 to talk to your local network, as there is no "bridging" option like there is with VMware or VirtualBox.
Thanks man, I noticed this. Trying to use Ubuntu for this was quite taxing, as I did not know how to install the CUDA drivers properly 😅. Ended up breaking the GRUB bootloader of the OS 😂😂
Thank you for making it simple! I've followed several tutorials for getting these running locally and they all have their own plus points. Yours, with its Stable Diffusion addition, is a nice added touch!
This tutorial is insane! Many thanks! The steps are so easy to follow and implement. I just finished the tutorial, and currently enjoying the local AI in my laptop.
That's just insane ! I'm simply following your steps to test it on my gaming PC, there are no words for this. I'm eager to start building a case for it. NEW PROJECT UNLOCKED
This will greatly help my daughter in the future as we plan to homeschool especially since private GPT can be loaded with local sources like PDF's of books. Very hyped for this content!
Dude, your videos are so good. I never miss a video from you. Im working on a project analyzing sports data with local AI for work, so its been very interesting going outside the realm of the simple UIs from OpenAI/Anthropic etc.
Ugh, I've tried installing Docker so many times in WSL Ubuntu and it's giving me such a headache. Heck, even installing Ubuntu and Ollama gave me trouble. I had to use AI to walk me through how to install an AI, which is so weird to think about.
This is sweet! Just did this on my spare system and it was faster than I thought it would be: an i9-10900 with 64GB and an SFF Quadro RTX A2000 12GB. Thank you Chuck.
This is cool but the majority of people don't have a computer nearly good enough for this to be practical. I tried it and the model was both slow and incredibly dumb due to me being limited to llama3 8B. That being said you should still try it out, pretty cool.
Hi @NetworkChuck, at 13:25 you explain that if you want someone else to use this server on your PC or laptop, they can access it from anywhere, as long as they have your IP address. How exactly do you do that?
Great video. I've just gone through all of this myself. Looks like they have also added a few more features (LiteLLM, Whisper). Local AI is where it's at!!! Privacy first!!! Hoping they add MemGPT and CrewAI/AutoGen to it.
I've had it running - slowly - on a Raspberry Pi 5. Love the implementation on WSL in Windows 11, **BUT** we definitely need a complete guide for those of us who are running an AMD GPU in Windows. Not everyone has $10K lying around to build a server with TWO $3200CAD Nvidia cards, Chuck...
I have it running via Docker using an old Radeon VII and a Ryzen 9 with 12 cores/24 threads and 32GB RAM, and it runs decently fast on Gentoo. I also downloaded AUTOMATIC1111 the way he showed, and it's not any slower than what his video shows.
@@BrandonHurt does it actually use your GPU? If so I'd be interested to see what your docker config is exactly. It runs ok on just my CPU (13700k), but would be faster using the GPU from what I can tell.
This was an ABSOLUTELY fabulous tutorial on AI. It was (as others have commented) *extremely* accessible to somebody starting out with self hosted AI, but with a background in Linux and system administration. Well done sir! I will use this to setup my own install on a currently underutilized but reasonably powerful server in my homelab.
Would be great if you could make a video on setting up a local AI language model to be trained on documents that get permanently saved in its memory. Seems like there is potential for that using webAI? I want to use this program to be able to reference a part number and have it give me information on the product or manual for that specific part number in my company.
Check out RAG ( retrieval augmented generation). Essentially use a model to store docs into a vector database which is queried by the AI when sending prompts to use in its context window. Lots of videos on RAG out there
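A toy sketch of that flow in plain Python, just to make the moving parts concrete. The bag-of-words "embedding" here is a stand-in for a real embedding model, the list is a stand-in for a real vector database (Chroma, FAISS, etc.), and the part numbers are made up:

```python
# Toy RAG flow: "embed" documents, retrieve the closest ones to a query,
# and prepend them to the prompt sent to the LLM.
import math
import re
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "Part 1234 is a 24V DC relay rated for 10A.",
    "Part 5678 is a thermal fuse used in the oven line.",
]
store = [(d, embed(d)) for d in docs]  # the "vector database"

def retrieve(query, k=1):
    q = embed(query)
    return [d for d, v in sorted(store, key=lambda dv: -cosine(q, dv[1]))[:k]]

query = "What is part 1234?"
context = "\n".join(retrieve(query))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this assembled prompt is what gets sent to the model
```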
I've been experimenting with this locally since Feb 2024, and it is so powerful. I've often used it for calculating some data, converting it into models, and doing cool stuff like: "Hey, what is the gross margin for my local store branch in Jan 2024?" Then the bot gives an awesome answer with correct data.
Anyone else stuck on the Docker container part? Here's what I get:

E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
E: The list of sources could not be read.
E: Malformed entry 1 in list file /etc/apt/sources.list.d/docker.list ([option] no value)
E: The list of sources could not be read.
curl: (22) The requested URL returned error: 404
-bash: /docker.asc: No such file or directory
chmod: cannot access '/etc/apt/keyrings/docker.asc': No such file or directory
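For what it's worth, that "Malformed entry" usually means the docker.list file got written wrong (often an unexpanded shell variable), and the 404/"No such file" lines suggest the keyring step failed first. Removing the broken file and redoing the repo setup per Docker's official Ubuntu instructions generally clears it; this is a sketch of those steps, not a guaranteed fix for every system:

```shell
# Remove the malformed repo entry, then redo the official setup:
sudo rm /etc/apt/sources.list.d/docker.list
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
```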
Cheaper alternatives that can be combined with other Nvidia GPUs, solely for running AI, are used Nvidia Tesla P40s (24GB of VRAM), currently about ~200 bucks each on the used market. Otherwise go AMD 6800 or newer/better (16GB+ of VRAM), which are also supported out of the box.
Are you kidding? These go for 7k new. I can see that there are a lot of these offers for used ones, but did you ever confirm that it is legit? Looks like very obvious fraud. Or are you trying to run a scam, yourself?
@@Brax1982 I have 2 they work (bought used for $175 each) but they aren't that great and were a pita to get working and keep cool enough... Get a 3090 instead.
@@VioFax Thanks, I was not considering it, because how could they be that much cheaper than list price? Are you sure you got the real ones? I would seriously doubt that...even if "something" works. I guess this is one of those things where you have to be a master engineer to get it to work and that's why it's so cheap...
@@jimarasthegod Nahhh, the P40s are horrible at FP16, because the GP102 lacks the capability for fast FP16 computation. Well, at least it supports DP4a. I would say use something at least from the Turing generation. On the AMD side I only tested the GCN 5.1 Radeon Pro VII GPU; it was OK for basic PyTorch operations.
This video should have millions of views. The time value of this video compared to the production value it brings is totally asymmetric. After a week or so I finally figured out that having more than one instance of Linux (WSL & WSL2) running at the same time is really bad for this install. Also, you can only have Ollama installed in one place on your machine or Docker will NOT play nice. Finally got it running after just a few minutes of uninstalling and re-configuring, and voila! Open WebUI has the connection, and all the models can be loaded and used. I am a Wizard.
PS: please support the open source projects you use; the devs put in a lot of effort creating and maintaining them for free, making them accessible to everyone. No pressure tho, enjoy free AI for everyone.
Good luck running anything larger than 8B parameters on just the CPU (and even that might be too big for most people) and expecting more than 2 tokens per second. A relatively recent 8GB GPU is highly recommended to run up to 8B models at over 50 tokens per second.
And not just that. You need to get to something like 100-400B models to be comparable to the bigger AI services. Those small LLM models are good for things like roleplay, but when it comes to factual information and productive tasks, they tend to be quite poor.
@@touma-san91 First time I've seen someone mention the comparison to the larger ones. Never knew nor thought of that. I might be doing all this work for nothing lol
I run llama3-70B on CPU only, an i7-13700K and 64GB DDR5. Is it fast, fast? No, but it runs fine. I can also run it on my 2021 M1 Mac Pro with 64GB of RAM. Runs fine there as well.
@@CappellaKeys If you have a lot of RAM (the minimum is something like 64 gigs for 70B models), a good CPU, and a good GPU with a decent chunk of VRAM, you can run these things using GGUF, but it will probably take a few minutes to get a response out of the larger models. And you really should use GGUF, because that way you can split the load between the CPU and GPU so it runs a tiny bit faster than running fully on the CPU.
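The RAM figures in this thread follow from simple arithmetic: a model's weight footprint is roughly (parameter count) x (bytes per weight at the chosen quantization), plus overhead for the KV cache and runtime. A back-of-envelope sketch (ballpark figures, not exact requirements):

```python
# Rough weight-memory estimate: params * bits-per-weight / 8, in GB.
# Real usage is higher due to KV cache and runtime overhead.
def approx_weight_gb(params_billion, bits_per_weight):
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for name, params in [("llama3 8B", 8), ("llama3 70B", 70)]:
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{approx_weight_gb(params, bits):.0f} GB")
```

This is why a 4-bit 70B quant (~35 GB of weights alone) lines up with the "64GB RAM minimum" mentioned above, while an 8B model fits comfortably on an 8GB GPU at 4-bit.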
Thanks for this! I teach computer science at a rural high school and have been thinking about how I could help my students get experience with LLMs while also meeting the expectation of public schools to protect students from harm and protect their privacy. This definitely helps me learn. 😁
Unjustified panic mode. If you install anything from the internet there is always risk to it, no matter the install method. The beauty of an installer script is that you can just read it and make sure it's not doing anything nasty.
@@_modiX The problem with curl|sh is that a failed download will still get executed. So if the script had, e.g., some "rm -rf /tmp/someapp" and the download happened to fail after "rm -rf /", then you can't do anything about it. Or a failed download may leave you with a partially downloaded script that breaks and leaves you with a broken configuration. So rather just download the script, quickly check that it didn't fail (maybe even check the download hash), and _then_ execute it in a separate step.
Could you describe how to do it your recommended way? I.E. copy the prompt, but remove " | sh" from the end, and - after SUCCESSFUL download - enter "sh ollama run" ?
@@nikolai00115 Eh, sorry bro. If someone knows how to redirect curl into a file and then run it, they probably already know the answer to my question.
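For anyone who landed here with the same question, the download-then-inspect pattern described above looks roughly like this (the URL is a placeholder, not the real Ollama installer address):

```shell
# Download to a file first; -f makes curl fail on HTTP errors instead of
# saving an error page, so a partial/failed download never gets executed.
curl -fsSL https://example.com/install.sh -o install.sh
less install.sh     # read what it's about to do before trusting it
sh install.sh       # run it only after the download and review both succeeded
```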
I have my instance set up in a Proxmox LXC. You need to pass the GPU(s) through first, which is a tiny bit tricky, but there are plenty of instructions to be found online (if you're using Proxmox 7+, make sure you use cgroup2, not cgroups). Once you do that, it's basically the same instructions. I don't care for Docker, so I actually set up a conda environment. Really just the same thing, mostly.
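For anyone attempting the same, the passthrough boils down to a few lines in the container config. This is an illustrative sketch only; device major numbers (especially for nvidia-uvm) vary per host, so check yours with `ls -l /dev/nvidia*`:

```shell
# /etc/pve/lxc/<id>.conf (Proxmox 7+, cgroup2)
lxc.cgroup2.devices.allow: c 195:* rwm      # /dev/nvidia0, /dev/nvidiactl
lxc.cgroup2.devices.allow: c 508:* rwm      # /dev/nvidia-uvm (major varies!)
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```

The container also needs the same Nvidia driver version as the host, installed without the kernel modules.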
@@abitw210 I think you haven't watched the video, or you just didn't understand what it is for. He could give a "self-prompted" AI to his daughter, with limitations. Can you do the same with OpenAI? And many companies won't share private, sensitive business documents with a third-party AI. I can imagine it is not for you, but that doesn't mean it is not worth it for anybody.
He should really suspend Terry when it is not being used. Unless it's used for some automated tasks, a private server like that is going to be sitting idle most of the time. However, it would not use much if it were only on to respond to a few prompts daily.
Thanks @NetworkChuck for the amazing video. I tried to use my existing PC with an 8GB Nvidia 4060 Ti and a 9th-gen Core i9 for my local AI server. While Ollama models worked fine, Stable Diffusion didn't perform as expected, giving "CUDA out of memory..." errors. To address this, I upgraded my setup to:
- Ryzen 9 7950X3D
- MSI MAG B650 Tomahawk
- 128GB Corsair RAM
- NZXT 1000 PSU
- NZXT Elite 360
- NZXT H9 Elite case
- 2 x 1TB M.2 Samsung 990 Pro (one for Pop!_OS and one for Windows 11)
- Nvidia Zotac 4070 Ti Super GPU
This new configuration has significantly improved performance and stability for all my AI tasks. Highly recommend the upgrade for anyone facing similar issues!
Cool idea! While Home Assistant doesn't currently offer built-in voice-to-text, there are add-ons like Whisper and local pipelines that can be integrated for voice control. Text-to-speech options like Google Translate are also available. This could create a more Alexa-like experience for home automation. However, it's important to remember that these integrations might require some technical setup and may not be as seamless as commercial voice assistants
Does anyone else find it odd that when you Google search what time is the presidential debate it doesn't give you specifics about it .. For that matter you can't even search the date it's going to be on “today”
I could imagine it would also be helpful to let your daughters use the AI models for language training. I found it very useful to have conversations with an AI to improve my Spanish. For example, you can ask the model to correct you and give you suggestions (with synonyms) to sound more like a native speaker, and so on.
oooooooooooo... The sound of that keyboard is fire. Had to stop the video to see which keyboard it was. Thanks for the content. Was looking for an intro to local AI and ollama. Thank you!! EDIT: I managed to convince work to allow me to purchase a Keychron V6 keyboard with browns. I do a lot of typing at work so it was life changing and actually made me more productive so it was a win win. Ok, back to the video...
I followed every step and everything seems to work fine, but for some reason I'm not able to generate an image from the "Generate Image" icon using Stable Diffusion. It keeps giving me an error message that says "Server connection failed", but I'm able to access it at 127.0.0.1:7860 where it is hosted... any ideas what's wrong?
11:24 - Do you see "< 2 / 2 >"? That means you are seeing the second of two responses generated. The greater than and less than symbols are navigation controls that determine which response is displayed. ChatGPT has similar after you select one of two generated outputs for the same prompt.
4:25 how the video has FLOWED thus far and your SMOOTH TRANSITION in to your sponsor is EXCELLENT. I haven't watched a lot of your videos, but the few I have watched have been fantastic work!
Can't find the model files on my computer... I followed the WSL process and everything works fine; I just need to find the location/directory where the model was pulled to?
Subject: Inquiry Regarding AI Local Installation and Python Scripting: I'm new to AI projects and Linux, seeking guidance on installing AI Local without Docker, managing Python scripts for module creation/installation, and creating a personal database for AI Local. Appreciate any advice.
You make it look so easy to set up. I spent hours just trying to find causes of errors and how to fix them. I re-installed Docker and Ubuntu several times without luck. Finally re-installed everything and signed up for Open WebUI again to finally see the AI models appear. I suppose it was for the best since I learned so much along the way. lol
Hiii, did you experience a GPG error where the key was not available, after the first INSTALL DOCKER command? I'm very stuck and can't figure out what is wrong.
I just found your channel and enjoyed the video, very informational, I would like to see options about personal assistants, where you connect systems like a 3D scanner and different CNC type devices like 3D printer, to basically create a Jarvis like system.
What would you recommend at this point (8/22/2024): is the 4090 still the best to invest in now, or is there a special NPU-type GPU coming out? Or are they going to be so similar? Or will NPUs be a new type of processor you can just add to the computer, so you could put a GPU, CPU, and maybe a future NPU in this computer box?
Kudos to the amazing tutorial! I have attempted to follow many others and yours was by far the most in depth and worked perfectly the first try! Thank you!
I looked for your mentioned guide, but it looks like I just wandered into some chat board where nobody answers questions. May I suggest: if you mention a link in a YouTube video, make it easy to find on your website somehow. There is no way to delete a profile created there either. I imagine you might find those suggestions helpful.
NetworkChuck thanks for your awesome videos. I bought my own version of Terry, his weaker little brother Jerry. I plan on building a server closer to Terry hopefully next year. I put my local AI on it by following your instructions. Thank you, thank you. I need help. How do I access my local AI remotely?
Another idea would be to SSH into your remote server to take advantage of Terry's greater computational power. Is that even possible? I'd love to watch your guide on that. I'm running a cheap laptop at work which is certainly not powerful enough to run local AI, but I'd like to SSH to my PC at home; I can't seem to make it work because both are using WSL.
Hi Chuck, when I try to install Docker using the apt commands, I get the error below:

Package docker-ce is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source.

Help me in resolving this error.
This is great, saw your video a while back and it was only command line, decided to skip doing it, now this one shows up on my "For You", love it! Thank you!
That worked beautifully on a remote DigitalOcean droplet! Even though llama2 did not meet the install requirements, the tinyllama model did. Great straightforward introduction to the topic - thanks a bunch, mate!