This video offers a fascinating dive into TV series analysis with a smorgasbord of AI techniques. From scraping data with Scrapy to crafting character networks and building chatbots, it’s a treasure trove for anyone eager to explore AI's creative potential.
Allah bless you, my friend. Your channel is my favorite among my three favorite channels on RU-vid: Andrej Karpathy, Umar Jamil, and you. How can I donate to you on YouTube?
I'm having a problem with the CSS selectors. I wrote the ones in the video and they don't work: .smw-container returns an empty list. What could be the problem? I have checked for typos. response.css('.mw-parser-output .smw-columnlist-container a::attr(href)') doesn't work either. The only thing producing output is response.css('.mw-parser-output a::attr(href)').getall(), but it returns links I don't need.
This is the for loop line. It's a little different from what you wrote. The code is in the GitHub repo; I just reran it and it's working fine, so maybe just copy-paste it from there: for href in response.css('.smw-columnlist-container')[0].css("a::attr(href)").extract():
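If you want to see what that selector is filtering without spinning up the spider, here is a rough standard-library sketch of the same logic: keep only the href values of links nested inside an element whose class list contains smw-columnlist-container. The sample HTML below is illustrative, not the actual wiki page.

```python
from html.parser import HTMLParser

class ContainerLinkExtractor(HTMLParser):
    """Collect href values from <a> tags nested inside any element whose
    class list contains `container_class`. This is roughly what the Scrapy
    selector '.smw-columnlist-container a::attr(href)' matches."""

    def __init__(self, container_class):
        super().__init__()
        self.container_class = container_class
        self.stack = []             # currently open tags
        self.container_depths = []  # stack depths where a container opened
        self.links = []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)
        attrs = dict(attrs)
        if self.container_class in (attrs.get("class") or "").split():
            self.container_depths.append(len(self.stack))
        if self.container_depths and tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if self.container_depths and self.container_depths[-1] == len(self.stack):
            self.container_depths.pop()
        if self.stack:
            self.stack.pop()

# Illustrative page structure: only links inside the container should match.
html = """
<div class="mw-parser-output">
  <a href="/elsewhere">unwanted link</a>
  <div class="smw-columnlist-container">
    <ul>
      <li><a href="/wiki/Naruto_Uzumaki">Naruto</a></li>
      <li><a href="/wiki/Sasuke_Uchiha">Sasuke</a></li>
    </ul>
  </div>
</div>
"""

parser = ContainerLinkExtractor("smw-columnlist-container")
parser.feed(html)
print(parser.links)  # only the two links inside the container
```

The broader selector .mw-parser-output a::attr(href) matches the unwanted link too, which is why scoping to the column-list container matters.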
I haven't worked with genAI on Azure. But there were some new releases in GCP and AWS that I played around with; they make utilizing and deploying LLMs very easy. I'm not sure whether Azure has similar solutions yet.
I have a laptop with 8 GB of RAM and an NVIDIA GTX 1650, and I am planning to upgrade the RAM to 16 GB. Will I be able to run these LLMs locally, or should I try using a less powerful model? Kindly provide your suggestion, please.
I think the GTX 1650 has 4 GB of GPU memory, which is enough for 3 of the 4 models I use. Those 3 models will actually work with the 8 GB of RAM too. But Llama is big and requires at least 12 GB of GPU memory. I show you how to run it on Google Colab, so you don't have to run it locally.
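A quick back-of-the-envelope way to check whether a model fits: weight memory is roughly parameter count times bytes per parameter. The 7B parameter count and dtypes below are illustrative, not the exact models from the video, and activations, KV cache, and framework overhead come on top.

```python
def model_memory_gb(n_params, bytes_per_param):
    """Rough lower bound: memory needed just to hold the weights, in GB."""
    return n_params * bytes_per_param / 1024**3

# A ~7B-parameter model (illustrative size, in the Llama ballpark):
fp16_gb = model_memory_gb(7e9, 2)    # float16: 2 bytes per weight
int4_gb = model_memory_gb(7e9, 0.5)  # 4-bit quantized: 0.5 bytes per weight

print(round(fp16_gb, 1))  # roughly 13 GB: in line with "at least 12 GB"
print(round(int4_gb, 1))  # roughly 3.3 GB: borderline on a 4 GB card
```

This is why the smaller models fit on a 4 GB GPU while a full-precision Llama needs either a bigger card or a hosted runtime like Colab.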
OSError: [WinError 127] The specified procedure could not be found. Error loading "C:\Users\....\AppData\Local\Programs\Python\Python312\Lib\site-packages\torch\lib\torch_cuda.dll" or one of its dependencies. Unfortunately I had to give up on your tutorial, as getting PyTorch to work is a nightmare. I've tried different versions and different Python kernels, and I installed everything using pip. It just doesn't work.
I get the frustration of installing things, especially on Windows. I use WSL to simulate a Linux environment so that I don't have to deal with this. I recommend you run it on Google Colab directly, like I'm doing in the tutorial; you will just have to skip running it locally. Another, longer solution is to install WSL and set up your environment there, but that can be a lengthy process.