Hi guys, I'm a PhD and content creator focused on generative AI and data science. My goal is to create engaging content that makes the latest AI technologies understandable for everyone.
Don't forget to subscribe and turn on notifications so you don't miss new videos.
📌 Top Writer in AI on Medium (10k+ subs) 👉 tirendazacademy.medium.com
Which version of NumPy are you using? I get: ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
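That error usually means a compiled package (often pandas or pandasai's dependencies) was built against a different NumPy version than the one installed, which became common after the NumPy 2.0 release. A quick check, as a sketch:

```python
# Check the installed NumPy version; the ABI error above often appears
# when a compiled dependency expects a different major version.
import numpy as np

major = int(np.__version__.split(".")[0])
print(np.__version__, "major:", major)
```

If the major version is 2 and your stack was built for 1.x, reinstalling the compiled packages (e.g. `pip install --force-reinstall pandas`) or pinning `numpy<2` often resolves it in my experience.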
Thank you, but could you please help me? I get this error: "Unfortunately, I was not able to answer your question, because of the following error: 404 page not found." When I look at pandasai.log, I see this prompt and error:

What is the sales in Canada? Variable `dfs: list[pd.DataFrame]` is already declared. At the end, declare "result" variable as a dictionary of type and value. If you are asked to plot a chart, use "matplotlib" for charts, save as png. Generate python code and return full updated code.
2024-06-10 13:31:35 [INFO] Executing Step 3: CodeGenerator
2024-06-10 13:31:35 [INFO] HTTP Request: POST 127.0.0.1:11434/chat/completions "HTTP/1.1 404 Not Found"
2024-06-10 13:31:35 [ERROR] Pipeline failed on step 3: 404 page not found
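Judging by the log, the 404 likely comes from the request hitting `/chat/completions` without the `/v1` prefix under which Ollama serves its OpenAI-compatible API. A sketch of the base-URL fix (the `LocalLLM` wiring in the comments is an assumption about your PandasAI setup, not executed here):

```python
# Ollama serves an OpenAI-compatible API under the /v1 prefix; a base
# URL without it yields "404 page not found" like the log above.
OLLAMA_API_BASE = "http://127.0.0.1:11434/v1"

# Hypothetical PandasAI wiring (assumption, not executed here):
#   from pandasai.llm.local_llm import LocalLLM
#   llm = LocalLLM(api_base=OLLAMA_API_BASE, model="llama3")

endpoint = f"{OLLAMA_API_BASE}/chat/completions"
print(endpoint)
```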
Hi, are there any steps you can suggest I learn first? I come from a game development background with C#, and I know a little Python. I don't know about conda or anything else.
Hi, great video. I wanted to ask whether prompting is necessary to improve the agent's answer accuracy. Also, how do you deal with huge databases? Thanks in advance.
What is the purpose of using a Conda environment for these projects? I have been following your tutorials for a while, and in your implementations you always create a Conda environment. Since I am new to Python, I am curious whether there is any reason beyond the usual benefits: isolation and dependency management?
Hello, I wrote the same code and it works, but it is very slow. For example, when I ask it to fetch the first 4 rows, it responds in 2-3 minutes, yet it is very fast for you. Chatting normally with llama3 is very fast, and queries against my database are fast too, but when I run this code it is very slow.
The app responded quickly in the video because I used the cache. The cache allows PandasAI to store the results of previous queries; my first queries also took some time. Which graphics card do you use?
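The caching idea can be illustrated with plain Python memoization — a conceptual stand-in for PandasAI's cache, not its actual implementation:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def answer(query: str) -> str:
    time.sleep(0.2)  # stand-in for a slow local-LLM call
    return f"answer to {query!r}"

t0 = time.perf_counter()
answer("What is the sales in Canada?")   # first call does the slow work
first = time.perf_counter() - t0

t0 = time.perf_counter()
answer("What is the sales in Canada?")   # repeat is served from the cache
second = time.perf_counter() - t0

print(f"first: {first:.3f}s, cached: {second:.6f}s")
```

This is why the first run of a query is slow and repeats are near-instant.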
@@TirendazAI I use an AMD Radeon RX 580 graphics card. My goal is to use AI search on my website. Conventional algorithms fail to give correct results most of the time. I really thought artificial intelligence would be very useful for reasoning like a human, but it runs very slowly.
@@TirendazAI Hi, I have used llama3 via Ollama on this localhost port: 127.0.0.1:11434. But I am confused: how do I load the model with transformers so I can follow your steps?
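For reference, here is a hedged sketch of loading a Llama-style checkpoint with Hugging Face transformers instead of the Ollama endpoint. The model id is an assumption (Meta's checkpoints are gated and need Hugging Face access), and the import is deferred so nothing heavy runs until you call the function:

```python
def build_generator(model_id: str = "meta-llama/Meta-Llama-3-8B-Instruct"):
    """Load a causal LM with transformers; the model_id is an assumption."""
    # Deferred import: transformers and the model weights are only
    # needed when this function is actually called.
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return pipeline("text-generation", model=model, tokenizer=tokenizer)

# usage (not executed here; downloads the weights on first call):
# gen = build_generator()
# print(gen("Hello", max_new_tokens=32)[0]["generated_text"])
```

Note that an 8B model in 16-bit weights needs roughly 16 GB of memory this way, so Ollama's quantized builds are usually lighter.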
I have 64 GB RAM and 8 GB VRAM. I want to run Llama 70B, but it doesn't fit. How can I run it on system RAM (the 64 GB) in Python? Can you make a video about that?
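One common route is a quantized GGUF build with llama-cpp-python, which keeps most weights in system RAM and offloads only some layers to the 8 GB GPU — a 70B model at 4-bit quantization is roughly 40 GB, so it fits in 64 GB of RAM. A sketch under that assumption (the file path and layer count below are placeholders):

```python
def load_quantized_70b(gguf_path: str, n_gpu_layers: int = 16):
    """Load a quantized 70B GGUF, keeping most weights in system RAM.

    gguf_path is a placeholder; n_gpu_layers controls how many layers
    are offloaded to VRAM (parameter names from llama-cpp-python).
    """
    # Deferred import: llama-cpp-python is only needed when loading.
    from llama_cpp import Llama

    return Llama(model_path=gguf_path, n_gpu_layers=n_gpu_layers, n_ctx=4096)

# usage (not executed here; the filename is a hypothetical example):
# llm = load_quantized_70b("llama-3-70b-instruct.Q4_K_M.gguf")
# print(llm("What is the sales in Canada?", max_tokens=64))
```

Expect it to be slow, though: layers held in system RAM run on the CPU.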
Thank you for the great video; it was really helpful for getting everything set up. If I may ask: I have a 4090 graphics card, and I can see it maxing out my GPU usage, so CUDA should be working correctly. However, my prompts take anywhere between 20 seconds and 2 minutes to return, and after a few questions the chatbot stops responding at all and just stays processing. Is this normal?
If you use a local model with Ollama, the system requirements depend on the model you choose. For example, you need at least 8 GB of RAM for a 7B or 8B model.
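A rough rule of thumb behind those numbers — not an official Ollama figure, just parameter count times bytes per weight plus some runtime overhead:

```python
def approx_ram_gb(params_billion: float, bits_per_weight: int = 4) -> float:
    """Rough memory estimate: quantized weights plus ~20% runtime overhead."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * 1.2 / 1e9, 1)

print(approx_ram_gb(8))    # an 8B model at 4-bit: roughly 5 GB
print(approx_ram_gb(70))   # a 70B model at 4-bit: roughly 42 GB
```

This lines up with the 8 GB minimum above for 7B/8B models once you leave room for the OS and context cache.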