Tirendaz AI
Don't just use AI - Learn how to build AI.

Hi guys, I'm a PhD and content creator focused on generative AI and data science. My goal is to create engaging content that makes the latest AI technologies understandable for everyone.

Don't forget to subscribe and turn on notifications so you don't miss new videos.

📌 Top Writer in AI on Medium (10k+ subs) 👉 tirendazacademy.medium.com
Gradio APP with the Claude API using Python
9:35
3 months ago
How to use GEMMA in PyTorch (Free GPU)
13:14
4 months ago
Comments
@studyjee5614
@studyjee5614 1 day ago
OK
@mohammadsbeeh6131
@mohammadsbeeh6131 4 days ago
For me, I didn't get the same output as you when I debugged the prompt at 07:15.
@amoahs7779
@amoahs7779 6 days ago
Thanks so much for this informative tutorial. What keyboard are you using? It sounds very nice 😊
@vidadeperros9763
@vidadeperros9763 9 days ago
This is so good! ❤ Thank you so much!
@PT-rg2vo
@PT-rg2vo 9 days ago
Amazing video. Thank you.
@vidadeperros9763
@vidadeperros9763 9 days ago
Cool. When are you going to make a video about PDF queries? That would be great!
@kamaleshashok4046
@kamaleshashok4046 9 days ago
Which version of numpy are you using? ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
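For reference, that "numpy.dtype size changed" ValueError usually means a compiled package (pandas, or one of pandasai's dependencies) was built against a different NumPy ABI than the installed NumPy, e.g. NumPy 2.x underneath wheels built for 1.x. A sketch of the usual fix; the version pin is an assumption, so check your own dependency set:

```python
# A common fix is to pin NumPy below 2.0 and reinstall the packages
# compiled against it (the "numpy<2" pin is an assumption):
#
#   pip install --force-reinstall "numpy<2"
#   pip install --force-reinstall pandas pandasai
#
# Afterwards, confirm which NumPy the environment actually resolves:
import numpy as np

print("numpy", np.__version__)
```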
@farooqhussain638
@farooqhussain638 10 days ago
Great video, opens so many possibilities. Question: what are the system requirements for this type of setup?
@groovejets
@groovejets 10 days ago
Really enjoyed the video, thank you.
@Airsoftshowoffs
@Airsoftshowoffs 11 days ago
Great video. Thank you for making this easy.
@KumR
@KumR 12 days ago
Nice, but how is this different from other ask-CSV videos? What's the special point of the agent?
@Tengreken
@Tengreken 12 days ago
There's no Turkish version on Udemy either. Could you make the tutorials in Turkish too? Or at least add Turkish subtitles?
@muhammadtalmeez3276
@muhammadtalmeez3276 14 days ago
Wow, your videos are amazing. Can we use any open-source model with PandasAI?
@tofolcano9639
@tofolcano9639 16 days ago
Very cool! But this cannot be done in a deployed Streamlit app for anyone to use... right?
@nguyenttimothy
@nguyenttimothy 17 days ago
Thanks for the video. Why do you have v1 at the end of the URL?
@faizhalas
@faizhalas 18 days ago
Easy to follow, yet mind-blowing!
@jackgaleras
@jackgaleras 20 days ago
12:26 Thanks a million, the animation is very fitting for when you attempt these things and hit a thousand errors until you finally reach the goal, ha.
@jackgaleras
@jackgaleras 21 days ago
Has anyone already verified that the numbers are correct?
@Mallubeast69_xd
@Mallubeast69_xd 22 days ago
How much did it cost?
@Deepakbhogle
@Deepakbhogle 22 days ago
I like it when you say "there we go" 😊
@muhaiminhading3477
@muhaiminhading3477 22 days ago
Thank you, but could you please help me? I get this error: "Unfortunately, I was not able to answer your question, because of the following error: 404 page not found". When I look at pandasai.log I see:
What is the sales in Canada? Variable `dfs: list[pd.DataFrame]` is already declared. At the end, declare "result" variable as a dictionary of type and value. If you are asked to plot a chart, use "matplotlib" for charts, save as png. Generate python code and return full updated code:
2024-06-10 13:31:35 [INFO] Executing Step 3: CodeGenerator
2024-06-10 13:31:35 [INFO] HTTP Request: POST 127.0.0.1:11434/chat/completions "HTTP/1.1 404 Not Found"
2024-06-10 13:31:35 [ERROR] Pipeline failed on step 3: 404 page not found
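For reference, the 404 in that log (POST to 127.0.0.1:11434/chat/completions) suggests the OpenAI-compatible "/v1" prefix is missing from the base URL: Ollama serves chat completions at /v1/chat/completions. A minimal sketch; the LocalLLM import path assumes PandasAI v2, so treat it as an assumption and check your installed version:

```python
def ollama_api_base(host: str = "127.0.0.1", port: int = 11434) -> str:
    """Build the OpenAI-compatible base URL for a local Ollama server.

    Requests must go through the /v1 prefix; without it Ollama answers
    "404 page not found" for /chat/completions.
    """
    return f"http://{host}:{port}/v1"

# With PandasAI installed (v2 import path — an assumption):
# from pandasai.llm.local_llm import LocalLLM
# llm = LocalLLM(api_base=ollama_api_base(), model="llama3")

print(ollama_api_base())  # http://127.0.0.1:11434/v1
```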
@Deepakbhogle
@Deepakbhogle 24 days ago
Thanks, your videos are detailed and informative.
@ClimateDS
@ClimateDS 27 days ago
I wouldn't call this an agent; it's more like a pandas chatbot.
@anshulsingh8326
@anshulsingh8326 1 month ago
Hi, are there any steps you can suggest I learn first? I come from a game development background with C#, and I know a little bit of Python. I don't know about conda or anything else.
@AshishSingh-ri9rr
@AshishSingh-ri9rr 1 month ago
Hi, great video. I wanted to ask whether prompting is necessary to improve the answer accuracy of the agent. Also, how do you deal with huge databases? Thanks in advance.
@gemini22581
@gemini22581 1 month ago
Can you show how I can use RAG in this framework? I have a .csv file I would like to use for responses. Please show that.
@erolkuluslu3942
@erolkuluslu3942 1 month ago
What is the purpose of using a Conda environment for these projects? I have been following your tutorials for a while, and in your implementations you always create a Conda environment. Since I am new to Python, I am curious: is there any reason beyond isolation and dependency management?
@souravbarua3991
@souravbarua3991 1 month ago
It's not working all the time. It's handy but not good. On the other hand, LangChain dataframe agents work better than this.
@TirendazAI
@TirendazAI 1 month ago
If you are using a smaller model like the Llama-3 8B, sometimes you may need to try a few prompts to get a good response.
@souravbarua3991
@souravbarua3991 1 month ago
@@TirendazAI I am using the same model as shown in the video.
@erdemkoraysanl7547
@erdemkoraysanl7547 1 month ago
Hello, I wrote the same code and it works, but it is very slow. For example, when I ask it to fetch the first 4 rows, it responds in 2-3 minutes, yet for you it is very fast. Chatting normally with llama3 is very fast, and my database is fast too, but when I run this code it works very slowly.
@TirendazAI
@TirendazAI 1 month ago
The app answered quickly in the video because I used the cache, which lets PandasAI store the results of previous queries; my original queries took some time. Which graphics card do you use?
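The speed-up from caching can be illustrated with plain Python. This is only an analogy for what a query cache does, not PandasAI's actual implementation; the slow model call is simulated with a sleep:

```python
import functools
import time

@functools.lru_cache(maxsize=None)
def ask(query: str) -> str:
    """Stand-in for a slow LLM call; repeated queries are served from cache."""
    time.sleep(0.5)  # simulate model latency
    return f"answer to {query!r}"

t0 = time.perf_counter()
ask("What is the sales in Canada?")   # first call: pays the full latency
first = time.perf_counter() - t0

t0 = time.perf_counter()
ask("What is the sales in Canada?")   # repeat: served from the cache
second = time.perf_counter() - t0

print(f"first: {first:.2f}s, cached: {second:.4f}s")
```

The second call returns almost instantly because the result is looked up instead of recomputed, which is why cached demo queries in a video can look much faster than fresh ones.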
@erdemkoraysanl7547
@erdemkoraysanl7547 1 month ago
@@TirendazAI I use an AMD Radeon RX 580 graphics card. Actually, my goal is to use AI search on my website. Algorithms fail to give correct results most of the time. I really thought artificial intelligence would be very useful for thinking like a human, but it works very slowly.
@Alicornriderm
@Alicornriderm 1 month ago
Thanks for the tutorial! Is this actually running locally? If so, how did it download so quickly?
@TirendazAI
@TirendazAI 1 month ago
Yes, the model runs locally and for free. To download the model, you can use Ollama.
@muhaiminhading3477
@muhaiminhading3477 26 days ago
@@TirendazAI Hi, I have used Ollama with llama3 on this localhost port: 127.0.0.1:11434, but I am confused: how do I load the model with transformers so I can follow your steps?
@sdplusm3
@sdplusm3 1 month ago
Nice video. Thank you.
@TirendazAI
@TirendazAI 1 month ago
Glad you liked it!
@PrinBensonic
@PrinBensonic 1 month ago
@TirendazAI HELLO, Are you available to maximize your Udemy course's potential! Our tailored marketing strategies ensure greater visibility, increased enrollments, and enhanced student engagement. Ready to see your course top the charts? Let’s connect! @PBG TEAM
@user-ey7pb5re2i
@user-ey7pb5re2i 1 month ago
Why am I getting this? llm = ollama(model="llama3") → TypeError: 'module' object is not callable
@hashirjadoon9904
@hashirjadoon9904 1 month ago
The "O" in Ollama should be capitalized: Ollama(model="llama3"). Lowercase ollama refers to the module, which is not callable.
@TooyAshy-100
@TooyAshy-100 1 month ago
Thank you!
@TirendazAI
@TirendazAI 1 month ago
You are welcome
@HmzaY
@HmzaY 1 month ago
I have 64 GB RAM and 8 GB VRAM, and I want to run Llama 70B, but it doesn't fit. How can I run it on system RAM (the 64 GB) in Python? Can you make a video about that?
@TirendazAI
@TirendazAI 1 month ago
I also have 64 GB RAM and it worked for me. My system used about 58 GB RAM for llama-3:70B. I'll show the RAM I use if I make a video with llama-3:70B.
@suffympm1601
@suffympm1601 1 month ago
Thanks for your video, from Indonesia. But why does getting the result take so long on my laptop? Any idea? My spec is an Asus with a Ryzen 7 gen 5.
@TirendazAI
@TirendazAI 1 month ago
Hi, to get a quick response, it is important to have a powerful graphics card.
@dangya3481
@dangya3481 1 month ago
I used conda 23 + Python 3.10 and it worked, but with conda 24 + Python 3.12 it didn't.
@TirendazAI
@TirendazAI 1 month ago
This is due to the dependencies of the libraries. I recommend creating a separate virtual environment.
@lowkeylyesmith
@lowkeylyesmith 1 month ago
Is it possible to expand the limit per file? My CSV files are larger than 1 GB.
@TirendazAI
@TirendazAI 1 month ago
This is possible, but you need to use a larger model, such as llama-3:70b instead of llama-3:8b.
@thetanukigame3289
@thetanukigame3289 1 month ago
Thank you for the great video; it was really helpful for getting everything set up. If I may ask: I have a 4090 graphics card and I can see this maxing out my GPU usage, so CUDA should be working correctly. However, my prompts take anywhere between 20 seconds and 2 minutes to return, and after a few questions the chatbot stops responding at all and just stays processing. Is this normal?
@TirendazAI
@TirendazAI 1 month ago
Which large model do you use? If you're using llama-3:70B, I think that's normal.
@PuffNSnort
@PuffNSnort 1 month ago
Great video! What are the dataset size limitations? I get an answer 30% of the time and errors the rest of the time.
@TirendazAI
@TirendazAI 1 month ago
Large models like Llama-3:70b and GPT-4 respond better.
@sebastianarias9790
@sebastianarias9790 1 month ago
I'm not getting a response from the chat; it stays on "Generating the prompt". What could be the reason for that? Thanks!
@TirendazAI
@TirendazAI 1 month ago
Did you get any error? If yes, can you share it? I can say more once I see the error.
@sebastianarias9790
@sebastianarias9790 1 month ago
@@TirendazAI There's no error, my friend. It only takes a very long time to get the output. Any ideas?
@TirendazAI
@TirendazAI 1 month ago
Which large model do you use?
@sebastianarias9790
@sebastianarias9790 1 month ago
@@TirendazAI llama3 !
@focusedstudent464
@focusedstudent464 1 month ago
First
@PriyanshuSiddharth-ku7ev
@PriyanshuSiddharth-ku7ev 1 month ago
awesome
@ByteBop911
@ByteBop911 1 month ago
Is it possible to use agents with Ollama but without PandasAI?
@TirendazAI
@TirendazAI 1 month ago
You can use many agents with LangChain.
@stanTrX
@stanTrX 1 month ago
Excel?
@TirendazAI
@TirendazAI 1 month ago
👍👍👍
@suryadiyadi2240
@suryadiyadi2240 1 month ago
really great 😮
@TirendazAI
@TirendazAI 1 month ago
Thanks 🙏
@varshakrishnan3686
@varshakrishnan3686 1 month ago
I'm getting the error "No module named pandasai.llm.local_llm". Is there any way to solve it?
@TirendazAI
@TirendazAI 1 month ago
local_llm is a module in pandasai. Make sure pandasai is installed and the virtual environment is activated.
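A quick way to check whether the module is visible from the currently active environment; the "pandasai.llm.local_llm" path follows PandasAI v2's layout, which is an assumption worth verifying against your installed version:

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # The parent package (e.g. pandasai itself) is not installed.
        return False

# False here usually means pandasai isn't installed in the *activated* env:
#   pip install pandasai
print(module_available("pandasai.llm.local_llm"))
```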
@sareythakumar3062
@sareythakumar3062 1 month ago
👍👌
@mohamedmaf
@mohamedmaf 1 month ago
Thanks a lot, very helpful tutorial. I have one question: what are the specs of the machine you run it on (CPU, GPU, RAM, OS)?
@TirendazAI
@TirendazAI 1 month ago
If you use a local model with Ollama, the system requirements depend on the model you'll use. For example, you need at least 8 GB RAM for a 7B or 8B model.