
Build Anything with Llama 3 Agents, Here’s How 

David Ondrej
122K subscribers
151K views

Published: 22 Oct 2024

Comments: 225
@DavidOndrej 6 months ago
If you're serious about AI, and want to learn how to build Agents, join my community: www.skool.com/new-society
@spicer41282 6 months ago
I couldn't agree more with @milosjovanic803. I/we like your videos, but you might want to rethink your offer value at $77/month. (Ridiculous!)
@nikozeldenthuis281 6 months ago
I would like to join but I cannot pay using a credit card. In Holland we mostly use PayPal and iDEAL.
@Shagbagtv 5 days ago
@davidondreji With your program can I build my own private agents? And what's the kick-off price? I asked ChatGPT and it said $5.75 to have 7 agents with an overseer.
@EccleezyAvicii 6 months ago
So the first 60% of this video built up the expectation that we are gonna use offline llama3 py agents, but then at the very end you switch to using the llama3 available through Groq's API. Although you do get the agent working with llama3, it's a bit misleading, and it would have been better to say straight up: I haven't got Llama3 working offline*, but here is how I got it working through Groq's API. *Edit for clarity: llama3 working offline with crewai in the context of this tutorial. *Edit 2: Others have tested the offline Llama3 model (via Ollama) recently and state it is now working. At the time this video was recorded, CrewAI wasn't working properly with Llama3 offline, and that was the issue, which should now be fixed.
@ardagunay4699 6 months ago
I agree. Basically llama3 is not 100% open source, as far as I know.
@EccleezyAvicii 6 months ago
@ardagunay4699 I think it has more to do with crewai not being correctly configured with the new llama model. It's possible to use llama2 offline to do the same project, but if you repeat the same steps for llama3 there is a clear breakdown in the crewai step of the process.
@simasjanusas1766 6 months ago
Thanks, saved 12 min of my life.
@nordiclibertycaphuntersgui4072 6 months ago
You can have llama3 running locally; I used Ollama and just got the llama3 model, no problem.
@greengoblin9567 6 months ago
@EccleezyAvicii It literally just came out. He probably hasn't done it yet.
@regularComputerGuy 5 months ago
I also noticed that as of 2024-04-27 Llama3 (as a local LLM) does not work with CrewAI. However, you can replace Llama3 with Eric Hartford's excellent model "dolphin-llama3" and you get the expected result. Dolphin-llama3 has the additional advantage of being uncensored. Cheers! Keep up the good work!
@justjase1576 5 months ago
Nice tip, thanks! I will try that.
@Antoine42News 5 months ago
Oh thank you!!
@SunilSamson-w2l 2 months ago
You need to use it with OllamaLLM from LangChain. Then it works!
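A minimal sketch of the setup the two comments above describe: wiring a CrewAI agent to a locally served Ollama model (dolphin-llama3 here) through LangChain's OllamaLLM. It assumes a CrewAI version that still accepts a LangChain LLM object for the agent's llm parameter, plus `pip install crewai langchain-ollama` and a prior `ollama pull dolphin-llama3`; the roles, prompts, and model name are illustrative placeholders, not the video's exact code.

```python
# Sketch: CrewAI agent backed by a local Ollama model via LangChain's OllamaLLM.
from crewai import Agent, Task, Crew
from langchain_ollama import OllamaLLM

local_llm = OllamaLLM(
    model="dolphin-llama3",             # swap for "llama3" if that works in your setup
    base_url="http://localhost:11434",  # Ollama's default local endpoint
)

classifier = Agent(
    role="email classifier",
    goal="Classify an email as important, casual, or spam.",
    backstory="You triage incoming email and never invent new categories.",
    llm=local_llm,                      # hand the locally served model to the agent
    verbose=True,
)

classify = Task(
    description="Classify this email: 'Hi, just checking in to say hi.'",
    expected_output="One word: important, casual, or spam.",
    agent=classifier,
)

crew = Crew(agents=[classifier], tasks=[classify], verbose=True)
print(crew.kickoff())
```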
@zzzzzzz8473 6 months ago
More than anything I appreciate you showing when and where these processes don't work. The troubleshooting is a critical part of the process, and the overhype of these systems is most deceitful when the user actually tries to integrate them and runs into all sorts of issues that were hidden by showmen. Really excited for llama3 finetunes and more powerful agentic systems; thinking recursive self-debug and finetuning for the generation of the most understandable, debuggable code, with proofs and tests, could build a solid foundation.
@MeiliAnipa 4 months ago
I have no coding experience and I copied every bit of code you wrote, and my crew worked without Groq, which is really something! Thanks a lot for the tutorial.
@SteamVin 6 months ago
Thanks. Very helpful. Waiting for the 400B model.
@DavidOndrej 6 months ago
I'm praying that the 405B model is better than both Claude 3 Opus and GPT-4 Turbo. Because if it is, the world will no longer be the same.
@Instant_Nerf 6 months ago
Why use a Mac if you need a beefy PC? Just curious... I know Apple has their AI chips besides CPU and GPU, but a 3090 will smoke it out of the water.
@themaridv2000 6 months ago
@Instant_Nerf Not really. ARM is just different.
@LotsOfBologna2 6 months ago
@Instant_Nerf With any big model, you're not going to be able to make much use of any consumer GPU like the 3090. He can run the 8B parameter model with it, but the most sensible route is cloud computing for anything big, which he is doing with Groq. If you're going to run LLMs an absurd amount of the time, sure, get a rack of GPUs or a high-end server processor with large amounts of fast server memory. But for most people, this is not a good use of money.
@TheThetruthmaster1 6 months ago
Good luck finding a computer that isn't over $15k to do that.
@Simply-AI-Solutions 6 months ago
Bro, I just want to say thank you for making this content. It's always super informative in an easy-to-understand format. Out of the 50 channels (exaggeration, but there are a lot) I always find myself looking for your videos first. I've always had an interest in programming and hacking but didn't do much with machine learning. But now I'm a man obsessed, mainly because of how critical it is for normal civilians to learn how to create and train these things. I truly believe the future of humanity depends on it. If corporations kill us all building AGI and it escapes, especially Gemini, we are so screwed, because the odds are against us: that it finds any value in something (humans) that's killing the thing it exists on, or that it believes we won't attempt to shut it down. Or governments with runaway military AI because they couldn't wait until all the bugs were out before deploying it.
@enkor349 6 months ago
If you are not in the new-society, check it out. Based on what you wrote, I think you would enjoy it.
@geneanthony3421 5 months ago
I appreciate the clean, simple video. I didn't run into the same issues as you with Llama 3, though. Still, it gave me enough to get my head around this.
@ChristopherFoster-McBride 6 months ago
Worked like a charm - amazing, and Groq take a bow! Great videos as ever David... you the man!
@DavidOndrej 6 months ago
Thanks 👍
@photomatto 6 months ago
Did you get Llama 3 running with CrewAI without using Groq though? Or did I miss that in the video?
@irrelevantdata 6 months ago
Yea, if you still have to pay for some damn service... I'm having issues getting AutoGen working with llama3:8b-instruct-fp16 and the teachability module (runs at 42+ t/s though!). It almost never decides to flag things as important/worthy of remembering!... But I just started messing with that today. If you have a solution for using agents with only a local LLM, no API keys, please let us know!
TL;DR - it fails to understand it's asked to basically form a question about what it's supposed to store in the DB, so that it could be found that way, and the analyzer just keeps asking this same question every time. Probably need a better analyzer?.. hm
teachable_agent (to analyzer): Imagine that the user forgot this information in the TEXT. How would they ask you for this information? Include no other text in your response.
analyzer (to teachable_agent): What is the context or background information mentioned in the provided text that I should be aware of? Can you remind me what important details are missing from the passage and need to be recalled?
@PrinzMegahertz 6 months ago
For me, it works well with LM Studio.
@richardchinnis 6 months ago
@PrinzMegahertz I'm interested in your setup. I'm using it with LM Studio and getting the same result... the Executor kicks off, and the GPU ramps up, but nothing happens.
@supirman 6 months ago
@richardchinnis I tried with CrewAI and llama3:8b locally on my computer; 15 minutes later it was still stuck at "> Entering new CrewAgentExecutor chain...".
@mojo1989be 6 months ago
@PrinzMegahertz I have the same output, like in the video: agents going nuts, repeating. Any idea?
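For readers wondering how the "works with LM Studio" setup above is typically wired: LM Studio exposes an OpenAI-compatible local server (usually at http://localhost:1234/v1), so you can point a ChatOpenAI client at it instead of the real OpenAI API. A minimal sketch under those assumptions; the model name is whatever you loaded in LM Studio, and the role/goal text is a placeholder, not the video's code.

```python
# Sketch: CrewAI agent backed by LM Studio's OpenAI-compatible local server.
from crewai import Agent
from langchain_openai import ChatOpenAI

lmstudio_llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local server (default port 1234)
    api_key="lm-studio",                  # any non-empty string; not checked locally
    model="llama-3-8b-instruct",          # whatever model is loaded in LM Studio
    temperature=0.2,
)

responder = Agent(
    role="email responder",
    goal="Write a short, polite reply to the email you are given.",
    backstory="You answer email concisely based on its classification.",
    llm=lmstudio_llm,
    verbose=True,
)
```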
@MrJjuuaann11 6 months ago
So this started as a good tutorial, ran into some issues, and kinda just ended. I did manage to get Groq to work in the end. I do have Llama running in a Docker container; now I would like to combine both of those. Thanks for the tutorial.
@yansonkhuu7477 4 months ago
This was very, very informative and easy to follow! Great tutorial!
@FabiCriativa 5 months ago
I love your accent and the high-level video content. Thank you ❤
@solaawodiya7360 6 months ago
Awesome tutorial David. I think GPT-4 as a benchmark is quite old now. I wonder what agents with GPT-5 will look like.
@JanvanNiekerk0 2 months ago
Nothing like claiming one thing in a video title and then not delivering on it after watching + following for 12 minutes.
@mattmmilli8287 2 days ago
The true scourge of the current LLM scene 😂
@jayhu6075 6 months ago
I would like to know your thoughts on this: there is a limitation that restricts the potential of the Llama materials to enhance other large language models. Researchers and developers often want to compare or fine-tune different models to improve their performance or tailor them to specific tasks. However, due to the restrictions in the licensing terms, they cannot freely utilize the Llama materials to do so unless they specifically use Llama 3.
@SouhailEntertainment 3 months ago
Introduction to Building AI Agents - 00:00:00
Performance Comparison of Llama Models - 00:00:35
Getting Started: Required Tools and Downloads - 00:01:06
Downloading and Setting Up Llama Models - 00:01:36
Basic Chat with Llama Locally - 00:02:28
Setting Up VS Code and Writing Initial Code - 00:02:56
Installing Required Packages - 00:03:02
Defining and Importing Models and Packages - 00:04:04
Creating the Email Classifier Agent - 00:04:37
Creating the Email Responder Agent - 00:06:25
Defining Tasks for the Agents - 00:07:00
Defining and Running the Crew - 00:07:37
Initial Run and Troubleshooting - 00:08:13
Adding the Groq API for Better Performance - 00:09:26
Final Setup and Testing with the Groq API - 00:10:04
Conclusion and Call to Join the Community - 00:12:07
@peralser 6 months ago
Great work! Thanks for sharing with us.
@cycologist8615 6 months ago
Nice work David! I appreciate your effort to get this code out to us so quickly.
@justjase1576 5 months ago
Nice video and tutorial! Thanks a lot! This gave me the head start I was looking for. Subbed and will definitely keep watching.
@RABWA333 6 months ago
Happy you created a course, hope you continue. What's the difference from CrewAI?
@subashbaskota9948 6 months ago
Hahaha, thanks so much brother. I want to stay on the edge with the help of a friend such as you.
@Copa20777 6 months ago
Thanks David for the consistency and simplicity... we're still learning about agents. Thanks, and do touch on foundation agents by NVIDIA's Dr. Jim Fan.
@iocel 5 months ago
Mr. David, you speak so fast, like there's no tomorrow. But many thanks for this content ❤
@personone6881 6 months ago
GREAT VID! FINALLY THE INSTRUCT!! ...May I just ask a sidebar question re: your VS Code editor window behavior? PLEASE?!! How have you set your VS Code preferences so that the longer strings you've written for the classifier and responder agents (specifically, the strings stored in 'goal' and 'backstory'), when they reach the edge of the editor window, wrap to the next line down WITH THE NEXT WORD CONTINUING FROM THE CORRECT INDENTATION POSITION (directly beneath the declaration)? To demonstrate (|| = the edge of the editor window):
Your editor looks like this:
responder = Agent(
    goal = "qwer||
           tyabcdefghijklmnop",
)
My editor looks like this:
responder = Agent(
    goal = "qwer||
tyabcdefghijklmnop",
)
@keywordniches2207 6 months ago
The last part with Groq is using OpenAI's GPT-4 and not Llama, correct? Or do you still need tokens from OpenAI to get Llama working? Please explain.
@jinsugarbrown 2 months ago
Saw that too... he got a Groq API key and suddenly it was an OpenAI key. Is that what you also understood?
@luismoriguerra669 6 months ago
Good simple examples showing Groq capabilities.
@nordiclibertycaphuntersgui4072 6 months ago
@DavidOdrej Can you get any LLM model to understand what a magic square is, how to create it, and create a working example...? I bet not.
@nordiclibertycaphuntersgui4072 6 months ago
It would really be cool to see how you could get agents to understand it and create a working sample... it's weird, as it knows what it is but can't calculate something simple... please give it a try, or anyone else interested in AI.
@jumpersfilmedinvr 2 months ago
Big Dawg, 2 questions. If a local Llama 3 model runs slow, can I use an API to run it faster? If so, how much is it?
@jinsugarbrown 2 months ago
Awesome explanation, super clear.
@maalonszuman491 5 months ago
Nice video. Do you have a video where you create and use tools?
@MR_GREEN1337 6 months ago
That's some top-tier level shit, keep it up.
@KTPDAILY 6 months ago
BRO - YOU ARE NEXT LEVEL - YOUR FUTURE IS BIG TIME
@cucciolo182 6 months ago
Now we need to find a way to package everything so we can sell those personal assistants and install them on any website.
@archonjubael 6 months ago
Found you today with the Zuck news, and now I like watching you code. 👊
@MaiYoulian 5 months ago
Hi, may I know how you got the auto-suggest prompts when you were typing your goal? Mine doesn't seem to have them :')
@nidalidais9999 4 months ago
Amazing and excellent job. How do you make it serve many clients?
@Agesilas2 5 months ago
0:05 "even if you have a bad computer" 8:25 "look at the activity monitor" -> +20 GB in memory 😂
@lauther_27 5 months ago
I got confused, so you are using the 70B model, right? 🤔
@PaoloTshiyole 6 months ago
Please can you list the specs a PC needs to run this well? Nice video 🎉
@aritradaschowdhury5539 4 months ago
Thank you
@grupomexcomer 3 months ago
I was watching the video for a 100% offline solution and then you started using OpenAI. Totally got confused. Why?
@VictorSilva-hv7jg 5 months ago
Hey guys! I'm immersed in the study of AI agents and I'm curious: would it be viable to build an agent that prospects customers for freelance professionals? I envision a system capable of exploring Instagram in an automated way, identifying potential customers and even starting conversations to schedule sales meetings. Is it possible to develop such AI agents? If so, do you know of any videos on YouTube or any mentors that explain how to create AI agents to automatically prospect customers through Instagram? Is there something like this in development, or is this an idea for the future of AI?
@joebazooks 6 months ago
Would've been nice to see you successfully set up and use Llama 3 the first time without the use of an OpenAI key, etc.
@MaryRanzGellieAquino 3 months ago
Thanks for this! Currently trying to figure out what's missing with my Groq connection, since I'm encountering this error: openai.error.InvalidRequestError: The model `gpt-4` does not exist or you do not have access to it.
@lionelshaghlil1754 5 months ago
Thanks for sharing
@mahraneabid 2 months ago
In the last part of the video you switched to an OpenAI API key, which is not free, right?
@WaveOfDestiny 6 months ago
I want something to experiment with agents, get the hang of it, and see how much it can help me at work, without spending any money or having particular premium keys. I've watched a lot of videos but I still don't understand which agent builders allow free-to-use agents, even if it's on a daily token limit.
@avishsharma8852 5 months ago
The issue with these third parties is that there is a lot under the hood; you will end up with a lot of API calls, and the billing aspect has to be considered.
@cryptolife5756 6 months ago
If I bought your course, would you teach how to make a sophisticated AI chatbot?
@mrNashmann 6 months ago
I see David Berman replicated this today, David.
@furek5 6 months ago
Thanks for the video! One question: any clue why you set OPENAI_API_KEY to the Groq API key? I found it a bit confusing, especially when using OpenAI's API_KEY for authentication. Is OPENAI_API_KEY a placeholder in crewai for the Groq API key? I know it's a bit of a nonsense question, so what am I missing? Thanks!
@HakaiKaien 6 months ago
I'm a 3D artist and not a programmer by any stretch of the imagination. Is there a chance we could have CrewAI with a nice user interface and an installer?
@AndreiBas 4 months ago
Man, you're hilarious 😂😂😂 After 10 minutes of video, we can delete all of this and follow the official crewai instructions :) But anyway, thanks; the video has some gems, but probably not for those who are looking for an entry-level video.
@cloudworkconnects 4 months ago
Your API key still shows, BTW (though I bet you've already changed it) 👍
@chasisaac 4 months ago
If I install just the basic Ollama, can I install the 8B model over it?
@ramannegi3729 5 months ago
For creating agents in the future, do we also need to understand machine learning? Please reply.
@SantiYounger 6 months ago
Awesome, thanks. I wasn't sure if I understood correctly: does downloading the local model work with crewai, or only through the API?
@mayorc 6 months ago
I suggest trying with LM Studio, because with Ollama and crewai it seems to be problematic.
@SantiYounger 6 months ago
@mayorc Sounds great, thank you!
@GameboyMufti 3 months ago
Can we link this with Microsoft Teams?
@chrisder1814 2 months ago
Hello, I had an idea for automation with image editing software; could you tell me what you think about it?
@GodleeThunder 6 months ago
Hey brother, thank you very much for your channel. I'm a single father, and I've been following this AI stuff closely. I'm also in school and have so many coals in the fire it's not even funny, lol. Thank you for your posts, because when I get a decent computer I'll be able to quickly jump on board. I grew up very poor; my son will have a better life. I need to be on top of this. My next paycheck I'll be joining your community. Any advice on how to get my hands on a decent computer to run this stuff? What should it have? I don't want to miss the opportunity to provide for my son.
@yoganector6859 5 months ago
Llama 3: on a project test of 150 pages of project docs, quiz-style... how does this AI perform?
@avishsharma8852 5 months ago
Also, I fail to understand why eventually we have to use OpenAI as well!
@fa8ster 3 months ago
I don't know Python and that entire ecosystem, unfortunately. Can I do the same in Node.js?
@rainonmiavrai1251 6 months ago
Is it possible to create an agent that talks like a novel character? It has to talk in Italian, not English. I want to know if you explain that in your course.
@wilfredomartel7781 6 months ago
Looks promising!
@netric4011 5 months ago
Question: what hardware specs do you need to run this?
@DavidOndrej 5 months ago
To run the 8B model you need a $1k PC or better; to run the 70B model you need a $5k PC or better.
@emanuelec2704 6 months ago
When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version in LM Studio.
@joacosolbes9283 6 months ago
What are the bare minimum specs for my hardware to be able to run this?
@jasonhemphill8525 6 months ago
If you are using Groq, none of the processing is done locally, so basically any hardware would do.
@degenjannis9301 4 months ago
LOOK AT THAT SPEEEED 😀
@maxitube30 6 months ago
Why do you need an OpenAI key if you're using the free Llama?
@wenhanzhou5826 6 months ago
He stored the Groq API key in the OpenAI API key variable with os.environ["OPENAI_API_KEY"], so when llm = ChatOpenAI(model="some model") is called, it will automatically swap "some model" for the variable defined in os.environ["OPENAI_MODEL_NAME"], which he set to "llama3-70b-8192". Finally, he had to specify the URL from which the model is accessed, so he set os.environ["OPENAI_API_BASE"] to a Groq-related URL.
@maxitube30 6 months ago
@wenhanzhou5826 Thanks for the clarification, mate.
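A sketch of the routing @wenhanzhou5826 describes above: CrewAI's default agents build a ChatOpenAI client from these environment variables, so filling them with Groq's OpenAI-compatible endpoint makes the "OpenAI" calls go to Groq-hosted Llama 3 instead. The key is a placeholder; the base URL is Groq's publicly documented OpenAI-compatible endpoint, and the agent fields are illustrative rather than the video's exact code.

```python
# Sketch: route CrewAI's default ChatOpenAI client to Groq via environment variables.
import os
from crewai import Agent

os.environ["OPENAI_API_KEY"] = "gsk_your_groq_key_here"           # Groq key, despite the name
os.environ["OPENAI_MODEL_NAME"] = "llama3-70b-8192"               # Groq's Llama 3 70B model id
os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"  # OpenAI-compatible endpoint

# With no explicit llm=, the agent falls back to the environment-configured model.
classifier = Agent(
    role="email classifier",
    goal="Classify an email as important, casual, or spam.",
    backstory="You triage incoming email.",
    verbose=True,
)
```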
@mikehoops 5 months ago
So did this end up using Llama or OpenAI in the end...?
@yv4000 6 months ago
How did you download the 40 GB model? Do you have that much disk space?
@DavidOndrej 6 months ago
Yes... I think almost all modern computers have more than 40 GB of free storage. This isn't your RAM; this is how much space you need on your hard drive.
@yv4000 6 months ago
@DavidOndrej But if I'm using Groq, I don't need to download it, right? I can just run it using the API?
@VitusinX 6 months ago
Almost burned down my GPU :D But thanks for the tutorial.
@VitusinX 6 months ago
Of course, solved with Groq :D Enjoy this great tutorial, everyone.
@bensavage6389 6 months ago
What did you even show? I missed it 😂. You just kept saying it was broken.
@santiagovallejotoro3802 6 months ago
Can you make some better examples of these agents? Something that is really helpful. You always say that for reasons of time you do something basic, but it would be really fascinating if you spent more time doing something that has some realistic value. Thanks.
@johncolbourne1 6 months ago
TL;DR: Don't bother with this video if you need to run locally. He gets 9:38 in, can't get it working with a local Ollama model, so he just gives up and switches to a remote model. Really annoying if you're coding along with the video and then realise it's useless for your purposes. I hope his premium content is better than this, otherwise a bunch of people are getting taken for a ride.
@nordiclibertycaphuntersgui4072 6 months ago
I got it working with ollama run llama3.
@naturenurture84 6 months ago
What did you achieve?
@bensavage6389 6 months ago
A video pointing to his course.
@amaoalex 4 months ago
I got completely lost when you said: "I don't know what happened..."
@busesart 6 months ago
Ur a legend!
@rastinder 6 months ago
I have made state-of-the-art automation scripts for my work and I also added some stealth web-scraping methods. How can I train a Llama model to use my coding methods?
@akj3344 6 months ago
What are the system requirements for running this model (8B) locally?
@sadshed4585 6 months ago
I have run it with an NVIDIA 3070 at just a few tokens a second, at float16 I believe (not sure), using a recent branch of text-generation-webui. I was also able to run the 8B model in Colab for free using some code I got/came up with in the Hugging Face model card discussions; you can see it there. I would say a 3090 would run it faster though.
@ailxes 6 months ago
Does anyone know if you can run this on an iPad locally and upload documents in order to answer queries? For example, if you made an app for allergies on a food menu, would you be able to upload the ingredients of the menu into the LLM and have it RAG answers to things like "I have a gluten allergy, can I have the Caesar salad?"
@gavinknight8560 6 months ago
The easiest way would be to deploy all this stuff on a server or a home PC, expose an endpoint, then write an iPad app to upload docs and chat with them via your app.
@thefaramith8876 6 months ago
Anyone who is not ultra-rich will never pay 77 USD just to be in your community. It is simply insane. I suggest you take a different approach, because it won't work.
@jayhu6075 6 months ago
I appreciate the value you offer with your community, but I want to be honest about my perspective. The current membership fee of 77 USD is simply too high for many, including myself. I understand that there are costs associated with maintaining the community and providing value to the members, but I wonder if there is room for a more accessible membership fee, one that is feasible for more people and enables them to participate and expand their knowledge.
@moderngames8892 6 months ago
Yes. Reduce the price to 5 dollars.
@kbystryakov 6 months ago
We always have a choice: we can either stick our nose into other people's business and give unsolicited criticism, or we can start with ourselves, like earning more and not making ourselves look like a victim.
@wadetate1 5 months ago
$77 is stupid. It's just greedy and ridiculous when you can go elsewhere or ask an AI.
@cwykidd 5 months ago
I'd pay it to be in with this crew if I could. Maybe someday, but not today. Education is expensive. Life is rough, and it's even rougher when you're stupid. I appreciate your videos, sir.
@Bright-Great 6 months ago
Can I make it live for others to use?
@lovol2 6 months ago
10:00 Thanks for being real.
@Daniel-qr6sx 6 months ago
You are not doing this in enough detail; you go back and forth, which makes it hard to follow for a lot of people.
@viranchivedpathak4231 3 months ago
Xtra Like Button
@henson2k 6 months ago
What kind of computer can run llama3:70b locally?
@DavidOndrej 6 months ago
My computer
@henson2k 6 months ago
@DavidOndrej Share specs please
@Sagi56668 5 months ago
How can I build an AI agent to work in Canva?
@datadreamsit8514 2 days ago
Well, when you move from local to API mid-video, you make it look like a bad case of ADHD.
@yagoa 5 months ago
Ctrl+D on all OSes exits anything in the terminal.
@cheesiangleow4782 6 months ago
Will the Groq API adopt Command R+ anytime soon?
@NeilEvans-xq8ik 6 months ago
Is it possible to use agents to build an AI-powered question-answering system for PDF documents for academic research purposes? I'd like to build my own so I can avoid the costs of those currently available commercially.
@vickihenderson8468 5 months ago
I am looking to create a local AI tool, running on Windows, that will help me reword text and do spelling and grammar checks in UK British English.
@TheFlintStryker 6 months ago
PC recommendations for the 70B model?
@sadshed4585 6 months ago
Quantize it at 4-bit down to 1-bit so that you are effectively running a 13B-param model without much accuracy left, then buy a 4090 or maybe even an A100 from NVIDIA for your PC build. Honestly, it's better not to buy that much hardware; it's so many thousands of dollars.
@felipemenezes36able 6 months ago
More videos pls
@DavidOndrej 6 months ago
I'm uploading daily, brother.
@dashawk 4 months ago
Can we do this in Node.js?
@roro-mm7cc 6 months ago
I was going to join and then I saw it was $77!?!? That's mad.
@DavidOndrej 6 months ago
Soon it will be $97 ;)
@mojo1989be 6 months ago
So you didn't get it working locally, gg.