
AI Pioneer Shows The Power of AI AGENTS - "The Future Is Agentic" 

Matthew Berman
251K subscribers
355K views

Andrew Ng, founder of Google Brain and Coursera, discusses the power of AI agents and how to use them.
Join My Newsletter for Regular AI Updates 👇🏼
www.matthewberman.com
Need AI Consulting? ✅
forwardfuture.ai/
My Links 🔗
👉🏻 Subscribe: / @matthew_berman
👉🏻 Twitter: / matthewberman
👉🏻 Discord: / discord
👉🏻 Patreon: / matthewberman
Rent a GPU (MassedCompute) 🚀
bit.ly/matthew-berman-youtube
USE CODE "MatthewBerman" for 50% discount
Media/Sponsorship Inquiries 📈
bit.ly/44TC45V
Links:
HuggingGPT - • NEW HuggingGPT 🤗 - One...
ChatDev - • How To Install ChatDev...
Andrew Ng's Talk - • What's next for AI age...
Chapters:
0:00 - Andrew Ng Intro
1:09 - Sequoia
1:59 - Agents Talk
Disclosure:
I'm an investor in CrewAI

Science

Published: May 8, 2024

Comments: 498
@Chuck_Hooks · 1 month ago
Exponentially self-improving agents. I love how the era of incremental improvements over a period of years is suddenly over.
@andrewferguson6901 · 1 month ago
I'm expecting DeepMind to, at any point, just pop off with an AI that plays the game of making an AI.
@aoeu256 · 1 month ago
When did the information age end and the AI age begin, haha? I still think we need to figure out how to make self-replicating robots (that replicate themselves at half size each generation) by building them out of Lego-like blocks, and then have the blocks be cast from a mold that the robot itself makes. Once hardware (robots) improves, the capabilities of software can improve.
@wrOngplan3t · 1 month ago
@aoeu256 Oh come on now, you know how that'll end. Admit it, you've watched Futurama :D
@jonyfrany1319 · 1 month ago
Not sure if I love that.
@paulsaulpaul · 1 month ago
It may refine the quality of results, but it won't teach itself anything new or have any "ah hah!" moments like a human thinker. There will be an upper limit to any exponential growth due to eventual lack of entropy (there's a limit to how many ways a set of information can be organized). Spam in a can is a homogenous mixture of meat scraps left over from slaughtering pigs. It's the ground up form of the parts that humans don't want to see in a butcher's meat display. LLMs produce the spam from the pork chops of human creativity. These agents will produce a better looking can with better marketing speak on the label. Might have a nicer color and smell to it. But it's still spam that will never be displayed next to real cuts of meat. Despite how much the marketers want you to think it's as good as or superior to the real thing.
@e-vd · 1 month ago
I really like how you feature your sources in your videos. This "open source" journalism has real merit, and it separates authentic journalism from fake news. Keep it up! Thanks for sharing all this interesting info on AI and agents.
@8691669 · 1 month ago
Matthew, I've watched many of your videos, and I want to thank you for sharing so much knowledge and news. This latest one was exceptionally good. At times, I've been hesitant to use agents because they seemed too complex, and didn't work on my laptop when I tried. However, this video has convinced me that I've been wasting time by not diving deeper into it. Thanks again, and remember, you now have a friend in Madrid whenever you're around.
@stray2748 · 1 month ago
LLM AI + "self-dialogue" via reflection = "Agent". Multiple "Agents" together meet. User asks them to solve a problem. "Agents" all start collaborating with one another to generate a solution. So awesome!
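The "LLM + self-dialogue via reflection" loop described above can be sketched in a few lines. This is a minimal illustration only: `call_llm` is a hypothetical stub standing in for a real LLM API call (OpenAI, Ollama, etc.), and the prompt tags are made up for the example.

```python
# Sketch of a reflection loop: the model drafts, critiques its own draft,
# then revises, repeating for a few rounds. `call_llm` is a hypothetical
# stub; a real implementation would call an actual LLM API.

def call_llm(prompt: str) -> str:
    # Stub responses so the sketch runs without any API.
    if prompt.startswith("CRITIQUE:"):
        return "Add an example."
    if prompt.startswith("REVISE:"):
        return prompt.split("DRAFT:", 1)[1].strip() + " (revised with example)"
    return "Initial draft answer."

def reflect(task: str, rounds: int = 2) -> str:
    draft = call_llm(task)
    for _ in range(rounds):
        critique = call_llm(f"CRITIQUE: {task}\nDRAFT: {draft}")
        draft = call_llm(f"REVISE: per critique '{critique}'\nDRAFT: {draft}")
    return draft

print(reflect("Explain agents."))
```

Multi-agent collaboration is the same loop with the critique coming from a second model (or a second prompt persona) instead of the drafter itself.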
@ihbrzmkqushzavojtr72mw5pqf6 · 1 month ago
Is self-dialogue the same as Q*?
@stray2748 · 1 month ago
@ihbrzmkqushzavojtr72mw5pqf6 I think it's the linchpin they discovered to be a catalyst for AGI, albeit with self-dialogue + multimodality trained from the ground up in Q* (something ChatGPT did not have in its training). Transformers were built on mimicking the human neuron (Rosenblatt's perceptron); okay, now following human nature, let's train it from the ground up with multimodal data and self-dialogue (like humans possess).
@Korodarn · 1 month ago
@ihbrzmkqushzavojtr72mw5pqf6 Not exactly. Q* is pre-thought, before inference is complete. The difference is planning: if someone asks you a question like "how many words are in your response?", you can think about it and come to a conclusion, like saying "One". But if you don't have pre-thought, you're doing simple word prediction every time, and the only way to get a simple outcome is if something akin to key/value pairs passed into the LLM at some point gives it the idea to try that in one shot. Even if it has a chance to iterate, it'll probably never reach that response without forethought.
@enriquea.fonolla4495 · 28 days ago
Give it a couple more AI models, like world simulators, and a little bit of time... and then something similar to what we refer to as consciousness may emerge from all those interactions.
@defaultHandle1110 · 22 days ago
They’re coming for you Neo.
@janchiskitchen2720 · 1 month ago
The old saying comes to mind: think twice, say once. Perfectly applicable to AI, where the LLM checks its own answer before outputting it. Another excellent video.
@BTFranklin · 1 month ago
I really appreciate your rational and well-considered insights on these topics, particularly your focus on follow-on implications. I follow several AI News creators, and your voice stands out in that specific respect.
@samhiatt · 1 month ago
Matthew is really good, isn't he? I want to know how he's able to keep up with all the news while also producing videos so regularly.
@carlkim2577 · 1 month ago
This is one of the best vids you've made. Good commentary along with the presentation!
@richardgordon · 1 month ago
Your commentary "dumbing things down" for people like me was very helpful in understanding all this stuff. Good video!
@notclagnew · 15 days ago
Glad I saw this, your additional explanations were incredibly helpful and woven into the main talk in a non-intrusive way. Subscribed.
@JohnSmith762A11B · 1 month ago
Excellent video. Helped clear away a lot of fog and hype to reveal the amazing capabilities even relatively simple agentic workflows can provide.👍
@jonatasdp · 1 month ago
Very good Matthew! Thanks for sharing. I built my simple agent and I see it improving a lot after a few interactions.
@user-en6ot9ju7f · 1 month ago
Thank you so much for all your videos. You are gold. Please never stop!
@narindermahil6670 · 1 month ago
I appreciate the way you explained every step, very informative. Great video.
@animalrave7167 · 15 days ago
Love your breakdowns! Adding context and background info into the mix. Very useful.
@ronald2327 · 1 month ago
All of your videos are very informative and I like that you keep the coding bugs in rather than skipping ahead, and you demonstrate solving those issues as you go. I’ve been experimenting with ollama, LM studio, and CrewAI, with some really cool results. I’ve come to realize I’m going to need a much more expensive PC. 😂
@youri655 · 1 month ago
Great point about combining Groq's inference speed with agents!
@weishenmejames · 15 days ago
Nice share with valuable commentary throughout, you've got yourself a new subscriber!
@AINEET · 1 month ago
You upload on the least expected random times of the day and I'm all for it
@matthew_berman · 1 month ago
LOL. Keeping you on your toes!
@holdthetruthhostage · 1 month ago
Haha 😂
@NateMina · 1 month ago
You are probably my number one source for bleeding-edge info and explanations on AI and AI agents. Keep it up, and great job Matthew! You were one of the key influences in my learning AI, and basically in learning Python for that matter. Now that anyone can use AI as a free personal tutor, anyone can learn anything, way better than being in a classroom.
@JacquesvanWyk · 19 days ago
I have been thinking about agents for months without knowing what I was thinking of, until I found videos like yours on CrewAI and swarm agents, and my mind is blown. I am all in for this and trying to learn as much as I can, because this is for sure the future. Thanks for all your uploads.
@mayagayam · 1 month ago
Super informative, thank you so much! ❤
@timh8490 · 1 month ago
Wow, I've been a big believer in agentic workflows since I saw your first video on ChatDev and later on AutoGen. It's really validating to hear someone of this stature thinking along the same lines.
@NasrinHashemian · 21 days ago
Matthew, your videos are really informative. Many thanks for sharing such knowledge and updates. This latest one was exceptionally good.
@virtualalias · 1 month ago
I like the idea of replacing a single 120b (for instance) with a cluster of intelligently chosen 7b fine-tuned models if for no other reason than the hardware limitations lift drastically. With a competently configured "swarm," you could run one or two 7b sized models in parallel, adversarially, or cooperatively, each one contributing to a singular task/workspace/etc. They could even be guided by a master/conductor AI tuned for orchestrating its swarm.
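The "conductor + swarm" idea above can be sketched as a small orchestrator that fans a task out to several specialized models in parallel and merges their contributions. Everything here is a hypothetical placeholder: the specialist roles and their lambda stubs stand in for fine-tuned 7B-class models.

```python
# Sketch of a conductor orchestrating a swarm of small specialist models.
# Each "specialist" is a stub standing in for a fine-tuned 7B model.
from concurrent.futures import ThreadPoolExecutor

SPECIALISTS = {
    "coder":  lambda task: f"[code for: {task}]",
    "tester": lambda task: f"[tests for: {task}]",
    "writer": lambda task: f"[docs for: {task}]",
}

def conductor(task: str) -> str:
    # Run each specialist in parallel, then merge their outputs
    # into a single workspace result.
    with ThreadPoolExecutor() as pool:
        parts = list(pool.map(lambda fn: fn(task), SPECIALISTS.values()))
    return "\n".join(parts)

print(conductor("sort a list"))
```

A cooperative or adversarial swarm would differ only in whether the conductor feeds each specialist's output back to the others before merging.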
@kliersheed · 1 month ago
Ahem, Skynet. :D But I agree.
@d.d.z. · 1 month ago
Thank you. Great analysis
@pengouin · 1 month ago
Excellent video, my friend. You are my favorite channel; keep up the good work! ❤
@saadatkhan9583 · 1 month ago
Matthew, everything that Prof. Ng referenced, you have already covered and analyzed. Much credit to you.
@AC-go1tp · 1 month ago
Great video and valuable clarifications of AN's insights. It would also be great if you could make a video that captures all these concepts using CrewAI and/or AutoGen. Thank you Matt!
@DragonWolf2099 · 1 month ago
I'm glad we all seem to be on the same page, but I think it would help to use a different word when thinking about the implementation of "Agents". A breakthrough for me was replacing the word "Agent" with "frame of mind", or something along those lines, when prompting an "Agent" for a task in a series of steps where the "frame of mind" changes at each step until the task is complete. I'm not trying to say anything different from what has been said thus far, only to help us humans see that this is how we think about a task. As humans we change our "frame of mind" so fast that we often don't realize we are doing it while working on a task. For an LLM, your "frame of mind" is a new prompt on the same or a different LLM. Thanks Matthew Berman, you get all the credit for getting me into this LLM rabbit hole. I'm also working on an LLM project I hope to share soon. 😎🤯😅
@kliersheed · 1 month ago
Agens = actor = a compartmentalized entity doing something. I think the word fits perfectly. It's like transistors simulating our neurons, and the agent simulating the individual compartments in our brain. A "frame of mind" would be a fitting expression for the supervising AI keeping the agents in check and organizing them to solve the perceived problem; it's like the "me", as in consciousness, ruling the processes in the brain. A frame always has to contain something, and IMO it's hard to say what an agent contains, since it's already really specialized and works WITHIN a frame (rather than being one). Even if you speak of frames as relation systems, the agent is WITHIN one, not one itself. Just my thoughts on the terms ^^
@seanhynes9516 · 9 days ago
Awesome, thanks for the great perspectives!
@michaelmcwhirter · 1 month ago
Great video! Thank you for the insights 🔥
@TestMyHomeChannel · 1 month ago
I loved this video. Your selection was great, and your comments were right to the point and very useful. I like that you test things yourself and provide links to topics discussed previously.
@danshd.9316 · 24 days ago
Thank you, just finished. It's great that you explained it for those who may not be as techie as Ng expected.
@lLvupKitchen · 1 month ago
I saw the original video, but the commentary adds a lot. thx
@CM-zl2jw · 1 month ago
Thank you Matt. I appreciate your explanations, insights and exploration. This is a journey.
@rafaelvesga860 · 26 days ago
Your input is quite valuable. Thanks!
@luciengrondin5802 · 1 month ago
The iterating part of the process seems more important to me than the "agentic" one. If we compare current LLMs to DeepMind's AlphaZero, it's clear that LLMs currently only do the equivalent of AlphaZero's evaluation function. They don't do the equivalent of the Monte Carlo tree search. That's what reasoning needs: the ability to explore the tree of possibilities, with the NN used to guide that exploration.
@joelashworth7463 · 1 month ago
What gets interesting about agentic systems is: what if certain agents have access to different "experiences", meaning their context window starts with "hidden" priorities, objectives, and examples of what the final state should look like? Since context windows are limited right now, this is an exciting area. The other part of agentic vs. iterative is that since a model isn't really "thinking", it needs some form of stimulus to disrupt the previous answer. So you either use self-reflection or an external critic; if the external critic uses a different model (fine-tune or LoRA) and is given a different objective, you should be able to "stimulate" the model into giving radically different end products.
@jakeparker918 · 1 month ago
Awesome video. Yeah, this is why I voted for speed in the poll you did; this is what I was talking about.
@federico-bi2w · 1 month ago
OK, I can see it's right, having done a lot of "by hand" iterations. I mean, I am not using agents yet, but think about GPT: you ask something, you test, you adjust, you give it back, and the result is better. And in this process, if you ask questions on the same topic but from different aspects, it becomes better. So an agent is basically doing this by itself! Great video! Thank you :D
@existentialquest1509 · 1 month ago
I totally agree. I was trying to make this case for years, but I guess technology has now evolved to the point where we can see this as a reality.
@DougFinke · 1 month ago
Good stuff, really like the commentary side by side.
@ManolisPolychronides · 29 days ago
Really cutting edge! Thanks.
@jets115 · 1 month ago
Imagine an extensive neural network, except instead of weights/biases in the nodes, each node is an agent.
@Tayo39 · 1 month ago
Like the internet, where every computer is a node agent/expert?
@jets115 · 1 month ago
@NewAccount_WhoDis Don't think of it as a literal NN; it's more like expanding the original prompt. If you can ask one researcher, imagine asking 100, with small variations in the prompts to each! :)
@icns01 · 29 days ago
I did in fact like Andrew's talk, but I liked it even more with your moderation, which was extremely helpful and made a big difference in my understanding of the talk. Just subbed, thank you very much! Off to take a look at your HuggingGPT video 🏃‍♂
@marshallodom1388 · 1 month ago
I convinced my chat AI that our new mutually conceived idea of "think before you speak" is extremely helpful for both of us.
@zaurenstoates7306 · 1 month ago
Decentralized, highly specialized agents running on lower-parameter-count models (7B-70B) working together to accomplish tasks is where I think the opportunity lies. I was mining ETH back when it was PoW with my gaming rig to earn some money on the side. I did the calculations once, and the entire ETH compute available was a couple hundred exaflops. With more and more devices being manufactured for AI computation (phones, GPUs, etc.), the available compute will only increase.
@bobharris5093 · 24 days ago
this is absolutely fascinating.
@RaitisPetrovs-nb9kz · 1 month ago
I think the real breakthrough will come when we have user-friendly UI and agents based on computer vision, allowing them to be trained on existing software from the user's perspective. For example, I could train an AI agent on how to edit pictures or videos, or how to use a management application, etc. One approach could be to develop a dedicated OS for AI agents, but that would require all the apps to be rewritten to work with the AI agent as a priority. However, I'm not sure if that's feasible, as people may not adopt such a system rapidly. The fastest way forward might be to let the AI agent perform the exact task workflows that I would perform from the UI. This approach would enable the AI to work with existing software without requiring significant changes to the applications themselves.
@ondrazposukie · 24 days ago
Amazingly inspiring and informative video.
@TrasThienTien · 24 days ago
Your input is quite valuable.
@SuperMemoVideo · 18 days ago
As I come from neuroscience, I insist this must be the right track. The brain also uses "agents", which are more likely to be called "concepts" or "concept maps". These are specialized portions of the network doing simple jobs, such as recognizing a face, or recognizing the face of a specific person. Tiny cost per concept, huge power of intellect when working in concert and improved dynamically.
@darwinboor1300 · 1 month ago
Nice review of the field of agents you have built up in your videos over the past few months. Next, build a team of agents to build an AI that builds, refines, optimizes, and validates agents and agent teams for various tasks. Then repeat the process.
@elon-69-musk · 1 month ago
awesome analysis
@dhruvbaliyan6470 · 1 month ago
I realized this over a month ago and was thinking of creating a virtual environment where multiple agents, each fine-tuned for a specific use case, work together. So my brain is as intelligent as this person's.
@mintakan003 · 1 month ago
Andrew Ng is actually one of the more conservative AI folks. So when he's enthusiastic about something, he has a pretty good basis for it. He's very practical. As for this video, good point on Groq: we need a revolution in inference hardware. Another point to consider is the criterion specifying when something is "good" or "bad" during iterative refinement. I suspect the quality of agentic workflows will also depend on the quality of this specification, as with all optimization algorithms.
@davedave2941 · 1 month ago
Very interesting. On coding and workflow: having worked with coders with Asperger's, in order to communicate we moved to a very simple process of stating a task and explaining it to the coder through subject-verb, subject-verb, and so on. It smoothly flattened communication, and thus the task, into coding workflows.
@CharlesVanNoland · 1 month ago
As long as we're relying on backpropagation to fit a network to pre-designated inputs/outputs, we're not going to have the sort of AI that will change the world overnight. The future of machine intelligence is definitely agentic, but we're not going to have robotic agents cleaning our house, cooking our food, fixing our house, constructing buildings, etc... unless we have an online learning algorithm that can run on portable hardware. Backpropagation, gradient descent, automatic differentiation, and the like, isn't how we're going to get there. We need a more brain-like algorithm. Throwing gobs and gobs of compute at backprop training progressively larger networks isn't how we're going to get where we're going. It's like everyone saw that backprop can do some cool stuff and then totally forgot about brains being the only example of what we're actually trying to achieve. They're totally ignoring that brains abstract and learn without any backpropagation. Backprop is the expensive brute force way to make a computer "learn". I feel like we're living in a Wright Brothers age right now where everyone believes that the internal combustion powered vehicle is the only way humans will ever move around the earth, except it's backpropagation that everyone has resigned to being the only way we'll ever make computers learn, when there's no living sentient creatures that even rely on backpropagation to exhibit vastly more complex behaviors than what we can manage with it. A honeybee only has one million neurons, and in spite of ChatGPT being, ostensibly, one trillion parameters, all it can do is generate text. We don't even know how to make a trillion parameter network that can behave with the complexity of an insect. 
That should be a huge big fat hint to anyone actually paying attention that backprop is going to end up looking very stupid by comparison to whatever does actually end up being used to control thinking machines - and the people who are fully invested in (and defending) backprop are most certainly going to be the last ones who figure out the last piece of the puzzle. When you have people like Yann LeCunn pursuing things like I-JEPA, and Geoffrey Hinton putting out whitepapers for algorithms like Forward-Forward, and Carmack saying things like "I wouldn't bother with an algorithm that can't do online learning at ~30hz", that should be a clue to everyone dreaming that backprop will get us where we're going that they're on the wrong track.
@sup3a · 1 month ago
Maybe. Though it's fun to read what people said when the Wright brothers and others tried to crack flying: "this is not how birds fly", "this is inefficient", etc. We "brute forced" flying by just blasting a ton of energy at the problem. Maybe we can do the same with intelligence.
@bilderzucht · 1 month ago
Learning within a single individual brain may happen without any backpropagation. But couldn't the whole evolutionary process, running through billions of brains and arriving at a setup with different brain regions, be seen as some sort of backpropagation?
@vicipi4907 · 1 month ago
I think the idea is to get it to an advanced enough stage that it is competent and reliable, so much so that it expedites research into something that looks more like the human brain's process as a replacement. We might even get it to a point where it self-improves; there is no reason to think it won't find a different approach that doesn't involve backpropagation. Either way, we can't deny it has great potential to make AI advancement significantly faster.
@colmxbyrne · 26 days ago
Progress is rarely linear, and innovation follows a line of optimistic use, not the end game. That's why we had the "stupid" internal combustion engine for over 100 years, melting our planet 😢
@Mattje8 · 23 days ago
This assumes the goal of AI is to mimic a brain. It probably isn’t, mostly because it (probably) can’t, at least using existing compute approaches and current physics. If consciousness involves quantum effects as Penrose puts forward, current physics isn’t there yet. Or maybe it’s neither quantum nor algorithmic but involves interactions we can’t properly categorise today, which may or may not be deterministic. All of which is to say that I basically agree with you that all of the current approaches are building fantastic tools, but certainly nothing approaching sentience.
@greatworksalliance6042 · 1 month ago
I'm considering delving into this space and am curious about your preference, @matthew_berman, between AutoGen, CrewAI, and whatever else is most comparable in the current market. What are your current rankings, and what are the optimal use cases for each? Might make for a good upcoming video?
@samfurlong4050 · 1 month ago
Fantastic breakdown
@d_b_ · 1 month ago
20:00 - such a good point!
@fernandodiaz8231 · 24 days ago
Your explanation after each pause was useful.
@evanoslick4228 · 1 month ago
It makes sense to use agents. They are parallelized and can be specifically trained where needed.
@agilejro · 26 days ago
Amazing. Multiple agents debating... exciting!
@rupertllavore1731 · 1 month ago
@matthew_berman, what do you recommend I pick to have the most synergistic value as I prepare for the near future? I'm already using ChatGPT Plus and Perplexity Pro, but because of this video I might need to drop one so I can add AgentGPT. So what do you recommend: Perplexity Pro + AgentGPT, or ChatGPT Plus + AgentGPT? Your advice would truly be appreciated.
@u2b83 · 12 days ago
I've long suspected that iteration is the key to spectacular results; it's like an ODE solver iterating on a differential equation until it stumbles into a basin of attraction. You could probably do "agents" with just one GPT and loop through different roles. Then again, maybe multiple agents are a crutch for small context windows, lol. However, keep in mind that GPT-4 already gives you an iterative solution by running the model as many times as there are tokens.
@mykdoingthings · 12 days ago
GPT-3.5's cognitive performance going from 48% to 95%+ just by changing how we interact with the same exact model is WILD! Are we learning that "teamwork makes the dream work" is true even for AI? I wonder what other common human sayings will cause the next architectural breakthrough in the field 🤔 Thank you Matthew for this walkthrough; it's the first time I've learned about agentic workflows. Andrew Ng is amazing, but you made it even more accessible 🙏
@EliyahuGreitzer · 28 days ago
Thanks!
@johnh3ss · 1 month ago
What gets really interesting is that you could hook agentic workflows into an iterative distillation pipeline: 1) create a bunch of tasks to accomplish; 2) use an agentic workflow to accomplish the tasks at a competence level way above what your model can normally do with one-shot inference; 3) feed that as training data to either fine-tune a model or, if you have the compute, train a model from scratch; 4) repeat from step 2 with the new model. In theory you could build a training workflow that endlessly improves itself.
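The four-step distillation loop above can be sketched as a toy program. This is only an illustration of the control flow: `agentic_solve` and the dict-based "model" are hypothetical stand-ins, with "fine-tuning" stubbed as memorizing the (task, answer) pairs from the previous generation.

```python
# Toy sketch of the iterative distillation loop: solve tasks with an
# agentic workflow, use the outputs as training data, repeat with the
# "trained" model. Training is stubbed as memorization.

def agentic_solve(model: dict, task: str) -> str:
    # The agentic workflow beats one-shot inference: use a memorized
    # answer if the "model" has one, else draft and refine.
    answer = model.get(task, f"draft({task})")
    return answer if answer.startswith("good:") else f"good: refined {task}"

def distill(tasks: list[str], generations: int = 3) -> dict:
    model: dict = {}                        # generation-0 "weights"
    for _ in range(generations):
        data = {t: agentic_solve(model, t) for t in tasks}
        model = data                        # "fine-tune" on the new data
    return model

print(distill(["task-a", "task-b"]))
```

In a real pipeline, step 3 would be an actual fine-tuning run, and the loop's fixed point is wherever the agentic workflow stops improving on the previous generation's outputs.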
@autohmae · 1 month ago
Let's also remember that open-source tools were already doing this over a year ago, but they often got stuck in loops. I'm really interested in revisiting them.
@gotoHuman · 29 days ago
Or don't start the pipeline with a bunch of tasks; rather, let it be triggered from the outside when a task appears, e.g. in the form of a customer support ticket.
@agenticmark · 1 month ago
Something you guys never talk about: the INSANE cost of building and running these agents. It limits developers just as much as compute limits AI companies. The reason agentic systems work is that they remove the context problem. LLMs get off track and confused easily, but if you open multiple tabs and keep each copy of the LLM "focused", it gets better results. So when you do the same with agents, each agent outperforms a single agent that has to juggle all the context. We get better results with GPT-3.5 using this method than you would get in a browser with GPT-4. Basically, you are "narrowing" the expertise of the model. And you can select multiple models and have them responsible for different things. Think Mixtral, but instead of a gating model, the agent code handles the gating.
@DaveEtchells · 1 month ago
I’m really intrigued by your multi-tab workflow, it sounds super powerful, but I’m not sure how it works in practice. Do you have the different tabs working on different sub-tasks or performing different roles (kind of a manual agentic workflow, but with human oversight of each of the zero-shot workers), or are they working in parallel on the same task, or … ? IANAP, but I need to have ChatGPT (my current platform, or it could be Claude or whatever) do some fairly complex tasks like parsing web pages and PDFs to navigate a very large dataset and use reasoning to identify significantly-relevant data, download and assemble it into a knowledge database that I’ll then want to use as test input for another AI system. Ideally I’d use one of the no-code/low-code agent dev tools to automate the whole thing but as I said IANAP, and just multi-tabbing it could get me a long way there. It sounds like whatever you’re doing is exactly what I need to - and likely a boatload of others as well: I do wish someone would do a video on it. Meanwhile, would you be willing to share a brief description of an example use case and what you’d have the various tabs doing for it? (I hope @matthew_berman sees this and makes a vid on the topic: Your comment is possibly the most important I’ve ever encountered on YT, at least in terms of what it could do for my work and personal life.) Thanks for the note!
@japneetsingh5015 · 1 month ago
You don't always need state-of-the-art models like GPT, Gemini, Claude, etc.; many open-source 7B models work just as well for most companies.
@DefaultFlame · 1 month ago
@japneetsingh5015 Yeah: Llama, Mistral, Mixtral, the list goes on. If you want something even more lightweight than 7B, StableLM Zephyr is a 3B that is surprisingly capable. Orca Mini is good too and comes in 3B, 7B, 13B, and 70B versions, so you can pick whichever fits your hardware.
@user-bd8jb7ln5g · 1 month ago
What you're saying is: attention is all you need 😁 I do agree that mixing goals will confuse models, as it would people. People, however, have already learned processes to compartmentalize tasks. We might have to teach agents to do that, apart from constructing them to minimize this confusion.
@DefaultFlame · 1 month ago
@user-bd8jb7ln5g The whole point of multiple agents with different "jobs", personalities, or even different models powering them, is that we can cheat. We don't **need** to teach a single agent or model those learned processes; we can just connect several, each taking on the role of a different part of a single functional brain.
@christiandarkin · a month ago
great breakdown as always. I'm a bit scared to play with agents until I can do so on a local llm. I'm afraid the costs will run away with themselves if I do an ambitious project.
@stevencord292 · a month ago
This makes total sense
@NOYFB982 · a month ago
With a limited context window, this hits an asymptotic wall very quickly. Keep in mind, I'm not saying the approach is not a big improvement; it is. However, my extensive experience is that it can't go nearly far enough. LLMs are still not capable of fully functional, high-performing work; they can still only do the basics (or high-level information recall). Perhaps with a much larger context window this would actually be useful.
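One common mitigation for that context-window wall is a sliding window that drops the oldest turns once a rough token budget is exceeded. A toy sketch, with word count standing in for real tokenization:

```python
def trim_history(messages, budget_words=1000):
    """Keep the most recent messages whose combined word count fits the budget."""
    kept, total = [], 0
    for msg in reversed(messages):  # newest first
        words = len(msg.split())
        if total + words > budget_words:
            break  # everything older than this is dropped
        kept.append(msg)
        total += words
    return list(reversed(kept))  # restore chronological order
```

This is lossy, which is exactly the asymptote the comment describes: past a certain task size, something has to be forgotten or summarized.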
@baumulrich · a month ago
Whether we know it or not, that is how most of us work: we evaluate the prompt, then we do a first pass, then we re-evaluate, then we edit, then we do more, re-evaluate, check against the prompt, edit, do more work, etc.
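That draft / re-evaluate / edit cycle is exactly what a reflection loop automates. A minimal sketch, with a stubbed `call_llm` standing in for the real model:

```python
def call_llm(prompt: str) -> str:
    return prompt.upper()  # stub; a real call would return generated text

def reflect(task: str, rounds: int = 2) -> str:
    """Draft once, then alternate critique and revision for a few rounds."""
    draft = call_llm(f"Draft: {task}")
    for _ in range(rounds):
        critique = call_llm(f"Critique against the task '{task}': {draft}")
        draft = call_llm(f"Revise using this critique: {critique}\nDraft: {draft}")
    return draft
```

With a real model behind `call_llm`, each round is one "check against the prompt, edit" pass from the comment above.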
@konstantinlozev2272 · a month ago
If you spend a few exchanges brainstorming different approaches with GPT-4 first, and only then give it the task, it is superb. I can see a pair of agents doing that brainstorming in the future instead.
@StuartJ · a month ago
Maybe this is what Grok 1.5 is doing behind the scenes to get a better score than GPT-4.
@DefaultFlame · a month ago
I've been thinking that this is the future for a while now, partially from my own experience and experiments with what you can get a model to do by prompting it with its own output and having it reflect on it, ever since I got access to GPT-3, and partially thanks to everything I've learned about agents from you, Matthew. (I have spent an embarrassing amount of money fiddling with AI and figuring out its limits, considering that I'm just an interested layman.)
@peterpetrov6522 · a month ago
The future will be agentic. Yes the future will be bananas. Well said!
@mrpro7737 · a month ago
This is really good.
@Lukas-ye4wz · a month ago
Did you know that this is actually how our mind/brain works as well? We have different parts (physical and psychological) that fulfill different roles. That is why we can experience inner conflict: one part of us wants this, another part wants that. IFS teaches about this.
@YorkyPoo_UAV · a month ago
I just started learning how to set up AI last month, but this is what I thought Multi-Agents or a Crew were.
@Nifty-Stuff · 29 days ago
Matt, this is a great video on the use of AI agents... but I'm left wondering: why hasn't anybody developed a system/app that takes the APIs of the top LLMs, creates an agent for each, and then has these agents all work together to brainstorm, debate, review, and solve problems? I often get 4 different answers from 4 LLMs, so why not have them all set up as agents "in one room" working together to come up with the "best" solution? I can't find anybody that's tried this... why not? Wouldn't having the "top minds" (LLMs) working together produce better results?
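A bare-bones version of that "top minds in one room" idea: fan the question out to several backends, then pick a winner. Everything below is stubbed, and the backend names are placeholders for real API clients; the "judge" here is a trivial heuristic, where a real system would use another LLM call to rank the candidates:

```python
# Hypothetical backends; each lambda stands in for one provider's API client.
BACKENDS = {
    "model_a": lambda q: f"A says: {q} -> 4",
    "model_b": lambda q: f"B says: {q} -> 4",
    "model_c": lambda q: f"C, with more detail, says: {q} -> 5",
}

def panel(question: str) -> dict:
    """Ask every backend, then have a judge pick one answer."""
    answers = {name: fn(question) for name, fn in BACKENDS.items()}
    # Toy judge: longest answer wins. Replace with an LLM ranking call in practice.
    winner = max(answers, key=lambda n: len(answers[n]))
    return {"answers": answers, "winner": winner}
```

This is roughly what multi-agent debate setups do; the hard part is the judging step, not the plumbing.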
@chrispteemagician · a month ago
With the way things are going and the power of agentic AI, I'd suggest that Deep Thought would arrive at the number 42 at least three minutes quicker, or within three minutes. There's no way to tell but I reckon Douglas would love this.
@rakoczipiroska5632 · a month ago
Thank you for your great work. If things keep going like this, maybe startup accelerators will no longer require a professional programmer among the founders? Being a hobbyist programmer alone won't be enough, but a professional prompt engineer might be.
@anonymeforliberty4387 · 25 days ago
I bet you are still gonna need a prompt engineer and programmer, but alone he will do the work of a team.
@paulblart5358 · a month ago
It's a very good strategy instead of training single long duration models. I do wonder about security, but the technology is very fascinating.
@MrJawnawthin · a month ago
This is definitely the future. It's the same as using ChatGPT to brainstorm, Claude to write the first draft, and Gemini to critique it; that's my current workflow. Once the workflow becomes repetitive, it's easy to turn into an automated pipeline.
@gregkendall3559 · 23 days ago
You can actually tell a GPT to break itself into multiple separate personalities. Give them each a goal: one can write code, then the next reviews it, and have the one chatbot work it all without resorting to a convoluted separate-agents system. Tell them to talk to each other to get a task done. Name them, e.g. Bob and Joe, and tell it to preface their discussion with their names as each one talks. I tried it and the results were very promising.
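For anyone wanting to try that single-chat version from code, a sketch: one prompt that names the personas and asks for name-prefixed turns, plus a tiny parser to split the transcript back out. The persona names and prompt wording are just illustrative:

```python
PERSONAS = ["Bob", "Joe"]

def build_prompt(task: str) -> str:
    """One prompt asking the model to role-play both personas in turn."""
    names = " and ".join(PERSONAS)
    return (
        f"Split yourself into {names}. Bob writes code, Joe reviews it. "
        f"Prefix every line with the speaker's name and a colon. Task: {task}"
    )

def split_transcript(text: str) -> dict:
    """Group the model's output lines by which persona spoke them."""
    turns = {name: [] for name in PERSONAS}
    for line in text.splitlines():
        for name in PERSONAS:
            if line.startswith(name + ":"):
                turns[name].append(line[len(name) + 1:].strip())
    return turns
```

The parser is what lets you later promote this to "real" separate agents without changing the rest of your tooling.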
@matt6288joyce · 19 days ago
As is often the case, education in England is years behind private-sector organisations. I'd love to understand how AI can be utilised to help run a school, and how a senior leader like myself could learn to introduce this infrastructure into the functions of a school at large. I'm sure it can be utilised to help with lesson planning, but I'm thinking more in terms of organisation-scale processes.
@santiagoc93 · a month ago
Agents will be part of the future of LLMs. Just imagine different experts (agents) working on different parts of the app, and an agent acting as the program manager. You'll be able to create apps in weeks instead of months.
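The "program manager routes work to experts" shape can be sketched with a keyword router. In a real system the routing decision would itself be an LLM call, but the structure is the same; the expert names below are purely illustrative:

```python
# Each expert stands in for an agent backed by its own prompt/model.
EXPERTS = {
    "frontend": lambda t: f"frontend handled: {t}",
    "backend": lambda t: f"backend handled: {t}",
    "default": lambda t: f"generalist handled: {t}",
}

def manager(task: str) -> str:
    """Naive manager: route by keyword, fall back to a generalist.

    A real manager agent would ask an LLM which expert fits, then forward."""
    for name in ("frontend", "backend"):
        if name in task.lower():
            return EXPERTS[name](task)
    return EXPERTS["default"](task)
```

The payoff of this structure is that each expert's prompt (and even its model) can be tuned independently without touching the rest of the team.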
@blijebij · a month ago
Great video and presentation! I think it's true that agentic is the future, but it also comes with a cost, as this way AI usage probably costs a bit more money. But maybe I'm wrong.
@biskero · a month ago
In a multi-agent, multi-LLM scenario it's key to understand which LLM to assign to each agent. I found that there isn't enough information about each LLM to make that decision. Maybe the answer is to train each LLM on a specific topic.
@stepanfilonov · a month ago
It's possible without training; a simple group of base GPT-3.5 agents already outperforms a single GPT-4. It's more about orchestration.
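The simplest orchestration trick behind results like that is self-consistency: sample the same cheap model several times and majority-vote the answers. A sketch, with `sample_fn` standing in for one temperature>0 call to the model:

```python
from collections import Counter

def majority_vote(samples):
    """Return the most common answer among independent samples."""
    return Counter(samples).most_common(1)[0][0]

def ask_ensemble(sample_fn, question, n=5):
    # sample_fn stands in for one call to a cheap model with temperature > 0;
    # n independent samples are collected and the modal answer wins.
    return majority_vote([sample_fn(question) for _ in range(n)])
```

Voting only helps when answers can be compared for equality (numbers, multiple choice, short strings); free-form text needs an LLM judge instead.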
@biskero · a month ago
@@stepanfilonov Interesting, so it's a matter of how the agents support it? Still, LLMs should come with more information about their specific training.
@jbavar32 · a month ago
I've been using AI for a couple of years now for a creative workflow (I don't do code), and I've often said AI is like having the most brilliant collaborator on the planet, but one with a slight drinking problem. My question is: how does one create an agent so that one LLM can pass its result to other LLMs? In other words, how do you engage several LLMs each working on the same problem? It looks like you would need special code or a custom API.
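To that question: nothing special is needed, just glue code that takes the text out of one API response and puts it into the next prompt. A sketch with two stubbed providers; each stub stands in for a real API client (OpenAI, Anthropic, etc.):

```python
# Each provider function would wrap a real API call in practice.
def provider_a(prompt: str) -> str:
    return f"draft({prompt})"  # stub

def provider_b(prompt: str) -> str:
    return f"polished({prompt})"  # stub

def chain(task: str) -> str:
    """Pass one model's output into another model's prompt."""
    draft = provider_a(f"Write a first pass: {task}")
    return provider_b(f"Improve this draft: {draft}")
```

Frameworks like CrewAI or AutoGen are essentially this glue plus bookkeeping; for a two-step chain, a dozen lines of your own code is enough.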
@RobertvMill · a month ago
thank you so much! Going to be crazy for regulation
@TheStandard_io · a month ago
Yeah, Sequoia Capital also misled everyone by not doing actual due diligence on FTX. When everyone heard that they invested, no one else did Due Diligence because they assumed Sequoia did. And they did not go to court or get any punishment
@mikesawyer1336 · 29 days ago
So my dilemma as an IT Director is: how deeply do I need to learn this? Should I be developing agentic scripts and using APIs, or should I just stick with higher-level LLMs and practice agentic workflows in a more powerful model? I am finding it very difficult to keep up, and I can't know it all.
@user-xh7xs1hh6w · 29 days ago
It seems the conversation around GPT training overlooks a few cornerstone points. The information it analyzes can be checked statistically and its sources qualified for reliability, sorting listed public sources by whether they are trustworthy or not. For information that fails that check, you can analyze why it is unproven, misleading, or simply fake. So the discussion was not really about agents (in the way independent, open press is) but about "mediators" who try to place unreliable and fake information instead. P.S. Arguably, this video was placed by one of those "mediators" for such purposes, and could itself be an example of what the conversation was genuinely about.
@GregoryBohus · a month ago
Is it possible for, say, Gemini to iterate on itself if you prompt it correctly in your first prompt? Or do you need to build an application to do that? Can you do it from the web interface?
@michaelcharlesthearchangel · a month ago
10 years ago, I created the data architecture for AI Agent and AI Congress networks.
@saxtant · a month ago
Agents have been here a while, but they are very expensive, because the zero-shot output of an LLM still has errors. If I could get enough value from my RTX 3090 to run agents that could actually make progress on something I'm not going to throw away, then I'm all over it. Function calls are only one part of empowering an LLM. Listeners are just as important: tools that simply operate on your workflow and can make suggestions to you, which may include a full multi-agent stack to complete a definable task.
@user-qn7iw4ih3d · a month ago
Great videos, thank you! I have a question about this agentic framework that perhaps you can answer... it seems like the iteration process inherent in the likes of AutoGen and CrewAI will be built into the next LLMs (ChatGPT-5, Claude 4, etc.). Does that make AutoGen redundant at that point? Or am I missing something? Thanks.
@gotoHuman · 29 days ago
I think there should be more emphasis on the insane efficiency gains achievable when agents are enabled to take actions in connected apps and systems