
A Prompt Engineering Trick for Building "High-level" AI Agents 

Data Centric
Subscribers: 10K
Views: 10K

Published: 29 Aug 2024

Comments: 37
@nedkelly3610 · 1 month ago
Excellent agent system. Please do build on this by adding a RAG tool for both local docs and fetched internet docs.
@wadejohnson4542 · 1 month ago
Intelligent. Informative. Another addition to an already impressive body of work. Well done, young man. I look forward to your videos.
@leonwinkel6084 · 1 month ago
Awesome, thanks for sharing! For the router, what I often do is ask, "What can be done better? Here is the question and the response." It often gives good suggestions that can then be processed in the next loop. Or ask, "On a scale of 0-10, how would you rate this answer?" and then, "What is needed to make it a 10?" It's quite cool what comes out of it.
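A minimal sketch of the rate-then-improve loop described in this comment, assuming the OpenAI Python client; the model name, score threshold, and prompts are illustrative placeholders, not the video's router code.

```python
# Sketch of the self-critique loop: answer, rate 0-10, ask what would make it a 10,
# then rewrite. Model name and threshold are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def refine(question: str, max_rounds: int = 3) -> str:
    answer = ask(question)
    for _ in range(max_rounds):
        rating = ask(
            f"On a scale of 0-10, how would you rate this answer?\n"
            f"Question: {question}\nAnswer: {answer}\n"
            "Reply with a single integer only."
        ).strip()
        if rating.isdigit() and int(rating) >= 9:
            break
        critique = ask(
            f"What is needed to make this answer a 10?\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        answer = ask(
            f"Rewrite the answer using this critique.\n"
            f"Question: {question}\nCritique: {critique}\nAnswer: {answer}"
        )
    return answer
```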
@joefajen · 1 month ago
Thanks!
@Data-Centric · 1 month ago
Thank you!
@brucehe9517 · 1 month ago
Thank you! I really enjoy watching your videos and have learned a lot from them. Please keep posting more videos related to LangGraph and agentic workflows.
@HassanAllaham · 1 month ago
Thank you for the very good content you provide. This is one of the best videos I have ever seen. You mentioned three important points I would like to comment on:

1. LLM makers are doing a good job but in the wrong direction. They try to produce a general LLM that can handle many tasks across many domains, and this produces a weak LLM. For example, they train LLMs on multiple coding languages (C, C++, PHP, Python, JavaScript, etc.). I am quite sure this will not produce a real "expert" LLM. I wonder what level of expertise we would get from a model trained only on coding, and only in one programming language, say Python or JavaScript. I believe we would have a real super LLM for that domain and language even at a low parameter count (maybe 3-7B).

2. Breaking the job of the master meta-agent itself into more granular, simpler logical jobs would make it better and more capable of running on a smaller LLM: create an agent that is only responsible for breaking the task into smaller ones in the form of a list (array), then programmatically loop through that array, delivering each element (small task) to a router agent that is only responsible for routing it to the suitable executor agent. Each executor agent should be responsible for only one simple task and, if it is a tooled agent, should use only one tool.

3. The chat history between agents should be held by the executor agent, not the master. That way the master receives the response from the executor and deals with only a short amount of history (so the lost-in-the-middle problem probably will not happen). We can join all the histories just for logging and debugging, not as chat history for any agent.

Answering your question: yes, I would like to see more from you, whatever development you do on this interesting workflow (e.g. RAG as memory). By the way, I believe you should retry Ollama on some of your older experiments with OLLAMA_NOHISTORY=1 set as an environment variable; I made some trials using your code from older videos and got better results with it. Also, I would like to know why you do not use Google Colab. In addition to being able to use closed-source big LLMs, you can install Ollama and try many open-source LLMs on the same code base in the same video, and it will not put your device under heavy load, since you are using other heavy programs to make these wonderful videos.

I also wonder whether we could make an "agent-generator agent" (or better: a tool_selector_agent to select the needed tool, if any, from pre-made tools; a code_interpreter_agent to create the needed tool if it is not in the pre-made list; and a system_prompt_generator_agent to create the needed system prompt with the tool description included). This agent, or group of agents, would create a new agent with a good name, a good system prompt, and the suitable tool for the simple task it is created to do. Again, thank you for the very good content. 🌹🌹🌹
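A rough sketch of the decomposition described in points 2 and 3 above, assuming a generic `llm(prompt)` stand-in for any model call; the executor names and routing prompt are illustrative, not the video's implementation.

```python
# Sketch of points 2 and 3: a splitter agent breaks the task into a list,
# each sub-task is routed to a single-purpose executor, and the master
# never accumulates the full chat history. `llm` is a stand-in.
import json
from typing import Callable

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

EXECUTORS: dict[str, Callable[[str], str]] = {
    "web_search": lambda task: llm(f"Answer using web search results:\n{task}"),
    "writer": lambda task: llm(f"Write the requested text:\n{task}"),
}

def split(task: str) -> list[str]:
    raw = llm("Break this task into a JSON array of small, independent sub-tasks:\n" + task)
    return json.loads(raw)

def route(sub_task: str) -> str:
    choice = llm(
        f"Which executor fits this sub-task best? Options: {list(EXECUTORS)}.\n"
        f"Sub-task: {sub_task}\nReply with the option name only."
    ).strip()
    return choice if choice in EXECUTORS else "writer"

def run(task: str) -> list[str]:
    results = []
    for sub_task in split(task):
        # each executor sees only its own sub-task, keeping history short
        results.append(EXECUTORS[route(sub_task)](sub_task))
    return results
```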
@sabitareddy3359 · 20 days ago
Superb presentation. Great voice.
@PerfectlyNormalBeast · 1 month ago
That is a powerful concept. I'm only a few minutes in, so maybe it's covered later, but it's almost like a contracting company: "I need this." "OK, we've seen that before, here's a proven worker," or "Sure, we'll build a worker for that." So much of day-to-day interaction is interface-based. Having agents able to define an interface, then build agents to work within it, is such a great idea.
@sirishkumar-m5z · 13 days ago
Unlock the potential of high-level AI agents with innovative prompt engineering techniques. SmythOS can take your AI projects to the next level with its advanced capabilities and customization.
@MuhanadAbulHusn · 1 month ago
Thanks a lot, the explanation and implementation were amazing.
@BradleyKieser · 1 month ago
Great thinking and design.
@shokouhmostofi2786 · 1 month ago
Thank you for the great content!
@joefajen · 1 month ago
I thoroughly enjoyed this video! I'd be very interested in seeing how you might incorporate a RAG aspect into this meta-prompting approach. My use case concerns technical writing work, so I am exploring ways to do text analysis and document generation in stages in relation to a specific body of text content.
@Data-Centric · 1 month ago
Great suggestion, I'll see what I can do here.
@RBBannon1 · 1 month ago
Well done. Thank you!
@2008tmp · 1 month ago
Do you have a link to the GitHub repo? It looks like you linked to a different project in the description. Great presentation!
@Data-Centric · 1 month ago
Hey, thanks for letting me know. I've linked the correct repo now.
@2008tmp · 1 month ago
@Data-Centric Thank you!
@syedibrahimkhalil786 · 1 month ago
Subbed! Thanks for the great insights. Could you please mention some use cases regarding smart-city applications where such an "agents" scenario would be fruitful? I'm looking forward to using LLMs to automate crowdsensed data collection and extract meaningful results. Would really appreciate your input.
@mohanmadhesiya3116 · 1 month ago
Make a video on meta-prompting with a SQLite database, where it can answer user queries based on the database and the internet (web search).
@NirFeinstein · 1 month ago
Wow, amazing idea and execution; everything is so thoughtful. Can I run it to build and improve itself into a very capable system? 😬😬😬
@Salionca · 1 month ago
Good video. Thanks.
@shiyiyuan6318 · 1 month ago
If I remember correctly, in your previous video you did not recommend using AI frameworks, but in this video you use LangGraph as an example. Can you tell me why?
@Data-Centric · 1 month ago
I had a video where I discussed building custom versus using frameworks like CrewAI and AutoGen. My main gripe with those frameworks is that they have hidden prompts in the repo to orchestrate your workflows. LangGraph is different: it's more customizable, providing just the minimal tools (essentially the graph and state objects) to assist you with building workflows.
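To illustrate the "just the graph and state objects" point, here is a minimal LangGraph sketch with placeholder node functions; the node names and state fields are illustrative, not the video's workflow.

```python
# Minimal LangGraph workflow: a typed state plus two nodes wired into a graph.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    task: str
    result: str

def plan(state: State) -> dict:
    # placeholder: a real node would call an LLM here
    return {"result": f"plan for: {state['task']}"}

def execute(state: State) -> dict:
    return {"result": state["result"] + " -> executed"}

workflow = StateGraph(State)
workflow.add_node("plan", plan)
workflow.add_node("execute", execute)
workflow.set_entry_point("plan")
workflow.add_edge("plan", "execute")
workflow.add_edge("execute", END)

app = workflow.compile()
print(app.invoke({"task": "write a report", "result": ""}))
```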
@free_thinker4958 · 1 month ago
He was talking about CrewAI and AutoGen.
@CUCGC · 1 month ago
I usually follow along with the code, but couldn't this time; the GitHub repo linked is wrong. I like the RAG with an embedding model.
@Data-Centric · 1 month ago
Hey, thanks for letting me know. I've linked the correct repo now.
@vancuff · 1 month ago
Is it possible to train a Llama 3.1 model on this meta-prompting technique?
@j_Techy · 1 month ago
What are some ways I can make money with AI agents or use them in a business model?
@HassanAllaham · 1 month ago
Go chat with some "strong" LLMs like GPT-4 and ask them your question. You can add "explain your reasoning in detail" or "think step by step" to some of your prompts so you can understand, evaluate, and maybe correct the LLM's "thinking" by redirecting it with prompt modifications until it gives you the answers you need. Keep in mind that:

1. LLM results are the highest-"probability" tokens joined together, depending on the datasets used to train the LLM and the number of layers and hyperparameters used to calculate this probability. en.wikipedia.org/wiki/Probability

2. For now, because of hallucinations, you cannot trust an LLM with critical, dangerous tasks like banking or hospital work.

3. LLMs do not execute functions themselves; they just choose the suitable function with the needed parameters, and your app (program) executes the function.

4. To give the agent the ability to do jobs that need muscles, you need to use the results of the function it chooses, or the code it builds, to control some kind of "stupid" machine so that it appears to be a very "clever", productive machine. 😎

After starting your AI-based organization, remember that you have to pay me a share of its profits, since I gave you the best way to do it. 🤑😁
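A minimal sketch of point 3 above, using the OpenAI tool-calling API with a hypothetical `get_weather` tool: the model only selects the function and its arguments, and the application executes it.

```python
# The LLM picks a tool and arguments; the application runs the function.
# Tool name, schema, and model are illustrative.
import json
from openai import OpenAI

client = OpenAI()

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # placeholder implementation

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "What's the weather in Lagos?"}],
    tools=TOOLS,
)

call = response.choices[0].message.tool_calls[0]
if call.function.name == "get_weather":
    args = json.loads(call.function.arguments)
    print(get_weather(**args))  # the app, not the model, executes the function
```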
@i_forget · 1 month ago
I've created a promising novel prompting framework. It may be a good fit for the meta-agent-to-agent prompting. Let me know if you would like to learn about it.
@freeideas · 1 month ago
I don't understand all the focus on agent swarms. I don't see how agents can do anything difficult besides generating content. Can they make a real game that is too large to fit into a single script? No, because they can't play the game to see whether it works. Can they make a real command-line program? Not really, because they are mostly unable to realize when their approach is wrong and they need to start over with a different plan. I am complaining here because I hope someone can tell me I am wrong. I am trying to build my own AI agent that uses trial and error (like a human does) to accomplish things it doesn't really know how to do until it tries (like most of my own projects). I would love for someone to tell me this has already been done so I don't have to build it.
@J3R3MI6 · 1 month ago
They can easily play the game
@freeideas · 1 month ago
@J3R3MI6 Really? They can see the screen and push the arrow keys? Wow, that would be big news for me. Can you post a link to something showing how to make an LLM operate the UI?
@user-du6zo7zp2k · 1 month ago
@freeideas Fundamentally, LLMs are not good at game playing because they lack complex planning ability: the result of one move can bring multiple, unknown responses, and the LLM has to wait for each response. Real-world planning is a key part of current research. See the Data Centric video "AI Agents: Why They're Not as Intelligent as You Think" for some insight into this problem.
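A possible skeleton of the trial-and-error loop @freeideas describes above: generate code, run it, feed the failure back, and retry with a revised plan. `llm` and `run_code` are generic stand-ins; this is an illustrative sketch under those assumptions, not an existing library or the video's code.

```python
# Trial-and-error coding agent skeleton: plan, write, run, inspect, replan.
import subprocess
import tempfile

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

def run_code(code: str) -> tuple[bool, str]:
    # Write the candidate script to a temp file and run it in a subprocess.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

def solve(task: str, max_attempts: int = 5) -> str | None:
    plan = llm(f"Write a plan for this task:\n{task}")
    for _ in range(max_attempts):
        code = llm(f"Task: {task}\nPlan: {plan}\nWrite a complete Python script.")
        ok, output = run_code(code)
        if ok:
            return code
        # feed the failure back and let the model decide whether to start over
        plan = llm(
            f"Task: {task}\nThe last attempt failed with:\n{output}\n"
            "Revise the plan; start over with a different approach if needed."
        )
    return None
```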