I can't remember if you mentioned it, but you can use your CrewAI crews as nodes in the LangGraph ecosystem as well. That means you can also create custom workflows between your trusted crews.
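To sketch what that could look like: a LangGraph node is just a callable that takes and returns state, so a crew can be wrapped in one. This is a hedged illustration, not a working integration — `my_crew`, the state keys, and the commented-out `kickoff` call are placeholders:

```python
# Hypothetical sketch: wrapping a CrewAI crew as a graph node.
# In real code, `my_crew` would be a configured crewai.Crew instance.

def crew_node(state: dict) -> dict:
    # result = my_crew.kickoff(inputs={"topic": state["topic"]})  # real CrewAI call
    result = f"crew output for {state['topic']}"  # stand-in for the crew result
    return {**state, "crew_result": result}

print(crew_node({"topic": "LangGraph"}))
```

In LangGraph, `crew_node` would then be registered with `add_node` like any other node function.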
Great video!! Thank you for taking the time! My confusion is: how would I create a multi-agent graph where the initial agent asks the user a few questions to determine intent -> based on that, it determines which agent to send the user to -> this second agent has its own LLM prompt logic -> when this second agent requires feedback from the user, does it communicate with the user directly? Or does only the initial agent communicate with the user? That is where I'm really confused - any guidance would be great! Thank you again!!
It is actually totally up to you. When only the initial agent talks to the user, you have a so-called hierarchical agent system. But you can also allow every agent to interact with the user - that's totally up to you. If you build a system with intent classification that routes to different agents, I would probably let the second agent talk directly to the user, since the first agent is just the classifier and should probably only do that task.
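A minimal, framework-free sketch of that routing pattern (the agent names, intents, and replies here are invented for illustration; in LangGraph the routing dict would be what you pass to `add_conditional_edges` on the classifier node):

```python
# Intent-based routing: a classifier node picks the next agent,
# and that agent answers the user directly.

def classify_intent(user_message: str) -> str:
    # Stand-in for an LLM classification call.
    if "refund" in user_message.lower():
        return "billing_agent"
    return "support_agent"

def billing_agent(user_message: str) -> str:
    # This agent has its own prompt logic and talks to the user itself.
    return "Billing agent: I can help with your refund."

def support_agent(user_message: str) -> str:
    return "Support agent: let's troubleshoot your issue."

# In LangGraph, this mapping plays the role of the conditional-edges dict.
ROUTES = {"billing_agent": billing_agent, "support_agent": support_agent}

def run(user_message: str) -> str:
    route = classify_intent(user_message)   # the classifier only classifies
    return ROUTES[route](user_message)      # the chosen agent replies directly

print(run("I want a refund"))
```

The key design point from the reply above: the classifier returns only a route, and the downstream agent owns the conversation with the user.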
Fantastic LangGraph tutorial. Thank you. I'd love to see something more advanced like a chatbot that can give product recommendations using RAG search.
On your Github Site, there is a repository for an "Advanced RAG with Langchain" Course - I cannot find it on Udemy. Is it already live? When can I expect it online?
Thank you so much! Can anyone recommend a workflow for hosting my very simple LangGraph code on my website using the AWS toolsuite? I'd love to have some kind of scalable pay-for-compute that can just grow with my web traffic right from the start, rather than getting everything into a notebook and then having to figure out how to host it. I'm a firmware / Python guy and have no clue what I'm doing when it comes to hosting something like this for a very small business.
Looks impressive and powerful, but I need some time to fully understand it. By the way, I'd like to share my thoughts - perhaps you could consider exploring the LLM OS from Phidata. Their agents are quite powerful, but they are not interconnected, which means you have to run different Python files to use specific tools. I'm not sure if LangGraph can solve this issue or not.
It takes its time and is more complex than the alternatives, but in my opinion it's worth learning. LLM OS looks quite high level. What's the difference compared to, let's say, CrewAI?
@@codingcrashcourses8533 Yes, it does seem quite similar to CrewAI; however, CrewAI has the capability to manage all the tools without the need to run a specific Python file.
Thanks for your amazing content as usual, but I have a question: is it possible to take two answers from two nodes and combine them into the final answer? For example, I have a tool that generates an image and another tool that describes the image, and I want to get both the image and the description as the final answer.
@@codingcrashcourses8533 Yeah, exactly. The idea is the generation of the image, then the description of it, but I need to get both answers as the final output.
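One common way to do this in a graph: both nodes write into shared state under different keys, and a final node merges them. A framework-free sketch (the function names, state keys, and the fake URL are invented; in LangGraph each function would be a node updating the graph's state `TypedDict`):

```python
from typing import TypedDict

class State(TypedDict, total=False):
    prompt: str
    image_url: str
    description: str
    final_answer: dict

def generate_image(state: State) -> State:
    # Stand-in for an image-generation tool call.
    state["image_url"] = f"https://example.com/images/{len(state['prompt'])}.png"
    return state

def describe_image(state: State) -> State:
    # Stand-in for a vision-model description of the generated image.
    state["description"] = f"An image generated for: {state['prompt']}"
    return state

def combine(state: State) -> State:
    # Final node returns BOTH results together instead of only the last one.
    state["final_answer"] = {
        "image": state["image_url"],
        "description": state["description"],
    }
    return state

state: State = {"prompt": "a red fox in the snow"}
for node in (generate_image, describe_image, combine):
    state = node(state)

print(state["final_answer"])
```

Because the intermediate results live in state rather than being passed message-to-message, nothing is lost by the time the last node runs.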
But you don't use the ChatOpenAI package in that example, do you? I mean - you define a model, but where is it used? The decision to stop the cycle comes from the algorithm, doesn't it? I don't see any AI model involved in that cycle example. Edit: Ah, ok, the decision on the cycle is made by the model!
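To make that "the model decides" point concrete: in the typical agent loop, the conditional edge just inspects the model's last message — if it contains tool calls, the graph loops back to the tool node, otherwise it ends. A rough sketch with the message shape simplified to plain dicts:

```python
# The conditional-edge function: the LLM's own output decides the cycle.
def should_continue(state: dict) -> str:
    last_message = state["messages"][-1]
    # If the model asked for a tool, keep cycling; otherwise stop.
    if last_message.get("tool_calls"):
        return "tools"   # route back into the tool node
    return "end"         # route to END, the final answer is ready

# The model returned a plain answer -> the cycle stops.
assert should_continue({"messages": [{"content": "Done!"}]}) == "end"
# The model requested a tool -> the graph loops again.
assert should_continue({"messages": [{"tool_calls": [{"name": "search"}]}]}) == "tools"
```

So the algorithm only routes; whether another iteration happens is entirely determined by what the model emitted.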
Great content! (as usual :)) Are you aware of any visual editors (like Flowise, Langflow, etc.) supporting the construction of LangGraph workflows in a drag-and-drop way? Wouldn't that be amazing?
Hey! Thank you for your short but impactful course. I have one question to ask, though. Have you ever come across a situation where your agent was stuck in a continuous tool-calling loop without exiting, and thus never provided a concrete final answer? I'm facing this problem right now and I do not know what to do.
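A common safeguard for that situation is a hard cap on iterations: LangGraph supports a `recursion_limit` in the invoke config, and you can additionally keep your own counter in the state so the graph is forced to a final answer after N tool calls. A framework-free sketch of the counter idea (the cap value, message shape, and function names are invented):

```python
MAX_TOOL_CALLS = 3  # illustrative cap

def should_continue(state: dict) -> str:
    last_message = state["messages"][-1]
    wants_tool = bool(last_message.get("tool_calls"))
    # Force an exit even if the model keeps asking for tools forever.
    if state.get("tool_call_count", 0) >= MAX_TOOL_CALLS:
        return "end"
    return "tools" if wants_tool else "end"

def call_tool(state: dict) -> dict:
    state["tool_call_count"] = state.get("tool_call_count", 0) + 1
    # Simulate a model that ALWAYS requests yet another tool call.
    state["messages"].append({"content": "tool result",
                              "tool_calls": [{"name": "loopy_tool"}]})
    return state

state = {"messages": [{"tool_calls": [{"name": "loopy_tool"}]}],
         "tool_call_count": 0}
steps = 0
while should_continue(state) == "tools":
    state = call_tool(state)
    steps += 1

print(steps)  # the guard stops the loop instead of cycling forever
```

On top of a guard like this, it usually also helps to tighten the tool descriptions and the system prompt so the model recognizes when it already has enough information to answer.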