Looking forward to this one, especially with the new features LangChain has been bringing out over the past few weeks. Exciting times to be building AI applications, for sure!
LangChain have done well with LangGraph here - the LangGraph code I demoed in the first LangGraph tutorial was pretty messy, hard to grasp, etc - this version of LangGraph + LangChain v2 is much better imo
Your presentation is so good that I listen to it in the background while I'm coding or doing other tasks. Then, when I watch the tutorial properly, I find I follow much better than if I had watched and followed along right away.
Awesome, this is the video I need - I'll watch this carefully. I've tried LangGraph, and it felt really complex to manage the tools and workflows. Writing a description that triggers the right tool is hard if you have similar steps in your workflow. Making the caller use the exact response from the tool was also really hard. And thinking of scalability - like adding steps in between, long term, in a project with multiple people - it feels like it's going to be hard. I also tried adding multiple agents by passing a workflow in as a node, and again, it was a bit hard to make everything work as expected. I was just following a simple flowchart with a few steps and forks. Even though I was able to make it work, it felt like I was doing everything wrong. Anyway, I hope your video sheds some light on my path to understanding this better.
I always enjoy your videos, and I now have a better understanding of LangGraph. As with your video, all of the demonstrations I have watched send just one sentence or request to the LLM. Here are two examples of longer requests. First: "I'm interested in what the best type of dog is for a child. My daughter is five years old. We live in Minnesota, which is quite cold, so we need a dog that is good for cold weather. Please provide me with a few suggestions." Second: "I am deeply intrigued by the various reasoning approaches to building my report-writer research agent. I have discovered two approaches: Tree of Thought and Chain of Thought. I am eager to gain a good understanding of each. Please provide me with a report that outlines the strengths and weaknesses of each, and make a recommendation as to which one would be good for building my research agent."
I love LangGraph as well, looking forward to solidifying my knowledge. Cheers James. P.S. Would you be interested in doing a video/research on metadata creation from a hierarchical perspective: parent/children? I think we could greatly enhance RAG quality if we build a hierarchical structure of metadata. The problem here is how to create and label this data. From my testing with "GliNER", it doesn't nearly capture what I'd like and needs fine-tuning (but maybe we could train it on our client-specific data...). Cheers!!
Thanks for this video, great as usual. One thing I don't understand is what breaks the loop / stops further gathering of information. Is it when the "oracle" thinks it has "plenty of information", as stated in the system prompt? Or when does it stop?
I always just use langgraph now - maybe if langchain gives you exactly what you need out of the box it might be better, but I really prefer building with langgraph nowadays
@@jamesbriggs Do you have a particular scenario where LangChain would give you what you need out of the box compared to LangGraph? It seems as if the LangGraph interface gives us all the power of LangChain but in a more controlled environment (and more).
I used this approach, but in my use case I have two tools for RAG search. I want both tools to run in parallel, and their combined output to be passed to the oracle agent to generate the final response, but I'm facing an issue where it always calls just one tool and then generates the final output. Can anyone help with this scenario?
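One framework-agnostic way to sketch this (the tool names below are hypothetical stand-ins, not from the video's code): rather than relying on the oracle to call both tools itself, run both RAG searches inside a single node and merge their outputs into one context string before handing it back to the oracle.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the two RAG search tools
def rag_search_a(query: str) -> str:
    return f"[results from index A for: {query}]"

def rag_search_b(query: str) -> str:
    return f"[results from index B for: {query}]"

def run_both_rag_tools(query: str) -> str:
    """Run both RAG tools concurrently and merge their outputs into a
    single context string that one node can return to the oracle."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(tool, query) for tool in (rag_search_a, rag_search_b)]
        results = [f.result() for f in futures]
    return "\n\n".join(results)
```

Registering this combined function as one node (and one tool description) sidesteps the problem of the LLM only ever choosing one of the two similar tools.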
I tried this approach to build an agent for my use case with a Bedrock LLM - instead of the OpenAI model I just swapped in the Bedrock LLM and kept everything else the same, but I'm facing this issue: "Error raised by bedrock service: messages: final assistant content cannot end with trailing whitespace". Maybe this error is due to the custom scratchpad - can you guide me on how to resolve it?
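That Bedrock error typically means the content of the last assistant message ends in a space or newline. A minimal sketch of a cleanup step (assuming the scratchpad messages are plain dicts with a string `content` field - adapt to whatever message type you actually use) that strips trailing whitespace before the messages are sent:

```python
def clean_scratchpad(messages: list[dict]) -> list[dict]:
    """Strip trailing whitespace from each message's content so the
    final assistant message never ends with a space or newline."""
    cleaned = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, str):
            msg = {**msg, "content": content.rstrip()}
        cleaned.append(msg)
    return cleaned
```

Calling this on the message list right before invoking the Bedrock model should remove the offending trailing whitespace regardless of which tool output produced it.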
Thanks a lot for sharing. The topic looks complex but you made it as neat as possible! The combination of video, code, and article is really helpful - the video is good in the sense that it is more interactive, but I do need the article to get a more straightforward sense of the whole idea. Some questions:
1) "input": "tell me something interesting about dogs" became 'interesting facts about dogs' in the output. Is this the result of the langchain_core.tools step?
2) In rag_search_filter, top_k=6; in rag_search, top_k=2. Does this mean it returns the top_k answers? I ask because one was searching within one article, while the other was searching across indexes (I assume one article per index?).
3) The graph adds these nodes:
graph.add_node("oracle", run_oracle)
graph.add_node("rag_search_filter", run_tool)
graph.add_node("rag_search", run_tool)
graph.add_node("fetch_arxiv", run_tool)
graph.add_node("web_search", run_tool)
graph.add_node("final_answer", run_tool)
Will all of them be forced to execute? I see from the result that only rag_search, web_search, and final_answer were invoked. So how does this graph determine which tools to invoke? Order seems to matter too - subsequent tools will be affected by the previous tools' results, right? So how is the order decided?
1) The rewrite is made by the "oracle", which is an LLM generating the text that decides which tool to use. In this case the LLM decided to use the RAG tool with that query, so it would have generated something like "{'tool': 'rag_search', 'query': 'interesting facts about dogs'}".
2) Yes, it means return the top_k answers: `rag_search_filter` returns the 6 most relevant records from within the single filtered paper, whereas `rag_search` returns the 2 most relevant records across the whole arxiv paper index.
3) Not all are forced to execute - the oracle and its generations (described in (1) above) decide which step to take next. If it decided to, it could go straight to the final answer and not use any tools.
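The routing described in (3) can be sketched roughly like this - a simplified, framework-free version of the conditional edge (the real code reads the oracle's tool call out of the graph state; the state shape here is an assumption for illustration):

```python
def router(state: dict) -> str:
    """Return the name of the next node to run. The oracle appends its
    chosen tool call to intermediate_steps; the conditional edge reads
    the latest one. With no valid choice, fall back to final_answer."""
    steps = state.get("intermediate_steps", [])
    if steps:
        return steps[-1]["tool"]  # e.g. "rag_search", "web_search", "final_answer"
    return "final_answer"
```

The graph wires this function in as a conditional edge from the oracle node, so each loop iteration the oracle's latest generation alone decides which node runs next - that's also why the order of tool calls varies from run to run.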