When you use LangChain or Semantic Kernel, one of the most interesting parts is the process the library uses to answer your question by orchestrating your Tools/Plugins. This process relies heavily on an LLM capable of deciding "what to do next" using the ReAct framework.
In this scenario it is really important to choose the right LLM as orchestrator, to strike a good balance between capability and cost. This matters especially because the final result is usually obtained through multiple calls, so you can indeed consume a lot of tokens.
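To see why the orchestrator model matters so much, here is a minimal sketch of the ReAct loop such a library runs. The model is a hard-coded stub standing in for GPT-3.5/GPT-4, and the tool name, prompt format, and `react_loop` function are illustrative assumptions, not LangChain's real API; the point is that each reasoning step is a separate LLM call, which is where both the capability requirement and the token cost come from.

```python
# Minimal sketch of a ReAct-style orchestration loop (not LangChain's real API).
# mock_model() stands in for the orchestrator LLM (e.g. GPT-3.5 or GPT-4).

def calculator(expression: str) -> str:
    """A toy tool the orchestrator can invoke."""
    return str(eval(expression))  # fine for a demo; never eval untrusted input

TOOLS = {"calculator": calculator}

def mock_model(transcript: str) -> str:
    """Stub LLM: decides the next step from the transcript so far."""
    if "Observation:" not in transcript:
        return "Thought: I need to compute this.\nAction: calculator[2 + 3]"
    return "Final Answer: 5"

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = mock_model(transcript)  # one LLM call per step: tokens add up
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        # Parse "Action: tool[input]" and run the chosen tool
        action = reply.split("Action:", 1)[1].strip()
        name, arg = action.split("[", 1)
        observation = TOOLS[name.strip()](arg.rstrip("]"))
        transcript += f"\nObservation: {observation}"
    return "Gave up after max_steps"

print(react_loop("What is 2 + 3?"))
```

A weaker model that mis-formats the `Action:` line or picks the wrong tool derails the whole loop, which is exactly the failure mode the video demonstrates with GPT-3.5 before switching to GPT-4.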
In this video I'll focus on LangChain, but the discussion is equally valid for Semantic Kernel.
You can also watch this other video of mine that discusses how to intercept the calls made by Semantic Kernel, to debug the interaction of the orchestrator.
• Intro to Semantic Kern...
previous video on the subject
• An introduction to Lan...
▬ Contents of this video ▬▬▬▬▬▬▬▬▬▬
00:00 - Introduction to Large Language Models
00:14 - Using GPT 3.5 for Orchestration
01:02 - Limitations of GPT 3.5 in Task Orchestration
01:55 - Switching to GPT 4 for Improved Orchestration
02:09 - Improved Task Orchestration with GPT 4
02:56 - Importance of Correct Orchestration in the ReAct Framework
04:19 - Final Thoughts on Large Language Model Selection
29 Jul 2024