Leon, are you planning on doing a video on LangGraph as well? Developing complex agents using LangChain, LangSmith, and LangGraph would be very interesting!
Did the way this works change? Followed the instructions, found the moved button, got the API key. Yet, I can't see my project in the projects section. TIA.
Leon, I did exactly as you did, but I'm using the OpenAI Assistant node and a custom tool. I get the following error when I try to send a message. It doesn't seem to be coming from the AI but rather from a default Flowise message: "API key must be provided when using hosted LangSmith API." I have added the correct LangSmith API key and analysis is turned on. Any idea why this error could be happening?
Amazing video tutorial as always ❤. Maybe you could go deeper into Datasets and how to launch and execute Evaluations in a new video, LangSmith part 2? 😊 That would be much appreciated.
When I use a PDF in RAG, I want the user to be able to ask, "Which page number did you get this data from?" and have the model respond with the page number. How can I do that?
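One common approach (not shown in the video, so treat this as a sketch): most PDF loaders attach the page number to each chunk's metadata, so you can surface it alongside the answer. The minimal Python below uses plain dicts to stand in for loader output, and the helper names are made up for illustration:

```python
# Sketch: keep page metadata with each chunk so the bot can cite pages.
# The chunk shape mirrors what PDF loaders typically produce
# (text plus a metadata dict containing "page").

def retrieve(chunks, query):
    """Toy retriever: return chunks whose text contains the query term."""
    return [c for c in chunks if query.lower() in c["text"].lower()]

def answer_with_page(chunks, query):
    hits = retrieve(chunks, query)
    if not hits:
        return "I couldn't find that in the document."
    top = hits[0]
    return f'{top["text"]} (source: page {top["metadata"]["page"]})'

chunks = [
    {"text": "Revenue grew 12% in Q3.", "metadata": {"page": 4}},
    {"text": "Headcount stayed flat.", "metadata": {"page": 7}},
]

print(answer_with_page(chunks, "revenue"))
```

In Flowise the rough equivalent is enabling the chain's "Return Source Documents" option (if your chain node exposes it), which includes the loader's page metadata in the response.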
Hi, I installed Flowise AI recently and I am not getting the "Analyze Chatflow" option like you showed in this video. How can I connect LangSmith to analyze the chatflow?
This is so good. I love your detailed, well-paced, and easy-to-understand instructions. I hope this series goes on for dozens more episodes, getting deeper and deeper, exploring more and more use cases with even greater integrations. I want to use the skills you are helping me develop in business scenarios.
I have a problem! When I try to activate the "Analyze Chatflow" function by entering my LangSmith API credentials, the chat stops working after saving. By "stops working" I mean that I can't input anything when I press the chat icon; the entire input field is grayed out. Nothing else seems to have changed in my chain and prompt. I'm running the exact same canvas setup as you, by the way.
Brilliant work! I am in need of this detail. Are there other things we should know about debugging with Flowise, such as log files? Or do you think LangSmith-type tools provide all that's needed?
How are they going to charge people? I don't get it. I am using their system (Flowise) for a few simple bots on my servers. How will LangChain charge me? What am I not understanding?
I think you are referring to LangSmith. LangChain and Flowise are free to use. You do not have to use LangSmith, though. It's an optional tool that simply assists with debugging.
@@leonvanzyl Thank you for explaining. I was just amazed, like, what technology do these guys have that is going to charge me when I don't even have a bank account linked? I understand now that it's only if you link the API key that they will charge.
Using DigitalOcean, would you use Docker to build Flowise, Llama 3, and then LangSmith? Flowise already uses LangChain, so no need for that? Then in another container you could run LangGraph, and maybe another for ActivePieces? Now it's getting complicated. Oh yeah, you would also need another for something like PostgreSQL. smh... Then take some Tylenol, create an agent to sing you a lullaby, and take a nap. lmao... Thanks for all of the videos, brother! You are a beast and a legend!
@@iokinpardoitxaso8836 Is there a way to personalize the default message in the chat window? The default message is "Hi there! How can I help?". Any way to change that?
Thank you, great work. Can you please make another detailed video about evaluating LLM responses using LangSmith? And how we can enable chatbot users to give feedback about the response in the resulting chatbot? And how to collect this feedback using third-party apps, for example Python applications?
Leon, love, love, love these videos. When I try to upload a file from S3, I keep getting the error "Unable to load file in the unstructured loader". I am using their paid version, and it's a hosted version of Flowise.
Hi there. The S3 loader is definitely a bit more complicated than the other loaders. You need access to an unstructured.io API (either self-hosted or via unstructured.io cloud). You also need to set up access to the S3 bucket. This probably requires a dedicated video.
First of all, I want to thank you for your work in creating video guides. I'm using Flowise and it's giving me very good results, but I have a small issue that I'm not quite sure how to solve. When I use agents with custom tools, the agent sometimes doesn't use the tools. I've worked a lot on the prompt to make it use the tools (to fetch and modify information from a database), but as the context grows, the tool usage ends up getting lost and the agent ends up inventing information. I'm not sure if there's any solution to this with chain nodes, since I can't use custom tools with chains. It would be great if you could make a video at some point about using agents and chains together, or share any strategy for this issue I'm having. Best regards.
Thank you! Agents video will release soon. Hopefully that will help. All you can really do is lower the temperature, to 0 if need be. Use GPT-4 or greater. Add instructions to the system message to tell the model to use the tool.
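Leon's three fixes can be captured in one hedged sketch (plain Python dicts, no real API call; the tool name `query_db` is made up, and in Flowise these values map onto the Chat Model node and the agent's system message field):

```python
# Sketch of the settings suggested for making an agent actually call its tool.
# Values are illustrative, not a definitive configuration.

agent_config = {
    "model": "gpt-4",        # stronger models follow tool instructions better
    "temperature": 0,        # deterministic output, fewer invented answers
    "system_message": (
        "You are a database assistant. You MUST use the `query_db` tool to "
        "fetch or modify records. Never answer from memory; if the tool "
        "returns nothing, say so instead of guessing."
    ),
}

print(agent_config["model"], agent_config["temperature"])
```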
Thanks Leon! It's powerful. I can analyze Chatflow using LangSmith with localhost:3000. However, the 'Analyse Chatflow' option disappeared after I deployed Flowise with Render.
Ensure you deploy the latest version of Flowise to Render. In the latest version of Flowise they moved the Analyze chatflow option to a nested option under settings.
Shows why my token cost exploded: the "pseudo memory" feature simply passes the entire chat each time. It will be interesting to see how this, and more importantly explicit RAG, may be anachronistic in a year's time. Here's to a 10MM token context window next year!
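The cost blow-up from resending the full history is easy to see with a toy calculation (a sketch assuming a fixed token count per exchange, which real chats won't have):

```python
# Toy estimate of cumulative tokens billed when every turn resends the
# whole conversation history. tokens_per_turn is a made-up round number.

def cumulative_tokens(turns, tokens_per_turn=200):
    total = 0
    history = 0
    for _ in range(turns):
        history += tokens_per_turn   # the history grows linearly...
        total += history             # ...so total spend grows quadratically
    return total

for n in (10, 50, 100):
    print(n, cumulative_tokens(n))
```

Ten times more turns costs roughly a hundred times more tokens, which is exactly the explosion described above.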
In the final LLM response, how do you assemble a combined response from several sequentially executed LLMChains if each previous response serves as the input to the next LLMChain? I'm using the Prompt Chaining template from the marketplace (use the output from one chain as the prompt for another chain).
All the outputs from the previous chains can be added to the final prompt. Simply create placeholders / variables for each of the previous chains in the final prompt template and map their values. You can then give the prompt an instruction to format all these values in whatever way you want. For interest's sake, what's the use case?
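The placeholder mapping described above can be illustrated with a plain Python string template standing in for a Flowise prompt template (the variable names and sample outputs are made up for illustration):

```python
# Sketch of prompt chaining: earlier chain outputs become variables in the
# final prompt template, mirroring the marketplace "output as prompt" flow.

pain_points = "Customers struggle with slow onboarding."   # output of chain 1
ad_angle = "Emphasize a five-minute setup."                # output of chain 2

final_template = (
    "Write an ad.\n"
    "Known pain points: {pain_points}\n"
    "Creative angle: {ad_angle}\n"
)

final_prompt = final_template.format(pain_points=pain_points, ad_angle=ad_angle)
print(final_prompt)
```

The final chain then receives one prompt containing every earlier output, plus the formatting instruction.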
@@leonvanzyl Thanks for the tip. I was just guessing that's how it's done. I wonder if it will result in spending tokens again in the final call. My use case is preparing marketing research to identify pain points in the audience, then developing that idea further in ad copy that addresses those pain points.