Thank you so much for this video! There is already quite a lot of content about Mem0 on YouTube, but as with most hyped or promising frameworks, most creators unfortunately do not make the effort to actually look at the codebase and explain how it works; they just scratch the surface by showing off the usage examples they found in the "quickstart" section of the docs. Contributions like yours are what open source actually needs, so thanks again!
I'm curious: how much better are the results than if you just issue a prompt to summarize directly to GPT-4/4o/o1? The results would need to be objectively better to justify the agent complexity.
Agreed, this is something you may need to experiment with to see whether this approach gives better results, especially when you have large transcripts with domain nuance. In that case you may need a fine-tuned model for the hints.
Thank you, sir, for the video, but how is this different from adding few-shot examples to the prompt? Are the few-shot examples we pass in a prompt the same as the hints here? Please help me understand. Thanks again!
Nice. Speed-scroll to capture the images, then send the images in one parallel batch inference and restitch them for faster speed. Gemini 1.5 Flash is multimodal, allows 15 free RPM at a <128K token window, and can batch-process images easily within this limit.
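The capture-then-restitch idea above can be sketched with a thread pool that fans the captured page images out to a multimodal model and reassembles the results in capture order. This is a minimal sketch: `transcribe_image` is a hypothetical stand-in for a real call such as Gemini 1.5 Flash's `generate_content`, and the page filenames are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def transcribe_image(image_path):
    # Placeholder for a real multimodal call, e.g. passing the image and a
    # "transcribe this page" prompt to Gemini 1.5 Flash. Stubbed here so the
    # sketch is self-contained.
    return f"text-from-{image_path}"

def batch_transcribe(image_paths, max_workers=8):
    # executor.map preserves input order, so the per-page results come back
    # in the same sequence the pages were captured and can be restitched
    # directly. max_workers should stay under the provider's rate limit
    # (e.g. 15 RPM on the free Gemini tier).
    with ThreadPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(transcribe_image, image_paths))

pages = [f"page_{i}.png" for i in range(1, 6)]
transcript = "\n".join(batch_transcribe(pages))
```

In a real pipeline the rate limit, not the thread count, is usually the bottleneck, so a token-bucket or simple sleep between submissions may be needed on the free tier.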
Hi, can you help me learn? (Not laws, I mean your other content.) I would like to reach your level and learn from you, please: algorithms, but also the application side, like Karpathy and Fei-Fei Li, so I can open-source and build something (in healthcare, entertainment, finance, etc.).
@@user-wr4yl7tx3w Some use cases may need choreography, some orchestration, so both patterns need to be considered. This is no new concept; it's microservices 101.
@@kitranet I am using it through LangSmith. You need at least a Plus membership of LangSmith. There is a desktop version of Studio, which works only on certain Mac machines.
Hi Rajib, I am in a dilemma. I am new to the LangChain agent framework. What I like is the abstractions and the breadth of features it provides to build a true agent. What I fear is that it is too abstract and there are icebergs underneath that I have no clue about. What if something goes wrong? It could be difficult to troubleshoot, and I may not have enough flexibility to quickly work around issues. So I am thinking of developing a very basic version of the functionality that I will have full control over. Any thoughts, or your experience taking LangChain to production? Your feedback is appreciated. Thanks!
It's easy to see that the physics teacher agent was built after the math agent, since the description in the physics teacher agent still references mathematics, and the instructions in the moderator related to the physics teacher agent mention math, not physics. The typical copy-and-paste error.
Nice video. I would like to see the next step for expanded Supervisor Agent capability. I appreciate that you are doing this without abstraction, as I want to understand it before looking at a LangGraph version, but I would like to see that too. :)
What about the prompt templates? Does this video only cover handling the response in the Lambda function? What exactly are the prompt templates, and how are they passed to the LLM?
Hi @rajibdeb4059, @EccleezyAvicii, do you know how to implement partial responses? For example, if the tool takes time to provide a response, can I show a temporary message to the user? A prompt reply would be greatly appreciated.
Great explanation! However, if we have to build a KB for prod, would it not be better to create a separate Lambda function where we have more control over how we create the KB using LangChain and upload it to a vector DB? And then create a separate RAG agent with a Lambda function and an OpenAPI schema?
Hi Rajib, great work. What should we do if we want to create an index and use Vertex AI Matching Engine to index multimodal embeddings (including, for example, a text embedding and an image embedding for each item)? How should this be done? Does the Matching Engine index support that?