Brilliantly explained with clarity and insight, thank you! Also really pleased you point out that RAG emerged from IR ideas and wasn't brand new: when I saw it I was like, haven't people seen Facebook's DrQA from 2017?! And even that wasn't out of the blue; there's a long-established history with IR 👍
Thank you. I agree, in most cases we are reinventing the wheel and giving old approaches new names. Interestingly enough, a simple keyword-based search (BM25) will still outperform dense embeddings much of the time!
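If you want to try that, here's a minimal BM25 sketch using the rank_bm25 package (the corpus and query are toy examples, just to show the ranking calls):

```python
# Keyword-based ranking with BM25 via rank_bm25 (pip install rank-bm25).
from rank_bm25 import BM25Okapi

corpus = [
    "retrieval augmented generation combines search with an LLM",
    "BM25 is a classic keyword-based ranking function from IR",
    "dense embeddings map text into a vector space",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

query = "keyword search ranking".lower().split()
print(bm25.get_scores(query))              # one BM25 score per document
print(bm25.get_top_n(query, corpus, n=1))  # the best-matching document
```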
This is exactly what I've been trying to find for the last couple of days: simple instructions on how to do this with pure Python and a local LLM. Thank you!
The problem with RAG solutions is that they don't hold up with larger amounts of unstructured data. I wish there were a solution that included long-term memory for chat agents, so they get smarter about your context as you chat with them.
Nice, I've been wanting to start in C# for RAG... Any tips or guidance for a newbie? I was using KoboldCPP's web UI for LLM generation... but have NO idea where to go. None of these videos even hint at anything with C#... let alone Kobold.
As a newbie I'm hooked on this channel. I'm about to take your RAG course; the issue I have is that every time I've tried to use LangChain I get crazy errors about upgrades and incompatibilities with Python versions. How do you address this issue? It's frustrating to resolve, if it can be resolved at all.
My recommendation is to stick to one version of LangChain rather than always using the latest. You can pin that in requirements.txt; you don't need the latest version in most cases. For Python, use 3.10. Hope this helps.
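For example, a pinned requirements.txt could look like this (the version numbers below are just illustrative; use whichever versions the course was tested against):

```
langchain==0.1.16
langchain-community==0.0.34
openai==1.30.1
```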
Hello! I have a doubt. Is the similarity search a way to reduce the number of tokens sent to the OpenAI API? So basically, when you make a query to the LLM you are not sending the entire text of the Wikipedia page? I ask because of token costs, to know exactly what OpenAI will charge us. Your content is probably the best on YouTube! Really appreciate all your videos.
Probably. He used a wiki page, but you may have a 1000-page PDF that would cost a lot to process, and maybe most of it is irrelevant to what you want. When you break the text into chunks and then retrieve the 'n' most relevant ones, you get what you want faster and cheaper.
Yes, there are two parts, as mentioned by @luizemanoel. First, the document can contain a lot of irrelevant info, and you only want to provide what is relevant to the query to the LLM; this improves the responses. The added benefit is fewer tokens, which means less cost as well.
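Here's a minimal sketch of that chunk-then-retrieve step, assuming sentence-transformers for the embeddings (the video may use OpenAI embeddings instead; the idea is identical):

```python
# Embed chunks, rank by cosine similarity, and keep only the top n for the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

chunks = ["chunk one ...", "chunk two ...", "chunk three ..."]  # your split document
model = SentenceTransformer("all-MiniLM-L6-v2")

chunk_emb = model.encode(chunks, normalize_embeddings=True)
query_emb = model.encode("what does the page say about X?", normalize_embeddings=True)

scores = chunk_emb @ query_emb          # cosine similarity on normalized vectors
n = 2
top_idx = np.argsort(scores)[::-1][:n]  # indices of the n most similar chunks
top_chunks = [chunks[i] for i in top_idx]

# Only top_chunks go into the prompt, so you pay tokens for the relevant parts only.
print(top_chunks)
```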
Hello sir! I want to build a question-answering chatbot in Python that answers from a provided knowledge base in PDF or text format. I've been working on this for the last 10 days but haven't succeeded so far! Can you please guide me through this project, sir?
Hi, could you convert complex PDF documents (with graphics and tables) into an easily readable text format, such as Markdown? The input file would be a PDF and the output file would be a text file (.txt).
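One way to sketch this in Python is with the pymupdf4llm package (not something from the video; the file names are placeholders, and how well graphics and tables survive depends on the document):

```python
# Convert a PDF to Markdown text with pymupdf4llm (pip install pymupdf4llm).
import pymupdf4llm

md_text = pymupdf4llm.to_markdown("input.pdf")  # placeholder input path

# Write the Markdown out as a plain .txt file, as requested.
with open("output.txt", "w", encoding="utf-8") as f:
    f.write(md_text)
```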
What are the best ways of importing documents into the RAG system from corporate systems such as Google Docs, Confluence, or Notion, without asking your IT? I have actually done a few things manually, but they are very labour-intensive, for example using scraping tools and Chrome extensions. Is there something a bit more streamlined?
You are looking for data connectors in this case. Each of these services has its own API, or you can use document loaders from LangChain (python.langchain.com/v0.2/docs/integrations/document_loaders/). This is one aspect where I would recommend using a framework.
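As a rough sketch, pulling a Confluence space with LangChain's loader could look like this (the URL, credentials, and space key are placeholders, and parameter names can vary a bit between langchain_community versions, so check the docs linked above):

```python
# Load a Confluence space into LangChain Document objects via its document loader.
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yourcompany.atlassian.net/wiki",  # placeholder Confluence base URL
    username="you@example.com",                    # placeholder account email
    api_key="YOUR_ATLASSIAN_API_TOKEN",            # placeholder user-scoped API token
    space_key="DOCS",                              # placeholder space to import
)
docs = loader.load()  # list of Documents, ready for chunking and embedding
print(len(docs))
```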
It should work with open models. For a bigger corpus, you will need to think about retrieval latency. You might want to look into quantized embeddings in that case.
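To make the idea concrete, here is a pure-NumPy sketch of binary quantization (one common form of quantized embeddings; the random vectors stand in for real ones, and libraries like faiss or sentence-transformers offer built-in support):

```python
# Binary-quantize embeddings: each float dimension becomes one bit (its sign),
# so vectors shrink ~32x and similarity can be approximated by Hamming distance.
import numpy as np

rng = np.random.default_rng(0)
corpus_emb = rng.standard_normal((10_000, 384)).astype(np.float32)  # stand-in embeddings
query_emb = rng.standard_normal(384).astype(np.float32)

corpus_bits = np.packbits(corpus_emb > 0, axis=1)  # (10000, 384) bools -> (10000, 48) bytes
query_bits = np.packbits(query_emb > 0)

# Hamming distance = number of differing bits; smaller means more similar.
xor = np.bitwise_xor(corpus_bits, query_bits)
dists = np.unpackbits(xor, axis=1).sum(axis=1)
top5 = np.argsort(dists)[:5]
print(top5, dists[top5])
```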
Could you please make a video on a chatbot that can interact with PDF files and answer questions, using recent tech? I'm having the most difficulty with outdated tutorials. It would be a great help!
Thanks for this great video. I tried to run your Jupyter notebook. When calling the line "from google.colab import userdata" I get the error ModuleNotFoundError: No module named 'google', and somewhere I see that pkg_resources is deprecated as an API. Is Python 3.12.3 too new? OK, I replaced the Google part; there are other ways to create an OpenAI client! Now it works!
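For anyone hitting the same thing outside Colab, here's a minimal replacement sketch, assuming the key is stored in an environment variable:

```python
# Create the OpenAI client without google.colab's userdata helper by reading
# the API key from the environment (export OPENAI_API_KEY=... before running).
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
```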