
Understanding Embeddings in RAG and How to use them - Llama-Index 

Prompt Engineering
159K subscribers
31K views

In this video, we will take a deep dive into the world of embeddings and understand how to use them in a RAG pipeline in Llama-Index. First, we will cover the concept, and then we will look at how to use different embeddings, including the OpenAI embedding and open-source embeddings (BGE and Instructor embeddings), in llama-index. We will also benchmark their speed.
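For reference, here is a minimal sketch of what swapping embedding models typically looks like in llama-index. This is an assumption about a recent llama-index release (the Settings-based API); import paths and defaults change between versions, and it is not necessarily the exact code from the Colab notebook linked below.

# Minimal sketch: choose the embedding model globally, then build and query an index.
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Option A: OpenAI embeddings (paid API, needs OPENAI_API_KEY in the environment).
Settings.embed_model = OpenAIEmbedding(model="text-embedding-ada-002")

# Option B: open-source BGE embeddings, computed locally.
# Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

documents = SimpleDirectoryReader("data").load_data()  # "data" is a placeholder folder
index = VectorStoreIndex.from_documents(documents)     # chunks, embeds, and indexes the documents

query_engine = index.as_query_engine()
print(query_engine.query("What are embeddings?"))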
CONNECT:
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Support my work on Patreon: Patreon.com/PromptEngineering
🦾 Discord: / discord
▶️️ Subscribe: www.youtube.com/@engineerprom...
📧 Business Contact: engineerprompt@gmail.com
💼Consulting: calendly.com/engineerprompt/c...
LINKS:
Google Colab: tinyurl.com/mr2mf65n
llama-Index RAG: • Talk to Your Documents...
How to chunk Documents: • LangChain: How to Prop...
llama-Index Github: github.com/jerryjliu/llama_index
TIMESTAMPS:
[00:00] Intro
[01:21] What are Embeddings
[03:58] How they Work!
[05:54] Custom Embeddings
[08:30] OpenAI Embeddings
[09:33] Open-Source Embeddings
[10:45] BGE Embeddings
[11:42] Instructor Embeddings
[11:57] Speed Benchmarking

Science

Published: 9 Jun 2024

Comments: 54
@syedhussainabedi476 · 7 months ago
An exemplary and crystal-clear exposition of a complex subject that leaves no room for confusion.
@-blackcat-4749 · 5 months ago
That was a steady reverberation. Another typical 📚 day
@ayushyadav-bm2to · 5 months ago
I'm a med student. I've built a RAG over the best books I could get, and to be honest I now use it more than Google to understand a topic. Used Mistral and Chroma DB, btw.
@-blackcat-4749 · 5 months ago
The reported repetitive 🔑 achievement. It stayed the same
@xd-mk3by · 7 months ago
This is an absolutely OUTSTANDING video that summed up so many things so well. Thank you for it!
@s.moneebahnoman · 6 months ago
It's one thing to be good at what one does, but being good at teaching it to someone else is next level. The best playlist for fully understanding the *essence* of what's happening. Amazing!
@sivi3883 · 7 months ago
Thanks! This is a great video for understanding RAG. I agree chunking and embedding play a major role. In my experience so far, most of the time the LLM (GPT-4 in my case) answers questions from my data well if the quality of the chunks is good. The challenges I face with chunking:
1) My PDFs contain a lot of content with complex tabular structures (merged rows and columns) for product specifications; chunking breaks the relationship between rows and columns.
2) The same kind of content is replicated across different PDFs for different products, and unfortunately the PDFs are not named by product. As a result, the vector search returns content from the wrong product, one not asked about in the user query.
3) Sometimes within the same PDF (containing multiple products) the content repeats with different specifications per product. If I ask for the input voltage of product A, it might return product B, since the context is lost while chunking.
Looking for smarter ways to chunk that retain context across the chunks.
@Vermino · 8 months ago
Amazing tutorial. I barely understand this stuff, but you walking me through it helps me understand it a bit more. Plus I am able to recreate your process and test things out myself.
@engineerprompt · 8 months ago
Glad to hear that!
@ilianos · 8 months ago
Great educational value! I'm really looking forward to the comparison video for different embeddings.
@MikewasG · 8 months ago
The video is awesome! Can't wait for the next one!
@uhtexercises · 8 months ago
Thank you for sharing. I really like your structured approach and explanations. Very well done
@engineerprompt · 8 months ago
Thanks, glad it was helpful
@RichardGetzPhotography · 8 months ago
Excellent content!! Thank you!
@gsayesh · 4 months ago
Superb Lesson... Thank you! ♥
@mayurmoudhgalya3840 · 8 months ago
Good thing this showed up on my feed. You've got yourself another subscriber.
@engineerprompt · 8 months ago
Welcome aboard!
@li-yq7rc · 8 months ago
Need an index of all topics of prompt engineering to understand where to begin.
@AccioLumas · 8 months ago
Yes
@patrickblankcassol4354 · 8 months ago
Thank you for awesome content
@chandrakalagowda3129 · 8 months ago
Very useful video on the topic
@engineerprompt · 7 months ago
Glad you liked it
@livb4139 · 8 months ago
Thanks, embedding was like black magic to me
@42svb58 · 8 months ago
awesome video!
@joxxen · 8 months ago
As always, you will never find bad-quality content from this guy.
@ramesh_a · 8 months ago
🎯 Key Takeaways for quick navigation:
01:23 📚 Embeddings are multi-dimensional feature vectors that represent words or sentences in a semantic space, preserving their meaning.
04:06 🧩 Embeddings are crucial in retrieval-augmented generation systems to find the closest text chunks based on user queries.
05:44 🚀 The choice of embedding model is vital, as it directly impacts the performance of response generation in document-based chat systems.
08:45 🔄 OpenAI embeddings can be used for document retrieval but come with a cost, while various open-source embeddings provide alternatives.
13:31 ⏱️ Local embedding models like BGE and Instructor are faster for computations compared to remote OpenAI embeddings, which involve server calls.
Made with HARPA AI
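As a rough illustration of the last point, a timing sketch along these lines can be used to compare a local model with the OpenAI API. This is a hypothetical benchmark, not the one from the video; it assumes the llama-index embedding classes and their get_text_embedding_batch method.

import time
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

texts = ["Embeddings map text to vectors."] * 100  # toy corpus, repeated just for timing

def bench(embed_model, name):
    start = time.perf_counter()
    embed_model.get_text_embedding_batch(texts)
    print(f"{name}: {time.perf_counter() - start:.2f}s for {len(texts)} texts")

bench(HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"), "BGE (local)")
bench(OpenAIEmbedding(), "OpenAI (API)")  # needs OPENAI_API_KEY; time includes network round trips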
@timtensor6994 · 6 months ago
Thanks for the summary. I've been trying to follow your videos. Have you tried to run llama-index with the Mistral 7B model and Instructor embeddings? Is there already a Colab notebook and video?
@killerthoughts6150 · 6 months ago
persist and you will go very big, rooting for you
@engineerprompt · 6 months ago
Thank you 🙏
@Megh_S · 6 months ago
Please make a video on alternatives to OpenAI LLMs as well.
@mioszdaek1583 · 4 months ago
Great video. Thanks for sharing. Just a little comment about analogies by vector arithmetic: at 2:44 you said that if you subtract the vector for "man" from the vector for "king" you would get the vector for "woman", whereas in fact the resulting vector would represent something like "royalty", I think. Thanks
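For what it's worth, the analogy arithmetic can be played with directly. The 2-D vectors below are made-up toy values purely for illustration (real embeddings have hundreds of dimensions); the point is that king - man + woman lands near queen, and king - man on its own is closer to a "royalty" offset than to any single word.

import numpy as np

# Hypothetical 2-D toy vectors, not real embeddings.
king, man, woman, queen = (np.array(v, dtype=float) for v in
                           [[0.9, 0.8], [0.9, 0.1], [0.1, 0.1], [0.1, 0.8]])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(king - man + woman, queen))   # close to 1.0: the analogy completes to "queen"
print(cos(king - man, queen - woman))   # the two "royalty" offsets point the same way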
@ayansrivastava731 · 1 month ago
The only problem lies in this question:
1) When an LLM is trained (not by us, but by the ones who made it), its input embedding matrix is already fixed.
2) What, then, is the need for creating external embeddings in the first place? Is there a way we can reuse the model's own embeddings, and if not, why? We could simply tokenize our input text and the model would look up the corresponding embeddings. I know it won't fit the RAG logic, but the counter-question is: how are you sure performance won't be affected when using custom embeddings (for storage in a vector DB) versus the per-token embeddings generated inside the LLM itself? Does using RAG embeddings mean we are bypassing the LLM's embeddings?
@andrewandreas5795 · 8 months ago
Awesome video! One question: does this give better results than your standard Pinecone/Chroma approach?
@engineerprompt · 8 months ago
Pinecone/Chroma are vector stores for storing the embeddings. What type of semantic search you use will have an impact.
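To make the split between the two concrete, here is a sketch of plugging Chroma in as the vector store while picking the embedding model separately. It assumes the chromadb client and the llama-index Chroma integration; package names and import paths may differ by version.

import chromadb
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, StorageContext, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.vector_stores.chroma import ChromaVectorStore

# The embedding model decides how text is turned into vectors...
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

# ...while the vector store only persists and searches those vectors.
chroma_client = chromadb.PersistentClient(path="./chroma_db")
collection = chroma_client.get_or_create_collection("docs")
vector_store = ChromaVectorStore(chroma_collection=collection)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)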
@wuzhao8605 · 8 months ago
Is Llama-Index better than LangChain? If so, what contributes to the improved performance?
@Drone256 · 4 months ago
You started off creating an embedding for a sentence, but it appears each chunk is more than a sentence. Would love to know more about how you decide how to chunk the data.
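Chunking is usually controlled explicitly rather than decided per sentence. A sketch of a common llama-index way to do it (an assumption about the SentenceSplitter API and settings; the video's exact configuration may differ):

from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import SentenceSplitter

# Sentence-aware splitting with an explicit target size and overlap (roughly in tokens).
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=50)
documents = SimpleDirectoryReader("data").load_data()
nodes = splitter.get_nodes_from_documents(documents)
print(len(nodes), "chunks;", nodes[0].text[:200])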
@paultoensing3126 · 1 month ago
When you click on the next sequence of code lines, how is it that the code magically materializes? Do you copy and paste it from someplace else, since you obviously don't take the time to write it all out like a mortal? Where do these lines of code come from?
@hernandocastroarana6206 · 8 months ago
Excellent video. Thank you very much. I will wait for the next one on the subject 🦾
@arkodeepchatterjee · 7 months ago
please make the video comparing different embedding models
@haroonmansi · 8 months ago
Thanks! Any idea how "chunking" or embeddings would be different if we are dealing with Python code instead of English text? For example, I want to use the RAG method with Code Llama or CodeWizard for my GitHub repo containing Python code.
@engineerprompt · 8 months ago
Check this out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-aD-u0gl93wM.html
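For code rather than prose, llama-index also ships a syntax-aware splitter that chunks along function and class boundaries instead of sentences. A sketch, assuming the CodeSplitter API (it relies on tree-sitter grammars, and parameter defaults vary by version):

from llama_index.core import SimpleDirectoryReader
from llama_index.core.node_parser import CodeSplitter

# Load only Python files from a (hypothetical) local repo checkout.
documents = SimpleDirectoryReader("repo", required_exts=[".py"], recursive=True).load_data()

splitter = CodeSplitter(language="python", chunk_lines=40, chunk_lines_overlap=10, max_chars=1500)
nodes = splitter.get_nodes_from_documents(documents)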
@vinven7 · 7 months ago
The section on Embeddings with all the cool visualizations, where is that from? If it's from somewhere else, could you please share the link?
@engineerprompt · 7 months ago
that's my own :)
@uwegenosdude · 1 month ago
Great video. Thanks a lot! Could you perhaps recommend a good free embedding model for German-language documents?
@engineerprompt · 1 month ago
The models from mistral.ai/ support German; they even have their own embedding model, which I think supports German as well.
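If you go the Mistral route, llama-index has an integration for their hosted embedding model. A sketch, assuming the llama-index MistralAI embeddings package and the "mistral-embed" model (the API key placeholder is hypothetical); it is worth verifying its German-language quality on your own documents.

from llama_index.core import Settings
from llama_index.embeddings.mistralai import MistralAIEmbedding

# Hosted embedding model from Mistral; requires an API key (placeholder below).
Settings.embed_model = MistralAIEmbedding(model_name="mistral-embed", api_key="YOUR_MISTRAL_API_KEY")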
@GaneshKumar-jw9ml · 8 months ago
Can you make a video on the chat element in Streamlit that uses a prompt template?
@KokahZ777 · 2 months ago
You didn’t explain how to compare query embeddings with dataset embeddings
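For what it's worth, the comparison under the hood is just cosine similarity between the query vector and each chunk vector. A standalone sketch of that step (assuming the BGE model name and the get_query_embedding / get_text_embedding_batch methods):

import numpy as np
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")

chunks = ["Embeddings are dense vectors.", "BGE is an open-source model.", "Paris is in France."]
chunk_vecs = np.array(embed_model.get_text_embedding_batch(chunks))
query_vec = np.array(embed_model.get_query_embedding("what is an embedding?"))

# Cosine similarity of the query against every chunk; the highest score wins.
scores = chunk_vecs @ query_vec / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(query_vec))
best = int(np.argmax(scores))
print(chunks[best], scores[best])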
@yth2011 · 8 months ago
What is the difference between embeddings and LoRA?
@zedcodinacademibychinvia9481 · 5 months ago
How can I perform the same task with Gemini?
@engineerprompt · 5 months ago
Watch my latest video :)
@zedcodinacademibychinvia9481 · 5 months ago
Which one, sir?
@paultoensing3126 · 1 month ago
How do you pick your embedding models? Most of us have no context for understanding what’s valuable in a model or what the criteria would be. So you slide over that as if we should know how to pick an embedding model.
@paultoensing3126 · 1 month ago
I recommend that you just put code in your thumbnail and see how that sells. That’ll make it super uncool.