
DSPy with Knowledge Graphs Tested (non-canned examples) 

Diffbot

The DSPy (Declarative Self-improving Language Programs in Python) framework has excited the developer community with its ability to automatically optimize language model pipelines, which may reduce the need to hand-tune prompt templates.
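For anyone new to DSPy, a minimal vanilla RAG module looks roughly like the sketch below, following the pattern from the DSPy docs. The signature fields and module names are illustrative, not the exact code from this video:

```python
import dspy

class GenerateAnswer(dspy.Signature):
    """Answer the question using the retrieved context."""
    context = dspy.InputField(desc="passages that may contain relevant facts")
    question = dspy.InputField()
    answer = dspy.OutputField(desc="a short factoid answer")

class RAG(dspy.Module):
    def __init__(self, num_passages=3):
        super().__init__()
        # dspy.Retrieve pulls passages from whatever retriever is
        # configured in dspy.settings (e.g., a vector index).
        self.retrieve = dspy.Retrieve(k=num_passages)
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        context = self.retrieve(question).passages
        pred = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=pred.answer)
```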
We designed a custom DSPy pipeline that integrates with knowledge graphs. The reason? One of the main strengths of knowledge graphs is their ability to provide rich contextual information from interconnected data, which can potentially enhance a system's retrieval and reasoning abilities. We tested both the vanilla DSPy RAG pipeline and our customized one to see how they compare, and, as you'll see in the video, we ran into more surprises than expected. 🙃
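The knowledge-graph variant swaps the vector retriever for graph context. A hedged sketch, reusing the GenerateAnswer signature from above; query_knowledge_graph() is a hypothetical stand-in for the real graph lookup, and the actual pipeline is in the repo linked below:

```python
def query_knowledge_graph(question):
    # Hypothetical stand-in for the real lookup, e.g. a Cypher query
    # against a Neo4j graph built from Diffbot extractions.
    return [("Elon Musk", "CO_FOUNDED", "Zip2")]

class KGRAG(dspy.Module):
    def __init__(self):
        super().__init__()
        self.generate_answer = dspy.ChainOfThought(GenerateAnswer)

    def forward(self, question):
        facts = query_knowledge_graph(question)
        # Verbalize (subject, relation, object) triples into plain
        # sentences so the LM can consume them as context.
        context = [f"{s} {r.replace('_', ' ').lower()} {o}." for s, r, o in facts]
        pred = self.generate_answer(context=context, question=question)
        return dspy.Prediction(context=context, answer=pred.answer)
```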
Github repo: github.com/leannchen86/dspy-w...
#rag #knowledgegraphs #dspy #diffbot
0:00 Intro
0:30 DSPy introduction
1:03 Load knowledge source and build knowledge graph of Elon Musk
2:15 Vanilla DSPy RAG Pipeline
3:26 Custom DSPy pipeline with knowledge graph
4:51 First test question
5:06 1st test question with vanilla DSPy RAG
5:28 Fact-checking with knowledge graph
6:12 1st test question with DSPy RAG + Knowledge Graph
6:24 2nd test question with vanilla DSPy RAG
8:07 Structured outputs of knowledge graphs
9:42 2nd test question with DSPy RAG + Knowledge Graph
10:54 Outro

Science

Published: Jul 28, 2024

Comments: 27
@kenchang3456 · 3 months ago
Not having experimented with DSPy myself, and seeing your struggles just to understand what DSPy is doing, it seems like it should wait until the use of knowledge graphs has had a chance to figure itself out. So thank you for sharing. I intuitively like the concept of knowledge graphs, so I'm hopeful.
@lckgllm · 3 months ago
Thank you for the encouragement! 😊 There's still a lot to explore and figure out in combining LLMs with knowledge graphs, but we'll definitely share the journey along the way with you all!
@kefanyou9928 · 2 months ago
Great video~ Very interested in KG adoption in LLMs. Kind reminder: hide your API key in the video 😉
@vbywrde · 3 months ago
Thank you for this video! Really great. I'm also a bit new to DSPy, but am having a great time learning it. This is really the right way to explore, imo: you set up comparative tests and then look carefully at the results to think about what might be going on, in order to find the best methodology. Yep, that's the way to do it. Some thoughts come to mind. 1) Take the same code and question, and try it with different models for the embedding. What I've noticed is that the selection of models can have a significant influence on the outcomes. 2) Perhaps try creating a validator function for when you take the data and convert it into English, as a way to have the LLM double-check the results and make sure they are accurate. I've been doing that with a code generator I'm working on, and it seems pretty helpful. If the LLM determines the generated code doesn't match the requirements, it recursively tries again until it gets it (I send the rationale of the failure back in on each pass to help it find its way, up to five times max). 3) I'll be curious to see how much optimization influences the outcome! Anyway, fun to follow you on your journey. Thanks for posting!
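A rough sketch of the validate-and-retry loop described in point 2, with made-up Generate/Validate signatures rather than anything from the video or repo:

```python
import dspy

class Generate(dspy.Signature):
    """Turn graph facts into an English summary; revise using feedback if given."""
    facts = dspy.InputField()
    feedback = dspy.InputField(desc="rationale from a failed validation, or empty")
    text = dspy.OutputField()

class Validate(dspy.Signature):
    """Check whether the generated text faithfully matches the source facts."""
    facts = dspy.InputField()
    text = dspy.InputField()
    verdict = dspy.OutputField(desc="'pass' or 'fail'")
    rationale = dspy.OutputField(desc="why it passed or failed")

def generate_validated(facts, max_tries=5):
    generate = dspy.ChainOfThought(Generate)
    validate = dspy.ChainOfThought(Validate)
    feedback, draft = "", None
    for _ in range(max_tries):
        draft = generate(facts=facts, feedback=feedback).text
        check = validate(facts=facts, text=draft)
        if check.verdict.strip().lower() == "pass":
            return draft
        feedback = check.rationale  # send the failure rationale back in
    return draft  # last attempt if nothing passed within max_tries
```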
@lckgllm · 3 months ago
This is such well-rounded and constructive feedback! Thank you so much! 🫶 You're right that I can set up some more testing and validation mechanisms; those are great suggestions that I'm taking on board to improve the pipeline. Really appreciate the effort you put into writing this down; I learn so much from the community and all of you :) Thanks for the encouragement too!
@ScottzPlaylists · 2 months ago
I haven't seen any good examples of the Self-improving part of DSPy yet. Is it ready for mainstream use❓
@HomeEngineer-wm5fg · 2 months ago
You hit exactly the topic I was looking into. Now subscribed... I'll follow on X.
@diffbot4864 · 2 months ago
That’s very kind of you! Thanks! May I know if you’re more interested in knowledge graphs or DSPy with knowledge graphs? Would appreciate your feedback 😊
@HomeEngineer-wm5fg · 2 months ago
@@diffbot4864 I'm in industry, a middleweight engineer trying to adopt machine learning early in a production environment. I see the same things you do, but you're well ahead of me. My use case is integrating AI with BI. RAG is the natural progression, and KG is a latecomer in my industry. Digital Thread things....
@fromjavatohaskell909 · 2 months ago
10:38 Hypothesis: what if providing additional data from the KG does not override knowledge ("false or hallucinated facts") already inherently present in the LLM? I wonder what would happen if you changed the labels of the knowledge graph to abstract names like Person1, Person2, Company1, Company2, etc. and ran the exact same program. Would it dramatically change the result?
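That relabeling experiment is cheap to set up. A small sketch of what swapping real names for abstract ones might look like; the typed-triple format here is an assumption for illustration, not the repo's actual data model:

```python
def anonymize_triples(triples):
    """Swap real entity names for abstract labels like Person1, Company2."""
    mapping, counters = {}, {}

    def alias(name, kind):
        if name not in mapping:
            counters[kind] = counters.get(kind, 0) + 1
            mapping[name] = f"{kind}{counters[kind]}"
        return mapping[name]

    anonymized = [(alias(s, st), rel, alias(o, ot))
                  for (s, st), rel, (o, ot) in triples]
    return anonymized, mapping

triples = [
    (("Elon Musk", "Person"), "CO_FOUNDED", ("Zip2", "Company")),
    (("Elon Musk", "Person"), "CEO_OF", ("Tesla", "Company")),
]
anon, mapping = anonymize_triples(triples)
# anon == [('Person1', 'CO_FOUNDED', 'Company1'), ('Person1', 'CEO_OF', 'Company2')]
```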
@nk8143 · 2 months ago
I agree with that. The misconception that "everyone knows Elon co-founded every company" was most likely present in the training data.
@jankothyson · 3 months ago
Great video! Would you mind sharing the notebook?
@rohitgupta7146 · 3 months ago
If you expand the description below the video, you'll find the GitHub link to the notebook.
@bioshazard · 3 months ago
Which language model did you use? Llama 3 and Sonnet might offer improvements to recall over RAG context.
@diffbot4864 · 3 months ago
The next video is about testing Llama 3, coming out soon 😉
@CaptTerrific · 3 months ago
What a great demo, and idea for a comparison of techniques! As for why you got those weird answers about companies Musk co-founded... You chose a very interesting question, because even humans with full knowledge of the subject would have a hard time answering :) For example, while Musk bought Tesla as a pre-existing company (and thus did not co-found it), part of his purchase agreement was that he could legally call himself a co-founder. So is he or isn't he a co-founder? Murky legal/marketing vs. normal understanding, polluted article/web knowledge set, etc.
@plattenschieber · 2 months ago
Hey @lckgllm, could you also upload the missing `dspy-requirements-2.txt` in the repo? 🤗
@fintech1378 · 3 months ago
Please do one for vision models too. An AI agent for web navigation? Like RAG-based, but one that also acts, not just retrieves. I saw your past videos about those, but this one would be based on an LLM.
@diffbot4864 · 3 months ago
Thanks for the suggestion! Will do our best to make a video on this :)
@real-ethan · 3 months ago
This could be a hallucination of the LLM. The implementation of RAG (KG) itself is good enough. I think the reason is that the information about Elon Musk you asked about has already been trained into the model. We hope to use RAG to supply factual content that LLMs are not aware of, but what if the LLM thinks it already knows the question or information? 😂
@lckgllm · 3 months ago
That's a fair point. It's possible that language models carry knowledge from pre-training data, and this hypothesis needs to be tested for this particular example. But regardless, hallucination in a RAG system can stem from both the model's pre-existing knowledge and the retrieval and generation process, which means RAG still doesn't guarantee hallucination-free outputs; that's why we're seeing research such as Self-RAG or Corrective RAG trying to improve the quality of the retrieval/generation components. Anyway, one of the main points I wanted to get across in this video is that LLM-based apps still need to be grounded in factual information, and that's where knowledge graphs can be powerful; but for some reason, the pipeline did not want to follow the knowledge (even when provided with the info) in the second question.
@jeffsteyn7174 · 3 months ago
RAG falls over if you ask a question like "how many projects were completed in 2018, 2019, 2020?". If there is no link between the chunk and a date that might have been further up the page, the query can fail. A knowledge graph is a good way to improve the quality. Not perfect, but better than plain old RAG. Regarding your last question, the OpenAI API has gotten better at that. It's far better at following instructions like "only use the context to answer the question". But you must give it an alternative, i.e. if the context does not answer the question, respond with "I can't answer this". If you really want fine-grained control, tell the LLM to analyse the question and context and categorize the context as directly related, related, or tangentially related. Only answer the question if it's directly related to the context; otherwise respond "I can't answer this question". You might want to look into agentic RAG for really complicated questions.
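That grade-then-answer instruction maps naturally onto a DSPy signature. A hedged sketch; the signature and field names are invented for illustration:

```python
import dspy

class GradedAnswer(dspy.Signature):
    """Grade how related the context is to the question, then answer
    only if it is directly related; otherwise refuse."""
    context = dspy.InputField()
    question = dspy.InputField()
    relevance = dspy.OutputField(
        desc="one of: directly related, related, tangentially related")
    answer = dspy.OutputField(
        desc="the answer if directly related, otherwise \"I can't answer this\"")

grade_and_answer = dspy.ChainOfThought(GradedAnswer)
# pred = grade_and_answer(context=..., question=...)
# Only trust pred.answer when pred.relevance == "directly related".
```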
@real-ethan · 3 months ago
@@lckgllm Knowledge graphs are indeed very effective and powerful, and that's also why I became interested in your channel. I've always believed that the old-fashioned way of doing RAG is not a particularly good implementation. In my understanding, the hard part of RAG has never been the G, but the R. To optimize the R, I tried summarizing and extracting keywords from the text content, then vectorizing and retrieving on those, and it did improve the retrieval results. Combined with your knowledge-graph implementation, I'm more convinced that to achieve better RAG, most of the effort and cost will go into how we organize and index our data. For the example in the video, I think we can do a simple test: change the Elon Musk node to a different name while keeping the other nodes/relationships unchanged. Perhaps the final generated answer will then have no hallucinations. If this test produces correct results without hallucinations, then perhaps all we need is a little more prompt engineering to make the LLM summarize and answer solely based on the results from R.
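The keyword-first indexing idea might look something like this in DSPy terms; embed() and store are placeholders for whatever embedding function and vector store you use, not a specific library:

```python
import dspy

class ExtractKeywords(dspy.Signature):
    """Summarize a passage into the key terms that best describe it."""
    passage = dspy.InputField()
    keywords = dspy.OutputField(desc="comma-separated key terms")

extract = dspy.Predict(ExtractKeywords)

def index_passage(passage, embed, store):
    # Embed the keyword summary instead of the raw chunk, so retrieval
    # matches on what the passage is about rather than its surface wording.
    keywords = extract(passage=passage).keywords
    store.add(vector=embed(keywords), payload=passage)
```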
@lckgllm · 3 months ago
@@real-ethan @jeffsteyn7174 Thanks to you both for your fruitful feedback, which really helped me learn and think! 😊 I'll do some deeper probing and potentially share the new experiments in the next video. ;)
@paneeryo · 2 months ago
The music is too annoying. Please dial it down.
@diffbot4864 · 2 months ago
Will be more aware of the volume! Thanks for the feedback
@MrKrtek00 · 2 months ago
It is so funny how tech people do not understand why ChatGPT was a hit: precisely because you can use it without programming it.