Fixing RAG with GraphRAG 

Vivek Haldar
7K subscribers
8K views

Published: 27 Sep 2024

Comments: 16
@wayneqwele8847 2 months ago
Thank you for the video; that was a great paper to go through. I find RAG research offers so much insight into how we develop and identify the cognitive impediments to our own judgement. Comprehensiveness, Diversity of perspective, Empowerment, and Directness make such a good mental model to apply to our own human judgement.
@rafikyahia7100 3 months ago
Excellent content summarizing cutting-edge approaches, thank you!
@dawid_dahl 1 month ago
Thanks so much, great content.
@themax2go 2 months ago
Very well "RAGged"... both on the local level (details) and the global level (overview of pros and cons) 😉😎
@awakenwithoutcoffee 3 months ago
Great presentation, Vivek. Some questions:
- Is GraphRAG production-ready? If not, would it be difficult to upgrade our RAG methods once we are in production?
- Is there a RAG provider/stack that you prefer? (DataStax, Pinecone, Weaviate, plus a bunch of others all competing for attention)
- What are your thoughts on LangChain vs. LangGraph?
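As background for the questions above, here is a minimal, illustrative sketch of the "global" query path the GraphRAG paper describes: extract an entity graph from text chunks, detect communities, summarize each community offline, then answer a question map-reduce style over those summaries. The `llm` and `extract_entities` functions and the sample chunks are placeholders, and greedy modularity stands in for the Leiden algorithm the paper actually uses.

```python
import itertools
import networkx as nx
from networkx.algorithms import community

def llm(prompt: str) -> str:
    # Placeholder: swap in a real model call (OpenAI, Ollama, etc.).
    return f"[model output for a prompt of {len(prompt)} characters]"

def extract_entities(chunk: str) -> list[str]:
    # Placeholder extraction: GraphRAG prompts an LLM for entity/relation
    # descriptions; title-cased words are a crude stand-in so this runs.
    return [w.strip(".,") for w in chunk.split() if w.strip(".,").istitle()]

chunks = [
    "Alice works with Bob at Acme on the retrieval pipeline.",
    "Bob and Carol maintain the graph index for Acme.",
]

# 1. Build an entity graph: co-occurrence within a chunk becomes an edge.
graph = nx.Graph()
for chunk in chunks:
    for a, b in itertools.combinations(sorted(set(extract_entities(chunk))), 2):
        graph.add_edge(a, b)

# 2. Detect communities (greedy modularity ships with networkx).
communities = community.greedy_modularity_communities(graph)

# 3. Summarize each community once, ahead of query time.
summaries = [llm(f"Summarize these entities and their relations: {sorted(c)}")
             for c in communities]

# 4. Answer a "global" question over the community summaries, map-reduce style.
question = "Who works on what at Acme?"
partials = [llm(f"Answer '{question}' using this summary: {s}") for s in summaries]
print(llm(f"Combine these partial answers to '{question}': {partials}"))
```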
@jordycollingwood 3 months ago
Really great explanation. I'm currently struggling to decide on my own KG structure for a corpus of 2,000 medical PDFs, so this was very helpful.
@awakenwithoutcoffee 3 months ago
Same here, brother. There are so many techniques; every day I learn something new, which is both good and terrifying, ha. What stack are you thinking of using? We are researching DataStax, Pinecone, and Weaviate, and are learning to build agents with LangGraph.
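Since the thread above mentions learning to build agents with LangGraph, here is a minimal retrieve-then-generate graph sketched against the LangGraph API as current around the time of this video; the `retrieve` and `generate` nodes are placeholders, and any of the stores mentioned (Pinecone, Weaviate, DataStax Astra) could sit behind the retrieval step.

```python
# Minimal retrieve-then-generate graph with LangGraph.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class RAGState(TypedDict):
    question: str
    context: str
    answer: str

def retrieve(state: RAGState) -> dict:
    # Placeholder: query your vector store (Pinecone, Weaviate, Astra DB, ...)
    # with state["question"] and return the retrieved passages.
    return {"context": f"documents relevant to: {state['question']}"}

def generate(state: RAGState) -> dict:
    # Placeholder: call your LLM with the question plus retrieved context.
    return {"answer": f"answer grounded in [{state['context']}]"}

builder = StateGraph(RAGState)
builder.add_node("retrieve", retrieve)
builder.add_node("generate", generate)
builder.add_edge(START, "retrieve")
builder.add_edge("retrieve", "generate")
builder.add_edge("generate", END)

app = builder.compile()
print(app.invoke({"question": "Is GraphRAG production ready?"}))
```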
@ashwinnair5803 2 months ago
Why not just use RAPTOR instead?
@wanfuse 3 months ago
Wouldn't it cut to the chase to train an LLM on your own data? There's your graph. Use one of these: OpenAI's GPT-3/4, Hugging Face Transformers (e.g., GPT-2, GPT-3 via third-party providers), Google's T5 (Text-to-Text Transfer Transformer), Meta's BART and BlenderBot, or Anthropic's Claude. Update the LLM every week. Summarization is the death of real data; better off with one level of summarization? Just a thought!
@mccleod6235 3 months ago
Maybe you don't want to send all your valuable business data to third-party companies.
@wanfuse 3 months ago
@mccleod6235 That's true, but it's not necessary; there are open-source models you can train air-gapped on a Jetson.
@bohnohboh676 3 months ago
"Update the LLM every week"? Yeah, no way, unless you have tons of cash, compute, and time.
@wanfuse 3 months ago
Maybe, maybe not; I'll let you know! You're probably right. We'll see if my idea pans out.
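For context on the "retrain the LLM every week" idea discussed above, here is a minimal sketch of such a weekly update using Hugging Face Transformers. The model name and the weekly_docs.txt data file are placeholders, and in practice a parameter-efficient method (LoRA/PEFT) and careful evaluation would be needed to keep the cost and regression risk down, which is exactly the objection raised in the replies.

```python
# Minimal weekly fine-tune of a small open model on a local text dump.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_name = "gpt2"  # stand-in for any open model you can run air-gapped
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumes this week's documents sit in a plain-text file, one passage per line.
dataset = load_dataset("text", data_files={"train": "weekly_docs.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="weekly-finetune",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    learning_rate=5e-5,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
trainer.save_model("weekly-finetune")
```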
@sasha297603ha 4 months ago
Very interesting paper, thanks for covering it!
@brandonheaton6197 3 months ago
Can you pontificate on the combination of upcoming transformer-inference ASICs with deep agentic workflows employing GraphRAG-style strategies? It seems like we will soon be close to our personal assistants writing a PhD thesis in the background whenever we ask a question. Sohu is reporting 500,000 tokens per second with Llama 3 70B...
@fintech1378 2 months ago
Super excellent video.
Up next

Fine-tuning or RAG? (8:05, 1K views)
Principles of Beautiful Figures for Research Papers (1:01:14)
Graph RAG: Improving RAG with Knowledge Graphs (15:58, 63K views)
GraphRAG: The Most Incredible RAG Strategy Revealed (10:38)
Chat With Knowledge Graph Data | Improved RAG (13:00, 5K views)
Programming with LLMs: A Rambling Rant (13:30, 1.5K views)
Graph RAG with Ollama - Save $$$ with Local LLMs (12:09)
Reliable Graph RAG with Neo4j and Diffbot (8:02, 19K views)
GraphRAG - Will the Dust Settle? (31:46, 2.4K views)
10 weird algorithms (9:06, 1.2M views)