
Can LLMs Effectively Reason over Large Inputs?

New Machina · 1.4K subscribers · 874 views · Published 23 Oct 2024

Comments: 13
@RodneyShen · 2 months ago
Very insightful and concise video. I was just looking at this paper yesterday. Great work!
@NewMachina · 2 months ago
Glad it was helpful, thanks for your feedback! Good timing, checking out the paper and then seeing the video :)
@AndreasJansson2010 · 2 months ago
Excellent video and article. Thanks for sharing!
@NewMachina · 2 months ago
Glad it was helpful! Totally agree, it's an excellent research paper. Thanks for reaching out and sharing your feedback!
@aritzolaba · 2 months ago
Interesting ideas covered here. I am struggling with the task of "teaching" an agent how to process an initial prompt so that it can extract the right questions, first scrape the web, and build context around the topic. It's a really hard task; I would say it's one that could lead to a whole new startup or business. So much work and so many things to learn yet, but this is a key process. Good prompting will always be valuable: a good prompt and good data sources, well formatted for the LLM to understand, as concise as possible, and compressed to keep token counts low. Thanks again for this great content!
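A minimal sketch of the question-extraction step described above, assuming an OpenAI-style client; the model name, system prompt, and JSON shape are illustrative choices, not anything shown in the video:

```python
# Hypothetical sketch: ask an LLM to turn a user's initial prompt into a
# short list of web search queries before any scraping happens.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Turn the user's request into at most 3 concise web search queries. "
    'Reply with JSON only, e.g. {"queries": ["..."]}.'
)

def extract_queries(user_prompt: str) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return json.loads(response.choices[0].message.content)["queries"]

print(extract_queries("How does input length affect LLM reasoning accuracy?"))
```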
@NewMachina · 2 months ago
I am dealing with something similar... I have a small side project where I pull public weather data, and I have been iterating on some prompts to an LLM. It has been quite a learning process. It still needs a few iterations, but I think I will get it to a good place over the next few weeks. Prompt engineering is its own specialized area :)
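A hypothetical sketch of a workflow like the weather side project mentioned above: fetch public data, then compress it into a short, token-cheap prompt. The endpoint is the free Open-Meteo API; the coordinates and prompt wording are illustrative:

```python
import requests

def build_weather_prompt(lat: float, lon: float) -> str:
    # Fetch current conditions from the public Open-Meteo forecast API.
    data = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={"latitude": lat, "longitude": lon, "current_weather": True},
        timeout=10,
    ).json()
    current = data["current_weather"]
    # Keep only the fields the LLM needs, as one compact key=value line.
    facts = f"temp_c={current['temperature']} wind_kmh={current['windspeed']}"
    return f"Given these observations ({facts}), describe conditions in one sentence."

print(build_weather_prompt(40.71, -74.01))  # illustrative: New York City
```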
@verylazycoders · 2 months ago
Thank you. Keep up the good work!
@NewMachina · 2 months ago
Thank you for your feedback. I'm focused on making each video better and better.
@RodneyShen · 2 months ago
Intro: Can LLMs parse large inputs? 0:06
Lost in the Middle (LITM) research paper 0:38
Does order of information affect LLM accuracy? 1:06
Similar phenomena seen with humans 1:20
What is a context window? 2:16
Key observations from LITM 2:30
Serial position effect of free recall research paper 2:56
How this affects a RAG workflow 4:05
Recommendation from LITM #1 5:15
Recommendation from LITM #2 5:41
Note for second recommendation 6:09
Recommendation from LITM #3 6:16
Acknowledgements 7:00
Closing remarks 8:04
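The RAG recommendation around 5:15 is to reorder retrieved documents so the most relevant ones sit at the beginning and end of the context, since Lost in the Middle reports the lowest recall for material stuck in the middle. A minimal sketch of that reordering (editor's illustration, not code from the video):

```python
# Reorder documents already ranked by relevance (best first) so the
# top-ranked ones land at the edges of the context window and the
# weakest matches end up in the middle.
def reorder_for_long_context(docs_by_relevance: list[str]) -> list[str]:
    front, back = [], []
    for i, doc in enumerate(docs_by_relevance):
        # Alternate: rank 0 -> front, rank 1 -> back, rank 2 -> front, ...
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]

docs = ["doc1 (best)", "doc2", "doc3", "doc4", "doc5 (worst)"]
print(reorder_for_long_context(docs))
# ['doc1 (best)', 'doc3', 'doc5 (worst)', 'doc4', 'doc2']
```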
@ryanmcgrath7069 · 28 days ago
Great channel
@NewMachina · 27 days ago
Thank you for your feedback. If you have ideas for videos you would like to see, please let me know 🙏
@Leto2ndAtreides · 2 months ago
You missed Gemini with its million-token context window.
@NewMachina · 2 months ago
Let me know if you would like to see some examples with the Google LLMs... I'm working on putting together some Python-based LangChain and LlamaIndex examples.
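For the LangChain route, recent langchain-community releases ship a document transformer for exactly this "lost in the middle" reordering; a minimal sketch, assuming that package is installed:

```python
# Assumes: pip install langchain-core langchain-community
from langchain_core.documents import Document
from langchain_community.document_transformers import LongContextReorder

# Documents ranked best-first, as a retriever would return them.
docs = [Document(page_content=f"chunk {i}") for i in range(5)]
reordered = LongContextReorder().transform_documents(docs)
print([d.page_content for d in reordered])  # best chunks at the edges
```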