Interesting ideas covered here. I'm struggling with the task of "teaching" an agent how to process an initial prompt so that it can extract the right questions, first to scrape the web and then to provide context on the topic. It's a really hard problem; I'd say it's a task that could lead to a whole new startup or business. There's still so much work to do and so much to learn, but this is a key process: good prompting will always be valuable, along with good data sources formatted well for the LLM to understand. The more concise the better, and compressed to keep token usage low. Thanks again for this great content!
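The question-extraction step the comment describes could be sketched roughly like this. Everything here is hypothetical: the function names, the one-query-per-line response format, and the canned response standing in for a real LLM call are all illustrative assumptions, not any particular library's API.

```python
# Sketch of the question-extraction step: given a user's initial prompt,
# build a compact instruction asking an LLM to emit web-search queries.
# All names are hypothetical; the LLM call itself is replaced by a canned
# response below, since any real chat API would work here.

def build_extraction_prompt(user_prompt: str, max_queries: int = 3) -> str:
    # Keep the instruction short and demand a strict, easy-to-parse
    # format so both the prompt and the response stay token-cheap.
    return (
        f"Extract at most {max_queries} web search queries that would "
        f"gather context for the request below. "
        f"Return one query per line, nothing else.\n\n"
        f"Request: {user_prompt}"
    )

def parse_queries(llm_response: str) -> list[str]:
    # One query per line; ignore blank lines.
    return [line.strip() for line in llm_response.splitlines() if line.strip()]

prompt = build_extraction_prompt("Compare battery life of recent e-readers")
canned = "e-reader battery life comparison\nKindle vs Kobo battery test"
queries = parse_queries(canned)
```

The strict output format is the point: a free-form answer would need fragile post-processing, while "one query per line, nothing else" keeps both parsing and token counts small.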
I am dealing with something similar... I have a small side project where I'm pulling public weather data, and I've been iterating on some prompts to an LLM... it's been quite a learning process. It still needs some iterations, but I think I'll get it to a good place over the next few weeks. Prompt engineering really is its own specialized area... :)
Can LLMs parse large inputs?
0:06 Intro
0:38 Lost in the Middle (LITM) research paper
1:06 Does order of information affect LLM accuracy?
1:20 Similar phenomena seen with humans
2:16 What is a context window?
2:30 Key observations from LITM
2:56 Serial position effect of free recall research paper
4:05 How this affects a RAG workflow
5:15 Recommendation from LITM #1
5:41 Recommendation from LITM #2
6:09 Note for the second recommendation
6:16 Recommendation from LITM #3
7:00 Acknowledgements
8:04 Closing remarks
Let me know if you would like to see some examples with the Google LLMs... I'm working on putting together some Python-based LangChain and LlamaIndex examples...