Legendary video, Mark! I was a bit overwhelmed by the API documentation in terms of how broad it is. This video absolutely helped me gain the overview necessary to design my application's software architecture. I love how you just show a working example with all components loosely integrated to demonstrate how everything works together. You probably saved me a day of work going back on early uninformed choices in a 30-min video. Subscribed!
Honest, straight to the point, insightful — that's what we like, good job mate! By the way, I'm curious about the scraping assistant for documentation. Do you have any tutorial about this?
My pleasure! Appreciate your appreciation. I use FireCrawl to help me with a lot of documentation scraping — if it misses anything, I get my VA army to help me with the rest.
Hi Mark. Thanks a lot for this great video. I played around some months ago with v1 of the Assistants API, and with that background your video helped me a lot to understand what has changed. I liked the idea of using the Perplexity API to get recent information that isn't part of OpenAI's huge LLMs.
This was very helpful, thank you so much. Can you also dive into the portion where you define the actions in the custom GPT to talk to the Replit microservice (the OpenAPI schema you used to make that connection)?
Say Mark, would you be so kind as to update with the link to the documentation scrape? That would be so useful. Or if you could let us know what you used to scrape the documentation, that would work too. Thanks!
Hi Joe! YouTube doesn't let me attach Google Doc links, but I managed to get around it by adding Bitly links: 1) Part 1: bit.ly/3VDXC4Y 2) Part 2: bit.ly/3xeFCVz. I had my VA do this, but if you wanted to try it in an automated fashion, I would recommend trying FireCrawl and entering the URL. Will be making a video on this in a few weeks.
Hey Mark, so I was building the assistant on the ChatGPT website itself, but when I run it with my large documents that are in the vector search, it reaches a token limit and I won't be able to ask any more questions until I clear or restart the assistant. I'm confused — using ChatGPT allows for long conversations. Doing the process you explained, would this allow me to better communicate with the assistant without it hitting a wall? Maybe I'm going about this wrong or not being as efficient.
Hey Rob! The Assistants API has a per-document limit of around 5 million tokens; I notice that even at the 3-4 million mark, the indexing slows down quite a bit. Because of that, I convert all PDFs into txt files, then split those text files using an online text file splitter. So if I have a document with 5,000,000 tokens, I'll break it up into 1,500,000-token documents so that it can properly index them.
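The splitting step above could also be done locally instead of with an online splitter. Here's a minimal sketch in Python, assuming a rough 4-characters-per-token approximation (the exact token count depends on the tokenizer); the chunk size of 1,500,000 tokens comes from the comment, and the break-on-newline heuristic is my own addition so chunks don't end mid-sentence:

```python
def split_text(text: str, max_tokens: int = 1_500_000,
               chars_per_token: int = 4) -> list[str]:
    """Split text into chunks of at most ~max_tokens (approximated by characters)."""
    max_chars = max_tokens * chars_per_token
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        # Prefer to break on a newline so we don't cut a line in half.
        if end < len(text):
            nl = text.rfind("\n", start, end)
            if nl > start:
                end = nl + 1
        chunks.append(text[start:end])
        start = end
    return chunks
```

You'd then write each chunk out as `part_1.txt`, `part_2.txt`, etc. (hypothetical names) and upload those to the vector store individually. For exact counts you could swap the character approximation for a real tokenizer like `tiktoken`.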
That would be a prompt engineering component - make sure a certain set of criteria or threshold needs to be met to trigger the tool. I would designate a whole portion of the prompt for this.
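To make the "dedicated portion of the prompt" idea concrete, here's a hedged sketch of what that section might look like as part of an assistant's instructions. The wording, criteria, and tool name (`web_search`) are all hypothetical — the point is just that the trigger conditions get their own clearly delimited block rather than being scattered through the prompt:

```python
# Hypothetical system-prompt fragment: one dedicated section spells out
# the criteria that must ALL be met before the tool is invoked.
TOOL_TRIGGER_SECTION = """\
## When to call the `web_search` tool
Call `web_search` ONLY if ALL of the following are true:
1. The user's question concerns events or data after your training cutoff.
2. The answer is not present in the attached documents.
3. The user has not explicitly asked you to avoid external sources.
If any criterion is not met, answer from your own knowledge and say so.
"""

BASE_INSTRUCTIONS = "You are a helpful research assistant."

# Combine the base persona with the dedicated trigger section.
instructions = BASE_INSTRUCTIONS + "\n\n" + TOOL_TRIGGER_SECTION
```

The combined `instructions` string would then be passed wherever the assistant's system prompt goes (e.g. the `instructions` field when creating an assistant).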