Excellent tutorial! Do you know how to UPDATE an uploaded file? And if that is possible, will it create new embeddings and update the vector store automatically? I think this functionality is not supported yet!
Thanks! This functionality is supported in both V1 and V2. In V1 of the Assistants API, you would have to re-upload all of your files along with the updated one (which was clunky, but I guess it was required because the embeddings had to be created all over again). You can check this out in OpenAI's update-assistant documentation.
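In V2 you can swap just the one file: detach the stale file from the vector store, upload the updated copy, and attach it; the new file is chunked and embedded automatically. Here's a minimal sketch, assuming the openai Python SDK v1.x beta vector-store endpoints (the function name, client object, and IDs are placeholders, not part of the tutorial's code):

```python
def replace_file_in_store(client, vector_store_id, old_file_id, updated_path):
    """Swap one stale file in a vector store for an updated copy.

    `client` is expected to be an openai.OpenAI() instance; the new file
    is chunked and embedded automatically when it is attached.
    """
    # Detach the outdated file from the store (its embeddings are discarded)
    client.beta.vector_stores.files.delete(
        vector_store_id=vector_store_id, file_id=old_file_id
    )
    # Upload the updated copy and attach it to the same store
    with open(updated_path, "rb") as fh:
        new_file = client.files.create(file=fh, purpose="assistants")
    client.beta.vector_stores.files.create(
        vector_store_id=vector_store_id, file_id=new_file.id
    )
    return new_file.id
```

Because the assistant points at the vector store ID, not at individual files, nothing else needs to change after the swap.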
Sure, I have a video on function calling which you can find here ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-nKz6aTIg47s.html; it's a two-part video that explains step by step how function calling is used with assistants.
Hi, my use case is to generate Python code based on conditions listed in a file. I uploaded this file to the vector store and attached it to the assistant, but I am not getting the exact code back from the AI. I used the same code you provided. Can you help me with this?
Did you try adding instructions within the system prompt? If it's not producing the desired result, you need to direct the model more specifically through run instructions and the system prompt itself.
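Per-run instructions are layered on top of the assistant's system prompt, so you can tighten the output format for a single run. A sketch assuming the openai Python SDK v1.x (the function name, IDs, and wording are illustrative):

```python
def run_with_instructions(client, thread_id, assistant_id, run_instructions):
    """Start a run whose per-run instructions steer this one response.

    `client` is expected to be an openai.OpenAI() instance.
    """
    return client.beta.threads.runs.create(
        thread_id=thread_id,
        assistant_id=assistant_id,
        # Per-run instructions: push the model toward the exact output you need
        instructions=run_instructions,
    )
```

For a code-generation use case, instructions like "Return only runnable Python code, no commentary" tend to reduce variation between runs.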
@thecodecruise through the API calls? I know there is a way to add files on the dashboard on the platform. Once a store is created, is there a way to change the chunk and overlap size? Additionally, the store ID remains the same, correct?
Is there any way to load a file (a CSV in this case) into a custom GPT, so that when I share my custom GPT on the OpenAI platform, I can update the CSV in the background with the latest data? Thanks
You can provide a CSV to a custom GPT, but you can't programmatically update the CSV behind the scenes; instead, you would have to update your file and reattach it to the custom GPT every time. OpenAI Assistants, however, can help you achieve this.
@thecodecruise thanks! Because I have very large PDFs that produce a lot of tokens, and I thought if I got rid of all the pictures I could reduce the number of input tokens.
@DavldLangner that would be a good idea. With these LLMs and their tendency to hallucinate, it's always good to handle images separately for any analysis, but then again, not all models do very well with them.
This creates new vector stores and files every time I run the script. How can I modify it to use one specific vector store and not create a new one every time?
Once your vector store is created, your files have been added to it, and your assistant is created, treat that as a separate one-time script; store your vector store ID and assistant ID, and that's all you will need. Then separate this part of the code into a second script gist.github.com/AwaisKamran/b10053108b3d3fe3c2745d2b33adf55f; all you need there is your assistant ID.
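One way to make the setup script idempotent is to cache the two IDs on disk and only create resources when the cache is missing. A sketch assuming the openai Python SDK v1.x (the cache filename, store name, and model name are illustrative choices, not from the tutorial):

```python
import json
import os

CONFIG_PATH = "assistant_config.json"  # illustrative cache file

def get_or_create_ids(client, config_path=CONFIG_PATH):
    """Return (assistant_id, vector_store_id), creating them only once.

    `client` is expected to be an openai.OpenAI() instance.
    """
    if os.path.exists(config_path):
        # Reuse the IDs cached by a previous run instead of creating new resources
        with open(config_path) as fh:
            cfg = json.load(fh)
        return cfg["assistant_id"], cfg["vector_store_id"]
    store = client.beta.vector_stores.create(name="my-store")
    assistant = client.beta.assistants.create(
        model="gpt-4o",
        tools=[{"type": "file_search"}],
        tool_resources={"file_search": {"vector_store_ids": [store.id]}},
    )
    with open(config_path, "w") as fh:
        json.dump({"assistant_id": assistant.id, "vector_store_id": store.id}, fh)
    return assistant.id, store.id
```

Every later run reads the cached IDs, so no duplicate stores or assistants are created.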
Thanks, trying to do better 🙂 Yes, it's possible; the idea of vector embeddings is to store anything as vectors and derive answers from them. But instead of building it from scratch, I would recommend that you try Gemini 1.5 Pro. I have two videos on this which summarize audio calls and extract details from video; you can check them out below: 1. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qE36eBfYeR8.htmlsi=pvPM8jZKm2kU8ixc 2. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qd-6kuyvc8s.htmlsi=2ucm-fmxrLanHc8L
@thecodecruise Thanks for replying! I want to store CSV files in a vector store and ask OpenAI to generate content based on a CSV file, but OpenAI has to go through all the CSV files for that. Is this achievable?
@LeeYeon-qv1tz This is achievable; you can tweak your OpenAI assistant to use just one CSV file. I would also recommend PandasAI for this task; check the documentation here docs.pandas-ai.com/
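To pin the assistant to a single CSV, attach just that one file; for tabular data, code_interpreter is usually a better fit than file_search. A sketch assuming the openai Python SDK v1.x (the function name and IDs are placeholders):

```python
def attach_single_csv(client, assistant_id, csv_path):
    """Make one CSV the assistant's only code_interpreter file.

    `client` is expected to be an openai.OpenAI() instance.
    """
    with open(csv_path, "rb") as fh:
        uploaded = client.files.create(file=fh, purpose="assistants")
    # Replacing tool_resources limits the assistant to exactly this file
    return client.beta.assistants.update(
        assistant_id,
        tool_resources={"code_interpreter": {"file_ids": [uploaded.id]}},
    )
```

Because tool_resources is replaced rather than appended to, any previously attached CSVs stop being consulted.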
Sure, here's the script for creating the assistant gist.github.com/AwaisKamran/eeab9a2b4f6ae9a9ca2fdc7298dd90fb and here's the script to run the assistant gist.github.com/AwaisKamran/85072f344477ce8ed55eb0b7a491585d.