On the vector_index variable I keep getting an "Error embedding content: 504 deadline exceeded" error, even after reducing the chunk size to 2000. How can I fix this?
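A 504 usually means the embedding request timed out server-side, so sending fewer chunks per call and retrying with backoff often helps more than shrinking the chunks themselves. Here is a minimal sketch of that pattern; the helper name and defaults are illustrative, and in the notebook you would pass it a wrapper around your embeddings object's `embed_documents` call:

```python
import time

def batched_with_retry(items, fn, batch_size=20, retries=3, delay=1.0):
    """Call fn on successive small batches, retrying with exponential
    backoff on failure (e.g. a 504 deadline-exceeded from the API)."""
    results = []
    for i in range(0, len(items), batch_size):
        batch = items[i:i + batch_size]
        for attempt in range(retries):
            try:
                results.extend(fn(batch))
                break  # batch succeeded, move to the next one
            except Exception:
                if attempt == retries - 1:
                    raise  # out of retries, surface the real error
                time.sleep(delay * (2 ** attempt))
    return results
```

The same idea works for building the index incrementally: create the Chroma store from the first batch, then feed later batches in with `add_texts` instead of embedding everything in one `from_texts` call.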
I have a PDF of 1000+ pages, and I want the model to analyze the PDF to answer questions. Also, can I upload multiple PDFs at once so the AI analyzes them all and gives the best answer?
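Multiple PDFs work fine with this approach: extract the text of each file, chunk everything, and put all the chunks into one vector index, keeping track of which file each chunk came from. A rough sketch, assuming you already have the extracted text per file (the chunker and function names here are illustrative, not from the video):

```python
def chunk_text(text, chunk_size=2000, overlap=200):
    """Split text into overlapping fixed-size chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def chunks_from_documents(docs):
    """docs: mapping of filename -> extracted text.
    Returns (chunk, source) pairs so answers can cite which PDF they came from."""
    pairs = []
    for name, text in docs.items():
        for chunk in chunk_text(text):
            pairs.append((chunk, name))
    return pairs
```

The retriever then searches across every document at once and pulls the most relevant chunks regardless of which PDF they belong to.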
Why did you choose ChromaDB and not FAISS, as you did in "Langchain: PDF Chat App (GUI) | ChatGPT for Your PDF FILES | Step-by-Step Tutorial"? Thank you, and happy new year!
I am getting an authentication error on vector_index = Chroma.from_texts(texts, embeddings).as_retriever(), even though I integrated the API key and installed all the necessary dependencies.
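The `from_texts` call is where the embedding requests actually fire, so an auth error there usually means the key never reached the SDK. A quick sanity check, as a sketch; `GOOGLE_API_KEY` is the environment variable the Google SDK commonly reads, but adjust the name if your notebook configures the key differently:

```python
import os

def require_api_key(var="GOOGLE_API_KEY"):
    """Fail loudly before building the index if the key is missing."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(
            f"{var} is not set; the embedding call will fail to authenticate"
        )
    return key
```

Also make sure the key is set in the same runtime that executes the cell (a Colab restart wipes environment variables set in earlier sessions).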
Is there any LLM to use for blog post writing? I mean a model that follows instructions and that we can use for free to write blog post outlines and long 3000+ word texts. It would be great if you could suggest a model or make a comparison video on this.
You might want to look at the new Mixtral-8x7B or Dolphin-Mixtral-8x7B models. They have a 32k context window, so they are well suited to writing long text.
Hi, I was doing a project using Gemini and GPT-4 Vision, but each time I get a different answer. How do I solve this? I mean, how can I get the same answer every time?
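The variation comes from sampling: with temperature above 0 the model draws tokens at random, while at temperature 0 it always takes the most likely token. The practical fix is to set temperature=0 (and a fixed seed where the API supports one) in the generation config, though hosted APIs may still show small nondeterminism. A toy illustration of why greedy decoding is reproducible (this is not the Gemini API, just the sampling idea):

```python
import random

def pick_token(probs, temperature, rng):
    """probs: mapping token -> probability. temperature == 0 is greedy."""
    if temperature == 0:
        return max(probs, key=probs.get)  # deterministic argmax
    # Temperature sampling: sharpen/flatten the distribution, then draw.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    r = rng.random() * sum(weights.values())
    acc = 0.0
    for tok, w in weights.items():
        acc += w
        if r <= acc:
            return tok
    return tok
```

Calling `pick_token` with temperature 0 returns the same token every time, no matter the random state; with temperature above 0 it can return different tokens across calls.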
I want to make a PDF chat app, but I want the response to come from the document only. I don't want OpenAI to influence or modify my response; I want the exact content from my doc as the response. What should I do?
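If you need verbatim text, the simplest route is to skip the generation step entirely and return the retrieved chunks themselves. Here is a toy sketch that scores chunks by keyword overlap; in a LangChain app you would instead ask the retriever for the relevant documents and display their `page_content` directly, without passing it through the LLM:

```python
def retrieve_verbatim(chunks, query, k=2):
    """Return the k chunks sharing the most words with the query,
    untouched, so the answer is exact text from the document."""
    q_words = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: -len(q_words & set(c.lower().split())),
    )
    return scored[:k]
```

If you still want the LLM to pick the right passage, prompt it to quote the context word for word and answer "not in the document" otherwise; that reduces, but does not fully eliminate, paraphrasing.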
Hello man, I have tried to follow the instructions and steps in the Colab notebook, but the last step (prompt = ["What is Mixture of Experts?"]; response = model.generate_content(prompt)) keeps failing and returning an error.
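It helps to wrap that call so the real failure (quota, safety block, auth) is surfaced instead of a bare traceback; `generate_content` also accepts a plain string, which is worth trying if the list form errors out. A minimal sketch, where `model` is assumed to be the `GenerativeModel` created earlier in the notebook:

```python
def ask(model, prompt):
    """Run generate_content and report the underlying error on failure."""
    try:
        response = model.generate_content(prompt)
        return response.text
    except Exception as exc:
        return f"generation failed: {exc}"
```

If the returned message mentions the response being blocked, inspect the response's prompt feedback; if it mentions the API key or quota, recheck the key-configuration cell before this step.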