Welcome to CompuFlair's YouTube Channel - Your Gateway to the Future of Data Analysis Workflows!
At CompuFlair, we are dedicated to revolutionizing the world of biomedical sciences through advanced biomedical informatics and cutting-edge AI technologies. Based in Houston, Texas, our channel offers an exciting look into how we turn complex life science data into groundbreaking insights and solutions.
**What We Do**:
🤖 AI & Machine Learning
💡 ChatGPT Integration
🧬 Biomedical Informatics
🔬 Educational Content
🌐 Join our community. Engage with our content, share your thoughts, and collaborate with us.
Subscribe to our channel and hit the notification bell to stay updated.
This is one of the best series of videos I've come across for learning LangChain and putting it to practical everyday use with OpenAI. The example RAG topic of using a detailed (downloadable) PDF to add missing content to the response from an off-the-shelf LLM is covered very nicely and is very easy to follow. Well done!
Awesome work. A hands-on video on how to generate spliced and unspliced counts for RNA velocity using kallisto and bustools (or the Python wrapper kb-python) would be much appreciated, please.
Hi there. Here are the versions: langchain==0.1.11, langchain-community==0.0.27 (for FAISS)
2 months ago
@CompuFlair Have you ever experienced this error with the retriever? pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain retriever instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
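A common cause of this ValidationError (not confirmed for this particular case) is passing the FAISS vector store object itself as `retriever` instead of calling `db.as_retriever()` on it, since the chain's pydantic model requires a `BaseRetriever` instance. The self-contained mock below is illustrative only — all class names except `as_retriever` mirror, rather than import, the real LangChain types:

```python
# Illustrative mock of the type check behind the pydantic ValidationError:
# ConversationalRetrievalChain requires `retriever` to be a BaseRetriever,
# and a raw vector store is not one. These classes are stand-ins, not LangChain.
class BaseRetriever:
    def get_relevant_documents(self, query):
        raise NotImplementedError

class VectorStoreRetriever(BaseRetriever):
    def __init__(self, store):
        self.store = store

    def get_relevant_documents(self, query):
        return []  # stub: a real retriever would do a similarity search

class FakeFAISS:
    """Stands in for a FAISS vector store (a store, not a retriever)."""
    def as_retriever(self):
        # This is the key step: wrap the store in a BaseRetriever subclass.
        return VectorStoreRetriever(self)

def make_chain(retriever):
    # Mirrors the pydantic validation that raised the error in the comment.
    if not isinstance(retriever, BaseRetriever):
        raise TypeError("instance of BaseRetriever expected")
    return {"retriever": retriever}

db = FakeFAISS()
# make_chain(db) would raise, like the ValidationError above.
chain = make_chain(db.as_retriever())  # passing as_retriever() satisfies the check
```

So in real code the fix is typically `ConversationalRetrievalChain.from_llm(llm, retriever=db.as_retriever(), ...)` rather than `retriever=db`.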
Could you please provide the GitHub link? Thank you so much. The channel is really interesting. For absolute beginners it is even more beneficial, as the videos explain step-by-step details with reasoning for why a particular approach is used. The ONLY DRAWBACK is that there is no GitHub repository link provided to practice along with the tutorial. A few people asked in the comments, but no direct answer was provided. Please help.
Sir, how many questions and answers from the history should be passed with a follow-up question? Will the follow-up work if only the previous question and answer are passed?
@CompuFlair, in that case will the number of tokens increase with each subsequent follow-up question? To save cost, how can we restrict the tokens without compromising the context?
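One common way to keep follow-up cost bounded is to pass only the most recent (question, answer) pairs that fit a token budget. The sketch below is illustrative and not from the video: it approximates token counts by word count, so in practice you would swap in a real tokenizer (e.g. tiktoken for OpenAI models):

```python
# Illustrative sketch: cap chat history by a token budget before each
# follow-up call. Tokens are approximated here by whitespace-split words;
# replace n_tokens with a real tokenizer for accurate accounting.
def trim_history(history, max_tokens=1000):
    """Keep the most recent (question, answer) pairs within the budget."""
    def n_tokens(pair):
        return len(pair[0].split()) + len(pair[1].split())

    kept, total = [], 0
    for pair in reversed(history):  # walk newest-first
        total += n_tokens(pair)
        if total > max_tokens:
            break  # oldest pairs beyond the budget are dropped
        kept.append(pair)
    return list(reversed(kept))  # restore chronological order

history = [
    ("What is RAG? " * 100, "RAG is retrieval-augmented generation. " * 100),
    ("Which PDF did you use?", "The downloadable one from the video."),
    ("How do I load it?", "With a PDF loader, then split into chunks."),
]
trimmed = trim_history(history, max_tokens=50)
# Only the two recent short pairs fit the 50-token budget;
# the long first exchange is dropped.
```

Dropping old turns does lose context; a frequent refinement is to replace the dropped turns with a short running summary instead of discarding them outright.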
Nice, you really break it down perfectly. I like how you explain why each step is necessary, for example why it's necessary to use a prompt template and what we need to import. In many tutorials you'll see a list of imports whose function you don't learn until much later in the code. I look forward to the series.