Hi, I am a data enthusiast just like you! On this channel, I teach data science as well as recent AI trends (LLMs) in the simplest manner possible.
Video is currently one of the most important and go-to content types online. I aim to make Data Science Basics a go-to YouTube channel for practical videos around data science.
If you find the content helpful, consider subscribing.
For business inquiries, email: basicsdatascience@gmail.com 💼 Consulting: topmate.io/sudarshan_koirala
It seems a bit beside the point to use sparse/hybrid search on the metadata if we already know exactly what to filter out:
- Isn't there a way to attach metadata on top of the embedding instead of embedding the metadata itself (sketched below)? Until we can, I doubt its efficiency.
- I would love to see more metadata creation techniques (maybe using cheaper/task-specific models). This subject is not talked about nearly enough, because it is difficult and relatively expensive.
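A minimal sketch of that first point, using Chroma (the collection name and fields here are hypothetical): the metadata rides alongside the vector as a filterable payload and is never embedded, so filtering happens during the dense search rather than through hybrid search over embedded metadata text.

import chromadb

client = chromadb.Client()
collection = client.create_collection("docs")

# The metadata travels with the vector but is not embedded itself.
collection.add(
    ids=["doc-1"],
    documents=["Quarterly revenue grew 12% year over year."],
    metadatas=[{"source": "10-K", "year": 2023}],
)

# At query time, restrict to matching metadata, then rank by dense similarity.
results = collection.query(
    query_texts=["revenue growth"],
    where={"year": 2023},
    n_results=3,
)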
@author If I run this notebook and some dataframes get created, then run this notebook from my current notebook using the query above, will the dataframes be available in the current notebook?
You are welcome. You don't need to start the notebook over, just ask your question. But yes, once you leave the Colab notebook and the runtime is no longer active, you need to run the notebook again.
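For the dataframe question above, a minimal sketch of one way this works in Jupyter/Colab (the filename and dataframe name are hypothetical): IPython's %run magic executes another notebook and pulls its top-level variables into the current namespace.

# Runs data_prep.ipynb and imports its top-level names (including dataframes).
%run ./data_prep.ipynb
df_sales.head()  # a dataframe defined in data_prep.ipynb, now usable here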
Thank you, Sudarshan. Could you please consider making a video guide on running GraphRAG on a local LLM such as Ollama, to ingest any type of document and not just one document?
You should explain the code in more detail. For example, why you wrote that line of code. That would help us understand, and help you get more likes and subscriptions.
Hello Sudarshan, welcome back, and thank you for your efforts. Could you consider making a video covering the following topic: Groq using Llama 3.1: agents & LangChain with ReAct (Reasoning and Acting) for question answering or summarization.
Hi, 1. How can I use the embeddings = OllamaEmbeddings(model="nomic-embed-text") model directly from Hugging Face rather than from a locally installed instance (see the sketch below)? 2. If I am using a locally installed instance, how can I publish it to a Hugging Face Space?
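A minimal sketch of the Hugging Face route (assuming sentence-transformers and einops are installed; the nomic model card requires trust_remote_code):

from langchain_community.embeddings import HuggingFaceEmbeddings

# Downloads the model from the Hugging Face Hub instead of calling Ollama.
embeddings = HuggingFaceEmbeddings(
    model_name="nomic-ai/nomic-embed-text-v1",
    model_kwargs={"trust_remote_code": True},
)
vector = embeddings.embed_query("hello world")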
RAG - I am working on a Form 10-K HTML document RAG where an AI agent acts as a financial analyst: it reads a company's Form 10-K, creates graphs and charts, does risk analysis, calculates the P/E ratio, etc., instead of just extracting the written text from the document. The goal is to replace a financial analyst with LLM+RAG, so the RAG pipeline should be robust and able to do everything an expert can do. I tried with LangChain and LlamaIndex but had no luck.
Hello, how do you generate data when there are two tables related by a primary key / foreign key (PK/FK)? Is the model capable enough to generate such relational data (see the sketch below)?
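A minimal sketch of one way to keep PK/FK integrity (not from the video; all names are hypothetical): generate the parent table first, then constrain the child table to reference only existing parent keys in the prompt.

# Parent table generated first (e.g. by the synthetic data generator).
generated_customers = [
    {"customer_id": 1, "name": "Alice"},
    {"customer_id": 2, "name": "Bob"},
]
parent_ids = [row["customer_id"] for row in generated_customers]

# Constrain the child rows to valid foreign keys via the prompt itself.
child_prompt = (
    "Generate 10 synthetic orders as a JSON list. Each order needs an "
    f"order_id and a customer_id chosen ONLY from this list: {parent_ids}."
)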
Thanks for sharing these videos, they are really helpful. I have one question though: how can I install poppler on a Windows system? I am facing some challenges there and getting the following error: "Unable to get page count. Is poppler installed and in PATH?"
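A minimal sketch for Windows (the paths are hypothetical): download a poppler build for Windows, unzip it, and either add its bin folder to PATH or pass the location explicitly to pdf2image.

from pdf2image import convert_from_path

# poppler_path points pdf2image at the unzipped poppler binaries directly,
# so you don't have to modify the system PATH.
pages = convert_from_path(
    "document.pdf",
    poppler_path=r"C:\tools\poppler-24.02.0\Library\bin",
)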
Is the LangSmith part important? It is not mentioned in the Quivr GitHub documentation. However, I'm having an installation issue, so I was wondering whether LangSmith would ease the process.
How can I host a RAG model on Hugging Face where teachers can upload PDF content with text and images, as well as audio and video content, and students can then interact with the model via text, speech-to-text (Groq Whisper v3), and text-to-speech (ElevenLabs)?
I want to fine-tune Llama 3, but I need to create the special_tokens_map.json as follows:

{
  "bos_token": {
    "content": "<|begin_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "<|im_end|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "<|end_of_text|>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}

How can I do this? Moreover, I want to run the model with Ollama so I can chat with it.
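A minimal sketch (not from the video) of producing that file with the transformers library: set the special tokens on the tokenizer, and save_pretrained writes special_tokens_map.json for you. The output directory is hypothetical.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
tokenizer.add_special_tokens({
    "bos_token": "<|begin_of_text|>",
    "eos_token": "<|im_end|>",
    "pad_token": "<|end_of_text|>",
})
tokenizer.save_pretrained("./llama3-finetune")  # writes special_tokens_map.json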
Looks like we cannot handle the below code with Azure OpenAI the way we can with OpenAI:

from langchain.prompts import FewShotPromptTemplate, PromptTemplate
# from langchain.chat_models import AzureOpenAI
from langchain.pydantic_v1 import BaseModel
from langchain_experimental.tabular_synthetic_data.base import SyntheticDataGenerator
from langchain_experimental.tabular_synthetic_data.openai import create_openai_data_generator, OPENAI_TEMPLATE
from langchain_experimental.tabular_synthetic_data.prompts import SYNTHETIC_FEW_SHOT_SUFFIX, SYNTHETIC_FEW_SHOT_PREFIX
Hi, I have Docker, Ollama, and Open WebUI installed. Now I'm trying to update Open WebUI (it shows I'm a few versions behind, and the "Let's go" button doesn't work), and I'm trying to add the web search feature; some also say that DuckDuckGo has a free API. But I am not a tech person, and reading the docs is confusing for me; I only managed by a miracle to get my setup working thanks to this step-by-step video. Can you please make a video on how to update the WebUI and how to get the web search feature installed and working? Thanks.
Is this possible in the Community Edition? (According to my googling it should be.) But I can't seem to make it work. BTW, good job with these videos. Greatly appreciated.
Hello, you can quickly use Azure OpenAI by importing AzureOpenAI from LangChain. For reference, here is the link -> python.langchain.com/v0.2/docs/integrations/llms/azure_openai/
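A minimal sketch of swapping Azure in for the synthetic-data code above (assuming langchain-openai is installed and the standard AZURE_OPENAI_API_KEY / AZURE_OPENAI_ENDPOINT environment variables are set; the deployment name is hypothetical):

from langchain_openai import AzureChatOpenAI

# Reads the Azure endpoint and key from the environment.
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",   # your Azure deployment name
    api_version="2024-02-01",
)
# llm can then be passed to create_openai_data_generator in place of ChatOpenAI.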