
Ollama Llama 3 - RAG: How to create a local RAG system with LLAMA 3 using OLLAMA 

Siddhardhan
123K subscribers
4.5K views

This video is about building a local RAG system with LLAMA 3 using OLLAMA.
Join this channel to get access to perks:
/ @siddhardhan
Code file link: drive.google.c...
Ollama Getting started: • Ollama - Run LLMs Loca...
RAG - concept: • Retrieval Augmented Ge...
All presentation files for the Machine Learning course as PDF for as low as ₹200 (INR): Drop a mail to siddhardhans2317@gmail.com
Machine Learning Course in 60 Hours: • Complete ML course in ...
Machine Learning Projects Playlist: • Machine Learning Projects
Download the Course Curriculum File from here: drive.google.c...
LinkedIn: / siddhardhan-s-741652207
Telegram Group: t.me/siddhardhan

Published: 29 Aug 2024

Comments: 21
@kumarrao-g2k 7 days ago
Hi Siddhardhan, thank you, this is very useful. One has to do a lot of troubleshooting, though; maybe you can update this to the latest version. For example, UnstructuredFileLoader is no longer available and UnstructuredLoader is used instead. Please also include content on how to troubleshoot libmagic, text splitter, and module-not-found errors. It will be very helpful.
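The rename described above can be guarded at import time rather than hard-coded. A minimal stdlib-only sketch; the package and class names below (langchain_unstructured for the newer API, langchain_community for the older one) are assumptions to verify against your installed versions:

```python
import importlib.util

def pick_loader_path():
    """Return the dotted path of whichever Unstructured loader is installed."""
    if importlib.util.find_spec("langchain_unstructured"):
        return "langchain_unstructured.UnstructuredLoader"  # newer API (assumed)
    if importlib.util.find_spec("langchain_community"):
        return "langchain_community.document_loaders.UnstructuredFileLoader"  # older API
    return None  # neither package installed

loader_path = pick_loader_path()
print(loader_path)
```

find_spec only probes whether a package is importable, so this runs even in an environment with neither package installed.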
@Donttouchthesnow 5 days ago
The amount of troubleshooting I've had to do just to import the dependencies is insane; please fix this or update it in the description.
@revanthbhuvanagiri9177 2 months ago
@Siddhardhan Hi Sid, I'm a fan of your work. Could you please create a video on how to deploy the RAG system? It would be really great if you taught us this aspect. Once again, your videos are informative. Thank you!
@PAVANSIDEAS 2 months ago
Hi sir, please complete the remaining ML modules; we are waiting for your videos. We have finished watching all the modules, and your way of explaining ML is quite different from others, so I prefer only your videos, sir. Please start making the ML modules.
@javiecija96 2 months ago
I've been trying to make a Llama agent with tools; more precisely, a custom version of the Tavily search tool, modifying the _run method to access all the API parameters. Can you do a video on that?
@amventures1 1 month ago
Please also make a video about function calling using the same Ollama and model.
@amventures1 1 month ago
Can I load the model locally or from Hugging Face using Ollama? And complete the rest of what you've done. This is very important for my project. Please answer me, and thanks for the help.
@user-nq5fm1bm5u 1 month ago
Hi sir, what should I do to insert more than one PDF? Thanks.
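One common way to handle several PDFs is to glob a folder and run every file through the same loader, accumulating the documents before splitting. A stdlib-only sketch; the folder name "pdfs" is a placeholder, and the commented-out loader calls assume the tutorial's UnstructuredFileLoader:

```python
from pathlib import Path

# Collect every PDF in a folder; sorted() keeps the order deterministic.
pdf_paths = sorted(str(p) for p in Path("pdfs").glob("*.pdf"))

docs = []
for path in pdf_paths:
    # As in the tutorial (requires langchain + unstructured installed):
    # loader = UnstructuredFileLoader(path)
    # docs.extend(loader.load())
    pass

print(f"found {len(pdf_paths)} PDFs")
```

The accumulated docs list is then passed to the text splitter exactly as the single-file doc was, so the rest of the pipeline is unchanged.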
@mlTS7626 2 months ago
Thanks for the tutorial. Even for a single-page PDF, with chunk size 1000 and overlap 200, it is taking me more than a minute (1.25 minutes) to answer a single query. Is this normal? (System config: Ryzen 5 5600H, 16 GB RAM, no GPU.)
@KumR 2 months ago
Thanks, Sid. Can you please help in expanding this with a Streamlit UI, and also include listening and speaking abilities?
@Siddhardhan 2 months ago
Hi! Sure
@angela-tsai-cjt 2 months ago
ValidationError Traceback (most recent call last)
Cell In[24], line 1
----> 1 llm = Ollama(
      2     model = "llama3:latest",
      3     temperatue=0
      4 )
File ~/Applications/anaconda3/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Ollama
temperatue
  extra fields not permitted (type=value_error.extra)

Hello, do you know how to fix this error?
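The traceback above is pydantic rejecting a misspelled keyword: "temperatue" should be "temperature", and Ollama does not permit extra fields. A stdlib analogue of the same failure mode (FakeOllama is a hypothetical stand-in, not the real class, but it rejects unknown keywords the same way):

```python
from dataclasses import dataclass

@dataclass
class FakeOllama:
    """Hypothetical stand-in that, like pydantic, rejects unknown fields."""
    model: str
    temperature: float = 0.0

# Misspelled keyword -> TypeError, the dataclass analogue of the
# "extra fields not permitted" ValidationError in the comment above.
try:
    FakeOllama(model="llama3:latest", temperatue=0)
    raised = False
except TypeError:
    raised = True

# Correct spelling constructs fine.
llm = FakeOllama(model="llama3:latest", temperature=0)
```

The fix in the original cell is the spelling alone: temperature=0.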
@Arunak13203 2 months ago
question = "What is the document about?"
response = qa_chain.invoke({"query": question})
print(response["result"])

ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))

How do I fix this error, sir?
@Siddhardhan 2 months ago
Hi! Was the model loaded from ollama?
@Arunak13203 2 months ago
@@Siddhardhan Yes sir, but it was the llama3:latest model.
@Siddhardhan 2 months ago
@@Arunak13203 Check if you can run it from the terminal with the ollama run command and whether you get any response.
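A "connection refused" on port 11434 usually means the Ollama server process itself isn't running, independent of whether the model was pulled. A quick terminal check, assuming a default local install (the /api/tags endpoint lists locally available models):

```shell
# Start the server if it isn't already running (it binds localhost:11434).
ollama serve &

# Confirm the HTTP API answers; this lists the locally available models.
curl http://localhost:11434/api/tags
```

If the curl call returns JSON, LangChain's Ollama wrapper should be able to reach /api/generate as well.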
@Arunak13203 2 months ago
@@Siddhardhan I ran the Llama 3 model in the terminal and it generates output without any error, but here I got an error:

import os
from langchain_community.llms import Ollama
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import CharacterTextSplitter

llm = Ollama(model="llama3:latest", temperature=0)

loader = UnstructuredFileLoader("ML Python.pdf")
doc = loader.load()

splitter = CharacterTextSplitter(separator='/n', chunk_size=800, chunk_overlap=100)
chunks = splitter.split_documents(doc)

data = FAISS.from_documents(chunks, HuggingFaceEmbeddings())

qa_chain = RetrievalQA.from_chain_type(llm, retriever=data.as_retriever())

question = "What is the document about?"
response = qa_chain.invoke({"query": question})
print(response["result"])
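One bug worth flagging in the pasted pipeline, separate from the connection error: separator='/n' is a forward slash followed by the letter n, not the newline escape '\n', so the splitter never finds that separator in real text. A quick stdlib demonstration:

```python
# '/n' (slash-n) is a two-character literal, not the newline escape '\n'.
text = "first line\nsecond line"

assert "/n" not in text        # the typo'd separator matches nothing
assert text.count("\n") == 1   # the real newline is there

# Splitting on the wrong separator leaves the text in one piece:
print(text.split("/n"))  # ['first line\nsecond line']
print(text.split("\n"))  # ['first line', 'second line']
```

With the wrong separator, CharacterTextSplitter falls back to oversized chunks, so fixing it to separator='\n' also helps chunking quality.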
@user-fw5bi8kj7w 2 months ago
Hi Mr. Siddhardhan. I appreciate your help in implementing this via your video, but I have a small problem. At the seventh (7th) cell of the Jupyter notebook, an error was raised: "Could not import faiss python package. Please install it with `pip install faiss-gpu` (for CUDA supported GPU) or `pip install faiss-cpu` (depending on Python version)." I tried installing faiss-gpu via pip, but it said: "ERROR: Could not find a version that satisfies the requirement faiss-gpu (from versions: none) ERROR: No matching distribution found for faiss-gpu". By the way, I have an RTX 3060 and am trying to take advantage of my GPU, so if you don't mind, can you help me? I'm just a fresher, so please don't mind my scarce knowledge.
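On the "no matching distribution" error: faiss-gpu wheels are only published for certain Python/CUDA combinations, while faiss-cpu installs broadly and is sufficient for a tutorial-scale index. A hedged sketch of the two routes:

```shell
# CPU wheel: broadly available, enough for small local indexes.
pip install faiss-cpu

# GPU wheel: only published for some Python/CUDA combinations; if pip
# reports "no matching distribution", your interpreter isn't covered.
# pip install faiss-gpu
```

With faiss-cpu installed, the FAISS.from_documents call in the tutorial works unchanged; only the index search runs on CPU.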
@Siddhardhan 2 months ago
Hi! It's a CUDA version issue. It would work in Python 3.10.12; check if you can create an env with this version in conda or pip.
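The pinned environment the reply suggests can be sketched like this (the environment name rag-env is a placeholder):

```shell
# Create and activate an environment pinned to the suggested Python.
conda create -n rag-env python=3.10.12 -y
conda activate rag-env

# Then install the stack inside it, e.g. the CPU index:
pip install faiss-cpu
```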
@user-fw5bi8kj7w 2 months ago
@@Siddhardhan Thank you, sir, for the immediate acknowledgement. But sir, I'm using Python 3.11 and I'm already using a brand-new environment named "work". So should I step down my Python version to 3.10?