In this video, we'll build a RAG app locally and for free using Ollama and a local embedding model, and we'll trace the app with LangSmith.
00:01 Introduction
01:08 Create a virtual environment
01:38 Installation
03:17 Initialize the local model
04:23 Enter LangSmith
07:28 Load data
09:14 Split data
11:50 Create a database
13:56 Retrieve data
15:35 Generate the output
20:23 Summary
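The pipeline in the chapters above (load → split → embed into a database → retrieve → generate) can be sketched in plain Python. This is only an illustration of the retrieval idea: the video uses a real Ollama embedding model and a vector store, while here a toy bag-of-words "embedding" and cosine similarity stand in for them, and all chunk texts are made up for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the video uses an Ollama embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Split data": document chunks; "Create a database": store (chunk, vector) pairs.
chunks = [
    "Ollama runs large language models locally",
    "LangSmith traces and debugs LLM applications",
    "Vector databases store embeddings for retrieval",
]
db = [(c, embed(c)) for c in chunks]

def retrieve(query, k=1):
    # "Retrieve data": rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    return [c for c, _ in sorted(db, key=lambda p: cosine(q, p[1]), reverse=True)[:k]]

# "Generate the output": in the video the retrieved context is passed to a local
# LLM via Ollama; here we just print the retrieved chunk.
print(retrieve("How do I trace my LLM app?"))
```

In the actual app, `embed` and the final generation step are LLM calls, and LangSmith records each step of the chain for debugging.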
🚀 Medium: / tirendazacademy
🚀 X: x.com/tirendaz...
🚀 LinkedIn: / tirendaz-academy
▶️ LangChain Tutorials:
• LangChain Tutorials
▶️ Generative AI Tutorials:
• Generative AI Tutorials
▶️ LLMs Tutorials:
• LLMs Tutorials
▶️ HuggingFace Tutorials:
• HuggingFace Tutorials ...
🔥 Thanks for watching. Don't forget to subscribe, like the video, and leave a comment.
🔗 Notebook: github.com/Tir...
🔗 LangGraph Blog: blog.langchain...
🔗 LangSmith: smith.langchai...
🔗 RAG Prompt: smith.langchai...
#ai #langgraph #generativeai
Oct 1, 2024