
Get Started with LangChain in Node.js 

Developers Digest
24K subscribers · 18K views

Published: 26 Oct 2024

Comments: 95
@DevelopersDigest · a year ago
Building A Chatbot with Langchain and Upstash Redis in Next js ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-gpXXIvfSCto.html
@MarioZamoraMusic · a year ago
Happy you did a node version 👍
@DevelopersDigest · a year ago
Thanks Mario!
@MarioZamoraMusic · a year ago
Love the idea of having conversations with your favorite author by going back and forth with chat via the book text. Great share 👍
@DevelopersDigest · a year ago
Thank you Mario 🙂
@blarvinius · a year ago
Wow, so clear! Really appreciate the multi-level numbered comments. They really help a novice like me see what you are doing, follow along, and code. A good practice anyhoo!
@DevelopersDigest · a year ago
Thanks!
@SatadruChique · a year ago
Underrated video. Not enough content yet on this. You jumped straight to the point. Props for using those comments for high level understanding.
@DevelopersDigest · a year ago
Thank you so much Satadru, what concepts would you like to see covered in Langchain?
@SatadruChique · a year ago
@@DevelopersDigest I'm still new at this, so I can't comment yet on which parts you haven't covered. On your channel you have covered most of the basics already, and the quality of the videos is great. I'm learning a great deal from them. I like your videos because you start with actual running examples, well commented, instead of a lot of unnecessary explanations. Learning as you practice, like in your videos, is much easier. You encouraged us to "do it" instead of just watching.
@SatadruChique · a year ago
I would like to know the performance implications of running LangChain in a production Node.js environment, but I'm not sure if that's in your scope. Mostly, people on the internet show how to run an LLM locally, but I'm interested in using this on production Node.js servers, inside an Express app for example.
@nashonsagate6607 · a year ago
After a week-long search, this has been my saviour. Great job.
@DevelopersDigest · a year ago
Thanks Nashons! A lot more Langchain content planned for this week, stay tuned! 🙂
@ojikutu · a year ago
Finally a Node.js version without Pinecone. Thanks.
@DevelopersDigest · a year ago
Thanks for watching Leke! 🙂
@coinheadz1942 · a year ago
Just dropping by to say thanks. It took me forever to find a tutorial that works. I just wish I had found you a week ago, but thanks again, keep them coming.
@DevelopersDigest · a year ago
Thanks CoinHeadz! I love to hear that 🙏
@yiraqcic1348 · 11 months ago
0:42 The way you held that Fffff is absolutely legendary 🤣🤣 Thanks a bunch for the video 🙏
@DevelopersDigest · 11 months ago
Haha - putting an introverted dev in front of a mic 🎙️ has proved to have some side effects 😅 Thanks for watching, cheers!
@onar1261 · a year ago
The only video that actually made me learn. FCK Pinecone and all that stuff, broken these days... making me unable to learn. THANKS a lot, dunno why everyone is doing all the guides for Python :( You are neat, thanks man
@DevelopersDigest · a year ago
Thanks for watching Onar!
@killerwick5586 · a year ago
Thanks a lot dude, this helped me complete my internship assignment
@DevelopersDigest · a year ago
That is great! Thanks for watching 🙂
@TheDelineacion · 3 months ago
Hi, amazing! I have some questions. Does using the OpenAI model and API key have a cost? Is it possible to create the vector index, upload it, for example, into an S3 bucket, and then use that URL? Thanks!!
@kacper6442 · a year ago
Thank you, it works :) May I ask a question: I want to tell my model to behave in a certain way, the way I want it to. I simply don't know where I can do this to pass the instructions to it. If you know the answer I will be grateful to see it. Thanks.
@DevelopersDigest · a year ago
You can always pass the results within a query for an LLM. So a basic example you could try is first passing in the results from your vector store to the LLM, then adding instructions before and/or after. Alternatively, you could add a system message to the OpenAI payload, which is weighted higher when passed. There are also a handful of Langchain ways you could accomplish this, but I found using the chat model API a good place to learn!
@kacper6442 · a year ago
Thank you for your answer. For now I simply changed the prompt in node_modules -> dist -> chains and so on. It works, because there is a template with {question} from the user and {context} from the retriever. It's kind of a system message.
@stefanomartell8413 · a year ago
Great tutorial! Can you please explain in detail why you didn't use Pinecone? And could a production app do without it as well? Thanks
@DevelopersDigest · a year ago
For simplicity. The rationale is that this was a quick way to show someone how to get started with Langchain without having to dive into vector databases. I consider this my introduction-to-Langchain video. I have a video going in depth on how to do something very similar to this with Pinecone, which I recommend as a next step after this if you are interested. Depending on your requirements and the scale you need, you could use something like this in production; it would really depend on the use case.
@stefanomartell8413 · a year ago
@@DevelopersDigest Thanks for your answer. Right... I'd assume it's pretty much the same as hosting your own MongoDB database instead of using something like Atlas or ScaleGrid. For a static database like the one in this video (and I'd assume that's the case for pretty much every other PDF or doc), I still haven't figured out a case in which a cloud DB like Pinecone is a must. For dynamic documents, as in the MongoDB example, they're useful because of their snapshots, security, version control and so on.
@Hrsk174 · a year ago
Great video! Helped me test things locally. Can OpenAI read and use the embeddings that we generate? If I use a PDF that I bought to generate embeddings, am I violating any laws?
@DevelopersDigest · a year ago
Great question! Digging into OpenAI's privacy policy and your regional laws would be the first step I would take. The copyright question surrounding LLMs as a whole is an interesting area of discussion right now!
@hass2588 · a year ago
Thank you for this amazing tutorial 🙌. I just have a quick question, is there a way to reference multiple text files? Or should all text be in one file? Thanks
@DevelopersDigest · a year ago
You can have multiple text files that convert into multiple vector stores with this approach; it does become a bit more involved to retrieve the data from multiple vector stores, however. Alternatively, if you want one vector store for all text files, you could read all the txt files before embedding and concatenate them so they end up in one vector store. Hopefully this helps!
@divyasrik1734 · a year ago
Very very Useful. THANK YOUU❤
@DevelopersDigest · a year ago
Thank you for watching!
@MonarchofSouls · a year ago
What a great video! After trying it out myself I was quite impressed by how easy it seemed to integrate my own data as an embedding. However, I was wondering if you can combine your embeddings and the trained ChatGPT model? As an example, I've provided some sauces as an embedding with their ingredients and then tasked the AI to provide me some recipes based on these, but it said it didn't know any. However, if I do the same on GPT-4 or even GPT-3.5, it does provide me with a good answer. Is there any way to combine your data with the trained model?
@DevelopersDigest · a year ago
If querying your vector store isn't working as well as you would have expected, you could look at fine-tuning a model. OpenAI does have some good documentation on how to start fine-tuning. I can also make a video on fine-tuning if that helps! platform.openai.com/docs/guides/fine-tuning
@DevelopersDigest · a year ago
Also thank you for the kind words, thank you for watching! 🙂
@brainexception · a year ago
Loved the tutorial, it really helped me get up to speed with something I am building. I want to know what we can do if we have website FAQs that need to be embedded as a chatbot for any website?
@DevelopersDigest · a year ago
Thanks Salman, check out Databerry, I have a video on my channel; it's an open source project that can accomplish what you are looking for. Alternatively, you could look into organizing your data in a vector database on Pinecone. I have a handful of Langchain and Pinecone videos coming out if you are interested. Stay tuned!
@brainexception · a year ago
@@DevelopersDigest Thanks, will be waiting.
@StanleySeow · a year ago
Great video, may I know the hardware requirements to run LangChain? Thanks
@DevelopersDigest · a year ago
Depends on what exactly you are looking to do with it, but say for embedding and using hosted inference endpoints, you would be able to use most computers. It becomes a different question if you want your computer to start doing more of the inference locally, though!
@EmaSuriano · a year ago
Great video! If it's possible, can you link to the repository? I would love to fork it and start playing with my own books :)
@DevelopersDigest · a year ago
Absolutely. I will post a repo to the description of the video tonight. I’ll also respond to this comment with the repo link so you will get an alert when it’s up!
@DevelopersDigest · a year ago
Repo link: github.com/developersdigest/Get_Started_with_LangChain_in_Nodejs 😀
@blarvinius · a year ago
OH NOOOO! I'm getting the dreaded "Package subpath './dist/text_splitter' is not defined by 'exports'" error, as in all LangChain attempts! I have been all over GH and the docs and still haven't solved it. Please someone help...
@reillywynn4005 · a year ago
Love the video, it was super helpful and helped me get a server running so I can make requests to it from the website I'm creating. Am I able to also use the base ChatGPT knowledge base on top of the custom data? It's very good at answering any question in the data but just says "I don't know" for any other question.
@DevelopersDigest · a year ago
Yes, check out my video here. It should have a similar implementation to what you are describing. m.ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-EFM-xutgAvY.html
@reillywynn4005 · a year ago
@@DevelopersDigest awesome, thanks. I'll check it out. Your page is a gold mine. You really should have more subscribers and views
@DevelopersDigest · a year ago
@@reillywynn4005 thank you that means a lot!
@MarioZamoraMusic · a year ago
Second time watching it. Getting closer to understanding this whole thing 😅
@DevelopersDigest · a year ago
💪
@hchentw9154 · a year ago
Thank you. I enjoyed watching this. :-)
@DevelopersDigest · a year ago
Thank you for watching!
@rindtier7287 · a year ago
Could you please make another tutorial to expose this as an API so that we can deploy it and let each user have their own session with a chat? ❤
@DevelopersDigest · a year ago
Great idea. I have an authenticated application with chat, langchain, and more that will be coming soon! Stay tuned!
@hmtbt4122 · a year ago
Sir, the quality of the tutorial is really good. Can you just post the code somewhere after the video upload?
@DevelopersDigest · a year ago
Absolutely. I will post a repo to the description of the video tonight. I’ll also respond to this comment with the repo link so you will get an alert when it’s up!
@hmtbt4122 · a year ago
@@DevelopersDigest sir that's really nice of you. It's something really helpful that you're doing 😊
@DevelopersDigest · a year ago
​@@hmtbt4122 Repo link: github.com/developersdigest/Get_Started_with_LangChain_in_Nodejs
@hmtbt4122 · a year ago
@@DevelopersDigest thanks sir
@igorronaldo · a year ago
Is there a way to substitute OpenAIEmbeddings with HuggingFace embeddings?
@DevelopersDigest · a year ago
Yes, you can swap in many different embedding types!
@mikerobin8410 · a year ago
Well done, I will implement this in a project I have
@DevelopersDigest · a year ago
Thanks Mike! Glad to hear it
@tecomAGS · 6 months ago
Is it possible to include RAG with MongoDB?
@MACGamings · 11 months ago
How do I return a customized answer like "I don't know" when the context doesn't match?
@geava3199 · a year ago
Very useful, thanks for getting me into this with a simple video 😄
@DevelopersDigest · a year ago
I love to hear that! Thank you 🙏
@hass2588 · a year ago
Hi, is there a way to do this but make the AI remember the previous text as well?
@DevelopersDigest · a year ago
Great question, and absolutely, there would be a handful of different approaches to accomplish this. I will be creating more Langchain and OpenAI content in the coming weeks and I will make sure to cover this in an upcoming video! 🙂
@hass2588 · a year ago
@@DevelopersDigest Alright thank you very much!
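(One of the approaches hinted at above, sketched with the ConversationalRetrievalQAChain that existed in LangChain.js at the time; the string-based chat_history handling here is a simplification and the API may differ in newer versions. The vector store path and questions are hypothetical.)

import { OpenAI } from 'langchain/llms/openai';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { ConversationalRetrievalQAChain } from 'langchain/chains';

const vectorStore = await HNSWLib.load('./vector-store', new OpenAIEmbeddings());
const chain = ConversationalRetrievalQAChain.fromLLM(
  new OpenAI({ temperature: 0 }),
  vectorStore.asRetriever()
);

// Keep the running conversation and pass it back in on every call
let chatHistory = '';
const first = await chain.call({ question: 'Who is the main character?', chat_history: chatHistory });
chatHistory += `Q: Who is the main character?\nA: ${first.text}\n`;

const followUp = await chain.call({ question: 'What happens to them at the end?', chat_history: chatHistory });
console.log(followUp.text);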
@holdinvestors6935 · a year ago
I cannot run it; seemingly the issue happens at the storage step "await vectorStore.save(VECTOR_STORE_PATH)" when fetching the .env key, and it showed that the authorization bearer key is not the key in the .env file. Please help me find the solution. Thanks.
@DevelopersDigest · a year ago
Did you add your API key to the .env? Cheers
@Venkatesh-vm4ll · a year ago
Sir, what if 1 million users are using the API? Many files would be created for caching then. Is storing one million files scalable? Is this how it works in the real world, or is there another solution?
@DevelopersDigest · a year ago
I have a video on Pinecone and Langchain that is a scalable version of something very similar to this, which you might like checking out. This is intended to be an introduction :)
@Venkatesh-vm4ll · a year ago
@@DevelopersDigest thank you
@Hemantsharma-wp3fm · a year ago
Sir, this is the updated code, as the method shown is deprecated now. Thanks.
import { OpenAI } from 'langchain/llms/openai';
import { RetrievalQAChain } from 'langchain/chains';
import { HNSWLib } from 'langchain/vectorstores/hnswlib';
import { OpenAIEmbeddings } from 'langchain/embeddings/openai';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';
import * as fs from 'fs';
import * as dotenv from 'dotenv';
@IuriiNovoselov · a year ago
Sir, you are Batman. It all works on long Ukrainian text
@DevelopersDigest · a year ago
Love to hear it!
@sheeraz_ · a year ago
It won't run, I'm getting an error with some Headers object:
/Get_Started_with_LangChain_in_Nodejs/node_modules/langchain/dist/util/axios-fetch-adapter.js:234
const headers = new Headers(config.headers);
ReferenceError: Headers is not defined
@DevelopersDigest · a year ago
The first thing I would check would be which version of Node.js you are running, and update it to at least version 18 if need be 🙂
@D.horiz0n · a year ago
I tried to use loadLLM to use my local gpt4all-lora-quantized-ggml.bin model. I failed to do so, as it says BaseLanguageModel (returned from loadLLM) does not have the methods call() or generate(). Send help
@DevelopersDigest · a year ago
I feel like that Langchain bird could be a great bird to deliver a set of debugging instructions😆 I will be looking into gpt4all model later this week, if I find anything I will circle back!
@D.horiz0n · a year ago
@@DevelopersDigest Quick update: I managed to use loadLLM for langchain.js. It was my fault for trying to convert the file path of the model into a file URI, whereas loadLLM takes the model path only. However, it seems loadFromFile has a memory limit of 2GB and it can't load my 3GB gpt4all model. The error is code:'ERR_FS_FILE_TOO_LARGE'. It seems to be a limitation of the 'fs' node module itself.
@hmtbt4122 · a year ago
I don't know why, but on running npm i hnswlib-node there are a lot of errors. It gives:
npm ERR! code 1
npm ERR! path C:\Users\heman\Desktop\project\node_modules\hnswlib-node
npm ERR! command failed
npm ERR! command C:\WINDOWS\system32\cmd.exe /d /s /c node-gyp rebuild
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp info using node-gyp@9.3.1
npm ERR! gyp info using node@18.16.0 | win32 | x64
npm ERR! gyp info find Python using Python version 3.11.3 found at "C:\Python311\python.exe"
npm ERR! gyp ERR! find VS
npm ERR! gyp ERR! find VS msvs_version not set from command line or npm config
This is only 40% of the error log. Can you suggest why it's not working?
@DevelopersDigest · a year ago
I would first make sure you have Python 3 installed :)
@hmtbt4122 · a year ago
@@DevelopersDigest Yes sir, I reinstalled it.