
Flowise Ollama Tutorial | How to Load Local LLM on Flowise 

Leon van Zyl
24K subscribers
16K views

Published: 30 Sep 2024

Comments: 114
@sadyaz64 6 months ago
Thank you! Please make more videos on open-source models.
@UncleF115 4 days ago
Somehow the web scraper doesn't upsert anything into the data store. The PDF loader is working. Any idea?
@zubinbalsara8414 6 months ago
I am getting a "Fetch Failed" error. My Flowise is running in Docker on localhost:3000 and the Ollama server is running on the machine (not Docker) at localhost:11434. Can you please help me? Does Flowise running in Docker have anything to do with this issue? I can run ChatOpenAI without any problem; it's just Ollama.
@АртурКальин 5 months ago
host.docker.internal:11434
@hujeffrey5823 4 months ago
I have the same issue
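For anyone hitting this: inside a Docker container, localhost refers to the container itself, so a Flowise container cannot see an Ollama server running on the host at localhost:11434. The reply above (host.docker.internal:11434) is the fix. A minimal sketch, assuming the official flowiseai/flowise image and Ollama running directly on the host; the container name and the Linux-only --add-host flag are assumptions:

```bash
# On Linux, host.docker.internal is not defined by default; map it explicitly.
docker run -d --name flowise -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  flowiseai/flowise

# Verify Ollama is reachable from inside the container (assumes wget exists
# in the image; any HTTP client will do). Expected output: "Ollama is running".
docker exec flowise sh -c 'wget -qO- http://host.docker.internal:11434'
```

Then set the Base URL on the ChatOllama and Ollama Embeddings nodes to http://host.docker.internal:11434 instead of http://localhost:11434.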
@HermesMacedo 6 months ago
Leon, how do I make the flow send media (image, audio, video, PDF and other files) during the conversation, and not just links? For example: getting information from a Google Drive.
@aiamfree 6 months ago
Request structured data, parse the JSON, and use client-side rendering. I used this for a recipe sample app; it works very well.
@IliasSeddik 2 months ago
Thank you for this video, but for some reason I'm not able to connect ChatOllama to the Conversation Chain. Is there anything additional to do?
@pamelavelasquez7244 5 months ago
Thanks for the video tutorial, but embedding is not working for me. The Ollama server is running and MMap is enabled for the embeddings. This is the error:

2024-04-15 22:51:49 [ERROR]: fetch failed
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11372:11)
    at async OllamaEmbeddings._request (D:\flowise\Flowise\node_modules\.pnpm\@langchain+community@0.0.39_@aws-crypto+sha256-js@5.2.0_@aws-sdk+client-bedrock-runtime@3.422_sdjgbtbvm2dvzs44hyiv6rdbae\node_modules\@langchain\community\dist\embeddings\ollama.cjs:110:26)
    at async RetryOperation._fn (D:\flowise\Flowise\node_modules\.pnpm\p-retry@4.6.2\node_modules\p-retry\index.js:50:12)
2024-04-15 22:51:49 [ERROR]: [server]: Error: TypeError: fetch failed
Error: TypeError: fetch failed
    at buildFlow (D:\flowise\Flowise\packages\server\dist\utils\index.js:415:19)
    at async utilBuildChatflow (D:\flowise\Flowise\packages\server\dist\utils\buildChatflow.js:229:36)
    at async createInternalPrediction (D:\flowise\Flowise\packages\server\dist\controllers\internal-predictions\index.js:7:29)
2024-04-15 22:55:43 [INFO]: PUT /api/v1/chatflows/0d375ada-df1f-4d66-941e-1f495ea9f4e5
2024-04-15 22:55:48 [INFO]: POST /api/v1/vector/internal-upsert/0d375ada-df1f-4d66-941e-1f495ea9f4e5
2024-04-15 23:00:50 [ERROR]: TypeError: fetch failed
Error: TypeError: fetch failed
    at InMemoryVectorStore_VectorStores.upsert (D:\flowise\Flowise\packages\components\dist\nodes\vectorstores\InMemory\InMemoryVectorStore.js:26:27)
    at async buildFlow (D:\flowise\Flowise\packages\server\dist\utils\index.js:352:37)
    at async upsertVector (D:\flowise\Flowise\packages\server\dist\utils\upsertVector.js:117:32)
    at async Object.upsertVectorMiddleware (D:\flowise\Flowise\packages\server\dist\services\vectors\index.js:9:16)
    at async createInternalUpsert (D:\flowise\Flowise\packages\server\dist\controllers\vectors\index.js:28:29)
2024-04-15 23:00:50 [ERROR]: [server]: Error: Error: TypeError: fetch failed
Error: Error: TypeError: fetch failed
    at buildFlow (D:\flowise\Flowise\packages\server\dist\utils\index.js:415:19)
    at async upsertVector (D:\flowise\Flowise\packages\server\dist\utils\upsertVector.js:117:32)
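Not something covered in the video, but a quick way to narrow down a "fetch failed" during upsert is to call the Ollama embeddings endpoint directly and confirm that both the server and the embedding model respond (the model name below is only an example):

```bash
# Returns a JSON object with an "embedding" array when everything is healthy.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "llama2", "prompt": "hello world"}'
```

If this works from the terminal but Flowise still fails, the Base URL configured on the embeddings node (or Docker networking, as in the thread above) is the likely culprit.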
@subhamagrawal4740 2 months ago
Nice video. What do we do if Ollama is running behind a proxy server? Then this does not work. Is there an alternative in Flowise?
@redrhino2048 6 months ago
Hi Leon. Good work! Keep rolling out tutorials like this with Ollama! In my case, I didn't have to use the MMAP parameter. Everything works fine.
@xavierf2229 3 months ago
Is it possible to make a chatbot for my website using Llama? And AI tools to sell? Thanks.
@swhitings007 6 months ago
Big thanks for these videos Leon! You do such a great job of editing as well.
@leonvanzyl 6 months ago
Thank you 🙏
@conneyk 6 months ago
This is exactly what I was looking for! Thank you so much! I've been trying to get Ollama working with Flowise over the last few days…
@leonvanzyl 6 months ago
Glad I could help 🙏
@UncleF115 4 days ago
I like this video because it's free of nonsense. If you could mention how to cancel the Ollama session, and remove the call to action to subscribe, it would be perfect.
@leonvanzyl 4 days ago
Thank you very much for the constructive feedback 🙏
@vish_9409 6 months ago
Can you please help me with how to add our own PDFs to this?
@jumadi2124 1 month ago
Hello, what graphics card are you using? Thanks.
@leonvanzyl 1 month ago
I have an RTX 4070 😊
@JoseManuel-fp7bn 5 months ago
Hi Leon. I have tried, but I get the fetch failed error. I get the message "Ollama is running" when I open the localhost address, but somehow Flowise doesn't detect it. What could it be? Thanks!
@hujeffrey5823 4 months ago
Me too.
@jiuvk8393 6 months ago
I did everything exactly the same as you for the RAG and made sure the Ollama server is running (I talked to the model in the terminal and it responded fine immediately). I also made sure MMap is on, but I still get: "Error: Request to Ollama server failed: 404 Not Found".
@leonvanzyl 6 months ago
That message seems to indicate that the Ollama server is unavailable
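A hedged troubleshooting note for the 404 specifically: with Ollama, a 404 usually means the server is reachable but the requested model was not found, so it is worth confirming that the exact model name and tag configured in the Flowise node have actually been pulled (llama2 below is a placeholder):

```bash
ollama list          # shows models available locally, with their exact tags
ollama pull llama2   # pull the model name the ChatOllama node references

# Reproduce the Flowise request by hand; a 404 here confirms a name mismatch.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "ping", "stream": false}'
```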
@anindabanik208 6 months ago
Wow, it's awesome, but my machine is very slow 😢. Is there any alternative, like a Kaggle notebook?
@leonvanzyl 6 months ago
Yeah, these models are resource intensive. You could always try smaller models. Kaggle is a no-go. The point of this video, and Ollama, is to run these models locally. We will look at using hosted solutions as well though, like Huggingface. But, again, this will / could result in costs for the API usage OR infrastructure. There is a reason why OpenAI is so popular 😊
@gonzalodijoux5953 4 months ago
RAG doesn't work well. Ollama doesn't use the document.
@leonvanzyl 4 months ago
Which embedding model and vector store did you use?
@fatehkerrache9145 1 month ago
Thanks for this video. Can we please have a video with Ollama and folder loader?
@leonvanzyl 1 month ago
You're welcome 🤗. The folder loader should be easy enough to add. Take note that it only works when running Flowise on your local machine. Simply create a folder on your machine, copy the path to the folder and add it to the folder loader. If you wanted to do something in the cloud, then you'd need to use something like the S3 file / bucket loader. Hope this helps.
@thatsweirdt 6 months ago
Hello, could you please create content using Hugging Face chat and embedding models?
@leonvanzyl 6 months ago
Working on a Huggingface video actually.
@cyborgmetropolis7652 4 months ago
Great stuff. I tried some of the earlier tutorials using Ollama instead of OpenAI (the positive/negative review reply tutorial) and found the if/else didn't work with llama3. I'd like to learn more about how to make agents that are completely local and don't use any 3rd-party services like OpenAI, Pinecone, etc., but maybe that's not possible without too much missing functionality.
@antonslashcev8800 6 months ago
Hey Leon, amazing tutorials, thank you! I'm trying to build a project in Flowise using your tutorials; maybe you could help with two questions: 1. Is there a way to make a multi-agent system where agents with different roles and functions can give instructions or feedback to each other before executing (like AutoGen)? I saw your tutorial on how to make something like this using the Conversation Chain, but is it possible to make a more advanced system with Agents? 2. How do I load images from external URLs? I don't see such a template. If I upload a PDF with an image, will it work? Thanks!
@Kartratte 6 months ago
Hello, and thank you so much… I started testing Flowise with this and it worked.
@leonvanzyl 6 months ago
Glad to hear 👍
@anilrajshinde7062 6 months ago
All your videos are great. I am creating small web applications using Flowise. Can you create a video on adding a streaming effect after creating the API, so that it's reflected in the web application? That would be very useful.
@toursian 3 months ago
Thanks for your awesome videos. Please add more videos on open-source models. Thanks again.
@stephensamuel2770 6 months ago
Can I use it to create a knowledge-based chatbot for a website?
@leonvanzyl 6 months ago
Absolutely, send me an email and my agency will assist. Link in description
@DarkKnight-uk7mq 6 months ago
Thanks for another great video. We would really appreciate it if you made a bigger project, like chatbots for a real-estate or e-commerce website.
@leonvanzyl 6 months ago
Great ideas
@DarkKnight-uk7mq 6 months ago
@@leonvanzyl thanks
@nhtna4706 6 months ago
No more API usage? No more spending? No need for GPUs?
@leonvanzyl 6 months ago
Your PC does not have a GPU? ☺️ Unfortunately, you need powerful hardware to run the more impressive models.
@jiuvk8393 6 months ago
Can I choose the installation folder for Ollama and the models? I usually use an external drive to save space on my computer.
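This is not covered in the video, but Ollama reads the OLLAMA_MODELS environment variable to decide where model files are stored, so pointing it at an external drive should work; the path below is an assumption for illustration:

```bash
# macOS/Linux: set the variable before starting the server.
# On Windows, set OLLAMA_MODELS as a user environment variable instead.
export OLLAMA_MODELS=/mnt/external/ollama-models
ollama serve         # the server now reads and writes models in that folder
ollama pull llama2   # subsequent pulls land on the external drive
```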
@muchossablos 6 months ago
Leon, how do I update Flowise?
@leonvanzyl 6 months ago
Check out the first video in the series. There is a chapter for upgrading Flowise.
@justdavebz 5 months ago
How does this change if I am using Docker?
@eeling9212 5 months ago
I'm getting the fetch failed error.
@TheMySTeRyArTsHD 21 days ago
Me as well. Did you find any solution?
@abelpouillet5114 3 months ago
Thank you very much! Please, more videos on open-source models!
@marcioricardoluciano3779 1 month ago
Hi Leon, how are you? I'm from Brazil, and I'm advancing my studies in the area of automation in general using AI resources; while looking for videos about Flowise I found your channel. Since yesterday I've been watching all the videos in the Flowise playlist. I want to express my gratitude for your dedication and teachings. Thank you very much!
@leonvanzyl 1 month ago
That is incredible. Thank you for the feedback 🙏
@ricardofernandez2286 5 months ago
Hi Leon, very useful tutorial! I'm running this on CPU (8 vCPUs + 30 GB of RAM) and it is extremely slow. In fact, Ollama uses only a few of the resources, and I can't make it use all the available CPUs or RAM. I know that a GPU is the way to go with LLMs, but perhaps you have some suggestions on how to make this configuration perform a little better. Thank you!!!
@leonvanzyl 5 months ago
Hopefully Ollama will improve over time
@lumi.ai_ 6 months ago
Can anyone solve my problem? I am unable to upsert.
@leonvanzyl 6 months ago
What's the error? I had to enable MMap to get it to work, did you try that?
@lumi.ai_ 6 months ago
@@leonvanzyl No, I haven't tried that, but I will and get back to you. Thanks for the response. I thought I shouldn't use Flowise, but please help us; we will make great LLMs. I hope you will solve our problems.
@centraldeexames7300 6 months ago
Hi Leon! Thank you for sharing another excellent piece of content. I'm struggling to figure out how to insert a system prompt, along with my own prompt, into the flow of a chatbot I'm creating in Flowise. I'm using an open-source uncensored LLM via Replicate, and it needs a system prompt to behave the way I'd like. I would be very grateful if you could help me in any way.
@leonvanzyl 6 months ago
You can set the system prompt by clicking on Additional Parameters on the chain, or you can assign a Chat Prompt Template. Apologies if I misunderstood ☺️
@rickyS-D76 6 months ago
Hi Leon, great content. I really love your content and presentation. Can you please make a video on RAG where you embed different types of files, like CSV, PDF and DOC, and chat with those? Thanks.
@leonvanzyl 6 months ago
Hey, I actually have a video on RAG in this series. We use a web-loader in that video, but you can simply swap the loader out for anything else.
@moroccangamereviews8824 1 month ago
Thank you, man, for all these vids; you are the best. I hope you add more vids on how to add more complexity with creative ideas. Thanks!
@leonvanzyl 1 month ago
Thank you!
@RuiminWang-hk1wz 6 months ago
Thanks for your videos. They really help me a lot!
@meister4831 5 months ago
Thank you. How does this solution compare to using LocalAI as you showed in an older video?
@leonvanzyl 5 months ago
They do pretty much the same thing. Ollama is just a newer application for running models locally
@BadBite 2 months ago
Oh! Thank you very much. I really appreciate your work! 🎉
@leonvanzyl 2 months ago
You're welcome 🤗
@KevinBahnmuller 6 months ago
A video about Ollama function calling with Flowise would be very nice :)
@leonvanzyl 6 months ago
Very few models support function calling. It's actually limited to OpenAI and Mistral at the moment. You could therefore simply download the Mistral model in Ollama 👍. Be warned, the hardware requirements for Mistral function calling are steep 😄
@fatemehjahedpari815 5 months ago
Great videos. Thanks a lot!
@Skiplegday1 6 months ago
Is there a way to share the created chatbot instance? For example, if I would like someone else to try out the chatbot.
@leonvanzyl 6 months ago
You can export a flow from the settings of the flow. The other person can then import the flow on their end.
@khalidkifayat 6 months ago
Nice tutorial, Leon. A few questions: 1. Can we use these open-source models to create a chatbot and give it to clients? If yes, where will it reside? 2. For data privacy it's a good option, but how do we take it to production, or do a production-ready deployment, while keeping the privacy factor?
@leonvanzyl 6 months ago
Thanks! The point of the video is to run the bots locally. If you want to use these models in the cloud, you would need to use hosted services like Huggingface or AWS Bedrock. I'll definitely release a video on these. There is a cost involved in using these services of course, so I just wanted to give you guys a free local alternative.
@randomguyfrominternet 6 months ago
You don't always need to host everything in the cloud. You can also have your own server at home, in the garage, or at the office. All you need is good enough hardware for the model and a public IP with well-configured networking. But local hosting is a whole different topic to learn. So you either go:
- Self-hosted server
- Cloud
- Combination of both (e.g. hosting Flowise, files and databases on your own server, and calling your model hosted on a stateless cloud compute endpoint from it)
@RenatYaraev 6 months ago
I've been watching your work on YouTube for a very long time and wanted to say thank you very much for what you do, and to wish you a lot of luck!
@leonvanzyl 6 months ago
Thank you!
@mehdibelkhayat5088 6 months ago
Hi Leon, thanks a lot for the video. For my purposes I used nomic-embed-text as the embedding model; it's faster. I managed to connect my Ollama + Flowise custom tool directly to my CRM API, but it worked only with the llama2 model and not Mistral. I struggled a while, then found the trick! No need for an interface to Make or n8n. I'm still working on it. Cheers
@leonvanzyl 6 months ago
Keep me in the loop. I haven't found a reliable way to use open source models with agents.
@mehdibelkhayat5088 6 months ago
@@leonvanzyl For now I have good results with llava:13b or llama2; they are the only ones that use the custom tool in Flowise with Ollama (with the others, no results: nexusraven, orca2, gemma, phi, openchat, mistral...). I'll keep you posted.
@BruWozniak 4 months ago
I've been mind-blown by pretty much every single one of your videos and tutorials 👏, and I'm super grateful! 🙏 Just a thought: how about an entirely open-source and local stack running on Docker? For example, Flowise, ChromaDB and Mixtral (a quantized model running through Ollama?) running in different containers locally. And then a deployment pipeline to, say, Google Cloud with a CI/CD script via GitHub Actions. So: dev locally with the Docker containers, push to GitHub when satisfied, automatically build and deploy with GitHub Actions, and boom, the app is available on Google Cloud. That would be incredible! I'm going to try it right now; lots of research and trial and error ahead... 😁
@leonvanzyl 4 months ago
Thank you for the feedback! That sounds like an awesome project 😁
@BruWozniak 4 months ago
@@leonvanzyl Looks like ```ollama run mixtral:8x7b``` is going to be a little challenging for my modest hardware 😁 Gonna try ```ollama run gemma:2b```, maybe even ```gemma:7b```; it is still open source and apparently lightweight...
@BruWozniak 4 months ago
Ah sorry ```no markdown``` over here 😁
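As a rough starting point for the fully local stack described above, here is a hedged sketch using a shared Docker network so the containers can reach each other by name; the image names are the official ones, while the container names and ports are assumptions:

```bash
docker network create localai
docker run -d --name ollama  --network localai -p 11434:11434 ollama/ollama
docker run -d --name chroma  --network localai -p 8000:8000  chromadb/chroma
docker run -d --name flowise --network localai -p 3000:3000  flowiseai/flowise

# Inside Flowise, point the ChatOllama node at http://ollama:11434 and the
# Chroma vector store node at http://chroma:8000 (container names resolve
# as hostnames on the shared network).
```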
@zhalberd 2 months ago
Great video, thank you.
@leonvanzyl 2 months ago
You're welcome 🤗
@youwang9156 6 months ago
Thank you for your video! Just wondering: if we host everything locally in Flowise, after we set it up, can we generate the Python API and use it somewhere else? Or can we only use the API generated by Flowise locally, e.g. in VS Code?
@leonvanzyl 6 months ago
If you're hosting it locally, then you can only access it locally. I have a video on deploying Flowise in this series, but I'm guessing you want to use Open Source models in the cloud? Your best option is to use Huggingface (video coming soon).
@youwang9156 6 months ago
Thank you so much, you literally saved my life. I've been considering Hugging Face as well, but open-source models like Mixtral don't work with LangChain's output parser; only the OpenAI models do.
@youwang9156 6 months ago
@@leonvanzyl Do you think I can deploy Ollama locally and Flowise locally, build an output parser framework, and eventually use it through the local API generated by the local Flowise? My goal is to find a cheaper way to run an output parser with decent performance, since OpenAI costs so much.
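On the API question above: a locally hosted Flowise chatflow is exposed over REST, so any language (including Python) can call it as long as the caller can reach the machine. A minimal sketch; the chatflow ID is a placeholder, and the real one comes from the flow's API Endpoint panel in Flowise:

```bash
curl http://localhost:3000/api/v1/prediction/<chatflow-id> \
  -H "Content-Type: application/json" \
  -d '{"question": "Summarize the uploaded document"}'
```

As noted in the reply, localhost only resolves on the same machine; to use the API elsewhere, Flowise itself has to be hosted somewhere reachable.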
@whackojaco 6 months ago
Thank you for your videos, Leon. I learn a lot from you, and it's great to see a fellow South African talking about AI and LLMs.
@leonvanzyl 6 months ago
Thank you, Jaco! Glad you like it 😁.
@PIOT23 6 months ago
Love the open-source content! Would love to see a good video on Mixtral.
@leonvanzyl 6 months ago
Hehe, my PC can barely run it 😂.
@Machiuka 6 months ago
The models are very slow to download. Is there any way to download the models separately, and not via the ollama pull command?
@leonvanzyl 6 months ago
You can download them from Huggingface.
@Machiuka 6 months ago
@@leonvanzyl The problem solved itself; maybe there was a network issue. Today everything worked flawlessly. Thank you for sharing this tutorial!
@florentflote 6 months ago
@JoaquinTorroba 6 months ago
👏🏼
@Romusic1 5 months ago
Great, thanks! ❤
@leonvanzyl 5 months ago
You're welcome
@nabildjelloudi7087 5 months ago
Thanks, keep going!
@leonvanzyl 5 months ago
Will do 😁
@nabildjelloudi7087 5 months ago
@@leonvanzyl I just have one issue with the OpenAI API key; it shows this error when I try to run Flowise: "InsufficientQuotaError: 429 You exceeded your current quota....". Should I pay for tokens?
@jamminrebel3614 6 months ago
Great video as always. Thanks for the premium content you deliver. 🦾💙 Is this only for local models, or could I use OpenRouter credentials here? Or would I do a dedicated API agent instead of a chatbot? Sorry for the confusion =D
@leonvanzyl 6 months ago
You're welcome 🤗. I'm not familiar with OpenRouter; maybe someone in the comments can assist.