
100% Offline ChatGPT Alternative? 

Rob Mulla
Subscribe · 170K subscribers
623K views

In this video I show how I installed an open-source Large Language Model (LLM) called h2oGPT on my local computer for 100% private, 100% local chat with a GPT-style model.
Links
* h2o website: h2o.ai/
* h2oGPT UI: falcon.h2o.ai/
* h2oGPT GM-UI: gpt-gm.h2o.ai/
* h2oGPT github repo: github.com/h2oai/h2ogpt
* h2o Discord: / discord
Timeline:
00:00 100% Local Private GPT
01:01 Try h2oGPT Now
02:03 h2oGPT Github and Paper
03:11 Model Parameters
04:18 Falcon Foundational Models
06:34 Cloning the h2oGPT Repo
07:30 Installing Requirements
09:48 Running CLI
11:13 Running h2oGPT UI
12:20 Linking to Local Files
14:14 Why Open Source LLMs?
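The install flow in the timeline (06:34-11:13) boils down to a handful of commands. This is a hedged paraphrase of what the video shows, not the authoritative procedure: commands and model names may have drifted, so check the h2ogpt repo README for the current steps.

```shell
# Sketch of the local install flow from the video; the h2ogpt README
# is the source of truth if anything here has changed.
git clone https://github.com/h2oai/h2ogpt.git
cd h2ogpt
pip install -r requirements.txt        # "Installing Requirements" (07:30)

# "Running CLI" (09:48): chat in the terminal
python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b --cli=True

# "Running h2oGPT UI" (11:13): Gradio web UI on localhost
python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b
```

The 12B checkpoint needs a sizeable GPU; the repo lists smaller `--base_model` options for weaker hardware.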
Links to my stuff:
* YouTube: youtube.com/@robmulla?sub_con...
* Discord: / discord
* Twitch: / medallionstallion_
* Twitter: / rob_mulla
* Kaggle: www.kaggle.com/robikscube

Science

Published: 13 Jun 2024

Comments: 809
@chillcopyrightfreemusic · 11 months ago
This is amazing and very well put together! You have one of my favorite channels on all of YouTube and I’ve been able to follow what you teach. It’s an honor to learn from you! Thank you!!
@robmulla · 11 months ago
Wow. Thanks for such kind words. I appreciate the positive feedback.
@AmazingArends · 11 months ago
Wow, I have a lot of saved documents, articles, and even e-books on my computer. The idea of my own local chatbot being able to reference all of this and carrying on a conversation with me about it is almost like having a friend that shares all my own interests and has read all the same things that I've read! Amazing how the technology is advancing! I can't wait for this!
@uberdan08 · 11 months ago
In its current state, I think you will be underwhelmed by its performance unless you have a pretty powerful GPU.
@Raylightsen · 11 months ago
@@lynngeek5191 In simple English, what are you even talking about, dude?
@fluktuition · 11 months ago
@@Raylightsen I think what he meant is that his GPU is not good enough, since he couldn't use the 8k token version.
@fluktuition · 11 months ago
@@lynngeek5191 You said the desktop version is limited to 2000, but that is not true; you have the option for 8000. However, you need a GPU that can handle it (like an RTX 4090, an RTX A6000, or a Quadro RTX 8000 card).
@user-qw1zz2lk6g · 10 months ago
@@Raylightsen ok dude, here it is for you: He's trying to say that this shit ain't free, homie. Apparently far from it, according to @lynngeek5191.
@LaHoraMaker · 11 months ago
I didn't know that you were working for h2o, but I am happy for you all. You're doing great work making open source LLMs more accessible and friendly!
@robmulla · 11 months ago
Thanks! I just started there. I'll still make my normal YouTube content, but this open source project is exactly the sort of stuff I would normally cover. Glad you liked the video.
@lionelarucy4735 · 11 months ago
I’ve always liked H2O, I used to use their deep learning framework a lot. Will definitely check this out.
@sylver369 · 11 months ago
@@robmulla You work for this?... Sad.
@user-gq8gy1wm1n · 10 months ago
@@sylver369 get off his back
@Aaronius_Maximus · 10 months ago
@@sylver369 If you worked for anyone "better", then why are you here? 🤔
@mb345 · 11 months ago
Great video. I hope this gets a lot of views because it is relevant to so many different use cases that need to protect source data. Love the demo of how easy it is to load and vectorize your own local docs with LangChain.
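The "vectorize your own local docs" step behind this feature can be illustrated without h2oGPT or LangChain at all. The sketch below is a toy stand-in, not the project's actual pipeline: real systems embed text chunks with a learned model and search a vector store, while here a bag-of-words vector plays the embedding's role so the retrieval idea is visible end to end.

```python
# Toy retrieval sketch: pick the document chunk "closest" to a query.
# Real pipelines (e.g. h2oGPT via LangChain) use learned embeddings
# and a vector DB; the bag-of-words vector here is only illustrative.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunk(query: str, chunks: list[str]) -> str:
    """Return the chunk most similar to the query."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

chunks = [
    "Cedar Point opened its newest roller coaster in 2023.",
    "Quarterly revenue grew 12 percent year over year.",
]
print(top_chunk("tell me about the roller coaster", chunks))
```

The retrieved chunk is what gets pasted into the LLM's prompt as context, which is why answers can cite your own files.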
@robmulla · 11 months ago
Thanks for the feedback! Glad you liked it. Please share it anywhere you think other people might like it.
@Robert-vj6up · 11 months ago
Fantastic tutorial and superb framework! Congratulations to you and the H2O team! 🔥🔥🔥
@friendfortheartists · 5 months ago
As many searches as I've done on YouTube, your channel came up today. I'm really impressed. Great job!
@surfernorm6360 · 8 months ago
This is one of the best discussions of building an AI locally I have seen. Bravo!! BTW the tutorial is excellent: it's clearly enunciated, the print is very big and readable for old fogies like me, and he goes slow enough to follow and describes and shows what he is doing, so noobs like me can follow. Also don't forget the transcription button gives details about every minute and a half. Very well done; anybody who is patient will like this. Thank you Rob Mulla.
@pancakeflux · 11 months ago
This was great! I’m in the process of setting up LangChain locally with OpenLLM as a backend, but now I think I’ll try this as a next step. Thanks for sharing!
@robmulla · 11 months ago
Glad you enjoyed the video! Thanks for the feedback.
@TranslatorsTech · 11 months ago
Mighty nice video, very useful! Giving local context is interesting. The question about roller coasters was a clever way to demo the feature. Thanks! 😊
@nash...... · 10 months ago
This is awesome! Definitely going down this rabbit hole
@tenkaichi412 · 11 months ago
Amazing! Thanks for the detailed guide. Will definitely be using this for future projects!
@johnzacharakis1404 · 11 months ago
YES PLEASE make another video where you set all of this up in a cloud environment instead of locally. Excellent video, thank you very much.
@henrikgripenberg · 11 months ago
This was a really transformative experience and I really appreciate that you did this video!
@fredimachadonet · 11 months ago
Awesome content! I noticed the audio dropouts from time to time. I had a similar issue this week when recording some videos myself, and the culprit for me was the Nvidia Noise Removal filter in OBS. I changed it back to RNNoise and it worked like a charm. Don't know if yours is related, but if it helps, then happy days! Cheers!
@doughughes257 · 11 months ago
Thank you so much. This is a clear guide for us to begin experimenting with our vision for a new application, and the last 4 minutes is a great executive summary for me to show to my management.
@kshitizshah7482 · 11 months ago
Great video! I'm a fan of H2O. Really impressed with the Driverless performance. Helps me benchmark my own code! Gonna try this out later this week, thanks Rob!
@onandjvjxiou · 11 months ago
I would also like to see another video from you about setting it all up in a cloud environment. Thanks for sharing your knowledge.
@Monkey-Epic · 10 months ago
Awesome!!! Here I was losing hope about AI / GPT ever being transparent about the biases getting trained ("baked") into popular chatbots, and about the lack of real privacy users have over what they discuss (there is no privacy). And blammo, you guys already had this out in just a few months. Super cool!! Thanks to all who went in on this!
@Charlesbabbage2209 · 6 months ago
The last thing they want is AI “noticing” patterns in modern western society.
@TylerMacClane · 11 months ago
Hello Rob, I liked your video very much. I wanted to suggest that you consider making a video on how a translator or voice-to-text transformation can become a tool for everyone based on an open language model. It would be an interesting topic to explore and could benefit many people. Thank you for your great content!
@pirateben · 11 months ago
There are plugins for this; find an open source one and use that.
@andysPARK · 6 months ago
@@pirateben What's it called, Ben? Linky pls.. :)
@musp3r · 6 months ago
Look for "Faster WhisperAI"; maybe it could help you in creating transcriptions and translations from audio to text. I've had great success using it to transcribe YouTube videos and create subtitles for them.
@eliotwilk · 8 months ago
Great work. I was looking for a tutorial like this for a long time.
@voiceofreason5893 · 9 months ago
Fascinating stuff. So important to figure out ways of using these tools in a way that allows us to retain some privacy. Subscribed.
@kevinmctarsney36 · 11 months ago
Could you do a video on the “fine tuning” you talk about near the end? I like the privacy attribute of running open source locally and the fine tuning would be really beneficial.
@humanist3 · 11 months ago
Excellent content, especially the LangChain part, thank you!
@anazi · 11 months ago
Amazing. Thanks, and we can't wait for the cloud model.
@irfanshaikh262 · 10 months ago
5:30 Yes Rob, yes. Please. It will be a well-rounded approach if you start teaching Python in a cloud environment. Much awaited, and thanks for everything your channel has offered me to date. I explicitly love your YT Shorts.
@user-lk4xm6vg1w · 11 months ago
You got the like just for including the last part (14:15 and after). The whole video is decently good. Keep up the good work; that last part is really the info people should get in their heads. BRAVO!!! Thank you for saying it.
@dimitriskapetanios294 · 11 months ago
Rob, the video is awesome! Great content as usual 🤩 Would love to watch a version utilizing a spun-up instance from a cloud provider too (for those of us without a GPU 😊).
@robmulla · 11 months ago
Thanks for watching! Will def look into a video with steps to setup on the cloud.
@deucebigs9860 · 11 months ago
@@robmulla definitely interested in the cloud provider video too
@Krath1988 · 11 months ago
Thanks for the video. I really appreciate and support the open source community!
@siva_geddada · 11 months ago
I love the content. Even though hardly anyone knows about this, it's very, very useful. We are expecting a cloud version demo also. Thank you.
@logicnetpro · 10 months ago
Very informative video on how to create your own private chatbot and have it learn from your context. Genius! I look forward to further development.
@jannekallio5047 · 11 months ago
I've been using chatbots to write a tabletop RPG campaign for my friends, but having the main story in separate files has been a problem. If I can use the material I already have as my own training material, it might be way easier! This chatbot might be exactly what I need! Cool, I will give it a go!
@XX-yu4zz · 11 months ago
update?
@shukurabdul7796 · 11 months ago
You got me to subscribe!!! Wow, thanks for explaining step by step in all of your content. Keep it up!
@MacarthurPark · 5 months ago
Thanks for all your efforts teaching brotha
@davida199 · 10 months ago
This is extremely helpful, Thank you for sharing this.
@AnimousArchangel · 11 months ago
The best explanation so far. I have experience using GPT4All, self-hosted Whisper, Wav2Lip, Stable Diffusion, and I also tried a few others that I failed to run successfully. The AI community is growing so fast and is fascinating. I'm using an RTX 3060 12GB and the speed is okay for chatbot use cases, but for real-time AI game-engine character responses it is slow. I recently got hold of an RTX 3080 10GB, and in this video I see you are using an RTX 3080 Ti, which has 10240 CUDA cores vs my 8960. It is the first time I've seen that you can use additional cards (in your case a GTX 1080, in mine a GTX 1060) to run the LLM. Very informative video!
@hottincup · 11 months ago
You should refit the model using LoRA to get a smaller size, more narrow for in-game usage; that way it's more optimized.
@CollosalTrollge · 11 months ago
Would AMD cards work or is it a headache?
@AnimousArchangel · 11 months ago
@@CollosalTrollge Currently they are using CUDA technology, which requires Nvidia cards.
@AnimousArchangel · 11 months ago
@@hottincup Good suggestion, I will try.
@ScandalUK · 11 months ago
If you're trying to shoehorn a full-fledged LLM into powering some NPCs, you're doing it wrong. All you need is basic chat personalities and the ability to train based on in-game events; this requires very little processing power!
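The comment's point (NPC chat without an LLM) can be sketched in a few lines. Everything below, names included, is illustrative and not from any particular game engine: "training on in-game events" here is just appending to a memory list and flipping a mood flag.

```python
# Minimal NPC "chat personality" with event-driven state, no LLM.
# Illustrative sketch only; real games layer templates and dialogue
# trees on top of this same idea.
class Npc:
    def __init__(self, name: str, mood: str = "neutral"):
        self.name = name
        self.mood = mood
        self.memory: list[str] = []   # recent in-game events

    def observe(self, event: str) -> None:
        """'Training' on in-game events: remember them, update mood."""
        self.memory.append(event)
        if "attacked" in event:
            self.mood = "angry"

    def reply(self, _player_line: str) -> str:
        """Answer the player from the most recent remembered event."""
        if self.memory:
            return f"{self.name}: I heard that {self.memory[-1]}."
        return f"{self.name}: Quiet day in the village."

guard = Npc("Guard")
guard.observe("the mill was attacked by wolves")
print(guard.reply("any news?"))
```

A lookup like this costs microseconds per line, which is the commenter's contrast with running a multi-billion-parameter model per NPC utterance.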
@IItiaII · 11 months ago
Very interesting, I stood it up on a VPS with 10 CPUs, it's painfully slow but it works!
@marioschroers7318 · 9 months ago
Just checked the thing out as soon as it was mentioned (luckily, this video was suggested to me by YouTube). Being a tech enthusiast and a translator, I ended up spending an hour discussing the technical dimensions of my profession. Loved this! I will definitely look into this in more detail later. The thing is also pleasantly polite. 😊
@syedshahab8471 · 11 months ago
You explained everything very clearly. Thanks!
@yourockst0ne · 11 months ago
Great work, thanks for the video 🎉
@brianmcmurry5915 · 7 months ago
Great video; your voice and pace are perfect. Thank you.
@adriangabriel3219 · 11 months ago
Hi Rob, great video. Could you make one on how to set it up using Azure, and also how to use it for training on a custom dataset? (I would be especially interested in your recommendations for training Falcon 40B.)
@incometunnel7168 · 7 months ago
Nice work buddy. Keep it up
@lionellvp_ · 10 months ago
Thank you very much; finally some reasonably sensible documentation. The whole CUDA topic is also somehow a pain. I wanted to set the whole thing up as a Docker container to keep the system more scalable — again, a pain.
@SkateTube · 11 months ago
I am more interested in how this can turn into features for 3D modeling, music making, 3D/2D game making, or even software programming. Some tests of what it can generate and other stuff could be nice.
@neuflostudios · 11 months ago
Yes Rob, would love to see your implementation in the cloud space for these models.
@aketo8082 · 11 months ago
Thank you! Nice. What is the difference to GPT4ALL? Where are the strengths and weaknesses? Can h2oGPT understand associations? Or can it be trained accordingly? Will there be more videos with how-to’s for basics, using your own files, training and corrections?
@NextLevelCode · 11 months ago
Cool video. Does h2o have plans to make a one-click installer any time soon? Those are available for Stable Diffusion and make setup so much nicer :) Especially when you just want to use it and not work on developing the AI.
@NextLevelCode · 8 months ago
@TheTen-PointFormatStyle-di1ez Anything can have an automated installer made. It's just whether they want to invest the time or not. Nothing is so special about what they are doing.
@cerebralshunt2677 · 11 months ago
Yes, as requested, I am letting you know that I am interested in any of the potential future videos you mention in this video! You are giving gold away for free!
@kearneyIT · 11 months ago
I will be trying this, can't wait!!
@EckhartAurelius · 11 months ago
Very helpful video. Thank you!
@BradleyKieser · 11 months ago
Absolutely brilliant. Thank you.
@smartstudy286 · 8 months ago
Awesome, knowledgeable video; this is so useful. Keep making videos like this.
@electrobification · 11 months ago
THIS IS A GAME CHANGER!! FOSS FOR THE WIN!
@harrisonleong4283 · 7 months ago
Excellent stuff, thank you. I just followed your instructions, plus the README in the Git repo, and spun up an instance on a cloud VM, as my notebook has no GPU. It is fun... and I wish you could further teach us how to do the LangChain things when we have a lot of documents that we want to feed in. Thank you once again.
@heck0216 · 11 months ago
This is cool and brilliant. Great tutorial
@peterhu3362 · 11 months ago
I like this! You just earned yourself a new sub!! Would be interesting to know how to fine-tune or train the installed LLMs.
@jamesrico · 5 months ago
Great content, I enjoyed your video.
@Mark_Rober · 11 months ago
Dude, you are not even being biased THIS IS THE BEST INVENTION EVER!!! Open source??? AND it runs locally???? even without the file access feature this would've been the coolest piece of software I've ever encountered!
@MrNemo-un2fn · 6 months ago
This is the first video I've noticed that highlighted the actual subscribe button. It looked really clean.
@brunaoleme · 11 months ago
Amazing video. Thank you so much. I also would love to see a video about the setup on a cloud provider.
@BryanEnsign · 11 months ago
This is exactly what I was looking for to develop a model for internal use at my company. Thank you!
@tacom6 · 10 months ago
any specific use-case?
@BryanEnsign · 10 months ago
@@tacom6 I write inspection and test plans, inspection test reports, and standard operating procedures for industrial engineering and industrial construction — for instance, if we need to write a procedure and checks for the correct way to flush HDPE or terminate an electrical panel, etc. Currently I can paste SOPs or our checklists and ask if anything was missed, ask about new ideas to add, or new things entirely. It's great for asking for ASTM codes instead of looking them up or buying the PDF. I'm using Claude currently, and Perplexity. My company does everything internally; it doesn't want the data hosted with another company. I'd like to make something for us to use internally. I believe using AI language models has sped up our procedure and checksheet drafting about 40% so far. It's been game changing. But I'm using 3rd-party sites and I have to scrub out client details, names, etc. If I had an in-house model, I could let it retain all client and confidential data, and others could run requests against it also. I have a bot running that I've made through Poe using Claude as the base, but I can't make it public for colleagues to use.
@tacom6 · 10 months ago
@@BryanEnsign Sounds fantastic. My interest is similar but with a focus on cybersecurity. Thanks for sharing!
@BryanEnsign · 10 months ago
@@tacom6 That's awesome. So many possibilities. Good luck to you, brother!
@erikbahen8693 · 11 months ago
Nice! It's working.. I'm excited
@kompila · 9 months ago
That was a good presentation. Thanks!
@gregoryknight2928 · 7 months ago
Great video. Thanks!
@HaraldEngels · 11 months ago
Brilliant. Thank you for sharing. I am now looking for a faster machine to reproduce what you have demonstrated 🙂
@robvh2 · 8 months ago
Great video. Two questions. First, how large a custom data source can it handle? Could I upload a couple of textbooks and get reliable answers? Second, can I only interact using this GUI or can I use an API to ask it questions from within my own custom app (to power conversations with NPCs, for example)?
@vimukthirandika872 · 11 months ago
Nice tutorial. I hope we get more like this. Btw, are those GPUs compatible and working fine, since they are different generations?
@pointandshootvideo · 11 months ago
Great video! Thank you! I do agree that it's better to have your own local model running open source software if your machine can run it. What GPU do you need??!!! lol. The biggest issue I have with ChatGPT, open source or otherwise, is incorrect responses. That makes it next to worthless, because you can't trust the responses 100% of the time. Can it also respond incorrectly if you train it with your own data? And how much of your own data do you need to train it? So if I try to train it on all the PDFs on Raman microscopy, what's the percent likelihood that a response is going to be incorrect? Thanks in advance. Cheers!
@user-ib9uu3wk5e · 5 months ago
Great job, thanks!👍
@user-jj2mo5sl7p · 11 months ago
Great, I have learned so much about how to use open source AI and AI modules. I am glad to build the project on my local computer myself. The PDF-reading ability is so good; I will try it!
@dhrubojyotimukherjee4615 · 10 months ago
Thank you very much for providing this beautiful resource.
@Theineluctable_SOME_CANT · 11 months ago
WONDERFUL! Thank you.
@stevendonaldson1216 · 8 months ago
Extremely excellent explanations
@KBProduction · 11 months ago
Really enjoyed the video.
@user-ov7hd5vg3j · 10 months ago
Hi Rob, I like the way you explained this; it looks so interesting how we can use an LLM locally without connecting to the internet. I have a doubt: is there any maximum file size when we upload? Can we upload multiple files? Can I upload my Git repo?
@neoblackcyptron · 8 months ago
Wow, an open-source GPT model. This is freaking awesome. I am working on building some AI products; this is a life saver. I am excited to play with this big time. Throwing in some vector memory databases to add context on top of this, and I can get my first AI product out real soon. I can easily build some text-to-speech and computer vision models of my own on TensorFlow to make something big happen. Man, Christmas has really come early for me this year.
@Coecoo · 8 months ago
It does have many limitations such as requiring fairly beefy hardware (or get really restrictive questions) and a ton of storage space.
@neoblackcyptron · 8 months ago
@@Coecoo Do you think an Nvidia RTX 4090 can handle this? When you say beefy hardware, what do you have in mind?
@Listen_bros · 10 months ago
Man, you're just awesome ❤
@onlyvitalnetworks2994 · 7 months ago
Amazing work!
@robmulla · 7 months ago
Thank you! Cheers!
@MeatCatCheesyBlaster · 11 months ago
This is really cool. I just installed it and tried it. It actually runs pretty fast on my CPU.
@spacegirlkes · 10 months ago
How did you get it to work with your CPU? I keep getting token limitations on the answers. Did you follow the documentation?
@BanibrataDutta · 11 months ago
Thanks for the really informative and well-paced video. The video seems to glance over a page showing the timeline vs linkage/evolution of models at about 5:36. What is the source of that? Would be nice to have a link to it. Definitely interested in the video on how to use a cloud GPU to run the larger models. In fact (but not surprisingly, as a newbie to generative AI and AI in general), I was under the impression that you didn't need a GPU for producing generative AI content after the foundational model or specialized model was ready. Would be nice if you could cover what could be done on systems without a GPU (or with an iGPU only) and what needs H100/A100-class GPUs (and how many).
@methodermis · 10 months ago
You need a huge GPU to train the model and a big GPU to run it. You can train a smaller model from a bigger model, like Alpaca, but it is not as smart. The GPU has to host the entire brain of the AI, so the smaller the GPU, the worse.
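A rough way to see why GPU size matters for inference: just holding the weights takes parameters times bytes-per-parameter of VRAM, before any activation or KV-cache overhead. A back-of-envelope calculation for the Falcon sizes mentioned in the video:

```python
# Back-of-envelope VRAM needed to hold model weights alone.
# Real usage adds activations and KV-cache on top of this.
def weight_gib(n_params_billion: float, bytes_per_param: int) -> float:
    """GiB required to store the weights at a given precision."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

for b in (7, 40):
    print(f"{b}B params -> fp16: {weight_gib(b, 2):.1f} GiB, "
          f"int8: {weight_gib(b, 1):.1f} GiB")
```

This is why a 7B model at fp16 (about 13 GiB) fits a high-end consumer card while 40B needs datacenter-class memory or quantization, matching the comment's point that smaller distilled models trade capability for fit.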
@TechnoRiff · 11 months ago
@robmulla - Very nice unassuming presentation! You explain concepts in a clear step by step manner. Do you have any suggestions for local execution on a Mac M1 leveraging its internal (non-CUDA) GPU?
@c3rb3ru5d3d53c · 11 months ago
One suggestion here: this would be more popular with everyone if there were an installer like GPT4All has, since those who have no command-line experience could still use it.
@deltavthrust · 11 months ago
Great work. Thank you
@bentomo · 8 months ago
Finally a video that gives me a "hello world" for an attainable local GPT-like chatbot. Now I can actually LEARN how to fine-tune a model.
@schmutly · 11 months ago
Hi, I haven't watched the video yet but will at lunch; pulled over for a break. So how does it stack up on coding examples? ChatGPT-4 is very good, so just wanted to ask quickly. Thanks, Rob.
@adamd9166 · 7 months ago
Very nice! Does this offline chatbot remember the previous conversation, or does it kind of begin anew with every query? Also, is it possible to allow optional internet access to provide supporting information to your documents, or can it only query based on what you give it? Thanks!
@Martin-kr5nx · 8 months ago
Just came across your channel and I love your content. Are there any open source models we can use that have a longer context like Anthropic that we can run locally?
@malsmith3806 · 11 months ago
Thank you!! I’ve been stumped on building a model to generate an entire script for a Seinfeld episode based on my personal scores of each episode and I think this video just unlocked it for me!!
@olivierdulac · 11 months ago
You are now master of your domain!
@neemeeshkhanzode6003 · 11 months ago
Hi Rob, can you please let me know whether we should use h2o for performing multi-class classification tasks? Like how to fine-tune, or make use of transfer learning (generating embeddings and passing them through our classification model).
@username42 · 9 months ago
Did you find the answer, dude? Looking for a similar answer.
@neemeeshkhanzode6003 · 9 months ago
@@username42 Hi, there are other LLMs which are open-sourced and have landed on the Hugging Face hub, like LLaMA, which can be used.
@deeplearning7097 · 9 months ago
Thank you very much. It would be great to see something on spinning up higher memory remote GPUs.
@PythonicAccountant · 8 months ago
Can this be run on just CPUs or does it require a machine with GPUs?
@vsmcreatives3625 · 10 months ago
Great content, Rob. I like this one more. Can you please make a video on how to customize this chatbot's look and features?
@larabassabah202 · 3 months ago
Keep up the good work. I'm very interested.
@shankerchennapareddy5629 · 11 months ago
Does the private GPT model use only the data available on your server, or does it get information from the web too? If yes, how is the information saved?
@thefrub · 10 months ago
These language models are improving so fast that by the time you have one installed and working, there are three better ones.
@MiguelMartinez-yn8hf · 5 months ago
Hello! Thank you very much for the info you gave us. I have a doubt: could h2oGPT be configured as an API so that a frontend could connect to it? And once you ingest documents, if you reboot the program and the machine, are the documents still ingested?
@homevideoz3254 · 9 months ago
Hi Rob. great info. what CLI are you using? Thanks
@luisibarra7551 · 11 months ago
This is great. Thank you. Maybe I missed it, but do you have instructions on how to clean all this up (uninstall/remove the downloaded models)? I didn't see that in the readme either.
@gabefosse2050 · 11 months ago
Nice video! Do you know if it's possible to modify the model to behave or respond in a certain way?
@Nono464. · 11 months ago
Hi Rob, Yes please show a walkthrough of you deploying this to an EC2 instance.