
Run Any Chatbot FREE Locally on Your Computer 

Matt Wolfe
623K subscribers
115K views

Here's a free open-source tool to run any chatbot locally and offline.
Discover More From Me:
🛠️ Explore hundreds of AI Tools: futuretools.io/
📰 Weekly Newsletter: www.futuretools.io/newsletter
😊 Discord Community: futuretools.io/discord
🐤 Follow me on Twitter: / mreflow
🧵 Follow me on Threads: www.threads.net/@mr.eflow
🐺 My personal blog: mattwolfe.com/
Resources From Today's Video:
jan.ai/
Outro music generated by Mubert mubert.com/render
Sponsorship/Media Inquiries: tally.so/r/nrBVlp
#AINews #AITools #GenerativeArt

Science

Published: 2 Jan 2024

Comments: 471
@frankywright 6 months ago
Thanks again, Matt. You are truly a legend. I have searched online for ways to run my own chat bot, and just like magic, you present it. Thanks mate.
@noobicorn_gamer 6 months ago
We're finally seeing some improvements in UI software for casual people to use. I'm happy with how the AI market is developing to be more casual-friendly and not just for devs. I wonder how Jan makes money by doing this.
@CM-zl2jw 6 months ago
Taxpayers and venture capitalists probably? Who knows though. I am blown away by how tech savvy some people are. Even with only a little bit of knowledge and with AI some pretty powerful workflows are built and shipped. Exciting stuff.
@AtomicDreamLabs 6 months ago
I never thought it was hard. LM studio makes it so easy even my 11-year-old daughter can do it
@GameHEADtime 4 months ago
@@CM-zl2jw Probably not; it's a GUI, they're not getting tax money to print hello world. But if they are, maybe it's better than sending it to Ukraine, thanks.
@Geen-jv6ck 6 months ago
The small Phi-2 model is proven to perform better than most 7-13B models out there, including Mistral-7B and LLaMa-13B. It’s good to see it available on the app.
@enlightenthyself 6 months ago
You are complaining that a LANGUAGE model can't do math... You are definitely special 😂
@Strakin 6 months ago
Yeah, I asked it to calculate the coefficient of a warp 9 warp drive and it couldn't even do that. @freedomoffgrid
@TheReferrer72 6 months ago
@freedomoffgrid No models can do basic math; they have to use tools. Even the mighty GPT-4 has serious problems with math.
@enlightenthyself 6 months ago
@freedomoffgrid Limitations in the technology itself, brother.
@CM-zl2jw 6 months ago
Better how?
@fun-learning 6 months ago
Thank you ❤
@onecrowdehour 6 months ago
Just might be the post we have all been waiting for, way to go Mr. Wolfe.
@stefano94103 6 months ago
This is what I’ve been waiting for. I actually downloaded and started running it before even finishing your video. Great find! Thanks!
@CM-zl2jw 6 months ago
I can feel your excitement. I was about to do the same but think I will use discipline and read the comments first 😂🎉. Self control!!
@stefano94103 6 months ago
@@CM-zl2jw haha too smart! So far testing has been pretty good 👍
@Designsecrets 6 months ago
How did you get it working? Every time I enter a message, I get "Error occurred: Failed to fetch".
@qster 6 months ago
Great video as always, but you might want to mention the PC requirements when running larger models.
@TheMiczu 6 months ago
I was wondering the same thing: whether Mixtral was running well only because of his beast machine or not.
@qster 6 months ago
A rough rule of thumb is to make sure you have a few gigabytes of RAM more than the file size of the model itself, but the larger the file, the better the GPU you also need. @@TheMiczu
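A quick way to sanity-check that rule of thumb before downloading is to compare the model file's size with your total RAM, keeping a few gigabytes of headroom. A minimal sketch in Python, assuming the third-party psutil package is installed (pip install psutil); the file name and headroom figure are placeholders:

```python
import os
import psutil  # pip install psutil

def fits_in_ram(model_path: str, headroom_gb: float = 4.0) -> bool:
    """Rough check: model file size plus headroom should fit within total RAM."""
    model_gb = os.path.getsize(model_path) / 1024**3
    total_gb = psutil.virtual_memory().total / 1024**3
    print(f"Model: {model_gb:.1f} GB, total RAM: {total_gb:.1f} GB")
    return model_gb + headroom_gb <= total_gb

# Hypothetical path to a downloaded model file:
print(fits_in_ram("mistral-7b-instruct-q4.gguf"))
```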
@ProofBenny 6 months ago
How did you get it to run on the GPU? @@qster
@qster 6 months ago
@@ProofBenny it will automatically use it, no need to change settings
@TheGalacticIndian 6 months ago
The current, pioneering and rather primitive LLM models have an average size of a few to tens of gigabytes. At that size they can pack in most of the information, works of literature, paintings or films known to mankind. This means they compress data FANTASTICALLY well. What is the result of this? That the data captured from a single user, even if filmed with sound 24/7, could be compressed to such a minuscule size that an old 14.4k modem would suffice to transmit it as soon as Internet access appears. Besides, the model may be canny enough to find ways to connect to that Internet, attach user data to any file, and send it that way to the rulers of our lives. Privacy needs serious work.
@pventura49 6 months ago
Matt - big fan of your videos. This Jan chatbot tool looks great. Thank you for bringing us all the latest and greatest info on AI. 😃
@nryanpaterson6220 6 months ago
How fortunate for me! I was just thinking about having an offline chat, and BOOM! Here ya go! Thanks, Matt, I love the content! Keep it up!
@avivolah9401 6 months ago
There is also LM Studio, which does the same thing, only with a bit more control :)
@ChrisS-oo6fl 6 months ago
Or oobabooga, with infinitely more control and multimodal support. It's the UI that everyone creating models uses.
@mayagayam 6 months ago
Do any of these allow for agents or the equivalent of copilot or autogpt?
@alejandrofernandez3478 6 months ago
From the video, the main difference from LM Studio is that Jan is open source, but I'm not sure if it can run on older processors or machines like LM Studio is starting to do.
@bigglyguy8429 6 months ago
@@ChrisS-oo6fl Yeah, but it's a pile of code-vomit on Github, which is exactly why normal peeps like me are NOT using it...
@mattbeets 6 months ago
LM Studio can also run AutoGen @@mayagayam; see the various tutorials on YouTube :)
@scottmiller2591 6 months ago
I'd like someone to do a Pinokio, Petals, Oobabooga, Jan framework comparison. Oobabooga and Pinokio give you a lot of under-the-hood options I'm not seeing demonstrated here: pre-prompts, token buffer access, etc.
@CelestiaGuru 6 months ago
A description of your hardware configuration (CPU, amount of RAM, GPU, amount of video memory, network upload and download speeds) would be very helpful. What might be quick for you might be absurdly slow for someone with a less-capable system or slower network configuration.
@bobclarke5913 6 months ago
I adore YTers who quote what % of CPU or RAM is being used without saying what hardware they're running. Because that means they think we're family and know every detail about each other. And will be pleased when I show up to crash in their guest room.
@puravidasolobueno 6 months ago
Wow! Best practical video I've seen in many months! Thanks, Matt!
@scottfernandez161 6 months ago
Awesome Matt very easy to follow. Happy New Year 😊!😊
@TheCRibe 6 months ago
Great video! Note that the experimental release has the GPU option.
@whiteycat615 6 months ago
Was waiting for this video for a while. Thank you
@drkeithnewton 6 months ago
@Matt Wolfe - Thank you for making sense of this AI world for me; it makes it so much easier to keep up.
@yonosenada1773 6 months ago
This one is likely one of my favorites yet! Thank you!
@BionicAnimations 6 months ago
Thanks Matt! Just what I have been waiting for. Now all I need is an image generator like this.👍
@Krisdomain 6 months ago
You can try Stable Diffusion to generate images locally on your computer.
@BionicAnimations 6 months ago
@@Krisdomain That would be awesome! Thanks! Is it as easy to set up as JanAI or is it difficult?
@stribijev 6 months ago
@@BionicAnimations It is easy enough that even I could do it. However, the images I generated were not nice; maybe I used poor models :(
@Vitordiogovitx 6 months ago
There are great tutorials for installing it. I enjoy the Automatic1111 UI; it's prettier to use. But if you want quality images generated locally, there is some studying required and follow-up installs. Keywords for your search: ControlNet, negative prompting, seed number. This should give you an idea of where you are heading.
@BionicAnimations 6 months ago
@@stribijev Hmm... I've usually seen really good pics with Stable Diffusion. How did you learn how to make them?
@TheFlintStryker 6 months ago
I installed Jan... downloaded 4 or 5 different models. 0 have worked. "Error occurred: Failed to fetch"... 🤷‍♂
@Dayo61 6 months ago
I got the same error message
@christopherkinnaird2881 6 months ago
Same @@Dayo61
@Designsecrets 6 months ago
@@Dayo61 Same, and no help, nothing... no answers on how to fix it.
@Ira3-ix4bh 6 months ago
Hey Matt, I love your content, been with you for almost 2 years now! Have you found a similar tool that allows you to upload files to a local LLM for data analysis? Basically this same thing, but with file upload capabilities?
@LucidFirAI 6 months ago
Best AI news. I tried for a couple of weeks to run LLAMA like 6 months ago and found it very challenging.
@americanswan 6 months ago
I'm definitely looking for something like this, but I need to feed it about 100 PDF files that it needs to scan and know intimately, then I would be thrilled.
@USBEN. 6 months ago
There are local models trained for way higher token limits, up to 20k.
@americanswan 6 months ago
@@USBEN. What are you talking about? Self-hosting an AI needs tokens? What?
@USBEN. 6 months ago
@@americanswan Token limit = the word/context limit it can take in. Like the 4096 slider in the video, but up to 20k words.
@missoats8731 6 months ago
@@americanswan These models are limited in how much content they can "remember". This content is measured in "tokens". GPT-4 has a context window of 128k tokens (which is a lot), which some people say means it could remember and talk about a 300-page book, for example. So to find out whether there is the right model for your needs, you would have to find out how many tokens the text in your PDFs has. As far as I know, OpenAI has a tool where you can paste text and it tells you how many tokens it contains. Then you would have to find a model whose "context window" has enough tokens to hold the text in your PDFs. If every PDF only has one page of text, it would be a lot easier than if each has 100 pages (since a higher token limit also means higher demands on your computer). The next problem is that in Jan there doesn't seem to be an option to input your documents, so you would have to find a similar tool that allows that. At this moment I don't think you will find a satisfying solution (if your PDFs have a lot of text). But many people are looking for solutions to exactly this problem (especially since it would be very valuable for a lot of companies), so I'm optimistic something will come up in the next few months.
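For anyone who wants to do that token count themselves rather than paste text into a web tool, one option (my assumption, not something shown in the video) is OpenAI's tiktoken library; open models use different tokenizers, but the figure is close enough for rough sizing against a context window:

```python
import tiktoken  # pip install tiktoken

def count_tokens(text: str, encoding_name: str = "cl100k_base") -> int:
    """Count tokens with the encoding used by GPT-3.5/GPT-4 class models."""
    enc = tiktoken.get_encoding(encoding_name)
    return len(enc.encode(text))

# Example: text you have already extracted from a PDF into a plain-text file
with open("my_document.txt", encoding="utf-8") as f:
    print(count_tokens(f.read()), "tokens")
```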
@TheFlintStryker 6 months ago
@@USBEN. Can you point to the best working models with higher context, in your opinion?
@tunestyle 6 months ago
Go, Matt! Great video as always!
@theeggylegs 6 months ago
Another great update. Thanks Matt! We appreciate you.
@mcclausky 6 months ago
Amazing video! Thank you. Matt, do you perhaps know how we can train these local models on the data and files from our hard drive? (Word, Excel, PDF files, etc.)
@DaleMuellerDotCom 6 months ago
Train or upload docs for reference
@AIMFlalomorales 6 months ago
I was just messing around with this last night!!!!!! Dude, you are on TOP OF IT, MATT!!!!!! Let's go to a Padres game!
@DrFodz 6 months ago
Is there a way you can share documents like pdf with it? If not, are there any alternatives that can do that? Thanks a lot Matt!
@jonathanpena5972 6 months ago
I only know of ChatPDF (at the top if you Google it). It's done online, so not run locally, but it's free!
@Ourplanetneedsyou 6 months ago
Hi, could you help me? What would you advise? The book is written, the outline is created, and I know the number of chapters and their titles. Now I need to structure the text, link it together, and divide it into chapters. Which AI is best at this? Free and paid options? Thank you in advance.
@InspaStation13 6 months ago
Love your videos, Matt, always informative.
@Terran_AI 6 months ago
I've been looking for a way to run a secure local model for a while and Jan definitely looks like one of the easiest.. However I use GPTs mainly for data analysis. Is it possible to attach files for analysis using this application?
@ryanchuah 6 months ago
Thanks for recommending this. Been looking for something like this for a long time. Can you recommend which model is good for SEO research and article generation?
@AdamRawlyk 1 month ago
As someone who isn't very tech-savvy but who's been wanting a nice jumping-on point to offline local chatbots, this is a brilliant start. Videos like this and channels like NetworkChuck have been a huuuuuge help in getting me more involved and helping me understand it better, one step at a time. Incredible work you guys all do, keep up the awesome work. :) Edit: I also want to note that AI itself has been getting such a bad rep, and it genuinely surprises me. I understand the problems of jobs and bad actors, but any advancement in technology has problems like that. I mean, look at how people viewed the internet in its infancy. And yes, there are bad people who do bad things. But that's not the technology's fault; that's the discretion of the individual who uses it. When it comes to AI, I prefer a glass-half-full approach. Sure it can be used for bad, but it can also be a wonderful and amazing thing if we give it the chance to fully evolve and shine, and put some measures in place to help against or deter the bad actors. :p
@dougveit 6 months ago
Great work thanks Matt!!😊
@milliamp 6 months ago
There are a handful of other good tools for running local LLMs, like LM Studio (Mac/PC) and Ollama (Mac), that I think are worth mentioning alongside Jan.
@TomGrubbe 6 months ago
LMStudio is closed source though.
@JosephShenouda 6 months ago
This is EXCELLENT @Matt thanks for sharing this.
@primordialcreator848 6 months ago
The only issue I have with these, like Jan and LM Studio, is the chat history or memory. Is there any way to have the local models save a memory locally so they can remember the chats forever?
@Bella2515 6 months ago
I know there are some models that also allow us to input images, similar to GPT vision. Is there any program that simplifies that process?
@aneesh2683 6 months ago
Thank you so much. There is so much software that does similar things but is a pain to install and run. This is super easy.
@tanakamutaviri5561 6 months ago
Matt, thank you for your hard work.
@kulnor 6 months ago
Very cool. Thanks for sharing!
@acewallgaz 6 months ago
I get an "error occurred: failed to fetch" message; is it because my video card isn't good enough? I have 128GB of RAM.
@stewiex 6 months ago
Amazing! I can't wait to try this!
@TungzTwisted 6 months ago
This is definitely it. You hit the nail on the head. This is much like what DiffusionBee did to bring Stable Diffusion to those who weren't prompt proficient, but it's pretty crazy! Happy New Year!
@anthonygross1963 6 months ago
How do you know it won’t steal your data or worse? Is it a good idea to be downloading such a large file??
@alikims 6 months ago
Can you train it with your local documents and chat about them?
@RosaryWorldofvariety 6 months ago
Thanks for the valuable information we learn from you. If possible, and if you don't mind, could you leave some of the settings/prompts you are using in the description box below the video so we can copy and paste them?
@maxwell-cole 6 months ago
Interesting post, Matt. Thanks for sharing.
@reubenwizard 6 months ago
I have 16GB of RAM, but Jan says this is not enough for that Mixtral model. Should I just use Mistral Instruct 7B Q4, as I have enough RAM for that, or should I go with Llama 2 Chat 7B Q4?
@rajaramkrishnan4181 6 months ago
Can we upload PPTs and docs and then ask these models questions based on the uploaded files?
@RomiWadaKatsu 6 months ago
Can I give it strict instructions like when I use oobabooga, and "bend" the AI by feeding it the beginning of the answer? Sometimes I don't want to use the uncensored model, since the censored ones perform better, but I still want to force them to tell me stuff they're trying to filter, or guide them toward the type of answer I need.
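For readers wondering what that "feed it the beginning of the answer" trick looks like: with a raw text-completion endpoint the model simply continues whatever text it is given, so appending the first words of the reply you want steers its shape. A sketch under assumptions: it presumes a locally running server that exposes an OpenAI-compatible /v1/completions route, and the URL, port, and model name are placeholders, not details confirmed in the video:

```python
import requests

BASE_URL = "http://localhost:1337/v1"  # placeholder local endpoint

prompt = (
    "Question: What are the main trade-offs of running an LLM locally?\n"
    "Answer: The three main trade-offs are"  # prefilled start of the answer steers the reply
)

resp = requests.post(
    f"{BASE_URL}/completions",
    json={"model": "mistral-7b-instruct", "prompt": prompt, "max_tokens": 200},
    timeout=120,
)
print(resp.json()["choices"][0]["text"])
```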
@tiredbusinessdad 6 months ago
Of the different LLM models you have tested, which would you say has given the fastest response time while still providing a good response?
@behrooz8393 4 months ago
Hey Matt, can I train a model on data in a different language (not English) locally? If so, with which one? What I want is something similar to ChatRTX, but my PC doesn't meet the minimum requirements for it. I want to feed many docs (in a non-English language) to it and ask it questions. Do you know any local AI that can do that?
@abhaykantiwal1094 1 month ago
Hey Matt, can I customize the models after downloading so that I can train them on my own data?
@dr.gregoryf.maassen2637 6 months ago
I have a question for the community. Is there a way to change the file location of the models in Jan? It saves them in AppData on my C: drive. I would like the models to be saved elsewhere.
@hstrinzel 6 months ago
FABULOUS! Thank you! Brilliant. "Works right out of the box." Is there a way to run it on the GPU instead of the CPU?
@Jsin57 6 months ago
Does this also have an option to cater the responses by only having the AI pull from specific information on your computer?
@JavierCaruso 6 months ago
Do any of these apps/models support external API calls and file uploads?
@juliuscomnenus4415 6 months ago
When I try this, I get some sort of timeout where it won't download the model. Not being blocked at the firewall, app is permitted out... will try again later, I suppose. Wonder if their hub is down?
@TavaFlav 6 months ago
Thank you. I've been following your work for a while; this was great information. I mean, all your news is, but this is crazy: it's fast and free and so easy. I mean, wow, that's awesome.
@borakian 6 months ago
Hi Matt, just found your channel. New sub. Jan is very cool and I have downloaded and started using it. The interface doesn't seem to have the ability to save your models to a directory of your choice as standard. The models are quite large and I would like to store them on a drive of my choice. I am not a programmer, but it does state that anyone can alter the client's code in any way. Wondering if this is something you or someone from your channel could help with?
@anac.154 5 months ago
Great content, including your website. Thank you!
@user-yi9bf5nw9d 6 months ago
Will it automatically use the GPU instead of the CPU, or do you need to change settings?
@rickdunn6284 6 months ago
Is there a way to load a large chunk of data into a folder, or an app, or something, and use ChatGPT to interact with that data? Loading it all into the OpenAI website hasn't worked well for us.
@BeatsandTech 6 months ago
I wonder if this can be installed on a server and run remotely across a network, or if it only has a local UI? I have a couple of servers that I may test this on...
@michealsichilongo 6 months ago
Great video. Consider making a video on how to customize or train a model on a specific topic.
@victorvidican5145 6 months ago
Are the default settings set to run on the GPU? Because my models run only on the CPU and I cannot seem to find where to force them to run on the GPU.
@r0bophonic 6 months ago
This looks cool! It wasn’t clear to me from the video, but I believe only open source models can be run locally (versus paid OpenAI models like GPT-4).
@stribijev 6 months ago
That is right, you can see Matt uses his own API key to enable GPT models.
@AntonioVergine 6 months ago
This is because there is no "downloadable" version of GPT. Mixtral, by contrast, doesn't have an online version (provided by its developers), so if you want to use it you must download it and run it on your computer.
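The practical upshot: open models (Mistral, Mixtral, Llama 2, etc.) run entirely on your machine, while GPT-3.5/4 are only reachable over OpenAI's hosted API even when launched from the same Jan interface. If the app exposes an OpenAI-compatible local server (Jan advertises one, though the port and model id below are assumptions), the same client code can talk to either; only the base URL and key change:

```python
from openai import OpenAI  # pip install openai

# Local open model through an OpenAI-compatible server (URL/port/model id are assumptions)
local = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

# Hosted GPT-4 through OpenAI's API (prompts leave your machine and are billed per token)
hosted = OpenAI(api_key="sk-...your-key...")

for client, model in [(local, "mistral-ins-7b-q4"), (hosted, "gpt-4")]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in five words."}],
    )
    print(model, "->", reply.choices[0].message.content)
```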
@r0bophonic 6 months ago
@@stribijev Yeah, that’s when it became clear the title is misleading. I think the video title should be changed to “Run Any Open Source Chatbot FREE Locally on Your Computer”
@stribijev 6 months ago
@@r0bophonic Right. Actually, it's nothing new; LM Studio has been out there for quite a while.
@Ben_D. 6 months ago
So cool. Going to have a look
@policani 6 months ago
Does this software work with an AMD video card or only NVIDIA? There are a bunch of Stable Diffusion projects I want to run on my PC, but I can't because I own an AMD video card with 6GB of VRAM.
@claudiososa5560 3 months ago
Great video! What Windows PC configuration do you recommend for running a Mistral version?
@tanwilliam7351 4 months ago
You have just saved my life! This is what I have been looking for!
@sethjchandler 6 months ago
Mistral under Jan freezes up my MacBook. Maybe 16GB isn't enough RAM?
@user-oo1xp8nd7i 5 months ago
Thanks, Matt! This is great! Do you know if these models can be trained on this platform to be customized for specific areas?
@ArnoSelhorst 6 months ago
When I tested it, it didn't give me the option to use a custom install path. I was also not able to leverage all the models I already downloaded for Oobabooga. If that's fixed I'll be thinking about leaving the gradio interface of Oobabooga for good.
@michai333 6 months ago
I still prefer LM Studio due to the ability to modify GPU layers and CPU offloading. Also, LM Studio provides direct access to Hugging Face models.
@mdekleijn 6 months ago
Me too. LM Studio will also tell you whether your hardware is capable of running the model.
@jennab176 6 months ago
I would love a comparison between the two, I was actually going to ask Matt for that
@jennab176 6 months ago
Do you have any tips for the recommended settings for GPU layers and CPU offloading? My laptop is not very robust, sadly, but I did just upgrade to 32GB of RAM. That did not fix my high CPU usage when running LM Studio, however. It still has moments where it spikes up to 100%.
@michai333 6 months ago
@@jennab176 It depends on whether your laptop even has a dedicated GPU. Many of the mid to lower tier laptops will just use the CPU's integrated graphics, in which case modifying GPU layers won't improve token processing speed. It really depends on your specific hardware configuration. I bet if you post your specs here the community can help you optimize your settings.
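If you are not sure whether your machine even has a dedicated NVIDIA GPU for layer offloading to use, one quick check (an illustrative sketch, assuming NVIDIA's driver and its nvidia-smi tool are installed) is to query the card's name and VRAM:

```python
import shutil
import subprocess

def discrete_nvidia_gpu() -> str | None:
    """Return 'name, total VRAM' if an NVIDIA GPU is visible, otherwise None."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    return out.stdout.strip() or None

print(discrete_nvidia_gpu() or "No dedicated NVIDIA GPU detected; GPU layer offloading won't help.")
```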
@imtuyethan 6 months ago
Omg thank you! Exactly what I need!
@AdamKai79 6 months ago
Nice. Super valuable video. Thank you! You talked about the small cost of using one's own OpenAI API key, but do you know (or does anyone here in the comments know) if those conversations are private, or does OpenAI use them to train the model like they do with regular GPT-3.5 or 4?
@Spraiser74 6 months ago
When using the API, personal data is not used for training, in theory.
@InnocentiusLacrimosa 6 months ago
Not used in training. I think there is also an opt-out toggle in the paid version of regular ChatGPT.
@CM-zl2jw 6 months ago
@@InnocentiusLacrimosa Yes, true. You can opt out of sharing data on ChatGPT, but then it doesn't save the chat history.
@jfiosi 6 months ago
Can I download the LLMs to an external drive (space issue)? Can Jan point to that external drive?
@emanuelmma2 6 months ago
Great video 👌
@kfj001 5 months ago
Yes, dear, it can swear, but can it reduce a natural language query into a strongly organized JSON data structure with a high degree of accuracy and consistency?
@Vincent_Koech 6 months ago
No GPU support? I wish I could point it to a directory where other models are located, just like GPT4All. I do not want to make copies, as they are really large.
@plamenrashkov3245 6 months ago
Can we upload files/documents for Jan to analyze? Long documents in .docx or some other format?
@stuart_oneill 6 months ago
Matt: Which of the generative systems is best for accessing new/same-day internet info? I need to access daily news.
@JT-Works 6 months ago
Very cool. Is there any way to publicly host a large language model with software like this? I have been looking everywhere and cannot seem to find anything.
@pierruno 6 months ago
Did you make a video about LM Studio?
@toxichail9699 6 months ago
LM Studio as well. Paired with Open Interpreter, you can use local LLMs to help automate tasks on your PC, including opening things and creating files, as well as creating and executing code.
@sauravpokhrel8873 6 months ago
Is there any GUI which accepts models and allows the user to chat with documents, including pictures?
@tracyrose2749 6 months ago
Wow, way to start the year!
@TheAlphaFox 4 months ago
Brilliant content!
@latestAiHacks 6 months ago
The models are not working. I always get this response: "Error occurred: Failed to fetch".
@Earth2Ross 6 months ago
Love the videos!!!
@PackmanRs 6 months ago
Can you import trained models from Hugging Face that have been trained on certain info?
@heinzerbrew 6 months ago
So you are running GPT-4 locally and your prompts aren't being sent to OpenAI. Is that correct? Just the information that you have used GPT-4 is sent to OpenAI so they can charge you, correct?
@EyemotionView 6 months ago
Definitely going to try that, Matt! On an M1 Mac mini with 16GB.
@Strakin 6 months ago
Mega, really nice find.
@openclassusa3534 6 months ago
EXCELLENT CONTENT!
@chimoji608 6 months ago
But can you give them a knowledge base, like documents, locally? And will private information stay on your own PC? I assume when using an API it won't be "safe" for private information.
@bigglyguy8429 6 months ago
If it's open source and run locally you're fine. If it's using any API then your data is being sent across the net and is not private
@Fernandomontecristo 6 months ago
Hi, how can I train a model with our own data? Is there another video talking about this?
@manojkr19 6 months ago
How is this any different from Ollama with Ollama WebUI, LM Studio, or Cheshire Cat?