PrivateGPT: A Guide to Ask Your Documents Offline 

All About AI
167K subscribers
10K views

Science

Published: Oct 4, 2024

Comments: 38
@erictsang0 · 1 year ago
Thanks for making this really easy, step-by-step tutorial! 😃
@JcMETAV · 1 year ago
Is there a cap on how many documents it can load?
@tonysilva2654 · 1 year ago
Great video ..... Thanks!!
@kevinm.1565 · 1 year ago
Awesome! Thanks.
@rishadomar · 1 year ago
This is very exciting. I'd like to convert this into a server so that it can respond to my API calls.
@CryptoCrasher · 1 year ago
No matter which way I try this, it's always the same error: "ERROR: Failed building wheel for llama-cpp-python"
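That wheel error usually means pip is trying to compile llama-cpp-python's native code without a C/C++ toolchain available. A rough sketch of the usual fixes, assuming a typical setup (these commands are not from the video):

    # macOS: install the Xcode command-line tools
    xcode-select --install

    # Debian/Ubuntu: install a compiler and CMake
    sudo apt install build-essential cmake

    # Windows: install the "Desktop development with C++" workload from the
    # Visual Studio Build Tools installer, open a fresh terminal, then retry:
    pip install --no-cache-dir llama-cpp-python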
@jgodvliet · 1 year ago
As always great video. What screen recording solution are you using for your vids?
@scottregan · 1 year ago
Thank you, this is great! Is there a ballpark token limit with regard to source/input documents? I could see this being very useful for completing my own (private, for now) academic articles that are in various stages of progress.
@ziff_1 · 1 year ago
Very cool, but it needs a GUI for more attractive input and output. I'm sure someone will make one.
@SAVONASOTTERRANEASEGRETA · 1 year ago
It would be nice if you could use it through a web UI console; I don't like the terminal style.
@CptDangernoodle · 1 year ago
I'm not sure if my 2018 MacBook Pro doesn't cut it anymore, or if I messed up the code, but I've been waiting over 5 minutes for it to answer my query and still nothing. CPU usage is through the roof.
@dc-k4868 · 1 year ago
Could I use this to explore an old website I have archived? It's no longer online, but I still have offline access to it.
@princemars6746 · 1 year ago
Wait, does this prevent you from sharing documents with ChatGPT?
@therealswedishguy · 1 year ago
Really great video!
@the-quintessenz · 1 year ago
Can I connect PrivateGPT to an offline version of Wolfram Alpha?
@thiagopires00 · 1 year ago
Does anyone know how I can delete the files from its database?
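In the version of PrivateGPT shown here, the ingested embeddings live in a local "db" folder (you can see "data will be stored in: db" in the error log further down), so the blunt way to clear it is to delete that folder and re-ingest. A rough sketch, assuming the repo's default folder and script names (adjust to your version); note this wipes everything rather than a single file:

    # delete the whole local vector store
    rm -rf db          # Windows: rmdir /s /q db

    # rebuild it from whatever is still in source_documents/
    python ingest.py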
@kristianromu9128 · 1 year ago
Did anyone else have problems running the pip install -r requirements.txt command? For some reason it just stops after some of the downloads. pip3 doesn't work either; it just spits out a long message, in summary "Configuring incomplete, errors occurred!"
@thiagopires00 · 1 year ago
I had to run it in a virtual environment before installing. Try starting the virtual environment with source venv/bin/activate on Mac or venv\Scripts\activate on Windows, then run the install. Every time you want to use it you have to activate the environment again, though.
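To make that concrete, here is a minimal sketch of the full sequence, assuming Python 3 and the repo's requirements.txt (the "venv" folder name is just a convention):

    # create the virtual environment once, inside the project folder
    python -m venv venv

    # activate it - macOS/Linux:
    source venv/bin/activate
    # ...or Windows:
    venv\Scripts\activate

    # install the dependencies inside the environment
    pip install -r requirements.txt

    # re-activate the environment in every new terminal before running the scripts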
@tczacko · 1 year ago
Did anyone get useful results out of PrivateGPT? I got it running on my laptop, but the text outputs are not useful. Could/should it also work with texts in languages other than English?
@tczacko · 1 year ago
There are so many videos out there on how to install PrivateGPT, but no one is actually discussing and showing how useful it actually is. I guess it's just hype?
@gnsampaio · 1 year ago
Does it use credits from OpenAI? Or are we running the model ourselves and hosting it locally?
@DJPapzin · 1 year ago
It uses a locally downloaded model.
@rorymorrissey4970 · 1 year ago
Unlike OpenAI's GPT, which uses their hardware, this uses your hardware. So you won't be paying anyone but the power company for the extra electricity.
@MyAMJourney · 1 year ago
I like the concept, but it needs improvement.
@janalgos · 1 year ago
Give us your ideas for how to improve it.
@2snipe1 · 1 year ago
How secure is this use case? I have been wanting to use this for healthcare data in my laboratory, but I do not want the data training the AI.
@Mattbriggs85 · 1 year ago
I would avoid it if it's work stuff.
@JoeTeshima · 1 year ago
@mattbriggs85 Did you find something in the repo that leads you to think it is connecting to the internet and sending your private data? I haven't looked at this yet, but let us know.
@majesticglue9100 · 1 year ago
It runs 100% privately; that is the whole purpose of this repo. It accomplishes this by not using ChatGPT, which is closed source, and instead using open-source large language models. For purposes like asking questions, these open-source models are more than good enough, and they are making a lot of progress catching up to ChatGPT, in some areas even surpassing it.
@majesticglue9100 · 1 year ago
@Mattbriggs85 It is private, though you'll need a good GPU, or you can run it on a third-party service that can host the large language model.
@BirgittaGranstrom · 1 year ago
I'm curious to try it, so I have to go back and watch in slow motion ;-)
@BrunoSousa-oi1mw · 1 year ago
When running python privategpt.py I got this:

    PS C:\Users\Eu\Desktop\AIs\privategpt> python privategpt.py
    Using embedded DuckDB with persistence: data will be stored in: db
    Found model file.
    gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
    gptj_model_load: n_vocab = 50400
    gptj_model_load: n_ctx = 2048
    gptj_model_load: n_embd = 4096
    gptj_model_load: n_head = 16
    gptj_model_load: n_layer = 28
    gptj_model_load: n_rot = 64
    gptj_model_load: f16 = 2
    gptj_model_load: ggml ctx size = 5401.45 MB
    Traceback (most recent call last):
      File "C:\Users\Eu\Desktop\AIs\privategpt\privategpt.py", line 76, in <module>
        main()
      File "C:\Users\Eu\Desktop\AIs\privategpt\privategpt.py", line 36, in main
        llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
      File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
      File "pydantic\main.py", line 1102, in pydantic.main.validate_model
      File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\llms\gpt4all.py", line 139, in validate_environment
        values["client"] = GPT4AllModel(
      File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt4all\gpt4all.py", line 49, in __init__
        self.model.load_model(model_dest)
      File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
        llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
    OSError: [WinError -1073741795] Windows Error 0xc000001d
@verence333 · 1 year ago
I'm getting some errors like "gpt_tokenize: unknown token '├'" or "gpt_tokenize: unknown token '┬'". Not sure if it's because of the PDF files I'm trying.
@squartochi · 1 year ago
I am too; sometimes it still runs afterwards :)
@justcreate1387 · 1 year ago
A token is essentially a word (a simplification). Your PDF might have some unreadable symbols ('tokens') that the LLM wasn't designed to handle.