Thank you, this is great! Is there a ballpark token limit for source/input documents? I could see this being very useful for completing my own (private, for now) academic articles that are in various stages of progress.
I'm not sure if my 2018 MacBook Pro doesn't cut it anymore, or if I messed up the code, but I've been waiting over 5 minutes for it to answer my query and still nothing. CPU usage is through the roof.
Did anyone else have problems running the pip install -r requirements.txt command? For some reason it just stops after some downloads. pip3 doesn't work either; it just spits out a long message, in summary "Configuring incomplete, errors occurred!"
I had to run it in a virtual environment before installing. Try starting the virtual environment with source venv/bin/activate on Mac or venv\Scripts\activate on Windows, then run the install. Every time you want to use it you have to activate it again, though.
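For anyone new to virtual environments, here's roughly the full sequence the comment above describes (a sketch; it assumes python3 is on your PATH and that you run it from the repo folder containing requirements.txt):

```shell
# Create the virtual environment once (makes a "venv" folder)
python3 -m venv venv

# Activate it -- Mac/Linux:
source venv/bin/activate
# ...or on Windows (cmd/PowerShell):
# venv\Scripts\activate

# Install the dependencies inside the activated environment
pip install -r requirements.txt
```

After that, packages install into the venv folder instead of your system Python, which is usually what fixes the broken-install problems. You do need to re-run the activate line in every new terminal session before using the tool.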
Did anyone get useful results out of PrivateGPT? I got it running on my laptop, but the text outputs are not useful. Could/should it also work with texts in languages other than English?
There are so many videos out there on how to install PrivateGPT, but no one is actually discussing and showing how useful it is. I guess it's just hype?
Unlike OpenAI's GPT, which uses their hardware, this uses your hardware. So you won't be paying anyone but the power company for the extra electricity.
@mattbriggs85 did you find something in the repo that is leading you to think it is connecting to the internet and sending your private data? I haven’t looked at this yet but let us know.
It runs 100% privately; that is the purpose of this repo. It accomplishes this by not using ChatGPT, which is closed source, and instead using open-source large language models. For purposes like asking questions, these open-source models are more than good enough, and they are making a lot of progress catching up to ChatGPT, in some areas even surpassing it.
When running python privategpt.py I got this:

PS C:\Users\Eu\Desktop\AIs\privategpt> python privategpt.py
Using embedded DuckDB with persistence: data will be stored in: db
Found model file.
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
Traceback (most recent call last):
  File "C:\Users\Eu\Desktop\AIs\privategpt\privategpt.py", line 76, in <module>
    main()
  File "C:\Users\Eu\Desktop\AIs\privategpt\privategpt.py", line 36, in main
    llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=False)
  File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
  File "pydantic\main.py", line 1102, in pydantic.main.validate_model
  File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\langchain\llms\gpt4all.py", line 139, in validate_environment
    values["client"] = GPT4AllModel(
  File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt4all\gpt4all.py", line 49, in __init__
    self.model.load_model(model_dest)
  File "C:\Users\Eu\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\gpt4all\pyllmodel.py", line 141, in load_model
    llmodel.llmodel_loadModel(self.model, model_path.encode('utf-8'))
OSError: [WinError -1073741795] Windows Error 0xc000001d
I'm getting some errors like "gpt_tokenize: unknown token '├'" or "gpt_tokenize: unknown token '┬'". Not sure if it's because of the PDF files I'm trying to ingest.