There are already hundreds of videos on this same subject using PrivateGPT floating around YT, with different talking heads explaining it a little differently. What you and the other AI tinkerers don't fully disclose is how long ingestion takes for most users with common computers once they finally get it all set up. PrivateGPT doesn't use the GPU, only the CPU. Because of that, you'll need a very good CPU to process the data while ingesting files and querying information. Be prepared for long wait times. I'm sure the script will become more polished over time from the dev and/or a fork dev. This has lots of potential, but it's definitely not ready for any real production environment without some tweaks and optimization.
I was waiting for a similar program called Alpaca Electron to show up in response to my query until I found your comment hahahah 😂 I already suspected that, but I was just waiting for a confirmation, and here you go.
Would an individual be able to make the tweaks/optimizations to get it ready for a real production environment, or could only the team behind privateGPT do that? If an individual could improve it, what would need to be changed?
In addition to the very informative and actionable content, I really appreciate the time-saving concision of Liam's presentations. That really separates him from the AI/GPT videos that now crowd that subject-area. (Couldn't be more concise if he really were an avatar of a LiamGPT chatbot.)
I was just thinking the opposite. I clicked on a how-to-install video and he still waffles on, then leaps into an install without explaining anything about using my own IDE or whatever. Wut? Useless video.
In the next few days, I'll be sharing a project that I've been working on for a few weeks now. It's a piece of software that runs completely offline (unless you enable web access for in-depth, current responses). I was unhappy with the vector database solutions available, so I rolled my own into the software. It is able to chunk, vectorize, and query 5000x faster than LangChain, even on minuscule resources 🧙♂️ I'll reach out upon release so you can demo it and provide feedback. No Python required 🤗 everything is standalone compiled (Windows, Mac, Linux (incl. RPi)).
Man, I have no idea about coding, but the really interesting idea is being able to privately query our own large collections of data. If we users can do that, we'll be able to offer our private collections of data to others, so knowledge will spread easily. Of course, it all comes down to code that can load as much data as possible about a topic, and the good quality of the data chosen. Thanks for your willingness to make this happen.
I received an error like this:
This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Wow, this is just great! Have you thought about bringing imartinez (the author of the repo) onto your show to chat about it more? It would be very interesting to see what is in the works!!
Yes, I've been looking for something like this for a while. For my use case, it would be awesome to create a web interface so that, say, coworkers or friends can access it via the internet. I thought about using Cloudflare tunnels to connect to my local PC, but I'll need to figure out a way to connect the interface to Google Sites, maybe? We'll see. Awesome content!
I've been messing with this for a couple days but because of other priorities only got it to run today. I trained it on 133 MB of pdf documents. I noticed that it uses the CPU, which worked fine for the ingest, but then when I asked it my first question, it took over five minutes to answer and it was suboptimal. No biggie on how good the answer was, because my source was probably not optimal, but the time it took to answer was crazy. I noticed it used the CPU for that too. Digging around I saw a reference to adding GPU to a method, but that method doesn't exist and my Python skills aren't up to fixing it. Just my thoughts on this. (Not complaining here, just pointing something out.)
Nice video, thanks. Could you make a video on creating a nice UI instead of using the cmd window? Showing us how to do it with ChatGPT would be a plus.
Hello sir. Some new changes have been made to the repo, and now the model won't load; it says the model was not found, even though I placed my downloaded models in a models directory I created. Please do a video on this new version of PrivateGPT. Thanking you and hoping to see a video on this!
@Liam Ottley Unfortunately they updated the repo, so there is no longer a link to one of the models... can you add a download link in your description? Not sure if there are other changes to the file as well...
Just follow the steps in the README of the repo, and you'll be able to make it work. You no longer need two models. And there's no need to modify the code like Liam did; by default, it already reads PDFs 👌
I just tried to install it, and I cannot. Basically, I get stuck in some kind of closed loop between "poetry install --extras vector-stores-qdrant" and "poetry install --extras ui". No idea what exactly it wants or what exactly it is doing. It's not supposed to require so much stuff. I installed the privateGPT-app from GitHub and it works. SLOWLY, but it works. Thanks for the video anyway; now I finally have my dream code for parsing and questioning PDFs. Pretty awesome.
Could you, in future videos, consider installing, configuring, and running the apps in a virtual environment (venv) as a good practice? Thanks for the nice video.
@@satanael387 So that if you transfer your working folder, you have every resource needed to keep your program running. It also makes it easier to share with someone.
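For anyone who hasn't used one before, here's a minimal sketch of that venv workflow on Linux/macOS (the requirements step is commented out and assumes the project ships a requirements.txt, as PrivateGPT does):

```shell
# create an isolated environment inside the project folder
python3 -m venv .venv
# activate it (on Windows: .venv\Scripts\activate)
. .venv/bin/activate
# confirm python now resolves inside the venv
which python
# dependencies then install into .venv only, e.g.:
# pip install -r requirements.txt
```

Everything lives under `.venv`, so moving or sharing the project folder carries the dependencies along with it.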
LOL! The first of the 13 requirements takes an hour to download (for me anyway). I have no idea how long the rest will take. PrivateGPT in Minutes! My extensive background in programming on an Apple IIe in 1984 is finally paying off.
The example document, the State of the Union speech, is around 30 pages / 6,500 words. What is the limit on the number of pages, words, or documents, or the total size, that can be added to the document folder to be converted to embeddings and then queried? I want to use it for research over a local PC database of around 1 GB of various .csv and PDF files. If this isn't possible, is there another method, or can only a few of them be queried at a time? Or could it work if only a few were converted to embeddings at a time until the full database was complete (in theory)? Thanks for the amazing tutorial; this is incredibly useful. The main question is whether it is scalable, and if not, how to make it scalable while keeping it local, secure, free, etc.
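As far as I know there's no hard document cap; the practical limits are RAM, disk, and ingestion time, so a 1 GB corpus mainly means a long ingest. Documents get split into overlapping chunks before embedding, so total chunk count, not file count, is what grows. A hypothetical sketch of that kind of chunking (the size and overlap values here are illustrative, not PrivateGPT's actual defaults):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks for embedding.

    Each chunk shares `overlap` characters with the next one so that
    sentences cut at a boundary still appear whole in one chunk.
    """
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    return chunks
```

Ingesting a large corpus a batch of files at a time, as suggested above, should work fine since each run just adds more chunks to the same vector store.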
Everyone seems excited about this PrivateGPT. I've installed it and uploaded several PDF files. Am I the only one getting incorrect responses from the bot? It doesn't seem all that phenomenal to me, to be honest.
Hi, this looks great. I am getting the error below. Any ideas?
Building wheels for collected packages: llama-cpp-python, hnswlib
Building wheel for llama-cpp-python (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
@Liam Ottley Thanks a lot! Can you elaborate on why you don't use the llama-index loader instead? Is there any special reason? Although llama-index does use LangChain...
Hi, can you please post a link for the 13B model? I've been looking around the internet and getting confused between 4-bit converted models, etc. Please share links for the 13B models as well.
Getting the error below:
C:\Users\babas\AppData\Local\Temp\pip-install-ugk0x7xu\llama-cpp-python_a827470d54a74b4d954d18497e059ba1\_skbuild\win-amd64-3.9\cmake-build
Please see CMake's output for more information.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
Please make a video about HormoziGPT :-) I have a question about it: does this AI just contain information from Hormozi, or can you use it like ChatGPT 3.5 and 4?
Can this software handle more complex requests, like "mark hate speech in the document and export the result in this JSON format..." or "summarize the news article in 3 paragraphs"?
How can we build this with Streamlit? Basically, the user uploads their docs from the Streamlit UI and then goes ahead and asks questions against the index, just like this but using PrivateGPT instead of OpenAI and running it locally. Thanks a ton!
Very informative video. I tried to implement the same and reached the stage where it asks me to enter a question, but then it throws the error "AttributeError: 'DualStreamProcessor' object has no attribute 'flush'" when I ask questions. I am using Windows Server 2012 R.
The response time I'm getting is terribly slow. The only time the response is quicker is when I hit Ctrl+C after a few seconds, but even then, the response is not usable. Any help?
What Python version should I use? I want to create an environment before installing the requirements.txt: conda create -n test_env python=????? What Python version should I type in place of the question marks?
Hi, please can you make a video on the "privateGPT" GitHub project linked in this video's description? How can I run it? I am not able to understand; please help me. The project you showed in the video is not at that GitHub link anymore. I need to build a project to chat with PDFs offline.
Mac M1 here. I got this error while installing the requirements:
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
What fixed this for me was installing Visual Studio tools. Google this error and you'll find a Stack Overflow thread suggesting installing VS tools. Then you will need to modify the installation by adding another module (can't remember which right now). Once this is done, restart your PC and repeat the installation. Make sure you have the models in their directory before you run the requirements.txt installation!!!
@@shacharbard1613 "then you will need to modify the installation by adding another module (can't remember right now)." well that's not very helpful advice😆 . I googled it many times and couldn't find any good solution... 😭
Great video... can this be put into a Streamlit app as well? The ability to upload the docs and use the chatbot in the actual Streamlit app... thoughts?
What I need is an AI or set of AI tools that I can use offline to scan physical documents, convert pictures of text into text documents, file them into various folders and subfolders based on content, and answer questions about the information in the collection of documents as a whole... And I need to do it for free on a $300 tablet. It's November 2023 now... I'll wait a year.
They removed a model, so this is what you get now:
File "pydantic/env_settings.py", line 39, in pydantic.env_settings.BaseSettings.__init__
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Settings
persist_directory
none is not an allowed value (type=type_error.none.not_allowed)
This is the same as all the available chat-with-PDF applications out there. Although accurate, this solution is not feasible, as the inference process for one query takes up to 5 minutes, unless we can integrate a GPU, which is impossible for GPT4All.
Can anyone point me to an install video for privateGPT that starts from scratch? I mean a bare-bones Visual Studio Code without a C++ compiler, plus whatever dependencies are needed to get this working. I am getting errors on the requirements.txt, and nothing I do resolves them. I have the latest git clone from today as well.
× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [308 lines of output]
Failed to build llama-cpp-python hnswlib
ERROR: Could not build wheels for llama-cpp-python, hnswlib, which is required to install pyproject.toml-based projects
I get this problem when I try to install the requirements. How do I fix it?
The error message "Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. exit code: 1" means that building the "wheel" for the llama-cpp-python package failed, and the message also shows that a dependency called hnswlib failed to build. Here are some common causes and possible solutions:

Unsatisfied dependencies: the package may require additional dependencies that are not installed or have incompatible versions. Make sure you have the required versions, and try installing those dependencies manually before building the package.

Platform compatibility: the package may have platform-specific dependencies or requirements. Check the package or project documentation to confirm compatibility with your platform.

Incorrect environment configuration: ensure your development environment is properly configured, including the correct Python version, development tools such as a C++ compiler, and any required environment variables.

Internal package issues: there may be problems in the package itself that require fixes or updates. Search the project's repository for more information, or report the issue to the package's developers.

If the error message doesn't give you enough to go on, consult the official documentation, the project repository, or community resources, or reach out to the developers for support. I got this from ChatGPT 3.5 Turbo :) lol
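Building on that checklist, here's a rough sketch for verifying the usual build prerequisites before retrying the install (exact requirements vary by llama-cpp-python version and platform, so treat this as a starting point, not a definitive fix):

```shell
# a working Python, CMake, and C/C++ compiler are the usual
# prerequisites for building llama-cpp-python from source
python3 --version
cmake --version || echo "cmake not found - install it first"
cc --version || echo "no C compiler found - install build tools"
# once the tools are in place, retry with a clean cache so pip
# doesn't reuse a half-built wheel:
# pip install --no-cache-dir llama-cpp-python
```

On Windows, the "C compiler" step is where the Visual Studio Build Tools mentioned in other comments come in.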
@@LiamOttley I think there was an addition to the code to enable this. Now it's: from langchain.document_loaders import TextLoader, PDFMinerLoader, CSVLoader. In any case, I added PyPDFLoader.