Not sure why everyone is commenting negatively on the narrator. Installing privateGPT on Windows is difficult. I tried and got many errors. In this video he explains clearly what errors we may get and how to resolve them. He even uninstalled Chocolatey and reinstalled it so we could see how to do that. I like the funny narration. I don't know why people can't appreciate genuine effort.
@@Tubby-oq3uu I like the video very much, and the audio is great and inspiring. Can a chatbot be installed in the Python 🐍 console 🖥 of a program like ParaView? ParaView is software for displaying 3D models, and it includes a Python 🐍 console for running scripts for repetitive tasks.
Awesome tutorial, grandpa... After going through a number of tutorials I was literally about to give up, but you saved me. Now I have a place to come if nothing works... 😊
Thank you so much. I tried other tutorials, but this is the only one that has worked so far. Installing this is not an easy task, since the steps are scattered all over the internet, but you made it easier.
Simply liking the video wasn't enough. I had to post a comment and also subscribe. Thank you for showing the errors and how to mitigate them. Other tutorials skip the error side of the work to show how professional they are, and then ordinary people run into errors. The process is not as fluid as they make it look. But you made my problems go away. THANK YOU
Thank you so much for your kind words! I'm glad the troubleshooting section was helpful. It's important to me to show the real process, including the challenges.
I was looking for this kind of service to process my text locally for ages, and after I succeeded in installing and running the program I was so, so happy that I came and hit subscribe again and again and again... forgot that I was already subscribed 🤣. YouTube should put this video on the front page! People, including myself, suffered a lot and couldn't figure out how to set up the latest privateGPT on Windows due to the many complexities in the process. Thank you soooooooooooooooooooooooo much! Well, you know what, I might just download this video, it's a treasure!
Crazy voice, but your tutorial seems to be the only one on the internet that works correctly, with no need to hunt for additional installation options apart from downloading Anaconda at the beginning. Thank you very much! Maybe you know: can I enable AMD support with 512 MB of VRAM?
Thank you for the kind words! Regarding AMD support, it can be tricky. You might need to use a CPU-only version or explore alternatives like WSL2 on Windows.
make not installed. An error occurred during installation: Unable to obtain lock file access on 'C:\ProgramData\chocolatey\lib\995c915eb7cf3c8b25f2235e513ef8ca0c75c3e7' for operations on 'C:\ProgramData\chocolatey\lib\make'. This may mean that a different user or administrator is holding this lock and that this process does not have permission to access it. If no other process is currently performing an operation on this file it may mean that an earlier NuGet process crashed and left an inaccessible lock file, in this case removing the file 'C:\ProgramData\chocolatey\lib\995c915eb7cf3c8b25f2235e513ef8ca0c75c3e7' will allow NuGet to continue.
This error suggests a file permission issue or a locked file. Try running the installation as an administrator. If that doesn't work, manually delete the lock file mentioned in the error message, then attempt the installation again.
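The manual cleanup described above can be sketched in Python. The path comes from the error message in the comment; `lock_path` is a parameter here so you can pass whatever path your own error reports (this is a sketch, not part of Chocolatey itself):

```python
from pathlib import Path

def remove_stale_choco_lock(lock_path):
    """Delete a leftover Chocolatey/NuGet lock file if it exists.

    Only do this after confirming no other choco/NuGet process is
    running, as the error message advises.
    """
    lock = Path(lock_path)
    if lock.is_file():
        lock.unlink()   # remove the stale lock file
        return True     # removed; retry `choco install make` afterwards
    return False        # nothing to remove

# Example with the path from the error message above:
# remove_stale_choco_lock(r"C:\ProgramData\chocolatey\lib\995c915eb7cf3c8b25f2235e513ef8ca0c75c3e7")
```

If the function returns True, re-run the Chocolatey install from an elevated prompt.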
privategpt is insane if it thinks I'm going to install conda, poetry, mingw, and all this total BS on my computer! If you really need all this crap, it's time to make it Docker compatible and just ship an image.
I understand your frustration. While these dependencies are necessary for the current setup, a Docker image could indeed simplify the process. I'll pass along the suggestion to the PrivateGPT developers.
It keeps saying ImportError: Local dependencies not found, install with `poetry install --extras llms-llama-cpp`. Even after running that line and doing `make run`, it complains with: ImportError: UI dependencies not found, install with `poetry install --extras ui`. And then if I install that and try `make run`, it goes back to the same first error: ImportError: Local dependencies not found, install with `poetry install --extras llms-llama-cpp`. It's a cycle. They seem to be uninstalling each other. What to do?!
That is interesting. Try running `poetry install --extras "llms-llama-cpp ui"` to install both sets of extras in one command and see if that resolves the cycle of conflicting installations.
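In a PowerShell session inside the repo, the combined install would look something like this (a sketch; the extras names are the ones quoted in the error messages above):

```powershell
# Install both extras in a single resolution pass so that
# one install doesn't remove the other's dependencies:
poetry install --extras "llms-llama-cpp ui"

# Then start the app:
make run
```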
What happens if you ask it to "create a Windows installer for privategpt"? Going through all of the rigmarole to install it when it is unclear how the end product will perform is enough to make me skip it. Just saying...
😂 so true. Creating a Windows installer for PrivateGPT would be a complex task due to its dependencies. For now, I recommend following the manual installation process to ensure proper setup and compatibility.
Thank you. Worked for me too, after some adjustments. Could you make a video on how to choose a model from Hugging Face and what the criteria for choosing one could be? There's nothing on YouTube about that yet.
@@bradcasper4823 Try running "poetry run pip install tomlkit" within your project environment. If that doesn't work, ensure you're using the latest version of Poetry and that your environment is activated.
I sympathize with your video, but it didn't work for me. I still get an error after using the prompt from the GitHub user. Could it be because I have an AMD GPU on Windows? I don't know.
I'm sorry to hear you're still encountering errors. AMD GPUs on Windows can be tricky with some AI projects. Consider trying a CPU-only setup or exploring alternatives like WSL2 for better compatibility.
Regarding language models, check the YAML settings file (settings.yaml in recent versions) in the project root for model settings. As for external queries, PrivateGPT typically runs as a local server; check the project documentation for available API endpoints.
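As a rough illustration, the model settings in recent PrivateGPT versions look roughly like the fragment below. This is a hedged sketch: field names have changed across versions, so treat it as the general shape rather than exact keys, and check the settings file shipped with your checkout.

```yaml
# settings.yaml (fragment) -- illustrative only
llm:
  mode: local
local:
  llm_hf_repo_id: TheBloke/Mistral-7B-Instruct-v0.1-GGUF
  llm_hf_model_file: mistral-7b-instruct-v0.1.Q4_K_M.gguf
  embedding_hf_model_name: BAAI/bge-small-en-v1.5
```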
When I run `poetry run python scripts/setup`, I now get this error: "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. Traceback (most recent call last): File "D:\Windows\LLM\privateGPT\scripts\setup", line 8, in from private_gpt.paths import models_path, models_cache_path ModuleNotFoundError: No module named 'private_gpt'". Even after installing torch, I still get the "No module named 'private_gpt'" error.
Visual Studio isn't always required, but the C++ build tools it provides are often necessary. You can try installing just the build tools if you prefer.
Haha, yeah, I understand the frustration. The complexity is due to the project's dependencies and customization options. Hopefully, future versions will simplify the process.
I understand your frustration. The complexity often comes from the project's dependencies and customization options. Hopefully, future versions will simplify the process.
Make sure you're in the correct directory where the pyproject.toml file is located. Use 'cd' to navigate to the project root before running Poetry commands.
Building wheels can be tricky. Make sure you have the latest C++ build tools installed. If issues persist, try using a pre-built wheel, or consider using a CPU-only version. Also, recently when I did an install using a pre-built wheel for llama-cpp-python, I installed the matching version of the CUDA toolkit. For example, with CUDA 12.1 installed, try: pip install --no-cache-dir --force-reinstall llama-cpp-python numpy==1.25.2 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
That's unusual. Make sure you have the latest CUDA drivers installed. Also, check if there are any specific GPU settings in the PrivateGPT config that need adjustment.
The CPU run worked, but I'm facing trouble with the following command for running with GPU: $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python numpy==1.26.0. I ran it in the Anaconda PowerShell after cd-ing to the repo and activating the environment, but I get an error when it's building wheels: ERROR: Failed building wheel for llama-cpp-python. Failed to build llama-cpp-python. ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
Are you running from PowerShell or Command Prompt? Have you installed the CUDA toolkit? Maybe try running the commands separately, with one pip install per package. If it still fails, post it as an issue on the project's GitHub Issues page.
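Splitting that one-liner into separate steps would look something like this in PowerShell (same flags as in the command above; this is a sketch, not an official recipe):

```powershell
# 1. Tell the llama-cpp-python build to enable cuBLAS:
$env:CMAKE_ARGS = '-DLLAMA_CUBLAS=on'

# 2. Rebuild llama-cpp-python on its own so any build error
#    is clearly attributable to it:
poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python

# 3. Pin numpy in a separate step:
poetry run pip install numpy==1.26.0
```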
For installing multilingual-e5-base, you can use Hugging Face's transformers library. Run "pip install transformers" and then load the model by its Hugging Face identifier (intfloat/multilingual-e5-base).
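e5-style embedding models are typically used with a "query: "/"passage: " prefix on the input text and average pooling over the token embeddings. The pooling step is the part that most often trips people up, so here it is sketched with dummy numbers; numpy stands in for the model's real output, and this is not the transformers API itself:

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padding positions.

    token_embeddings: (seq_len, hidden) array from the model
    attention_mask:   (seq_len,) array of 1s (real tokens) and 0s (padding)
    """
    mask = attention_mask[:, None].astype(float)    # (seq_len, 1)
    summed = (token_embeddings * mask).sum(axis=0)  # sum real tokens only
    counts = mask.sum()                             # number of real tokens
    return summed / counts

# Dummy example: 3 tokens (the last one is padding), hidden size 2
emb = np.array([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]])
mask = np.array([1, 1, 0])
print(mean_pool(emb, mask))  # averages only the first two rows -> [2. 3.]
```

In real use, `token_embeddings` would be the model's last hidden state for one sentence and `attention_mask` would come from the tokenizer.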
For the CMake config, ensure CMake is properly installed and added to your system PATH. For the module errors, make sure you're in the correct directory and that all dependencies are properly installed.
Some latency is normal, especially during the first run or with larger models. Performance can vary based on your hardware and the specific model being used.
From what I understand, running PrivateGPT on AMD GPUs can be challenging. You might need to use a CPU-only version or explore alternatives like using WSL2 on Windows.
The installation process is far too long and convoluted. I think GPT4all had a quicker install process. PS: she clearly wanted to be on the other side of the ocean after those jokes haha
GPT4all is for the maker only, not for all. GPT4all is just a stupid tool that never helps with the docs you provide. It keeps skipping the provided docs and starts making up random answers, and most of the time it doesn't load the docs at all. Settings show that you are using the GPU, but all the work is done by the CPU. Didn't like it at all.
PrivateGPT uses local processing, so your data should remain private. However, always review the project's privacy policy and settings for the most up-to-date information.
I apologize for any perceived slowness. AI model performance can vary based on hardware and settings. I'll try to provide more context on expected performance in future videos.