
PrivateGPT 4.0 Windows Install Guide (Chat to Docs) Ollama & Mistral LLM Support! 

StuffAboutStuff
7K subscribers
12K views

🚀 PrivateGPT Latest Version (0.4.0) Setup Guide Video April 2024 | AI Document Ingestion & Graphical Chat - Windows Install Guide 🤖 PrivateGPT using the Ollama backend and Mistral LLM! Easy Setup on Windows!
Welcome to the April 2024 version 0.4.0 of PrivateGPT!
🌐 New Features Overview.
In this version the complexity of setting up GPU support has been removed: you can now choose to integrate this version of PrivateGPT with Ollama and have it do all the heavy lifting! This version still features the web frontend!
🔨 Building PrivateGPT on Windows is now Easy!
PrivateGPT using Ollama, Windows install instructions. Get PrivateGPT and Ollama working on Windows quickly! Use PrivateGPT for safe, secure, offline file ingestion, and chat with your docs!
👍 Like, Share, Subscribe!
If you found this guide helpful, give it a thumbs up, share it with your friends, and don't forget to subscribe for more tech tutorials and AI insights. Stay tuned for future updates on StuffAboutStuff4045!
🔗 Links.
Ollama.
ollama.com/
PrivateGPT Install Instructions used.
docs.privategpt.dev/installat...
PrivateGPT Github Project Page.
github.com/zylon-ai/private-gpt
Make for Windows.
gnuwin32.sourceforge.net/pack...
📌 Timestamps
0:00 Introduction to how PrivateGPT Evolved the 3 Main Versions
2:25 Setup Ollama for PrivateGPT
3:37 Private GPT Required SW v0.4.0 & System Setup
6:28 Setup PrivateGPT on Windows
10:15 Testing PrivateGPT and Ollama
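
For quick reference, the install flow covered in the video can be sketched as the rough command sequence below. This is my condensed reconstruction from the video's steps and the commands quoted in the comments, not a verbatim transcript; run it in an Anaconda PowerShell prompt (admin mode), and expect exact paths and versions to differ on your machine.

```powershell
# 1. Models for the Ollama backend (PrivateGPT's documented defaults)
ollama pull mistral
ollama pull nomic-embed-text
ollama serve

# 2. Python 3.11.x environment (3.12 fails the project's version check)
conda create -n privategpt python=3.11
conda activate privategpt

# 3. Fetch the project and install dependencies with Poetry
git clone https://github.com/zylon-ai/private-gpt
cd private-gpt
pip install pipx
pipx install poetry
poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"

# 4. Select the ollama profile and start the web UI (needs Make for Windows)
$env:PGPT_PROFILES="ollama"
make run
```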

Science

Published: 5 Jul 2024

Comments: 219
@Offsuit72 · 3 months ago
I cannot thank you enough. I've been struggling for several days on this, it turns out I was using outdated info and half installing the wrong versions of things. You made things so clear and I'm thrilled to be successful in this!
@stuffaboutstuff4045 · 3 months ago
You're welcome! I am glad to hear the video assisted! Thanks so much for reaching out.
@shailmatrix · 6 days ago
Thanks for creating a clear and concise video to understand the process of running Private GPT.
@stuffaboutstuff4045 · 3 days ago
Glad it was helpful! Thanks for the feedback, much appreciated.
@radudamian3473 · 3 months ago
Thank you. Liked and subscribed. I most appreciate your patience in giving step-by-step, easy to understand and follow instructions. Helped me, a total noob... so hats off
@stuffaboutstuff4045 · 3 months ago
Thanks for the sub and the great feedback. Appreciated!
@christopherpenny6216 · 15 days ago
Thank you sir. This is incredibly clear. Many others make assumptions about what I know already - you covered everything. Great guide!
@stuffaboutstuff4045 · 15 days ago
You're very welcome! Thanks for the feedback.
@JiuJitsuTech · 2 months ago
Thank you for this vid! I watched several others and this was the most straightforward approach. Super helpful!!
@stuffaboutstuff4045 · 2 months ago
Thanks for the feedback, glad the video was helpful.
@likanella · 23 days ago
Thank you, thank you so much. There were no detailed instructions anywhere. Everything worked out! You're great!
@stuffaboutstuff4045 · 20 days ago
You're welcome! Glad to hear you are up and running. Thanks for the feedback!
@DeTruthful · 2 months ago
Thanks man, did a few other tutorials and couldn't figure it out. This made it so simple. Subscribed!
@stuffaboutstuff4045 · 2 months ago
Thanks for the sub! Glad the video helped out.
@chahrah.5209 · 2 months ago
Huge thanks for the video, AND for taking the time to help solve problems in the comments, it was just as helpful. Definitely subscribing.
@stuffaboutstuff4045 · 2 months ago
You're welcome! Thanks for the sub.
@GrahamJefferson · 1 month ago
Thank you for taking the time to make this video, it was just what I was looking for. 😎
@stuffaboutstuff4045 · 1 month ago
Glad it was helpful! Thanks for taking the time to reach out.
@curtisdevault6427 · 22 days ago
Thank you for this! I've been struggling with this for a few days now, you provided up to date and clear instructions that made it super simple!
@stuffaboutstuff4045 · 20 days ago
Great to hear! Glad you are up and running. Thanks for the feedback.
@makin1408 · 13 days ago
Thank you so much! Finally got it working after trying a bunch of tutorials. Yours really did the trick, super helpful!
@stuffaboutstuff4045 · 12 days ago
You're welcome! Glad the video got you up and running.
@Abhiram00 · 9 days ago
it worked like a charm. thank you so much
@stuffaboutstuff4045 · 9 days ago
You're welcome! Thanks for the feedback. Glad you are up and running.
@ilieschamkar6767 · 2 months ago
It worked like a charm, thanks!
@stuffaboutstuff4045 · 2 months ago
Great to hear! Thanks for the feedback, much appreciated.
@OmerAbdalla · 1 month ago
This is a great installation guide. Precise and clear steps. I made one mistake when I tried to set up the environment variable in the Anaconda Command Prompt instead of the PowerShell prompt, and once I fixed my mistake I was able to complete the configuration successfully. Thank you very much.
@stuffaboutstuff4045 · 1 month ago
You're welcome! Thanks for reaching out. Glad the video helped.
@RyanHokie · 2 months ago
Thank you for your detailed tutorial
@stuffaboutstuff4045 · 2 months ago
You’re welcome 😊 Glad the video assisted. Thank you so much for the feedback.
@bananacomputer9351 · 2 months ago
After two hours of research, I started over with your tutorial and finished in 10 minutes. Thank you, thank you!!!
@stuffaboutstuff4045 · 2 months ago
Glad it helped! Thanks for the feedback.
@Matthew-Peterson · 2 months ago
Brilliant Guide. Subscribed.
@stuffaboutstuff4045 · 2 months ago
Welcome aboard! Thanks for the sub and feedback.
@Lucas-iv6ld · 2 months ago
It worked, thanks!
@stuffaboutstuff4045 · 2 months ago
Glad it helped! Good to hear you are up and running.
@aysberg9403 · 3 months ago
excellent explanation, thank you very much
@stuffaboutstuff4045 · 3 months ago
Pleasure! Glad the video assisted. Thanks for the feedback!
@cinchstik · 3 months ago
Got it to run on VirtualBox. Works great! Thanks
@stuffaboutstuff4045 · 3 months ago
Great to hear! Thanks for the feedback.
@firewithcode · 7 days ago
Thank you very much. It is working for me.
@stuffaboutstuff4045 · 3 days ago
You're welcome! Glad the video assisted and you are up and running. Thanks for reaching out.
@firewithcode · 2 days ago
@@stuffaboutstuff4045 Could you please also make a video about how to add GPU support to PrivateGPT?
@rchatterjee48 · 1 month ago
Thank you very much, it works
@stuffaboutstuff4045 · 1 month ago
You're welcome! Thanks for making contact.
@Isak_Isak · 5 days ago
Great tutorial, but I have a problem when I want to search or query the file I have imported. I get the error "Initial token count exceeds token limit". I have already increased the limit but nothing changed. How can I solve this error?
@likanella · 8 hours ago
Hey there! I was wondering if you could help me out with something. I'd love to create a tutorial on adding Nvidia GPU support, but I can't seem to find any clear, helpful guides on the topic. I've tried a few times, but I'm still a bit lost. Would you be able to help me out? Thanks so much!
@maxxxxam00 · 2 months ago
Excellent video, very clear step-by-step guide. Do you have, or could you make, a docker compose file that does all the steps in a docker environment?
@stuffaboutstuff4045 · 2 months ago
Hi, thank you so much for the feedback, let me look into it and I will revert soon! Thanks.
@Quicksilver87878787 · 29 days ago
Thanks! Is there any specific reason why you are using Conda as opposed to virtualenv?
@stuffaboutstuff4045 · 20 days ago
Hi, I use Anaconda for most of my AI environments when working on Windows. I find it easy to work with and install required SW etc. Thanks for reaching out.
@rummankhan5499 · 1 month ago
Awesome! Best tutorial ever... can you please make a video on web deploy/upload of local/PrivateGPT... without OpenAI (if that's doable)
@stuffaboutstuff4045 · 1 month ago
Hi, thank you for the feedback! Noted on the video idea. Glad you are up and running.
@bobsteave1236 · 11 days ago
Wow, got it working with both your 2.0 and 4.0 guides. Thank you forever! But now I want to change the model that is in the UI. How do I do this? The whole reason I did this was to use other models with Ollama... and the guide doesn't show this
@stuffaboutstuff4045 · 9 days ago
Hi, glad you are up and running. You can change the Ollama model: download and install the model you want to use, and change your config files. Check the Ollama example shown at the link below. docs.privategpt.dev/manual/advanced-setup/llm-backends
@chjpiu · 2 months ago
Thanks a lot. Please let me know how to change the LLM model in PrivateGPT? For me, the default model is Mistral.
@stuffaboutstuff4045 · 2 months ago
Hi, sorry for the late reply. You can check out this page to change your LLM. Let me know if you came right with this. Thanks for reaching out! 🔗docs.privategpt.dev/manual/advanced-setup/llm-backends🔗
@erxvlog · 1 month ago
This was excellent. One issue that did come up was uploading PDFs... there was an error related to "nomic". I signed up for Nomic and installed it. PDFs seem to be working now.
@stuffaboutstuff4045 · 1 month ago
Thanks for reaching out. Glad to hear you are up and running.
@jcpamart83 · 9 days ago
Yesterday I tried to install PrivateGPT with the latest versions, but that went badly. Now I have installed everything with the right versions, but during the poetry installation it answers me this:
No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe'
'poetry' already seems to be installed. Not modifying existing installation in 'C:\Users\MYPATH\pipx\venvs\poetry'. Pass '--force' to force installation.
I forced it, but anyway, Miniconda is still the old installation. What can I do now???? Thanks for your help
@abhiudaychandra · 2 months ago
Hi. Thanks for the great video, but uploading even just one document and answering is so slow that I just cannot use it any further. Could you please tell me how to uninstall PrivateGPT? The other applications I can of course uninstall, but is there some command I should enter to remove files?
@stuffaboutstuff4045 · 2 months ago
Hi, yes it's a bit slower against a local LLM, dependent on the GPU available in the machine. Did you try Open AI or one of the online providers if you want it super fast? If confidentiality is not your main concern, maybe give it a go. If you want to remove it, just uninstall all the SW and delete the project folder you built PrivateGPT in, and you should be fine. Thanks for reaching out.
@feliphefaleiros9540 · 1 month ago
Very well explained, thank you for the videos. In every version you explained, you showed it step by step. You are awesome!
@stuffaboutstuff4045 · 9 days ago
Thanks for the feedback. Glad you are up and running and the video assisted.
@tarandalinux8323 · 28 days ago
Thank you for the great video. I'm at 9:48 and the command $env:PGPT_PROFILES="ollama" gives me an error: "The filename, directory name, or volume label syntax is incorrect." (privategpt) C:\gpt\private-gpt>$env:PGPT_PROFILES="ollama" (I don't get the colors you get)
@stuffaboutstuff4045 · 20 days ago
Hi, can you confirm you are running this in your Anaconda PowerShell terminal? Check the steps I use from about 9:20 in the video. Let me know if you are up and running.
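A likely cause of the error above (my note, not from the video): the $env: syntax is PowerShell-only, so running it in a plain cmd.exe window fails with exactly that message. A quick comparison:

```powershell
# PowerShell (what the video uses -- works in the Anaconda PowerShell Prompt):
$env:PGPT_PROFILES="ollama"

# In a plain cmd.exe / Anaconda Prompt the equivalent would be:
#   set PGPT_PROFILES=ollama
# Running the $env: form inside cmd.exe produces the
# "filename, directory name, or volume label syntax is incorrect" error.
```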
@Whoisthelearner · 1 month ago
Great thanks for the awesome video. I wonder whether you know of a similar setup for the new Llama 3 LLM? If yes, it would be great if you could make a new video about it!!!! Great thanks!
@stuffaboutstuff4045 · 1 month ago
Hi, sure you can. You can install llama3 on Ollama. You would need to change the config files. The link below should assist until I can update this video. Thanks for the feedback and the video idea. docs.privategpt.dev/manual/advanced-setup/llm-backends
@Whoisthelearner · 1 month ago
@@stuffaboutstuff4045 Great thanks for the prompt reply and the link. Looking forward to your new video as well!! You make it very easy for a beginner like me! Really appreciate your work
@Whoisthelearner · 1 month ago
@@stuffaboutstuff4045 If you don't mind, allow me to ask a question: I am planning to adopt the Ollama approach, but I don't know at what part of the video I should run the command PGPT_PROFILES=ollama make run. Great thanks!
@creamonmynutella2476 · 2 months ago
Is there a way to make this start automatically when the system is powered on?
@stuffaboutstuff4045 · 2 months ago
Sure, it's possible with PowerShell scripts. Let me check it out and revert.
@shashankshukla6691 · 3 months ago
Thank you, but how can we make use of an Nvidia GPU if we have one on our device? I have an Nvidia T600.
@stuffaboutstuff4045 · 2 months ago
Hi, if you build with Ollama it will offload to the GPU automatically (Nvidia or AMD). From what I have seen it does not hammer it to its full potential; utilization will get better with each evolution of the project. Let me know if you got the GPU to kick in when offloading.
@farfaouimohamedamine3288 · 1 month ago
Hi, thank you for your tutorial. I have followed the steps as you did, but I get this error when I try to install the dependencies of PrivateGPT:
(privategpt) C:\pgpt\private-gpt>poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant"
No Python at '"C:\Program Files\Python312\python.exe'
NOTE: I did not create the virtual environment inside the system32 directory; I created it in the pgpt directory
@stuffaboutstuff4045 · 1 month ago
Hi, did you get this resolved? Yes, I also created a pgpt folder in the root of the drive. Just to confirm, are you running Python 3.11.xx? Let me know if you came right with this.
@anjeel08 · 2 months ago
This is simply superb. I could install it and run it with your clear step-by-step instructions. Thank you so very much. However, I do notice that uploading the documents to chat with my own data takes a long time. Is there a way to tweak this and make uploading documents faster? I am only using one Word doc of 30 pages with mainly text, and one PDF of 88 pages with text and images. The Word doc was uploaded in 10 minutes, but the PDF runs endlessly. I would appreciate it if you could make a video on how to use OpenAI or one of the online providers to get speed (when confidentiality is not important). Thank you in advance for your tip.
@firatguven6592 · 2 months ago
I also wrote a comment complaining about the same issue. I followed his version 2.0 guide as well, and as if uploading wasn't slow enough then, in version 2.0 the upload was at least considerably faster. During upload I had 80% load on my 32-thread CPU, but now in 4.0 the CPU idles at 5%, which explains the slower upload: the parsing nodes are generating the embeddings much more slowly. Since I have more than 10,000 PDF files, it is unacceptable to wait endlessly during upload. I have now been waiting 40 minutes for just 2 huge files of around 3,000 pages, which took the old version only 20 minutes in total. I have no idea how long it will take, and we are talking about only 2 files; the other 9,998 files will not be uploaded in a year if this problem is not solved. I am disappointed to lose time with 4.0.
@stuffaboutstuff4045 · 2 months ago
Hi, thanks for reaching out. The new version allows you to use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most, and it's now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP, you can still enable this as a profile. If you really want high-speed processing, you can send it to Open AI or one of the Open AI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right. docs.privategpt.dev/manual/advanced-setup/llm-backends
@workmail6406 · 2 months ago
Hello, I have managed to follow the instructions up until 9:50 for running the environment with make run. However, when I initiate the command in an administrator Anaconda PowerShell, after navigating to my private-gpt folder, I encounter the error "The term 'make' is not recognized as the name of a cmdlet, function". I have no idea how I can get Anaconda PowerShell to recognize the command on my Windows PC. What can I do to finally start the PrivateGPT server?
@workmail6406 · 2 months ago
Now that I installed Git Bash from the Make for Windows website, it works. However, I now run into this error when running make run:
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\gpt\private-gpt\private_gpt\__main__.py", line 5, in <module>
    from private_gpt.main import app
  File "C:\gpt\private-gpt\private_gpt\main.py", line 4, in <module>
    from private_gpt.launcher import create_app
  File "C:\gpt\private-gpt\private_gpt\launcher.py", line 12, in <module>
    from private_gpt.server.chat.chat_router import chat_router
  File "C:\gpt\private-gpt\private_gpt\server\chat\chat_router.py", line 7, in <module>
    from private_gpt.open_ai.openai_models import (
  File "C:\gpt\private-gpt\private_gpt\open_ai\openai_models.py", line 9, in <module>
    from private_gpt.server.chunks.chunks_service import Chunk
  File "C:\gpt\private-gpt\private_gpt\server\chunks\chunks_service.py", line 10, in <module>
    from private_gpt.components.llm.llm_component import LLMComponent
  File "C:\gpt\private-gpt\private_gpt\components\llm\llm_component.py", line 9, in <module>
    from transformers import AutoTokenizer  # type: ignore
  File "C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py", line 26, in <module>
    from . import dependency_versions_check
ImportError: cannot import name 'dependency_versions_check' from partially initialized module 'transformers' (most likely due to a circular import) (C:\Users\dmm\anaconda3\envs\privategpt\Lib\site-packages\transformers\__init__.py)
make: *** [run] Error 1
Any idea how I can resolve this?
@stuffaboutstuff4045 · 2 months ago
Hi, can you confirm you loaded all the required SW, including all the Make steps I perform from 3:35 in the video? Let me know if you were able to resolve this. Also confirm you are running everything in the same terminals, and in admin mode where needed. Make sure you use a Python within 3.11.xx in your Anaconda environment.
@nunomlucio5789 · 2 months ago
In terms of speed, I feel that the previous version is way faster than this one using Ollama (the previous version meaning using CUDA and so on), in terms of answering and even loading documents.
@stuffaboutstuff4045 · 2 months ago
Agreed, it just increases the build difficulty a bit. 👨‍💻 Thanks for reaching out.
@lherediav · 3 months ago
For some reason Anaconda doesn't recognize the CONDA command on my end, and doesn't show (base) at the beginning of the Anaconda prompt. Any solutions? I am stuck at the 7:46 part.
@lherediav · 3 months ago
When I open the Anaconda prompt it shows this: Failed to create temp directory "C:\Users\Neo Samurai\AppData\Local\Temp\conda-\"
@thehuskylovers1432 · 3 months ago
Same issue here, I cannot get past this in either v2 or this version.
@stuffaboutstuff4045 · 2 months ago
Hi, just checking if you came right with this? When you open your Anaconda Prompt or Anaconda PowerShell prompt, it must open, load, and show (base). Is this not showing in either the Anaconda Prompt or the Anaconda PowerShell prompt? Did you try opening both in admin mode? It seems there is a problem with the Anaconda install on the machine.
@quandoomniflunkusmoritati9359 · 13 days ago
Please address the issue of Hugging Face tokens and login for the install script. I have been all over the net and tried different solutions and script mods, including the Hugging Face CLI. However, I have not been able to install a working copy yet (yes, I did accept access to the Mistral repo on Hugging Face too). The Python install script fails on Mistral and on the transformers and tokenizer. It shows a message for a gated repo, but I have authenticated on the CLI and tried passing the token in the scripts. Still failing... HELP!
@stuffaboutstuff4045 · 9 days ago
Hi, are you using Ollama as backend? If not follow the steps in the Ollama video on the channel and just hook up your install to that. Otherwise test this on a hosted LLM like OpenAI. You should not struggle if you follow the steps exactly in the 2 videos. Let me know if you are up and running.
@drSchnegger · 2 months ago
If I enter a prompt, I get an error: "Collection make_this_parameterizable_per_api_call not found". When I try another prompt, I get the error: "'NoneType' object has no attribute 'split'".
@stuffaboutstuff4045 · 2 months ago
Hi, from what I can gather you will get this error if you query documents but no documents are loaded. Can you ensure you uploaded documents into PrivateGPT and selected them prior to prompting? Let me know if you come right with this. Thanks for reaching out! If the problem persists, check out these links: github.com/zylon-ai/private-gpt/issues/1334 , github.com/zylon-ai/private-gpt/issues/1566
@JiuJitsuTech · 2 months ago
From the Git issues page, this resolved the issue for me: "This error occurs when using the Query Docs feature with no documents ingested. After the error occurs the first time, switching back to LLM Chat does not resolve the error -- the model needs to be restarted." Enter Ctrl-C in the PowerShell prompt to stop the server and of course 'make run' to restart.
@pranavmalhotra7635 · 2 months ago
ERROR: Could not find a version that satisfies the requirement pipx (from versions: none)
ERROR: No matching distribution found for pipx
I am receiving this error and hence I am unable to proceed with the installation. Any tips?
@stuffaboutstuff4045 · 2 months ago
Hi, can you confirm you are installing pipx in a normal command prompt in admin mode? Just to check: did you follow the steps from 6:30 in the video onwards? If it's still not working, can you confirm you have Python 3.11.xx installed with the pip package that ships with it? Let me know if you came right with this. Thanks for reaching out.
@user-bg7zh7ub2h · 2 months ago
I have a question: how do I run it again if my system restarts? What steps or commands do I have to run again? Can we set it to autostart when my system starts?
@stuffaboutstuff4045 · 2 months ago
Hi, you can just run the Anaconda PowerShell prompt again and activate the environment you created. Make sure you are in the project folder, set the env variable you want to use, and execute make run. Check the steps performed in the Anaconda PowerShell from 9:24 in the video. Let me know if you are up and running. Thanks for reaching out.
@user-vr5lg6mv2i · 2 months ago
Will Python 3.12 do the job, or do I specifically need 3.11?
@stuffaboutstuff4045 · 2 months ago
Hi, You would need Python 3.11.xx. The code currently checks if the installed Python version is in that range. I got build errors with 3.12 installed in the environment. Let me know if you are up and running.
@guille8237 · 2 months ago
I got it running, but I want to change the model to DeepSeek Coder. How do I do it? ...never mind
@stuffaboutstuff4045 · 2 months ago
Great, glad you are up and running.
@JanaFourie-cm5eh · 1 month ago
Hi, when querying files, only the sources appear after it stops running (file ingestion seems to work fine). How can I fix this? Or is it still running but extremely slowly...?
@stuffaboutstuff4045 · 1 month ago
Hi, did you come right with this? There are some good comments on this video about speeding up the install, including working with large docs that slow down the system. Check the link below; maybe it can assist. Also check the terminal when this happens for any hints on what might be hanging. docs.privategpt.dev/manual/document-management/ingestion#ingestion-speed
@JanaFourie-cm5eh · 1 month ago
@@stuffaboutstuff4045 Thanks, how can I contact you? I noticed you are South African from the accent!
@vaibhavdivakar4653 · 2 months ago
I followed the steps, and for some reason when I run the make run command it gives me "no module called uvicorn". I installed the module using the pip command and it still says the same error... :(
@stuffaboutstuff4045 · 2 months ago
Hi, does it not launch at all and stop with this error? It seems it's the web server that needs to start. When you launch it, I know it can display a uvicorn.error message, but when you open the browser you will see the site up and everything works. If you get this: uvicorn.error - Uvicorn running on http://0.0.0.0:8001, then it works. But from the comment it sounds like you have the whole module missing. PrivateGPT is a complicated build, but the steps in the video are valid; I would suggest retracing the required SW and versions (Python etc.) and the setup steps, just to make double sure no steps were missed. I also find more success running the terminals in admin mode to avoid issues. Let me know if you came right with this, and thanks for making contact.
@SiddharthShukla987 · 2 months ago
I also faced the same issue because I forgot to activate the env. Check yours too.
@travisswiger9213 · 2 months ago
How do I restart this? I've got it running a few times, but if I restart I have a hell of a time getting it working again. Can I make a bat file somehow?
@stuffaboutstuff4045 · 2 months ago
Hi, when you launch it in the Anaconda PowerShell prompt, just go back to that terminal when done and press Ctrl+C; this will shut it down. You can save the startup profile as a PowerShell script, or as a bat file if you use cmd. Thanks for making contact; let me know if you came right with this.
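The bat-file idea mentioned above could look roughly like the sketch below. This is an untested sketch of my own, not from the video: the env name, folder, and profile are the ones used in the video, and it assumes conda has been initialised for cmd.exe and that Make for Windows is on the PATH.

```bat
:: start-privategpt.bat -- hypothetical startup script (adjust paths/names)
call conda activate privategpt
cd /d C:\gpt\private-gpt
set PGPT_PROFILES=ollama
make run
```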
@Stealthy_Sloth · 2 months ago
Please do one for Llama 3.
@stuffaboutstuff4045 · 2 months ago
Thanks for the idea. If you want you can try and get it up and running. You can install the 8b model if you use Ollama. (ollama run llama3:8b) The link below has the example configs that would need to change. Thanks for reaching out and for the feedback, much appreciated. docs.privategpt.dev/manual/advanced-setup/llm-backends
@mohith-qm9vf · 1 month ago
Hi, will this installation work on Ubuntu? If not, what changes do I need to make??? Thanks a lot
@stuffaboutstuff4045 · 1 month ago
Hi, just checking if you got this built on Ubuntu. If not you can follow the steps for Linux using the link below. Thanks for reaching out. docs.privategpt.dev/installation/getting-started/installation
@mohith-qm9vf · 1 month ago
@@stuffaboutstuff4045 thanks a lot!!
@SuffeteIfriqi · 1 month ago
Such a great video, which in my case makes it even more frustrating because I'm literally stuck at the last step. It says: make: *** Keine Regel, um "run" zu erstellen. Schluss. This translates to: make: *** No rule to make target "run". Stop. Any idea what this might be caused by? I've restarted the entire process twice, no luck... Thank you so much.
@SuffeteIfriqi · 1 month ago
I suspect it might be caused by GNU Make's path, although I did include it in the env variables...
@stuffaboutstuff4045 · 1 month ago
Hi, did you manage to resolve this issue? Please check from about 9:15 into the video. Are you completing these steps in an admin Anaconda PowerShell, with the environment activated, and from the correct folder? Let me know if you came right. Thanks.
@dauwswinnen2721 · 1 month ago
I did everything but installed the wrong model. How can I change models after doing everything?
@stuffaboutstuff4045 · 1 month ago
Hi, if you are using Ollama you can just update the config file in your PrivateGPT folder to point to the model downloaded in Ollama. Having multiple models on Ollama is fine; I use my Ollama to feed numerous AI frontends with multiple LLMs running. Check the link below for the defaults. The default for the PrivateGPT settings-ollama.yaml is the Mistral 7B LLM (~4GB) with nomic-embed-text embeddings (~275MB). On your Ollama box, install the models to be used by running these commands in CMD:
ollama pull mistral
ollama pull nomic-embed-text
ollama serve
docs.privategpt.dev/installation/getting-started/installation
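For reference, the config change being described lives in settings-ollama.yaml. Below is a minimal sketch of the relevant fragment based on my reading of the PrivateGPT docs; field names may differ between versions, so verify against the linked documentation rather than treating this as authoritative.

```yaml
# Fragment of settings-ollama.yaml -- swap in whichever model you
# pulled with `ollama pull`; mistral / nomic-embed-text are the defaults.
ollama:
  llm_model: mistral
  embedding_model: nomic-embed-text
```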
@fishingbeard2124 · 1 month ago
Can I suggest that next time you make a video like this, you enlarge the window with the commands? 75% of your window is blank and the important text is small, so I think it would be helpful to have less blank space and larger text. Thanks
@stuffaboutstuff4045 · 1 month ago
Thanks for the input; agreed, I have started to zoom in on the command prompts in newer vids. Thanks for reaching out, and I hope the video helped.
@patrickdarbeau1301 · 3 months ago
Hello, I got the following error message when running the command poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant":
No module named 'build'
Can you help me? Thanks
@stuffaboutstuff4045 · 3 months ago
Hi, did you install all the required SW I install at the start?
@Matthew-Peterson · 2 months ago
Close both Anaconda prompts and restart the process. Don't rebuild your project though. GPT-4 says it's a connection issue when creating, and sometimes a computer restart sorts the issue. Worked for me.
@guille8237 · 2 months ago
Open your TOML file and update it with the correct build version, then update the lock file.
@OscarPremium-ql5hh · 28 days ago
How do I start it up again once I've finished all the steps in the video successfully? Just visit the browser domain again?
@stuffaboutstuff4045 · 20 days ago
Hi, you will have to activate your conda environment. Make sure you are in the project folder and launch Anaconda PowerShell again. Check the steps from 9:24 in the video. Let me know if you are up and running.
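As a sketch, that restart sequence looks like this (assuming the conda environment is named privategpt and the project sits in C:\pgpt\private-gpt, as in the video):

```shell
# In an admin Anaconda PowerShell prompt:
conda activate privategpt
cd C:\pgpt\private-gpt
$env:PGPT_PROFILES="ollama"   # select the Ollama profile again
make run                      # then open http://localhost:8001 in the browser
```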
@OscarPremium-ql5hh
@OscarPremium-ql5hh 19 days ago
@@stuffaboutstuff4045 Wow, thanks for your answer! Just amazing!
@Reality_Check_1984
@Reality_Check_1984 3 months ago
Looks like they released 0.5.0 today. If you install this now and look at the version it will be 0.5.0. All of your install instructions still work, as it wasn't a fundamental change like the last big update. They added pipeline ingestion, which I hope fixes the slow Ollama ingestion speed, but so far I still think llama-cpp is faster.
@Reality_Check_1984
@Reality_Check_1984 3 months ago
So I ran it overnight and Ollama is still not performing well with ingestion; it definitely under-utilizes the hardware. Right now a lot of the local LLM tools don't leverage the hardware well when it comes to ingestion, and that is an improvement I would like to see in general, not just in Ollama or PrivateGPT. The ability to ingest faster through better hardware utilization and improved processing, to store ingested files long term on the drive, and to query the drive and load relevant chunks into VRAM would significantly expand the depth and breadth of what these tools can be used for. VRAM is never going to offer enough, and constantly retraining models won't work either.
@stuffaboutstuff4045
@stuffaboutstuff4045 3 months ago
Hi, thanks for the update. Had a bit of a scare with an update becoming available moments after publishing this vid 😊. Thanks for the confirmation; I also checked, and the install instructions remain intact. Appreciate the feedback. PS: I totally agree with the performance comment.
@drmetroyt
@drmetroyt 23 days ago
Hope you could make this installable as a Docker container
@stuffaboutstuff4045
@stuffaboutstuff4045 9 days ago
Hi, thanks for the feedback. Let me look into this.
@The_Gamer_Boi_2000
@The_Gamer_Boi_2000 2 months ago
Whenever I try to install Poetry with pipx it gives me this error: "returned non-zero exit status 1."
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, just checking in: have you resolved this issue? I want to confirm you are following the steps I use in the video to install Poetry; please check from 6:28 in the video. I use a command prompt in admin mode to complete all those steps. From 7:36 we are back in Anaconda and Anaconda PowerShell prompts. Also confirm you are using Python 3.11.xx for the Anaconda environment, otherwise you will get a bunch of build errors and failures. Let me know, and thanks for reaching out.
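For reference, a minimal sketch of the Poetry-via-pipx install (run in an admin command prompt, assuming Python 3.11 is already on the PATH):

```shell
python -m pip install --user pipx   # install pipx itself
python -m pipx ensurepath           # put pipx's bin folder on PATH; open a new prompt afterwards
pipx install poetry                 # install Poetry into its own isolated environment
poetry --version                    # verify the install from the new prompt
```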
@The_Gamer_Boi_2000
@The_Gamer_Boi_2000 2 months ago
@@stuffaboutstuff4045 I'm pretty sure I was doing those steps, but I'm using a web UI now instead because it's easier.
@BetterEveryDay947
@BetterEveryDay947 2 months ago
Can you make a VS Code version?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, thanks for the idea. They release new versions so quickly; I will check how I can incorporate it in the next one. Thanks for reaching out.
@user-jw1mz4et1e
@user-jw1mz4et1e 2 months ago
I installed it and it works, but it is very, very slow to answer. Is it possible to speed it up?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, it is not the fastest with Ollama; the upside is it's relatively easy to get working. If confidentiality is not an issue, using the OpenAI profile will increase speed exponentially. You could also build this locally if you have a proper GPU, but expect a more complicated install. Thanks for reaching out.
@mrxtreme005
@mrxtreme005 3 months ago
20GB of space required?
@stuffaboutstuff4045
@stuffaboutstuff4045 3 months ago
Hi, yes if you load all the required software. This ensures you don't get errors if you build the other non-Ollama options.
@JeffreyMerilo
@JeffreyMerilo 1 month ago
Great video! Thank you so much! Got it to work with version 0.5. How can we increase the tokens? I get this error:
File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\chat_engine\context.py", line 204, in stream_chat
all_messages = prefix_messages + self._memory.get(
File "C:\ProgramData\miniconda3\envs\privategpt\Lib\site-packages\llama_index\core\memory\chat_memory_buffer.py", line 109, in get
raise ValueError("Initial token count exceeds token limit")
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, can you have a look at this post and see if it helps? Let me know if you come right. github.com/zylon-ai/private-gpt/issues/1701
@vichondriasmaquilang4477
@vichondriasmaquilang4477 1 month ago
So confused: what is the purpose of installing MS Visual Studio? You didn't use it.
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, the Visual Studio components are used in the background to compile and build the programs. Hope the video helped and your PrivateGPT is up and running.
@AstigsiPhilip
@AstigsiPhilip 1 month ago
Hi, can this PrivateGPT handle 70,000 PDF files?
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, I personally have not worked with massive datasets; I know some people in the comments have. You might want to check out the link below for bulk and batch ingestion. docs.privategpt.dev/manual/document-management/ingestion#bulk-local-ingestion
@FunkyZangel
@FunkyZangel 1 month ago
Can I do this all completely offline? I have a computer that has no access to the internet. I want to see if I can download everything onto a USB drive and then transfer it over to that computer. Can anyone help me, please?
@Whoisthelearner
@Whoisthelearner 1 month ago
I think you can once you have everything installed; at least that works for me.
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, as noted below, that's correct. Once installed, you can disconnect the machine if you have the LLM locally. Let me know if you come right.
@FunkyZangel
@FunkyZangel 1 month ago
@@stuffaboutstuff4045 Hi, thanks for the reply. I am struggling a little to understand this. Do I have to download a portable version of everything, or just a portable VSC? Meaning, if I want PrivateGPT to work on another machine from the thumb drive, do I just need to transfer the VSC files, or must I transfer everything, such as Git, Anaconda, Python, etc.?
@firatguven6592
@firatguven6592 2 months ago
Thank you very much, it works like your previous PrivateGPT 2.0 guide. But compared to 2.0 this one uploads files much more slowly, as if it wasn't slow enough already. With 2.0 all 32 threads of my CPU were under 80% load during the upload process; you could see it was doing something important from the load. But now the CPU load is only around 5%, which takes considerably more time; I guess the parsing nodes are now generating the embeddings much more slowly. This is unfortunately a deal breaker for me, since I have lots of huge PDF files that need to be uploaded. I cannot wait a week or more just for the upload. In the end, a 0.4.0 version should be an improvement, but I cannot see any improvements here. Can somebody list the real improvements, please, other than Ollama? That is not a real improvement for me, because version 2.0 also worked very well. I will switch back to 2.0 unless I can understand where the failure is.
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, thanks for reaching out. The new version lets you use numerous LLM backends. This video shows how to use Ollama just to make the install easier for most people, and it is now the recommended option. The new version can still be built exactly like the previous one; if you had better performance using a local GPU and LlamaCPP, you can still enable that as a profile. If you really want high-speed processing you can send it to OpenAI or one of the OpenAI-like options. Have a look at the backends you can enable for this version in the link below. Let me know if you come right. docs.privategpt.dev/manual/advanced-setup/llm-backends
@firatguven6592
@firatguven6592 2 months ago
@@stuffaboutstuff4045 Thanks for the advice. If I change anything in the backend it errors out, despite following the official manual and your explanation. If I set up both for Ollama then it works, but as mentioned the file upload is extremely slow. Now I found a solution by installing from scratch according to version 2.0 with llama-cpp and Hugging Face embeddings, where I changed the ingest_mode from simple to parallel; now it works much faster. There should be more options to increase the speed, by increasing the batch size or worker counts. Since they did not work before, I will not change and corrupt the installation, unless you can provide a manual on how to increase the embedding speed to the maximum, most probably with the help of the GPU like in chat. The GPU support in chat works well, but during embedding the GPU is not being used.
@firatguven6592
@firatguven6592 2 months ago
@@stuffaboutstuff4045 After changing to parallel, the CPU utilization is at 100%, and that explains the faster embedding. Since I have one of the fastest consumer CPUs, the result is now finally satisfying.
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
@@firatguven6592 Awesome, glad you are running at acceptable speeds.
@firatguven6592
@firatguven6592 2 months ago
@stuffaboutstuff4045 In addition to that, I could change some parameters in settings.yaml with the help of an LLM: batch size to 32 or 64, dimension from 384 to 512, device to cuda, and ingest_mode: parallel, which gave the biggest improvement. Now the embeddings are really fast. Thank you very much. I would also like to test the sagemaker mode once, since I could not get that mode working. I will try it again later.
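Those tweaks, sketched as a settings fragment. The key names below are illustrative (taken from the commenter's description, not verified against a specific PrivateGPT release), so check them against the settings.yaml shipped with your version before applying:

```yaml
# Illustrative only: verify key names against your version's settings.yaml
embedding:
  ingest_mode: parallel   # biggest reported improvement over the default
  batch_size: 64          # 32 or 64
  embed_dim: 512          # raised from 384
  device: cuda            # use the GPU for embedding where supported
```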
@alicelik77
@alicelik77 2 months ago
At 9:21 you opened a new Anaconda PowerShell prompt. Why did you need a new PowerShell prompt when you were already working in a prompt?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, look carefully: I am in a normal Anaconda prompt at that stage, and the next commands need to go into Anaconda PowerShell. 👨‍💻 Thanks for reaching out, hope the video helped.
@SirajSherief
@SirajSherief 2 months ago
Can we do this on an Ubuntu machine?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, yes you can. The packages and flow will be similar to the video, but obviously following the Linux steps. You can check out what's involved in building on Linux via the link below. Thanks for reaching out; let me know if you come right. 🔗docs.privategpt.dev/installation/getting-started/installation
@SirajSherief
@SirajSherief 2 месяца назад
Thanks for your kind response. But now I'm facing a new problem when I try to run the private_gpt module: "TypeError: BertModel.__init__() got an unexpected keyword argument 'safe_serialization'". Please tell me how to resolve this error.
@VaporFever
@VaporFever 2 months ago
How can I add Llama 3?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, if you are using Ollama you can install it and test it out; I am currently downloading it to test. The 8B model can be installed on Ollama using ollama run llama3:8b, or you can install the 70B model with ollama run llama3:70b. Let me know if you get it working.
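As a sketch (model tags as published in the Ollama library; to make PrivateGPT use the new model you would also point the model name in settings-ollama.yaml at it):

```shell
ollama pull llama3:8b   # the 8B model; use llama3:70b for the large one
ollama run llama3:8b    # quick interactive smoke test; type /bye to exit
```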
@hasancifci1423
@hasancifci1423 2 months ago
Thanks! Do NOT start with the newest version of Python; it is not supported. If you did, uninstall it. If you have a problem with pipx install poetry, delete the pipx folder.
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, agreed, it has to be Python 3.11.xx.
@noneofbusiness9764
@noneofbusiness9764 3 months ago
What about a step-by-step Linux installation?
@stuffaboutstuff4045
@stuffaboutstuff4045 3 months ago
Hi, thanks for the idea; let me look into that.
@Omnicypher001
@Omnicypher001 3 months ago
Using a Chrome browser to host a web app doesn't seem very private to me.
@user-jn3xc9tv3j
@user-jn3xc9tv3j 3 months ago
It's localhost, noob
@stuffaboutstuff4045
@stuffaboutstuff4045 3 months ago
Hi, thanks for the feedback. Noted.
@anishkushwaha9973
@anishkushwaha9973 2 months ago
Not working; it shows an error for whatever prompt I give.
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, what error do you get? Let me know and maybe I can help you out. Thanks!
@anishkushwaha9973
@anishkushwaha9973 2 months ago
@@stuffaboutstuff4045 It shows: Error: Collection make_this_parameterizable_per_api_call not found
@reaperking537
@reaperking537 1 month ago
PrivateGPT answers me with blank responses. Any solution?
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, can you confirm which LLM you are sending requests to? Local Ollama, like in the video? Are you getting no responses both when you ingest docs and in the LLM Chat? Is anything happening in the terminal when it processes in the Web UI? Let me know and we can hopefully get you up and running.
@reaperking537
@reaperking537 1 month ago
@@stuffaboutstuff4045 I have difficulty with PGPT_PROFILES="ollama" (LLM: ollama | Model: mistral). I followed the same steps indicated in the video. LLM Chat (no file context) doesn't work; it gives me blank responses. Query Files doesn't work either; it also gives me blank responses. The error I get in the terminal is the following: [WARNING] llama_index.core.chat_engine.types - Encountered exception writing response to history: timed out
@reaperking537
@reaperking537 1 month ago
@@stuffaboutstuff4045 I have solved the problem by modifying the response timeout in the settings-ollama.yaml file from 120s to 240s. Thanks for the well-explained tutorial; keep it up.
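That fix as a settings-ollama.yaml fragment (request_timeout is the timeout key in PrivateGPT's Ollama settings; verify the exact name and default in your version):

```yaml
ollama:
  llm_model: mistral
  embedding_model: nomic-embed-text
  request_timeout: 240.0   # raised from 120s to stop the "timed out" blank responses
```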
@user-yr3xm1jk1q
@user-yr3xm1jk1q 1 month ago
Does it support the Arabic language?
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, you can have a look at these threads; you would need an LLM that supports the language. I hope they point you in the right direction. github.com/zylon-ai/private-gpt/issues/28 github.com/zylon-ai/private-gpt/discussions/764
@user-yr3xm1jk1q
@user-yr3xm1jk1q 1 month ago
@@stuffaboutstuff4045 🙏 Thanks
@JiuJitsuTech
@JiuJitsuTech 2 months ago
To run git clone from the Anaconda Prompt, I had to run "conda install -c anaconda git"; I was then able to run "git clone ...". Otherwise, the prompt window just hung when I tried to git clone.
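That workaround as a sketch; it installs Git into the active conda environment and then clones the project (repo URL from the video description):

```shell
# From the Anaconda Prompt, with your environment activated:
conda install -c anaconda git
git clone https://github.com/zylon-ai/private-gpt
cd private-gpt
```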
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Glad you are up and running. Thanks for sharing.
@methodssss
@methodssss 1 month ago
Running into an issue chatting with PrivateGPT in the browser: Error: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it)
@methodssss
@methodssss 1 month ago
I am sorry, that was the error for querying a file. The error I get when trying to use the LLM Chat is "'NoneType' object has no attribute 'split'".
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, this is usually a document-ingestion or documents-not-selected issue. Can you check out the link below and restart the model? Let me know if you came right with this. github.com/zylon-ai/private-gpt/issues/1566
@adamseng8514
@adamseng8514 19 days ago
I am getting this error every time I try to upload a file to ingest or when I type a message. Everything along the way installed normally and went well so far: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it)
@stuffaboutstuff4045
@stuffaboutstuff4045 14 days ago
Hi, just checking if you resolved this issue? It looks like it cannot talk to the backend you are sending requests to. Are you using Ollama on a different server? If so, make sure you open its web server to accept more than localhost connections.
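A sketch of that change on the Ollama machine. OLLAMA_HOST is the environment variable Ollama reads for its bind address; 0.0.0.0 exposes the server to the whole network, so only do this on a trusted LAN:

```shell
# PowerShell, on the box running Ollama:
$env:OLLAMA_HOST = "0.0.0.0:11434"   # listen on all interfaces, not just localhost
ollama serve

# Then point api_base in PrivateGPT's settings-ollama.yaml at http://<ollama-host>:11434
```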
@adamseng8514
@adamseng8514 14 days ago
@@stuffaboutstuff4045 I've changed it to fully local with $env:PGPT_PROFILES="local" and everything seems to work fine there. Nothing else works yet, but that is alright for me as I am still tinkering with this. Can't thank you enough for this video and how easy it is to follow.
@cookiedufour
@cookiedufour 2 months ago
After executing make run, I run into some problems:
--- Logging error ---
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
return self._context[key]
KeyError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\anaconda3\envs\privategpt\Lib\site-packages\injector\__init__.py", line 798, in get
return self._context[key]
...and there is still a lot more. I followed every instruction carefully, so I don't know where the problem comes from... please help
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, can you confirm that when you set the environment variable you are doing it in an Anaconda PowerShell prompt in admin mode? Make sure your environment is activated in the PowerShell terminal; these are the steps from 9:17 in the video. Let me know if you come right. Thanks for reaching out.
@cookiedufour
@cookiedufour 2 months ago
@@stuffaboutstuff4045 Yes, I opened a new Anaconda PowerShell prompt and ran it as admin. I am thinking of starting everything over again... What would you advise? Uninstall everything and re-follow the steps of your video? Thanks for your answer!
@talatriaz
@talatriaz 3 months ago
Doesn't work for me; the only difference is that I'm using Windows 11. All versions of software installed are the same as in the example, except for updated pip and Poetry versions. Everything is smooth until I get to the very last step. After running make run I get the following output:
poetry run python -m private_gpt
Traceback (most recent call last):
File "C:\pgpt\private-gpt\private_gpt\__main__.py", line 5: from private_gpt.main import app
File "C:\pgpt\private-gpt\private_gpt\main.py", line 3: from private_gpt.di import global_injector
File "C:\pgpt\private-gpt\private_gpt\di.py", line 3: from private_gpt.settings.settings import Settings, unsafe_typed_settings
File "C:\pgpt\private-gpt\private_gpt\settings\settings.py", line 5: from private_gpt.settings.settings_loader import load_active_settings
File "C:\pgpt\private-gpt\private_gpt\settings\settings_loader.py", line 9: from pydantic.v1.utils import deep_update, unique_list
ModuleNotFoundError: No module named 'pydantic.v1'
make: *** [run] Error 1
It seems the root cause is the missing pydantic.v1 module. I checked using pip list, and pydantic 1.10.7 is clearly present. A problem with GNU Make??? Has anyone else experienced this, or is it just me?
@stuffaboutstuff4045
@stuffaboutstuff4045 3 months ago
Hi, did you come right with this? Just checking: do you have the required software, and are you running everything in the correct terminals (CMD, Anaconda, Anaconda PowerShell, etc.)? I usually also run in an admin-mode terminal to avoid some issues. When you run make, can you confirm it is in the machine's PATH? After adding it to the PATH, make sure you open a new prompt window so it loads. Let me know if the above helps.
@talatriaz
@talatriaz 2 months ago
@@stuffaboutstuff4045 Apparently the problem was Windows 11. I repeated the exact same steps on a Windows 10 system and it worked perfectly.
@thakurajay999
@thakurajay999 1 month ago
Error: Collection make_this_parameterizable_per_api_call not found
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, this issue usually arises when the system does not see documents selected or ingested. Have a look at the two posts below. Let me know if resolved. github.com/ollama/ollama/issues/3052 github.com/zylon-ai/private-gpt/issues/1334
@Cool_Monk-ey
@Cool_Monk-ey 2 months ago
In the last step:
--- Logging error ---
Traceback (most recent call last):
File "C:\Users\1ub48\anaconda3\envs\privavtegpt\Lib\site-packages\injector\__init__.py", line 798, in get
return self._context[key]
KeyError
I got this error and PrivateGPT didn't show up. Please, can someone help me?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, just checking if you came right with this. It sounds like something goes wrong when you set the PGPT profile and execute make run. Can you confirm you are doing this in an admin-mode Anaconda PowerShell prompt? Ensure the environment is active first and check the steps at 9:20 in the video. Let me know if resolved; thanks for reaching out.
@travs007
@travs007 1 month ago
@@stuffaboutstuff4045 I'm having the same problem: access denied to mistral
@zackmathieu4829
@zackmathieu4829 2 months ago
I get an error after executing make run: KeyError:
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, can you confirm you are setting the environment variable and running make run in an admin-mode Anaconda PowerShell prompt? Check from 9:16 onwards in the video. If the error persists, please confirm whether you are using Ollama, like I do in the video, or building for local Llama. Thanks for reaching out!
@zackmathieu4829
@zackmathieu4829 2 months ago
@@stuffaboutstuff4045 I've followed all of those steps correctly, but I have found that after I run poetry install --extras "ui llms-ollama embeddings-ollama vector-stores-qdrant", setuptools never completes installation no matter how long I wait. Also, even though I completely uninstalled Python and then reinstalled only Python 3.11.0, when I check the version in Anaconda Prompt it returns 3.11.9.
@navaneethk7798
@navaneethk7798 1 month ago
(base) PS C:\WINDOWS\system32> conda activate privategpt
(privategpt) PS C:\WINDOWS\system32> cd .\pgpt\
(privategpt) PS C:\WINDOWS\system32\pgpt> cd .\private-gpt\
(privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> $env:PGPT_PROFILES="ollama"
(privategpt) PS C:\WINDOWS\system32\pgpt\private-gpt> make run
make: *** No rule to make target 'run'. Stop. 🤥
@stuffaboutstuff4045
@stuffaboutstuff4045 1 month ago
Hi, just checking if you managed to resolve this? Did you follow all the make steps I use from about 5:25 in the video? Also check the steps from about 8:05: I create the folder for the software in the root of the drive, i.e. C:\pgpt. Check those steps and confirm the software install location. Lastly, make sure you load the $env variables in an admin-mode Anaconda PowerShell prompt. Let me know if you were able to resolve it.
@Cashemacom-ud8xb
@Cashemacom-ud8xb 2 months ago
How do I enable the GPU for this?
@stuffaboutstuff4045
@stuffaboutstuff4045 2 months ago
Hi, if you follow the instructions and config for Ollama, Ollama will handle the GPU offload. Otherwise you have to build it for full local Llama-CPP support. Let me know if you come right.