Welcome to StuffAboutStuff - Your Gateway to AI Mastery! 🤖✨
At StuffAboutStuff, we're on a mission to demystify the world of AI and empower you to harness the latest tools on your own PC. Whether you're a tech enthusiast, developer, or someone curious about the cutting-edge advancements in artificial intelligence, you've come to the right place!
🛠️ What to Expect: Explore with us as we dive into step-by-step tutorials, build guides, and how-to videos that simplify the process of getting the most powerful AI tools up and running on your personal computer. From machine learning frameworks to the latest AI software, we've got you covered.
🚀 Subscribe Now for Your AI Journey! Hit that subscribe button, turn on notifications, and embark on a journey to unlock the full potential of AI on your own terms.
Great tutorial, but I have a problem when I try to search or query the file I have imported. I get the error "Initial token count exceeds token limit". I have already increased the limit but nothing changed. How can I solve this?
If you get the error "AttributeError: module 'torch' has no attribute 'dml'", return to the command-line virtual environment and run the following: pip install torch-directml
I followed the directions and I get the error "ImportError: DLL load failed while importing onnxruntime_pybind11_state: A dynamic link library (DLL) initialization routine failed. Press any key to continue . . ." How do I fix it?
Thanks bro, you just saved the day. Let me tell you, you are way ahead of AMD's support, whose blog guide didn't work for me at all. Thanks bro, I will like and subscribe!
Yesterday I tried to install pgpt with the latest versions, but it went badly. Now I have installed everything with the correct versions, but during the Poetry installation it answers: No Python at '"C:\Users\MYPATH\miniconda3\envs\privateGPT\python.exe' 'poetry' already seems to be installed. Not modifying existing installation in 'C:\Users\MYPATH\pipx\venvs\poetry'. Pass '--force' to force installation. I forced it, but it still points at the old Miniconda installation. What can I do now? Thanks for your help.
Wow, got it working with both your 2.0 and 4.0 guides for this. Thank you forever! But now I want to change the model that is in the UI. How do I do this? The whole reason I did this was to use other models with Ollama, and the guide doesn't show this.
Hi, glad you are up and running. You can change the Ollama model: download and install the model you want to use and change your config files. Check the Ollama example at the link below. docs.privategpt.dev/manual/advanced-setup/llm-backends
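For reference, a sketch of what that config change might look like. The file name and key names here are assumptions based on the privateGPT docs linked above, and the model names are just examples; check the docs for the exact layout:

```yaml
# settings-ollama.yaml (sketch; key names assumed from the privateGPT docs)
ollama:
  llm_model: mistral              # any model you have pulled with `ollama pull`
  embedding_model: nomic-embed-text
  api_base: http://localhost:11434
```

After editing, restart privateGPT with the Ollama profile active so the new model is picked up.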
Hi, I did everything as in the video and I get a new error: "ModuleNotFoundError: No module named 'pillow_avif'". I installed the libraries with "pip install diskcache" and "pip install Pillow-avif-plugin" and the installation was successful, but the error persists. Can you tell me how to fix it?
Hi, did you manage to resolve this? Did you follow the steps in both videos? Also ensure all the required software is installed. Let me know if you are up and running.
@stuffaboutstuff4045 Thanks for the answer. Yes, I managed to solve the issue. I had to reinstall all the libraries, and I accidentally deleted the .bat file 😅
Please address the issue of Hugging Face tokens and login for the install script. I have been all over the net and tried different solutions and script mods, including the Hugging Face CLI, but I have not been able to install a working copy yet (yes, I did accept access to the Mistral repo on Hugging Face too). The Python install script fails on Mistral and on the transformers and tokenizer. It shows a message about a gated repo, but I have authenticated on the CLI and tried passing the token in the scripts. Still failing... HELP!
Hi, are you using Ollama as the backend? If not, follow the steps in the Ollama video on the channel and just hook your install up to that. Otherwise, test this on a hosted LLM like OpenAI. You should not struggle if you follow the steps in the two videos exactly. Let me know if you are up and running.
Hi! I tried to follow all the steps, but unfortunately my VM still shows a (!) on my GPU, which was correctly added to the VM after running the script in PowerShell. My question is: how can I fix this? I can't install AMD drivers from inside the VM because setup doesn't see the GPU. Mine is an RX 5700 XT 8 GB. Thanks!
Hi, The exclamation usually means the VM does not have the drivers. When you update the drivers on the host you have to update them on your VMs again. Shut down the VM and copy the folders over again exactly like I do in the video. For good measure run the script again. Let me know if you get it up and running. Thanks for reaching out.
I don't get what you mean by the Automatic base URL: AUTOMATIC1111_BASE_URL="192.168...." You say this is your host IP, but mine might be different. If I enter this exactly as shown, won't I end up pointing at your machine instead of my PC? Can you clarify this? (at 10:43)
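For anyone else stuck on the same point: the value should be the address of *your own* machine running AUTOMATIC1111, not the one shown on screen. A sketch of the variable (the placeholder host is yours to fill in; 7860 is AUTOMATIC1111's default port, so adjust it if you changed yours):

```
AUTOMATIC1111_BASE_URL="http://<your-host-ip>:7860"
```

If the web UI runs on the same machine, 127.0.0.1 with that port should also work.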
Hello! I haven’t proceeded with this yet… but I have multiple GPUs. Am I able just to assign a specific GPU(s) to a VM? Much appreciated! Note: 4 of the same GPU, they all use the same driver.
Hi, did you come right with this? The script will share the default GPU detected by the OS. You can check out the command to ID the card at the link below. learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/partition-assign-vm-gpu?tabs=powershell&pivots=windows-server#assign-gpu-partition-to-a-vm. You should be able to find the device with Get-VMPartitionableGpu.
@@stuffaboutstuff4045 I see with the Get-VMHostPartitionableGpu Name property. Would I add the instance path to the Add-VMGpuPartitionAdapter portion? (Regarding the script you’ve made) Ex: Add-VMGpuPartitionAdapter -VMName $vm -InstancePath “PCI\VEN_10DE&DEV…(rest of the Device Instance)…009” Would there need to be quotation specifying a string as shown above? Or would this be correct usage?
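On the quoting question: in PowerShell the quotes are actually required here, because the `&` characters in a PCI instance path would otherwise be parsed as operators. A sketch (untested here, and the instance path below is entirely hypothetical; substitute the one reported for your own card):

```powershell
# Sketch: attach a specific GPU partition by instance path.
# Quotes are required: '&' is a PowerShell operator outside a string.
$vm = "MyVM"
Add-VMGpuPartitionAdapter -VMName $vm -InstancePath "PCI\VEN_10DE&DEV_2204&SUBSYS_403B1458&REV_A1\4&1A2B3C4D&0&0009"
```

Either single or double quotes work for a literal path like this; double quotes only matter if you need variable expansion inside the string.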
@stuffaboutstuff4045 Yes, everything is working fine, but I want to take the responses GPT gives me and display them on another webpage, passing them from here to that other webpage using APIs or something else.
I cannot enable the AUTOMATIC1111 integration even though I am already running Stable Diffusion and have changed the base URL. When I turn on image generation (experimental), it just says "Something went wrong :/ string indices must be integers, not 'str'", hence I cannot select any models.
Hi, did you set up SD in your config file? Also make sure your config file's code blocks are intact. Let me know if you got this running. Thanks for reaching out.
I am getting this error every time I try to upload a file to ingest or when I type a message. Everything along the way installed normally and went well so far. HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000017C50525610>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Hi, just checking if you resolved this issue? It looks like it cannot talk to the backend you are sending the requests to. Are you using Ollama on a different server? If so, make sure you open its webserver to accept more than localhost connections.
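The quickest way to confirm that symptom is to check whether anything is actually listening on Ollama's port (11434 by default). A minimal sketch using only the Python standard library:

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    WinError 10061 ("actively refused") means nothing is listening
    there, so this returns False until `ollama serve` is running.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the default Ollama endpoint on this machine
# port_open("localhost", 11434)
```

If this returns False on the machine where privateGPT runs, the fix is on the Ollama side (start it, or bind it to an address the client can reach), not in privateGPT.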
@@stuffaboutstuff4045 I've changed it to totally local with $env:PGPT_PROFILES="local" and everything seems to work fine there, just not with anything else but that is also alright for me as I am still tinkering around with this. Can't thank you enough for this video and how easy it is to follow anyway.
Hi, it does work on multiple VMs. I have my GPU shared with a couple where I require GPU processing. Let me know if you came right with this. Thanks for reaching out..
I tried this, man, and it worked out well until I had to start the windows.bat file. The thing just won't load. I copied the exact same URL address too and it failed. Is it just me?
Hi, at the end can you see your webserver starting up as in my video? You should be able to access it afterwards at 127.0.0.1:8080. Let me know if you got yours up and running. Thanks for reaching out.
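If the startup lines scroll past too fast to read, you can also poll the UI from another terminal. A small sketch using only the Python standard library (the URL is the one from the reply above; adjust it if you changed the port):

```python
import urllib.error
import urllib.request

def webui_up(url: str = "http://127.0.0.1:8080", timeout: float = 2.0) -> bool:
    """Return True if the web UI answers the URL with any HTTP response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 500
    except urllib.error.HTTPError:
        return True   # got an HTTP error response, so a server is at least up
    except OSError:
        return False  # connection refused or timed out: server not running
```

If this returns False, the windows.bat startup never got as far as binding the webserver, and the console output from that run is where to look.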
Please point out somewhere that for WINDOWS SERVER 2022 YOU NEED ADDITIONAL STEPS: two additional registry DWORDs, both set to 0 under HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV — "RequireSecureDeviceAssignment" and "RequireSupportedDeviceAssignment" — and you have to create the HyperV key, which is not there by default. It may help someone out there.
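A sketch of those steps as PowerShell run as Administrator, reconstructed from the comment above (untested here; verify the key path against your own server before running):

```powershell
# Create the HyperV policy key (missing by default on Windows Server 2022)
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" -Force

# Add both DWORDs mentioned above, set to 0
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" `
    -Name "RequireSecureDeviceAssignment" -PropertyType DWord -Value 0
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\HyperV" `
    -Name "RequireSupportedDeviceAssignment" -PropertyType DWord -Value 0
```

A reboot (or at least restarting the Hyper-V services) may be needed before the policy takes effect.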
Hi, I am running the Hypervisor on Win11 host. Hope you are up and running. Looks like @wgoldwing answered the question (Thanks!) Thanks for reaching out.
Technically, yes. I have shared AMD and Nvidia without issues. The script will share the default card found on the machine. I have not tested this myself, but copying the host files over should supply the driver. You will need enough resources on the card to start and share any GPU. Let me know if you could get this working. Thanks for reaching out.
Everything worked except requirements.txt. Why would that fail? I typed pip install -r requirements.txt. Also note there is already a requirements.txt file in the SD folder, so do I have to do this step? Does the command only add something to the existing requirements.txt? If so, what does it add?
Hi, the command installs the contents of the file; it does not add anything to it. When installing, carefully check the type of prompt window I am in. Make sure you execute the command from the folder where the file resides. Let me know if you got everything up and running. Thanks for reaching out.
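To make what the command does concrete: `pip install -r` reads the existing requirements.txt and installs each package it lists; the file itself is never modified. A rough, simplified sketch of how such a file is read (real requirements files also support flags, includes, and line continuations):

```python
def read_requirements(path: str) -> list[str]:
    """Return the package specs listed in a requirements file,
    skipping blank lines and comments, roughly as pip does."""
    specs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                specs.append(line)
    return specs
```

So the step is required exactly because the file already exists: it is the input to the install, not its output.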
"Hey @stuffaboutstuff, I just wanted to take a moment and express my sincerest gratitude for the amazing step-by-step tutorial on installing Open-WebUI. Your instructions made the process so easy to follow, even for a tech-challenged individual like myself! After forgetting my password during one reboot, I spent hours trying to recover. Thanks to your insightful guide, I finally managed to regain access and get things running smoothly again. I've now subscribed to your content and will be sure to spread the word about your fantastic resources. Keep up the awesome work - you're truly making a difference in our tech-savvy community! :D"
Hi, you will have to activate your conda environment. Make sure you are in the project folder and launch Anaconda PowerShell again. Check the steps from 9:24 in the video. Let me know if you are up and running.
Thank you for the great video. I'm at 9:48 and the command $env:PGPT_PROFILES="ollama" gives me an error: The filename, directory name, or volume label syntax is incorrect. (privategpt) C:\gpt\private-gpt>$env:PGPT_PROFILES="ollama" (I don't get the colors you get.)
Hi, can you confirm you are running this in your Anaconda PowerShell terminal? Check the steps I use from about 9:20 in the video. Let me know if you are up and running.
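That exact error message is what a plain cmd.exe prompt prints when it is handed PowerShell's `$env:` syntax, so the fix is usually just switching shells. A sketch of the two equivalents (the profile value comes from the video; the cmd.exe form is offered only as an illustration, since the video's steps assume the Anaconda PowerShell prompt):

```powershell
# PowerShell / Anaconda PowerShell prompt (as used in the video):
$env:PGPT_PROFILES = "ollama"

# If you are in a plain cmd.exe Command Prompt instead, the equivalent is:
#   set PGPT_PROFILES=ollama
```

Note the variable only lives for that shell session, so set it in the same window where you launch privateGPT.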
Hi, are you up and running? You need the latest Windows updates installed on the OS; if you are running Windows 10, make sure it's 21H2. With native Windows and Hyper-V, the steps in the video should get you up and running. Thanks for reaching out.
Hi, I use Anaconda for most of my AI environments when working on Windows. I find it easy to work with and install required SW etc. Thanks for reaching out.