@edwardleyco9880 That's a lot to put in an answer, but I had no trouble with Ollama detecting my NVIDIA GPU without any tinkering. I can't speak for Windows. Make sure you have the CUDA toolkit installed as well.
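A quick sanity check that the GPU path actually works, assuming a Linux machine with the NVIDIA driver and CUDA toolkit installed (ollama ps is available in recent Ollama releases):

```bash
# Confirm the driver can see the card
nvidia-smi

# Load a model, then check where Ollama placed it
ollama run mistral "hello"
ollama ps    # the PROCESSOR column should say "100% GPU", not "CPU"
```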
I had a lot of problems and frustrations trying to get this working... but now I'm getting it installed by using an older version of Debian; the new one kept causing too many problems for me.
Hi Marvin, great video, thanks for this. Would you be able to provide some info on how to extend Open WebUI (add graphs/charts, etc.)? Also, does it keep any logs of users, thumbs-ups, etc.?
Wow! Very helpful and valuable content... I followed the same instructions but had issues installing on Windows, even with Docker. Can you point me to more resources, or please make a video on how to install the Ollama Web UI on Windows? Most users are on Windows.
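For anyone stuck on this: a minimal sketch of the Docker route on Windows, based on the command commonly shown in Open WebUI's README (assumes Docker Desktop is running and Ollama is installed on the host; the container and volume names are just the project defaults):

```bash
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main
```

In PowerShell, put the command on one line or swap the backslashes for backticks; the UI should then be reachable at http://localhost:3000.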
For me the server is running, and when I browse to localhost:3000 I get to the login page, but after that I get a blank screen. Do you have any idea why this happens?
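If this is the Docker setup, the container logs usually reveal why the UI goes blank after login; a hypothetical check, assuming the container is named open-webui as in the install command:

```bash
# Watch the backend logs while you retry the login
docker logs -f open-webui

# Confirm the frontend is actually being served
curl -I http://localhost:3000
```

A hard refresh (Ctrl+Shift+R) or clearing the site's cached data is also worth trying before digging further.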
Now I just need to be able to run this over Wi-Fi so I can use it on my phone while my PC is running. Also, I've got Ollama running, but it takes something like 10 minutes for the AI to respond to my queries. Did I set something up wrong?
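A rough sketch of the phone part, assuming the Docker setup from the video publishing the UI on port 3000 (the address below is a placeholder; use your PC's actual LAN IP):

```bash
# On the PC, find its LAN address (use `ipconfig` on Windows)
ip addr show | grep "inet "

# If a firewall is running, open the port (Ubuntu/ufw example)
sudo ufw allow 3000/tcp

# Then browse from the phone to, e.g.:
#   http://192.168.1.50:3000
```

As for the 10-minute responses: that usually means the model is running on the CPU or is too large for your RAM/VRAM. `ollama ps` shows which processor a loaded model is using, and trying a smaller model is a quick way to confirm.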
Please make sure you have connected to the Ollama server URL correctly, as described in the video (Settings). Also, download the model by running ollama run mistral, which pulls the Mistral model.
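Concretely, the defaults this refers to (http://localhost:11434 is Ollama's standard API address; adjust the host/port if yours differs):

```bash
# Download the Mistral model (ollama run also pulls it on first use)
ollama pull mistral

# Verify the Ollama server is up; this is the URL to enter in
# Open WebUI's Settings as the Ollama API endpoint
curl http://localhost:11434    # should print "Ollama is running"
```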
Step by step, but for which system? You cut out all the elements that could help identify the operating system. We are not blind; nobody needs the terminal zoomed to 500%. The video is otherwise one of the better ones, but it burned my retinas with those oversized pixels.
Fake, does not work. I tried these instructions so many times and always got errors. I swear that for one second I saw the app work, but when you follow the instructions here nothing works. I tried them verbatim to get this to work.