
Ollama on Google Colab: A Game-Changer! 

TechXplainator
2K subscribers
2.1K views

Published: 8 Sep 2024

Comments: 32
@fabriciocincunegui5332 · 15 days ago
Thanks for your patience.
@bnermine9780 · 2 days ago
Thank you for the great video! Could the model then be used inside local Python code? I am writing a classification script using an LLM, but running it on my CPU takes ages. Can I edit my local Python code so that the classification is done with the model running on Google Colab but the results are stored locally? This would also help me apply the same model to different use cases. Thank you!!
@TechXplainator · 1 day ago
Thank you so much for your kind words! And yes, you can definitely do that. Here is how that could work:
1. Keep the Colab notebook running with Ollama and Ngrok set up as shown in the tutorial.
2. In your local Python script, use the `requests` library to send classification requests to the Ollama model via the Ngrok URL.
3. Process the responses and store the results locally.
I hope that helps. Happy coding ☺️
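Steps 2 and 3 could look roughly like the sketch below. The names here are assumptions: `OLLAMA_URL` stands in for your own Ngrok forwarding URL, `llama3` for whichever model you pulled in the Colab notebook, and `build_prompt`/`classify` are hypothetical helpers; `/api/generate` is Ollama's standard generation endpoint.

```python
import requests

# Assumption: replace with your own Ngrok forwarding URL from the Colab notebook.
OLLAMA_URL = "https://your-ngrok-subdomain.ngrok-free.app"

LABELS = ["positive", "negative", "neutral"]

def build_prompt(text, labels):
    """Build a simple zero-shot classification prompt."""
    return (
        f"Classify the following text as one of {', '.join(labels)}. "
        f"Answer with the label only.\n\nText: {text}"
    )

def classify(text, model="llama3"):
    """Send one classification request to the remote Ollama instance."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": build_prompt(text, LABELS), "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

# Store the results locally, e.g. one label per input line:
# with open("results.txt", "w") as out:
#     for line in open("inputs.txt"):
#         out.write(classify(line.strip()) + "\n")
```

Because the heavy lifting happens on Colab's GPU, the local script only pays for network round-trips, so the same loop works for any classification task by changing the labels and prompt.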
@fabriciocincunegui5332 · 15 days ago
How do I export Ollama in my cmd? I'm on Windows 11.
@TechXplainator · 15 days ago
I can't verify this on a Windows PC since I don't have one, but based on my research, here's how to export the `OLLAMA_HOST` variable on Windows 11 using Command Prompt:
1. Open Command Prompt as Administrator.
2. Run the command below, replacing `<your-ngrok-url>` with your Ngrok URL: `setx OLLAMA_HOST "<your-ngrok-url>"`
3. Close and reopen Command Prompt to apply the changes.
@chillscripter · 1 month ago
I did the exact thing you said in the video, but I got this error from Ollama: "the parameter is incorrect". How can I solve that?
@TechXplainator · 1 month ago
Hey there! To help me figure out what's going wrong, could you please tell me:
1. Are you using a fixed Ngrok link or letting Colab create a new one each time?
2. Did you open the link from the notebook in a browser? Does it say "Ollama is running"?
3. Have you correctly linked your local Ollama to Colab-Ollama by setting the OLLAMA_HOST environment variable to your Ngrok URL? (You can usually do this in your terminal with a command like `export OLLAMA_HOST=<your-ngrok-url>`)
4. When you run a model locally (like typing `ollama run llama3.1`), does the model download to your computer or to Colab? Can you see the download happening in your Colab notebook?
@user-ns7tf8zn8t · 1 month ago
When I write the export OLLAMA_HOST command, it says: "export : The term 'export' is not recognized as the name of a cmdlet". Is it because I am using Docker?
@TechXplainator · 1 month ago
The error message you're encountering is not related to Docker, but rather to the command shell you're using. The `export` command is specific to Unix-like systems (such as Linux and macOS) and is not recognized in Windows PowerShell or Command Prompt. To set an environment variable in Windows, use the `set` command instead of `export`. Here's how to set the OLLAMA_HOST variable:
In PowerShell: `$env:OLLAMA_HOST = "your_value_here"`
In Command Prompt: `set OLLAMA_HOST=your_value_here`
Hope this helps ☺️
@Salionca · 2 months ago
The Jupyter Notebook links in the video description don't work.
@TechXplainator · 2 months ago
Oooh you're right! I messed up some of my links there. Thank you so much for pointing that out! The links are fixed now 😊
@jameschan6277 · 1 month ago
Please help: if I use a Windows PC desktop, how can I open terminals like on a Mac?
@TechXplainator · 1 month ago
To open terminals on a Windows PC desktop similar to how you would on a Mac, you can use the following methods:
Option 1: PowerShell
1. Press `Windows+X` and select "Windows PowerShell" or "Windows PowerShell (Admin)" from the menu.
2. Alternatively, press `Windows+R`, type `powershell`, and press Enter to open a PowerShell window.
Option 2: Command Prompt
1. Press `Windows+R`, type `cmd`, and press Enter to open a Command Prompt window.
2. You can also search for "Command Prompt" in the Start menu, right-click the result, and select "Run as Administrator" if you need elevated privileges.
Hope this helps ☺️
@andreabaffascirocco2934 · 1 month ago
I have tried, but it seems that the command `ollama run llama3.1` downloads the model onto my laptop instead of Colab.
@TechXplainator · 1 month ago
Try running the command `export OLLAMA_HOST=<your-ngrok-url>` (check that the URL says "Ollama is running" first). Then, in the same terminal window, run `ollama run llama3` again. Hope this helps ☺️
@andreabaffascirocco2934 · 1 month ago
@@TechXplainator Thanks, I'll try.
@andreabaffascirocco2934 · 1 month ago
@@TechXplainator Now everything works fine. The problem was that I had installed Ollama on my Ubuntu machine using snap. With that installation, the PC tried to download Llama 3.1 onto the PC and not onto Colab.
@TechXplainator · 1 month ago
I'm glad it works now ☺️
@HunterJuniorX · 1 month ago
Is there a way to use models from Hugging Face?
@TechXplainator · 1 month ago
Yes, there is, if they are available as quantized models (GGUF files). I made a video on how you can import GGUF files from Hugging Face and use them in Ollama; feel free to check it out: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-vs1u9z2U4ZA.html
@Salionca · 2 months ago
The video is great, but I'm not going to spend money on that. I prefer to wait and buy a new laptop.
@TechXplainator · 2 months ago
Thanks! And that's completely understandable :-)
@MarkSmith-ho5ij · 9 days ago
Coders using Apple, lol. Please use Linux and stop this...
@rajarshisen5905 · 1 month ago
Please help: I can run Ollama in Colab, but when I run it from Docker as Open WebUI, I get the following error while trying to chat with llama3 in the web browser: Ollama: 404, message='Not Found', url=URL('/api/chat')
@TechXplainator · 1 month ago
Does Ollama work from the terminal? I mean, when running `export OLLAMA_HOST=<your-ngrok-url>` and `ollama run llama3`, do you get to interact with llama3 in your terminal? And do you see any activity in your Colab? (You should see the notebook downloading a model and responding to chat.)
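A quick way to run that same check from local Python might look like the sketch below. The URL is a placeholder for your Ngrok link, and `model_names`/`check` are hypothetical helper names; `/api/tags` is Ollama's model-listing endpoint, and the root endpoint replies with the "Ollama is running" banner.

```python
import requests

# Assumption: replace with your own Ngrok forwarding URL.
OLLAMA_URL = "https://your-ngrok-subdomain.ngrok-free.app"

def model_names(tags_json):
    """Extract the model names from Ollama's /api/tags response payload."""
    return [m["name"] for m in tags_json.get("models", [])]

def check(url):
    """Return the root banner text and the models hosted on the remote Ollama."""
    root = requests.get(url, timeout=30)  # root endpoint replies "Ollama is running"
    tags = requests.get(f"{url}/api/tags", timeout=30).json()
    return root.text, model_names(tags)

# banner, models = check(OLLAMA_URL)
# print(banner)  # expect the "Ollama is running" banner
# print(models)  # the models you pulled in the Colab notebook
```

If the banner shows but the model list is empty, the tunnel works and the problem is that no model has been pulled on the Colab side yet.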
@merocky5 · 1 month ago
Yes, I do. Ollama executes on Colab when I call it from my local computer's terminal. Only when using Open WebUI, following the last command of your Python notebook, do I get the error described above. The front-end web app starts, but when I try to chat with the Ollama installed in Colab, I get the error mentioned in the message above. I did some internet searching, and it appears that the "api" path segment may or may not be included in the latest version of Ollama? Please help me resolve this. Thanks a lot.
@TechXplainator · 1 month ago
To make sure we're on the same page, I just want to summarize your setup:
1. You're using a static Ngrok URL.
2. You've successfully connected your local Ollama instance with the one hosted on Colab by running an export command.
3. You've installed Open WebUI using Docker and replaced the example Ngrok URL with your own static Ngrok URL, as indicated by this command: `docker run -d -p 4000:8080 -e OLLAMA_BASE_URL=example.com -v open-webui:/app/backend/data --name test --restart always ghcr.io/open-webui/open-webui:main`
4. The Docker container was created, but trying to access the Ollama WebUI at `localhost:4000/` results in an error.
Please confirm that this summary is accurate so I can help you troubleshoot the issue ☺️
@rajarshisen5905 · 1 month ago
@@TechXplainator Yes, the summary is spot on. I have followed all of the above bullet points and got error on last bullet point while trying to post a chat to Ollama using web-UI.
@TechXplainator · 1 month ago
I was not able to replicate the error, but based on my research, here are a few things you could try:
1. Verify the Open WebUI settings: access the Open WebUI settings page (click on your avatar on the bottom left) and verify that the Ollama Server URL is correctly set to your Ngrok URL. Go to "Connections"; under "Ollama Base URL" you should see your static Ngrok URL.
2. Network configuration: ensure that the Docker container can communicate with the Ollama server. Use the `--network=host` flag to allow the Docker container to use the host network: `docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=<your-ngrok-url> --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
I hope this helps. If not, please check out the troubleshooting page from Open WebUI: docs.openwebui.com/troubleshooting/