ERRATA: In the video I mention that setting OLLAMA_HOST is an alternative to using ngrok, but that only holds when you just need access on your local network. ngrok apparently lets you leverage your Ollama instance from anywhere, which sounds awesome (thanks to @havokgames8297 for pointing this out)
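For anyone who wants the local-network route, here's a minimal sketch. The 0.0.0.0 bind and port 11434 are Ollama's defaults; the LAN IP is a hypothetical placeholder for your own machine's address:

```shell
# Bind Ollama to all interfaces instead of just localhost,
# so other machines on your LAN can reach it on port 11434.
export OLLAMA_HOST=0.0.0.0:11434
ollama serve

# From another machine on the same network, point your client at
# your server's LAN address (192.168.1.42 is a hypothetical example):
curl http://192.168.1.42:11434/api/tags
```

This only works inside your local network, which is exactly the limitation the erratum is about; ngrok is what gets you access from outside.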
It doesn't, at least for me. Everything that appears in the list of language models to choose from is already downloaded and ready to go. That said, they might take a few seconds to load into memory, especially if they're on the larger side; Mistral 7B only takes ~10 seconds or so to load for me. Are you seeing an issue where the model is downloaded on every run?
@codetothemoon no worries, no one would expect you to be an expert at everything. I've used ngrok, for example, when developing a web app locally that has webhooks and I want an external service to be able to reach my local development server. It is perfect for this. The issue is that on the free tier it won't keep the same hostname, so if you restart ngrok after configuring your Enchanted LLM app, the URL will be different. You can either pay for the service and get static URLs (I believe), or use another static DNS service with a hostname pointing to your machine.
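The ngrok side of this is a one-liner; a sketch, assuming Ollama is serving on its default port 11434 (the forwarding URL shown is illustrative, not real):

```shell
# Open a public HTTPS tunnel to the local Ollama port.
# 11434 is Ollama's default port (adjust if yours differs).
ngrok http 11434

# ngrok then prints a public forwarding URL along the lines of:
#   Forwarding  https://<random-subdomain>.ngrok-free.app -> http://localhost:11434
# That URL is what you'd paste into a client like Enchanted.
```

The catch described above: on the free tier that random subdomain changes every time ngrok restarts, so the endpoint you configured in the app stops working until you update it.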