
Obsidian with Ollama 

The Writer Dev 글쓰는 개발자

Instead of sending everything to ChatGPT, we can protect our precious notes and ideas with Ollama, an open-source project that lets you run powerful language models locally on your machine for free.
I cover how to install Ollama, set it up with Obsidian's Copilot plugin, and use it for AI-powered tasks like summarization, explanation, translation, and template generation - all while keeping your data private and avoiding subscription fees.
P.S.: To run Ollama as a local server for Obsidian, make sure to start it with
"OLLAMA_ORIGINS=app://obsidian.md* ollama serve"
Timestamps:
00:00 Intro
0:36 What is local LLM?
1:32 What is Ollama?
2:04 Install Ollama
2:26 Ollama commands! (see the command recap after the timestamps)
3:09 Open up the command palette
4:30 Obsidian setup for using Ollama
5:06 Note about using the right models based on resource
5:34 Use case!
6:04 Outro
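As a quick recap of the 2:26 chapter, the everyday Ollama commands look roughly like this (llama3 is again only an example model name):

ollama list        # show the models already downloaded
ollama pull llama3 # download a model
ollama run llama3  # chat with it directly in the terminal
ollama rm llama3   # delete it to free disk space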
- - - - - - - - - - - - - - - - - - -
Connect with me
❤️ Newsletter: joonhyeokahn.substack.com/
❤️ LinkedIn: / joonhyeok-ahn
❤️ Instagram: / writer_dev123
❤️ Threads: www.threads.net/@writer_dev123-
- - - - - - - - - - - - - - - - -


Published: 31 May 2024

Comments: 15
@radonryder · 12 days ago
Excellent video! Going to try this out.
@the-writer-dev · 12 days ago
Thanks and let me know your experience!
@HiltonT69 · 21 days ago
What would be awesome is for this to be able to use an Ollama instance running in a container on another machine - that way I can use my container host for Ollama with all its grunt, and keep the load off my smaller laptop.
@the-writer-dev · 21 days ago
That is an interesting idea! Thanks for the feedback, I will look into whether it's possible.
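For anyone who wants to try that in the meantime, a rough sketch, assuming the container host runs the official ollama/ollama Docker image on the default port 11434 and your client (the Ollama CLI, or the plugin's base-URL setting if it exposes one) can be pointed at another machine; the 192.168.1.50 address is just a placeholder:

# on the container host: expose Ollama and allow the Obsidian origin
docker run -d --name ollama -p 11434:11434 \
  -v ollama:/root/.ollama \
  -e OLLAMA_HOST=0.0.0.0 \
  -e OLLAMA_ORIGINS="app://obsidian.md*" \
  ollama/ollama

# on the laptop: point the CLI at the host to verify it responds
OLLAMA_HOST=192.168.1.50:11434 ollama list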
@siliconhawk9293 · 29 days ago
What are the hardware requirements to run models locally?
@TheGoodMorty · 29 days ago
It can run CPU-only, it can even run on a Raspberry Pi, it's just going to be slow if you don't have a beefy GPU. Pick a smaller model and it should be alright. But unless you care about being able to customize the model in a few ways or having extra privacy with your chats, it'd probably just be easier to use an external LLM provider
@coconut_bliss5539 · 29 days ago
I'm running the Llama3 8B model with Ollama on a basic M1 Mac with 16 GB of RAM - it's snappy. There is no strict cutoff for hardware requirements - if you want to run larger models with less RAM, Ollama can download quantized models which enable this (for a performance tradeoff). If you're on a PC with a GPU, you need 16GB of VRAM to run Llama3 8B natively. Otherwise you'll need to use a quantized model.
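To make the quantization point concrete, a hedged example of pulling a smaller quantized tag instead of the default (tag names change over time, so check the Ollama model library for what is currently published):

# the default llama3 tag is already roughly 4-bit quantized (~4-5 GB)
ollama pull llama3:8b
# an explicitly quantized instruct variant, if the library lists it
ollama pull llama3:8b-instruct-q4_0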
@elgodric · 28 days ago
Can this work with LM Studio?
@the-writer-dev · 28 days ago
Good question, I haven't played with LM Studio. I will and let you know!
@Alex29196 · 28 days ago
Copilot needs integration with Groq AI, and text-to-speech integration inside the chat room.
@the-writer-dev · 28 days ago
That sounds like an interesting idea!
@Alex29196 · 28 days ago
@the-writer-dev I will cover the costs, allowing us to remove WebUI and solely utilize Ollama or LM Studio for the backend. With LM Studio now featuring CLI command capabilities, it's even more beneficial as it reduces the layers above Copilot. I conducted a test with LM Studio's new feature today, and the Copilot responses were noticeably faster on my low-end laptop. Additionally, we can incorporate Groq's fast responses and Edge neural voices, which are complimentary.
@IFTHENGEO · 1 month ago
Awesome video man! Just sent you a connection request on LinkedIn.
@the-writer-dev · 29 days ago
Thanks for the support and I will check it out!
@VasanthKumar-rh5xr · 15 days ago
Good video. I get this message in the terminal while setting up the server (step 4):
>>> OLLAMA_ORIGINS=app://obsidian.md* ollama serve
The "OLLAMA_ORIGINS" variable in the context provided seems to be a custom configuration, and serving files with `ollama` would again follow standard Node.js practices: 1. To set an environment variable similar to "OLLAMA_ORIGINS", you could do so within your project's JavaScript file or use shell commands (again this is for conceptual purposes): I can connect with you through other channels to work on this step.
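One likely cause, judging by the >>> prompt and the model-style reply: the command seems to have been typed inside an interactive ollama run session, where the model simply answers it as text. It needs to be entered at the regular shell prompt instead, roughly:

# leave any interactive model session first (type /bye or press Ctrl+D)
# then, at the normal shell prompt:
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve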