
How To Install Jan AI on Linux Mint 

The IT-Unicorn
9K subscribers
662 views

Discover the power of local open source AI with our latest video! In this third installment of our series, we install and demo Jan.ai, an LLM frontend that enables you to run multiple unfiltered LLMs locally. Experience unparalleled privacy and control compared to closed source LLMs. Follow our step-by-step guide to unlock the full potential of your AI setup and enjoy the benefits of local, open source solutions. Don't miss out on this essential tutorial for AI enthusiasts!

Published: Sep 15, 2024

Comments: 24
@theit-unicorn1873 1 month ago
Do you prefer the privacy of local LLMs?
@timothyhayes5741 1 month ago
Thank you for this series. It is nice to see you troubleshoot and explain why/how you get an open source project to work.
@theit-unicorn1873 1 month ago
@timothyhayes5741 Thank you. I really appreciate that feedback. It means a lot.
@sneekeruk 1 month ago
Great set of videos, working my way through them at the moment. One small thing that would save you loads of work: you don't need to write the scripts or make the .desktop file under Mint, just go to the install directory, hold Ctrl+Shift and drag it to the desktop.
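For anyone who still wants the manual route the video walks through, a .desktop launcher is just a small text file. A minimal sketch, with placeholder paths and names rather than the exact ones used in the video:

[Desktop Entry]
Type=Application
Name=Jan
Comment=Local LLM frontend
# Placeholder paths - point Exec and Icon at wherever Jan actually lives on your system
Exec=/home/youruser/apps/jan/jan
Icon=/home/youruser/apps/jan/icon.png
Terminal=false
Categories=Utility;Development;

Saved as something like ~/.local/share/applications/jan.desktop (or dropped onto the Desktop and marked executable), it shows up in the menu like any other app.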
@ngbizvn1300 1 month ago
Great series for beginners. Love the detailed step-by-step setup. I use Ollama but not Jan AI. One other project that I recently 'discovered' that has high potential is llamafile (by Mozilla, open source). I think it is by far the easiest to run, as they pre-package models inside and it runs like a Windows portable program. I especially like this for the future, where I can have my own model fine-tuned to what I want and have it "frozen" with all the dependencies baked inside. The base app can also run GGUF files from Hugging Face if the model wasn't pre-packaged as a "llamafile". The coolest thing is I am able to run a 1.1B model on a mid-tier Android phone! (but painfully slow, something like 1.5 tok/s) - with future small capable models it is very promising indeed. Worth keeping an eye on the project. Not as mature as Ollama and Jan AI but getting there (it also uses the OpenAI API format and can be used as a backend).
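Since the comment notes llamafile speaks the OpenAI API format and can serve as a backend, here is a minimal sketch of querying such a local server from Python. The port, base_url, and model name below are assumptions - check your server's startup output for the real values:

# Minimal sketch: query a local OpenAI-compatible server (e.g. a llamafile
# started in server mode). Port 8080 and the model name are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore this field
    messages=[{"role": "user", "content": "In one sentence, why run LLMs locally?"}],
)
print(response.choices[0].message.content)

The same client code works against any backend that exposes the OpenAI chat-completions format.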
@theit-unicorn1873 1 month ago
Very cool, I love to hear about this kind of stuff. Thank you! Any other stand-alone solutions outside of the LLM frontends that you like for this type of project?
@ngbizvn1300 1 month ago
@theit-unicorn1873 Yes, ultimately once we have our own private LLM on our machine, especially on our phone, it can be used in the background to process our applications privately, e.g. RAG with our medical history data; and when the models are more capable, to analyse our own custom data thrown at them. All without leaking to big tech. I am very concerned about big tech's closed-source models harvesting our data, especially our domain-specific data. There are tons of applications possible once we can fine-tune or use them with agents. Exciting times ahead for self-sovereign AI, especially running on CPU-only systems (which I am a big supporter of). Btw, I do have a GPU, but inference on CPU is so much more flexible, especially when running larger models, e.g. 70B parameters (my dual-Xeon has 256 GB ECC RAM).
@AjaySingh-228 1 month ago
Looks impressive... I installed it just now.
@theit-unicorn1873 1 month ago
Nice! Let us know how it goes.
@AjaySingh-228 1 month ago
@theit-unicorn1873 I don't know how to add the ChatGPT API to it.
@MrBoboka12 1 month ago
You forgot to mention how those models can give uncensored answers.
@fernleaf07 1 month ago
Will you be looking at TensorFlow?
@theit-unicorn1873 1 month ago
Perhaps down the road we will. I'll be honest, I only recently looked at TensorFlow, so I'll need to familiarize myself before trying to do a video on it. Any tips for a noob? Thanks!
@theunismulder7119 1 month ago
Thanks, love the step-by-step process so far for the AI series!!! [from the 3rd video so far.] I have installed it on a PC, with one problem: I am not getting any responses with either of the models the guide says to install. [That is, Llama 8B Q4 & Mistral Instruct 7B Q4.] I went from Assistant to Model and then selected each of them on a PC with 16 GB RAM. The thread was a short "What is the meaning of the name Hildegard" - it failed and came up with the following message: "Apologies, something's amiss! Jan's in beta. Access troubleshooting assistance now." I will try to follow the troubleshooting assistance document: "Step 1: Follow our troubleshooting guide for step-by-step solutions."
@theit-unicorn1873 1 month ago
Have you tried a different question just to be sure it's not the query? I doubt it is, but worth a try to rule that out.
@theunismulder7119 1 month ago
@theit-unicorn1873 I have noticed that the model does start up after I type in the same query as you typed. Still no response. I did notice that when I go to the settings at the bottom left of the screen, both models are showing "Inactive". I tried starting them there as well - still no response for the same query.
@theit-unicorn1873 1 month ago
@theunismulder7119 Any logs?
@benjaminwestlake3502 28 days ago
@theunismulder7119 Getting the same thing. I have the same environment as in the video (Linux Mint VM with 8 cores, 16 GB RAM). I get an error when trying to activate the model (doesn't matter which model I try). The error is "cortex exited with code: null" and "Error: Load model failed with error TypeError: fetch failed."
@theunismulder7119 25 days ago
@benjaminwestlake3502 In my case, I have found that the CPU does not meet the required specs for Jan - no support for AVX2.
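Since missing AVX2 turned out to be the culprit here, a quick way to rule it out before installing is to look at the CPU flags. A minimal sketch in Python - it only reads /proc/cpuinfo and assumes, per this comment and the cortex error above, that Jan's bundled engine needs AVX2:

# Minimal sketch: check whether the CPU advertises AVX2 on Linux by reading
# /proc/cpuinfo (x86 only). Assumes Jan's engine needs AVX2, as reported above.
def has_avx2(cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "avx2" in line.split()
    return False

if __name__ == "__main__":
    if has_avx2():
        print("AVX2 supported")
    else:
        print("No AVX2 - model loading may fail on this CPU")

Running grep avx2 /proc/cpuinfo in a terminal gives the same answer.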
@jeffhughes729 1 month ago
Hmmm. Followed the guide and everything appears to work OK, but I just don't get any responses back from my inputs. Any ideas?
@theit-unicorn1873 1 month ago
When you asked the first question, did you see the model loading? What model are you using?
@jeffhughes729 1 month ago
@theit-unicorn1873 I downloaded the same ones as you did in the video and both give the same results. When I ask it something I can briefly see the model loading.