Run Any Hugging Face Model with Ollama in Just Minutes! 

Digital Mirror
Subscribers: 6K · Views: 7K

Published: 29 Sep 2024

Comments: 36
@kennygoespostal · 5 months ago
Love your videos. It's weird how you do everything in Windows without WSL. It could be a selling point for your videos, maybe add a Windows tag somewhere? Keep at it!
@DigitalMirrorComputing · 5 months ago
That's a good idea mate, I didn't think about it! I don't really like WSL, to be honest; if I need to do something that specifically requires Linux, I just connect to a Linux VM running on another machine. Thanks for the feedback and the support, mate! :)
@siferCEO · 4 months ago
Uh oh, what does this mean? Error: Models based on 'LlamaForCausalLM' are not yet supported. More importantly, how does one identify whether a model is this "variation"?
@amrut1872 · 4 months ago
'LlamaForCausalLM' is one of the many architectures out there for LLMs. To identify the architecture of a particular model, look inside the config.json file for that model, which can be found in the 'Files and versions' tab for your model on Hugging Face.
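To make that config.json check concrete, here's a minimal Python sketch. The example config below is abridged and hypothetical; real files on Hugging Face carry many more keys:

```python
import json

def model_architectures(config_text: str) -> list[str]:
    """Return the 'architectures' list from a Hugging Face config.json."""
    config = json.loads(config_text)
    return config.get("architectures", [])

# Abridged, hypothetical config.json content for illustration.
example = '{"model_type": "llama", "architectures": ["LlamaForCausalLM"]}'
print(model_architectures(example))  # ['LlamaForCausalLM']
```

If the list names an architecture Ollama doesn't support, `ollama create` will fail with exactly the error quoted above.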
@mahaltech · 29 days ago
How can I push a model from Hugging Face to the Ollama website?
@wintrover · 18 days ago
that was very helpful. thank you.
@DihelsonMendonca · 2 months ago
💥 Wow, it's very complex. I wish there were a tool to automatically convert GGUF models to Ollama, or that Ollama could use GGUF directly without all this rocket 🚀 science, man! 😮😮
@DigitalMirrorComputing · 2 months ago
...and maybe there is! I just don't know of one, hehe :) If you find one, please let me know and I'll make a video about it! :) Thanks for watching, mate!
@dhaneshdutta · 4 months ago
Can you make the same video but for Linux? I'm a bit confused by some of the steps.
@siddhubhai2508 · 2 months ago
Write the commands down, then go to Claude/ChatGPT (or, best of all, DeepSeek Coder V2) and ask: "This command is used in Windows cmd; please tell me how to use it on Linux." Simple!
@parthwagh3607 · 2 months ago
Thank you so much for the information. But could you please tell us how to do this for AWQ models? They have multiple files in a single folder. Even when I provided the path to the folder where the safetensors files are, I got an error. We also have to consider that there may be more than one safetensors file for a single model. And one request: how to do this without using Conda?
@WreckTangledTV · 26 days ago
I've been searching for hours for a video like this, you are #1, thank you so much!
@AbhijayK · 1 month ago
you deserve way more subscribers my guy! thank you!
@HimanshuGhadigaonkar · 4 months ago
Thank you so much for this, it worked like a charm. I think we should also test with models that are not in GGUF format.
@MG3-l3g · 2 months ago
Hey, great video. I followed it perfectly until I tried to run 'ollama create' and got "The term 'ollama' is not recognized as the name ..." etc. I definitely ran the pip install steps from the video. How do I fix this error?
@parthwagh3607 · 2 months ago
Maybe ollama is not on your PATH environment variable. Find where ollama is installed and open cmd in that location.
@Moraes.S · 1 month ago
Thanks Felipe, it worked here. But in the final step I had to add a .txt extension to the Modelfile for it to work. If I used just Modelfile like you did, I got this error: Error: open C:\Users\Daniel\Modelfile: The system cannot find the file specified. When I did it with .txt: C:\Users\Daniel>ollama create bartowski_gemma-9b -f .\Modelfile.txt transferring model data 100% Great. Working like a charm.
@DigitalMirrorComputing · 1 month ago
@@Moraes.S Thanks mate! Glad it worked!
@cybercdh · 5 months ago
Love it.
@JG-gf8hs · 4 months ago
Very good step-by-step of the process, thanks! However, in my case I get this error at the stage of creating the file: ollama create dolphin-2.9-llama3-8b -f .\Modelfile The error is the following: C:\Windows\system32>ollama create dolphin-2.9-llama3-8b -f .\Modelfile transferring model data panic: regexp: Compile(`(?im)^(from)\s+C:\Users\joseg\.cache\huggingface\hub\models--QuantFactory--dolphin-2.9-llama3-8b-GGUF\snapshots\525446eaa510585c590352c0a044c19be032a250\dolphin-2.9-llama3-8b.Q4_K_M.gguf\s*$`): error parsing regexp: invalid escape sequence: `\U` Any idea what could be the cause? Any useful information for resolving this impasse would be welcome 🙂
@DigitalMirrorComputing · 4 months ago
Try deleting the file in that location and downloading it again. Or the model itself wasn't saved properly as GGUF.
@Hennessyjenkins2 · 3 months ago
Please talk about copyright: is there any potential infringement if one were to create social media content with HF models?
@DigitalMirrorComputing · 3 months ago
That is down to the model! Make sure you check the disclaimers carefully for the models you choose to use! :)
@DH-zt9tw · 2 months ago
It didn't work on a Mac.
@luisEnrique-lj4fq · 1 month ago
thanks, thanks, thanks
@NyxesRealms · 4 months ago
I followed the instructions up until the Modelfile step, and when I run ollama create, it can't find the specified file.
@DigitalMirrorComputing · 4 months ago
Make sure you are in the same directory as the Modelfile! Or use -f followed by the Modelfile's path!
@NyxesRealms · 4 months ago
@@DigitalMirrorComputing I appreciate it, but I already solved it. The file was actually saved as a .txt, so I did some digging and made sure to remove the extension. If you ever update a video like this, maybe you can include the steps for that, because you kind of breezed over it. Additionally, I ran into another issue where the file path in the Modelfile had to be replaced because \ was being treated as an escape, so I switched to forward slashes and it was finally able to create the model. :) Thank you for your quick reply though!
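The two pitfalls in this thread (an accidental .txt extension, and backslashes in the path being read as escapes) can both be sidestepped by writing the Modelfile programmatically. A small Python sketch, with a hypothetical GGUF path you'd replace with your own download location:

```python
from pathlib import Path

# Hypothetical GGUF location for illustration; substitute your real path.
gguf_path = Path.home() / "models" / "example-model.Q4_K_M.gguf"

# Emit forward slashes (as_posix) so backslashes can't be misread as escapes,
# and write the file as plain "Modelfile" with no .txt extension.
modelfile = Path("Modelfile")
modelfile.write_text(f"FROM {gguf_path.as_posix()}\n", encoding="utf-8")

print(modelfile.read_text(encoding="utf-8"))
# Then run: ollama create my-model -f ./Modelfile
```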
@popularcontrol · 4 months ago
@@NyxesRealms Where did you find the Modelfile?
@NyxesRealms · 4 months ago
@@popularcontrol c:/users/myname
@OfficeArcade · 5 months ago
Another great video!
@DigitalMirrorComputing · 5 months ago
thanks dude!! :D
@thevinn · 1 month ago
Why would you create a video instead of a set of written instructions?
@DigitalMirrorComputing · 1 month ago
@@thevinn why would you watch the video instead of reading a set of instructions?