Love your videos. It's weird how you do everything in Windows without WSL. It could be a selling point for your videos, maybe add a Windows tag somewhere? Keep at it!
That's a good idea mate, I didn't think about that! I don't really like WSL, to be honest; if I need to do something that specifically requires Linux, I just connect to a Linux VM running on another machine. Thanks for the feedback and the support, mate! :)
Uh oh, what does this mean? Error: Models based on 'LlamaForCausalLM' are not yet supported. More importantly, how does one identify whether a model is this "variation"?
'LlamaForCausalLM' is one of the many architectures out there for LLMs. To identify the architecture of a particular model, look inside its config.json file, which you can find in the 'Files and versions' tab for that model on Hugging Face.
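For anyone who wants to check this programmatically instead of browsing the Hugging Face UI, here's a minimal sketch. The sample config contents below are made up for illustration; in practice you'd point the function at a real config.json downloaded from the model's repo:

```python
import json

def model_architectures(config_path):
    """Read a Hugging Face config.json and return its 'architectures' list."""
    with open(config_path) as f:
        # Most HF configs store the architecture(s) under the
        # "architectures" key, e.g. ["LlamaForCausalLM"] for Llama models.
        return json.load(f).get("architectures", [])

# Tiny demo with a made-up config.json mimicking a Llama-family model:
with open("config.json", "w") as f:
    json.dump({"architectures": ["LlamaForCausalLM"], "model_type": "llama"}, f)

print(model_architectures("config.json"))  # ['LlamaForCausalLM']
```

If the printed list contains 'LlamaForCausalLM', you've found the architecture the error message is complaining about.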
💥 Wow, it's very complex. I wish there were a tool to automatically convert GGUF models to Ollama, or that Ollama could use GGUF directly without all this rocket 🚀 science, man! 😮😮
...and maybe there is! I just don't know of one, hehe :) If you find one, please let me know and I'll make a video about it! :) Thanks for watching, mate!
Write down those commands, then go to Claude/ChatGPT (or, best of all, DeepSeek Coder V2) and ask: "This command is used in Windows cmd; please tell me how to use it in Linux." Simple!
Thank you so much for the information. But could you please tell us how to do this for AWQ? Those models have multiple files in a single folder. Even when I provide the path to the folder where the safetensors files are, I get an error. Also, keep in mind that there may be more than one safetensors file for a single model. And one request: how can we do this without using Conda?
Heya, great video. I followed it perfectly until I tried to run 'ollama create' and got "The term 'ollama' is not recognized as the name ..." etc. I definitely ran 'pip install ollama' according to the steps here. How do I fix this error?
Thanks Felipe, it worked here. But in the final step I had to add a .txt extension to the Modelfile to make it work. If I used just Modelfile like you did, I got this error: Error: open C:\Users\Daniel\Modelfile: The system cannot find the file specified. When I did it with .txt: C:\Users\Daniel>ollama create bartowski_gemma-9b -f .\Modelfile.txt transferring model data 100% Great. Working like a charm.
Great step-by-step walkthrough of the process, thanks! However, in my case I get this error at the stage of creating the file: ollama create dolphin-2.9-llama3-8b -f .\Modelfile The error is: C:\Windows\system32>ollama create dolphin-2.9-llama3-8b -f .\Modelfile transferring model data panic: regexp: Compile(`(?im)^(from)\s+C:\Users\joseg\.cache\huggingface\hub\models--QuantFactory--dolphin-2.9-llama3-8b-GGUF\snapshots\525446eaa510585c590352c0a044c19be032a250\dolphin-2.9-llama3-8b.Q4_K_M.gguf\s*$`): error parsing regexp: invalid escape sequence: `\U` Any idea what might be causing this? Any useful information toward resolving this impasse would be welcome 🙂
@@DigitalMirrorComputing I appreciate it, but I already solved it. It was actually saved as a .txt file, so I did some digging and made sure to remove the extension. If you ever update a video like this, maybe you can include the steps for that, because you kind of breezed over it. Additionally, I ran into another issue where the file path in the Modelfile had to be changed, because it was treating \ as an escape character, so I switched to forward slashes and it was finally able to create the model. :) Thank you for your quick reply, though!
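For reference, the forward-slash fix described above would look something like this in the Modelfile. The path here is just a placeholder, not the actual path from the error above; the point is that Windows paths work in a Modelfile when written with forward slashes, which avoids the `\U` invalid-escape panic:

```
FROM C:/models/example-model.Q4_K_M.gguf
```

With the file saved as plain "Modelfile" (no .txt extension), `ollama create mymodel -f .\Modelfile` should then get past the "transferring model data" step.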