An incredible job. Don't worry about the likes, keep working wonders. There are few smart people in the world who can perceive your context... but we are all with you!
Hello, when I load the workflow file through the upload feature in the program, a message appears saying: "When loading the graph, the following node types were not found: Batch Load Images". Then, when I click Queue Prompt, another message appears: "SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)". How can this be solved? Thank you.
Hi, if you have ComfyUI Manager, install the missing nodes. Those extra nodes are github.com/Kosinkadink/ComfyUI-VideoHelperSuite and github.com/bash-j/mikey_nodes; you can get either or both, and they are used to load batch images from a folder. As for the JSON error, I haven't seen it before, but you could try saving in CSV or TXT instead; I haven't used JSON, so I don't know why you are getting that error.
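For reference, that class of JSON error usually means there is extra text before or after a single valid JSON value, e.g. a truncated or doubled-up workflow file. A minimal Python sketch of the same failure mode (Python words the message differently from the browser, but the cause is the same):

```python
import json

# A workflow .json must contain exactly one JSON value; stray trailing
# text (a truncated save, two files pasted together) triggers this error.
good = '{"nodes": []}'
bad = '{"nodes": []} x'  # stray text after the closing brace

json.loads(good)  # parses fine
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print(e)  # Python reports this as "Extra data: ..."
```

Opening the workflow file in a text editor and checking that it starts with `{` and ends with `}` (with nothing after) is a quick way to spot this.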
Super cool, thank you for these nodes. I got it working in ComfyUI with my OpenAI key, but it can't find my Ollama and models. Most of my models are in LM Studio, so I guess they are all in different locations on my computer (Windows 11). I went to the Ollama GitHub page, which suggested environment variables, but don't they mean paths? Can I set extra paths somewhere in ComfyUI for this?
Thanks. I haven't gotten around to installing LM Studio yet, but another user told me you just need to change the Ollama port on the node to make it work. I guess LM Studio runs its own server in the background, so it will find all the models automatically. I think the port is 1234.
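For anyone trying this: LM Studio serves an OpenAI-compatible API, by default on localhost port 1234, so the idea is to point the node's base URL there instead of at api.openai.com. A minimal sketch of what such a request would look like; the function name and the `"local-model"` default are my own placeholders, not part of the IF nodes:

```python
def lmstudio_chat_request(prompt, system=None, model="local-model"):
    """Build an OpenAI-style chat-completions request aimed at LM Studio's
    local server (assumption: default port 1234, unchanged settings)."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {
        "url": "http://localhost:1234/v1/chat/completions",
        "json": {"model": model, "messages": messages},
    }

req = lmstudio_chat_request("Describe a sunset over the sea",
                            system="You are an image-prompt writer.")
# With the server running: requests.post(req["url"], json=req["json"])
```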
@@impactframes I have been working on that. Starting the LM Studio server with a model and pointing the IF nodes at the right server port doesn't work. The docs for LM Studio aren't fully fleshed out, but from what I've read so far they use the same protocol as OpenAI, just with a different address. I don't remember for sure, but I think it's not possible to set the server address when the node is set to OpenAI; perhaps it would be easy to change that so one could?
Sorry, I make the video while I work at the computer, and the hands get occluded, so the tracker loses them and they glitch. I am going to make the videos without body tracking from now on. Thanks.
I'm building a similar tool but with diffusers and the transformers library. I've been testing Ollama as well. I'm curious, what is your system prompt in the Modelfile (Ollama)? Do you use one-shot? Two-shot? Good job on the release, it's genuinely cool.
Thank you. There is no pre-prompt in the Modelfile; I am passing the system prompt to the model as a system message. That way you can use general models, and depending on their reasoning capabilities you get different results. You can read the system message in the code; there is a separate one for the LLaVA models, since that function works a little differently. Thanks.
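The pattern described above, sending the system prompt per-request instead of baking a SYSTEM line into the Modelfile, looks roughly like this against Ollama's `/api/chat` endpoint. The prompt text below is an invented placeholder, not the actual one from the repo:

```python
def ollama_chat_body(user_prompt, system_prompt, model="nous-hermes2pro"):
    """Supply the system prompt per-request so any general model can be
    used, rather than hard-coding it into the model's Modelfile."""
    return {
        "model": model,
        "stream": False,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    }

body = ollama_chat_body(
    "a cat on a windowsill",
    "You expand short ideas into detailed image prompts.",  # placeholder
)
# POST this to http://localhost:11434/api/chat on a running Ollama
```

Swapping the system message per request is what lets the same node drive reasoning models and vision (LLaVA) models with different instructions.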
Around the 7:30 minute mark I show how to get the models. They get installed as sha256 blobs at /usr/share/ollama/.ollama/models on Linux and at C:\Users\username\.ollama\models\blobs on Windows.
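A small sketch to locate and list those blobs from Python, assuming the default install paths mentioned above (some Linux installs keep them under the user's home `.ollama` instead):

```python
from pathlib import Path
import platform

def ollama_blob_dir():
    """Default Ollama blob store (assumes a standard install)."""
    if platform.system() == "Windows":
        return Path.home() / ".ollama" / "models" / "blobs"
    return Path("/usr/share/ollama/.ollama/models/blobs")

blob_dir = ollama_blob_dir()
if blob_dir.is_dir():
    for blob in sorted(blob_dir.iterdir()):
        print(blob.name)  # files are named sha256-<digest>
else:
    print(f"No blob store found at {blob_dir}")
```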
@@1ASinyagin
1. Install Ollama.
2. Go into a terminal and type: ollama run adrienbrault/nous-hermes2pro:Q5_K_S — that will install the model, and then you can ask it any question.
3. Go to your ComfyUI custom_nodes folder, type CMD in the address bar (it will open a command prompt terminal), and type: git clone https://github.com/if-ai/ComfyUI-IF_AI_tools.git — that will install the custom node.
Now you can start ComfyUI and load the custom workflow that is in the custom_nodes\ComfyUI-IF_AI_tools\workflows folder, and you can run the queue to generate an image. The folder I gave you before is just where Ollama stores your LLM models.
Hello, the problem has been solved, thank you, but I ran into another problem when asking it to create an image. This appears on the command screen: "Error: ANTHROPIC_API_KEY is required" and "Error: OPENAI_API_KEY is required". Where do I get the API key, and how do I enter it once I have it? Can you explain it to me? Thanks again.
Yes, after my full-time job, but AI helps a lot. If I get stuck on something, it usually takes less time to find the solution; it is not as hard as it used to be.
Thanks for the reply. If it is not a burden, can you suggest where to start to get into diffusion? I mean, I want to be capable of coding something useful as an extension. @@impactframes
@@aliyilmaz852 The best start would be learning about stable diffusion with course.fast.ai's Practical Deep Learning for Coders, then doing some small Python projects once you know the basics. Get ChatGPT, Claude, or the free Mistral Le Chat to help you along the way.
How much VRAM do these LLMs require? How do you run them at the same time as running ComfyUI? Do you need a 24GB GPU in order to run them both at the same time?
It depends on the models you run. Ollama can use both the GPU and CPU; it loads part of the model into RAM. Around the 8 minute mark I talk about the model sizes. If you select quantized models like 2-bit, they are less accurate but produce faster outputs and take less VRAM and RAM.
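A back-of-the-envelope way to see why quantization matters for VRAM. This is rule-of-thumb arithmetic only; real usage also depends on context length, the runtime, and how layers are split between GPU and CPU:

```python
def approx_weight_memory_gb(params_billion, bits_per_weight, overhead=1.2):
    """Weights take params * bits/8 bytes; add ~20% for KV cache and
    buffers (a rough rule of thumb, not an exact figure)."""
    gb = params_billion * bits_per_weight / 8 * overhead
    return round(gb, 1)

for bits in (16, 8, 4, 2):
    print(f"7B model @ {bits}-bit: ~{approx_weight_memory_gb(7, bits)} GB")
# 16-bit ~16.8 GB, 8-bit ~8.4 GB, 4-bit ~4.2 GB, 2-bit ~2.1 GB
```

This is why a 4-bit quant of a 7B model fits alongside a Stable Diffusion checkpoint on a 12GB card, while the fp16 original alone would saturate a 24GB one.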