
Perfect Prompts Automatically 

ImpactFrames · 4.6K subscribers · 5K views · Published 3 Oct 2024

Comments: 54
@pseudoAkk · 6 months ago
An incredible job. Don't worry about the likes, keep working wonders) There are few smart people in the world who can grasp your context... but we are all with you))
@impactframes · 6 months ago
Thank you for your words of encouragement, I will keep improving it. Thanks.
@MarceloPlaza · 3 months ago
Thanks for this integration, it works great.
@impactframes · 2 months ago
Thank you, sorry for the late reply.
@alm7traf · 6 months ago
Hello, when I load the workflow file through the upload feature, a message appears saying: "When loading the graph, the following node types were not found: Batch Load Images". When I click Queue Prompt, another message comes up: "SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)". How can this be solved? Thank you.
@impactframes · 6 months ago
Hi, if you have ComfyUI Manager, install the missing nodes. Those extra nodes are github.com/Kosinkadink/ComfyUI-VideoHelperSuite and github.com/bash-j/mikey_nodes; you can get either or both, they are used to load batch images from a folder. The JSON error you are getting I haven't seen before, but you can try saving in CSV or TXT. I haven't used any JSON, so I don't know why you are getting that error.
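A quick way to narrow down an error like the one above is to check whether the saved workflow or preset file is actually well-formed JSON. This is a generic sketch, not part of IF_AI_tools, and the file path is only a placeholder:

```python
import json

# Hypothetical path; point this at the workflow or preset file that fails to load.
path = "my_workflow.json"

try:
    with open(path, encoding="utf-8") as f:
        json.load(f)  # raises JSONDecodeError if the file is not well-formed JSON
    print(f"{path} parses as valid JSON.")
except json.JSONDecodeError as err:
    # "position 4" style errors usually mean stray text before or after the JSON body.
    print(f"{path} is not valid JSON: {err}")
except OSError as err:
    print(f"Could not read {path}: {err}")
```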
@sarpsomer · 6 months ago
Neat Tutorial!
@impactframes · 6 months ago
Thank you :D
@EH21UTB · 5 months ago
Super cool, thank you for these nodes. I got it working in ComfyUI with my OpenAI key, but it can't find my Ollama models. Most of my models are in LM Studio - I guess they are all in different locations on my computer (Windows 11). I went to the Ollama GitHub page, which suggested environment variables - but don't they mean paths? Can I set extra paths somewhere in ComfyUI for this?
@impactframes · 5 months ago
Thanks. I haven't got around to installing LM Studio yet, but another user told me they just needed to change the Ollama port on the node to make it work. I guess LM Studio runs Ollama in the background, so it will find all the models automatically. I think the port is 1234.
@EH21UTB · 5 months ago
@impactframes Thank you, I'll try that!
@EH21UTB · 5 months ago
@impactframes I have been working on that. Starting the LM Studio server with a model and pointing the IF nodes to the right server port doesn't work. The docs for LM Studio aren't fleshed out, but from what I've read so far they use the same protocol as OpenAI, just with a different address. I don't remember for sure, but I think it's not possible to set the server address when the node is set to OpenAI - perhaps it would be easy to make that change so one could?
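For reference, LM Studio's local server (default port 1234) speaks the OpenAI chat-completions protocol, so any client that lets you override the base URL can talk to it. A minimal sketch with the openai Python package, assuming the server is running locally with a model loaded; the model name and prompts are placeholders, and this is independent of how the IF nodes themselves connect:

```python
from openai import OpenAI

# LM Studio ignores the API key, but the client still requires some value.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model is loaded
    messages=[
        {"role": "system", "content": "You turn short ideas into detailed image prompts."},
        {"role": "user", "content": "a misty forest at dawn"},
    ],
)
print(response.choices[0].message.content)
```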
@xdevx9623 · 6 months ago
You don't know how much this helped me, THANKS A LOT!! Can you also make a video on AI video generation please (text to video)?
@impactframes · 6 months ago
Thank you, yes, I am working on that. I wish I could dedicate my time exclusively to this to get there faster, but it is coming eventually.
@NotThatOlivia · 6 months ago
very nice - going to add this to my workflow ASAP!!! GJ
@impactframes · 6 months ago
Thank you 🙂 glad you like it
@Kingphotosonline · 6 months ago
Very interested in this, however, at around the 1:30 mark, I was distracted by the avatar's... motions.
@impactframes · 6 months ago
Sorry, I make the video as I work on the computer, and the hands get occluded, so the tracking is lost and they glitch. I am going to make the videos without body tracking from now on. Thanks.
@Kingphotosonline · 6 months ago
@impactframes Oh, it's no big deal. I just thought it was hilarious.
@1videolar · 4 months ago
I get this error continuously and the workflow doesn't open: "Preset text 'N:background' not found. Please fix this and queue again."
@impactframes · 4 months ago
Were you editing the presets? It seems like a preset is missing in one of your files; maybe re-download them.
@sushicommander · 6 months ago
I'm building a similar tool but with the diffusers and transformers libraries. I've been testing Ollama as well. I'm curious what your system prompt is in the Modelfile (Ollama)? Do you use one-shot? Two-shot? Good job on the release, it's genuinely cool.
@impactframes · 6 months ago
Thank you, there is no pre-prompt in the Modelfile; I am passing a system prompt to the model as a system message. That way you can use general models, and depending on their reasoning capabilities you get different results. You can read the system message in the code; there is another one for the LLaVA models, since that function is a little bit different. Thanks.
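To illustrate passing the system prompt per request rather than baking it into a Modelfile, here is a minimal sketch against Ollama's /api/chat endpoint. The model name and the system text are placeholders, not the actual prompt used by the IF nodes:

```python
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default local API

payload = {
    "model": "adrienbrault/nous-hermes2pro:Q5_K_S",  # any model you have pulled
    "stream": False,
    "messages": [
        # The system message takes the place of a Modelfile pre-prompt.
        {"role": "system", "content": "You expand short ideas into detailed image prompts."},
        {"role": "user", "content": "a cat in a spacesuit"},
    ],
}

reply = requests.post(OLLAMA_CHAT_URL, json=payload, timeout=300).json()
print(reply["message"]["content"])
```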
@1ASinyagin · 6 months ago
Great work!!! Where should the model files be stored?
@impactframes · 6 months ago
Around the 7:30 mark I show how to get the models. They get installed as SHA-256 blobs at /usr/share/ollama/.ollama/models on Linux and at C:\Users\username\.ollama\models\blobs on Windows.
@1ASinyagin · 6 months ago
@impactframes Thank you, I copied it to the path you specified, but the node is still not detected.
@impactframes · 6 months ago
@1ASinyagin
1. Install Ollama.
2. Open a terminal and type: ollama run adrienbrault/nous-hermes2pro:Q5_K_S. That will install the model, and then you can ask it any question.
3. Go to your ComfyUI custom_nodes folder, type CMD in the address bar to open a command prompt terminal, and type: git clone github.com/if-ai/ComfyUI-IF_AI_tools.git. That will install the custom node.
Now you can start ComfyUI, load the custom workflow from the custom_nodes\ComfyUI-IF_AI_tools\workflows folder, and run the queue to generate an image. The folder I gave you before is just where Ollama stores your LLM models.
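If the node still cannot see any models after those steps, one quick sanity check (a generic sketch, independent of the custom node) is to ask the Ollama server directly which models it has registered instead of inspecting the blob folder:

```python
import requests

# Ollama's default local API listens on port 11434; /api/tags lists installed models.
tags = requests.get("http://localhost:11434/api/tags", timeout=10).json()

for model in tags.get("models", []):
    print(model["name"])
```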
@Allan2302 · 3 months ago
Thanks for making ComfyUI better, these nodes are a real game changer.
@impactframes · 3 months ago
Thank you so much, this is the type of comment I love to see.
@Hakim3ii · 5 months ago
Under Windows I could not make it work; I get a CUDA error and the dev didn't fix it (issue 3683). Is it possible to use another local LLM front end?
@impactframes · 5 months ago
Kobold.cpp works with the IFchat node.
@Hakim3ii · 5 months ago
@impactframes I switched to Linux and it's working.
@impactframes · 5 months ago
@Hakim3ii Nice.
@alm7traf · 6 months ago
Hello, the problem has been solved, thank you, but I faced another problem. When asked to create an image, this appears on the command screen: "Error: ANTHROPIC_API_KEY is required" and "Error: OPENAI_API_KEY is required". Where do I get the API key, and how do I enter it once I have it? Can you explain it to me? Thanks again.
@impactframes · 5 months ago
I think I fixed that in the latest update, but if you want to use OpenAI you will have to enter the key.
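For context, OpenAI keys are created at platform.openai.com and Anthropic keys at console.anthropic.com. How the IF_AI_tools nodes pick the keys up may differ between versions; judging only by the error messages above, they appear to expect environment variables, which is an assumption. A minimal sketch of checking and setting them for the current Python process (the key values are placeholders):

```python
import os

# Placeholders only; paste real keys, or better, set these in your shell or
# system environment before launching ComfyUI so the running process can see them.
os.environ.setdefault("OPENAI_API_KEY", "sk-your-openai-key")
os.environ.setdefault("ANTHROPIC_API_KEY", "your-anthropic-key")

for name in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    print(name, "is set" if os.environ.get(name) else "is missing")
```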
@Douchebagus · 5 months ago
This is amazing, exactly what I needed! Cheers man.
@impactframes · 5 months ago
Thank you for leaving a comment 🙂
@gimperita3035 · 6 months ago
Having a lot of fun with your nodes. Thank you!!
@impactframes · 6 months ago
Thank you so much. Also, I made a new update and you can now optionally use the Anthropic and OpenAI APIs. There is also a new display text node 😊
@aliyilmaz852 · 6 months ago
Great work! Just curious, are you coding all this stuff alone, like an indie developer?
@impactframes · 6 months ago
Yes, after my full-time job, but AI helps a lot. If I get stuck on something, it usually takes less time to find the solution; it is not as hard as it used to be.
@aliyilmaz852 · 6 months ago
@impactframes Thanks for the reply. If it is not a burden, can you suggest where to start to get into diffusion? I mean, I want to be capable of coding something useful as an extension.
@impactframes · 6 months ago
@aliyilmaz852 The best start would be learning about Stable Diffusion with course.fast.ai Practical Deep Learning for Coders, then doing some small Python projects once you know the basics. Get ChatGPT, Claude, or the free Mistral Le Chat to help you along the way.
@aliyilmaz852 · 6 months ago
@impactframes Thanks a lot! Hope I will be able to understand what you are doing, at least a little :)
@97BuckeyeGuy · 6 months ago
How much VRAM do these LLMs require? How do you run them at the same time as running ComfyUI? Do you need a 24GB GPU in order to run them both at the same time?
@impactframes · 6 months ago
It depends on the models you run. Ollama also uses the CPU and loads part of the model into RAM. Around the 8-minute mark I talk about the model sizes. If you select quantized models, like 2-bit, they are less accurate but produce faster outputs and take less VRAM and RAM.
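As a rough rule of thumb for the VRAM question, a model's weights take about parameter count × bits per weight / 8 bytes, plus overhead for the KV cache, the runtime, and whatever ComfyUI itself is holding. A back-of-the-envelope sketch; the numbers are illustrative estimates, not measurements:

```python
def approx_weight_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough weight-only estimate; real usage adds KV cache and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7B-parameter model at different quantization levels (weights only):
for bits in (16, 8, 5, 4, 2):
    print(f"7B @ {bits}-bit ≈ {approx_weight_size_gb(7, bits):.1f} GB")
```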
@SeanieinLombok · 4 months ago
This is very good.
@impactframes · 4 months ago
Thank you so much, I am striving to make it even better 🙂
@Ziov1 · 6 months ago
Can it be used to analyse images and create a prompt, say for redoing images in batch, instead of having to write a prompt for each one that's generated?
@impactframes · 6 months ago
Not yet; for now images are handled individually. I will add loading from a batch tomorrow or Sunday.