Prompting Pixels is your go-to source for navigating the dynamic field of diffusion models, led by Shawn, a digital art and AI expert with over 5 years of experience. He offers in-depth insights into creating breathtaking AI art, with tutorials for beginners and advanced tips for seasoned artists. Shawn's goal is to simplify AI art, making it approachable and motivating for newcomers and professionals alike.
Join us to:
- Participate in our active Discord community for immediate advice and motivation.
Prompting Pixels is all about empowering through learning, creating a space where innovation and creativity flourish. Discover the vast potential of AI art with us.
Hey, sorry to bother you, but I've noticed ComfyUI takes WAY too long on my machine with image input, although it runs fine on text alone (running a 4060 8 GB, Ryzen 5 3500X 6-core, 16 GB DDR4 RAM). Image from prompts at 30 steps for 4 outputs takes maybe 10 to 15 secs, while img-from-img at 20 steps takes 930 secs. Is that something I can fix? Is it a common problem?
Thanks for the video! I seem to be having issues installing GGUF. Whenever I install it and refresh, I get errors saying it wasn't installed properly, and whenever I try fixing or reinstalling it, the error persists. Any idea how I might tackle that?
I think expanding more on where to download models, how to download them, what a LoRA is, etc., would be a great continuation for this video. I'll check if there's anything on your channel regarding that! Thanks a lot.
Unfortunately this is nowhere near a real hires fix, but it'll do for now. Thanks for a great explanation. UPD: the ComfyUI docs have a workflow for hires fix with a couple of extra steps.
I got this error, does anyone know how to fix it?

TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
Prompt executed in 100.55 seconds
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
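The dtype error means the model weights are stored in fp8 (Float8_e4m3fn), which Apple's MPS backend can't handle, so you'd likely need an fp16/bf16 version of those weights on a Mac. For the tokenizers warning, at least, the message itself gives the fix: set the environment variable before the library is imported. A minimal sketch:

```python
import os

# Must be set BEFORE anything imports huggingface/tokenizers;
# set after the fact, the fork warning can still fire.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```

Note this only silences the warning; it doesn't touch the Float8_e4m3fn error.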
I'm used to generating dozens of small images first, selecting the satisfactory ones, using SD's PNG Info to read back the generation settings of the small images, and then using Hires. fix to enlarge them. How do I do that in ComfyUI?
Any idea how to use Flux for outpainting with Fooocus, from an irregular base image size to a fixed canvas edge? I.e., I have a bunch of cut-outs from other generations I've done. I messed up by adding a crazy frame that I now no longer want, so I batch-cut the middle part of the images out, and I want to expand them back to the base canvas size of 512 x 768. But I am stuck! All the outpainting tools only extend toward a certain side of a rectangle, by a fixed number of pixels. Plus, none of the outpainting methods will work with Flux (they work with SDXL, and I have the Fooocus inpainting model, which I think would help...). Any pointers or a video would be really appreciated.
It's funny - I had to step away from ComfyUI for a month or two, then noticed it changed after running a git pull and it threw me off for a second. Haven't dug deeper into it - but perhaps related to this release from a few weeks ago: github.com/comfyanonymous/ComfyUI/releases/tag/v0.1.0
Interesting video. I'm an Automatic1111 user and I'd like to fix up some images of video game characters using LoRAs. My question is: will FaceDetailer respect the character from the LoRA I used, or will it give me a totally different face? That's my doubt, and it's why I haven't let go of Automatic1111; ADetailer helps me a lot with that, but I'd like to do the same in ComfyUI and make more complex images here.
Hi, in A1111 we can tweak settings like Schedule bias, Preservation strength, Transition contrast boost, etc., which preserve the inpainted area, but with the node in the video we have zero control over the soft inpainting. Is there a solution?
Hey, I tried installing ComfyUI_UltimateSDUpscale through the Manager, updating it, manually installing it through Git, and downloading the raw files into the correct folder, but all methods failed. The node shows as missing in Comfy and the installation fails. Does anyone else have this problem? Maybe it's related to a recent ComfyUI update or something? Thanks.
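If you retry the manual Git route, one common gotcha (assuming this is the usual ssitu/ComfyUI_UltimateSDUpscale repo): that pack pulls part of its code in as a git submodule, so a plain clone or a raw-file download leaves it incomplete and ComfyUI reports the node as missing. A sketch of the recursive clone, under that assumption:

```python
import subprocess  # used only in the commented-out run call below

# Repo URL assumed from the node pack's name. The key detail is
# --recursive, which also fetches the submodule the pack depends on;
# without it the clone is incomplete and the node loads as "missing".
cmd = [
    "git", "clone", "--recursive",
    "https://github.com/ssitu/ComfyUI_UltimateSDUpscale",
]

# Run from inside ComfyUI/custom_nodes, then restart ComfyUI:
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

If the Manager install keeps failing too, checking the ComfyUI console log for the actual import error is usually the fastest way to see what's missing.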
TY for the quick and straightforward tutorial. I just want to ask: the node says Face Detailer, but will it still work if I change the Ultralytics model to hand, person, or whatever else other than face? TIA
I recently saw a video about a product photographer who implemented ComfyUI in his workflow. He basically takes a photo tethered to his PC, then ComfyUI automatically takes the photo, removes the background, and adds all the details for a "finished" image. I found this mind-blowing. Do you have any advice on where I can start with AI (in this regard) so I can eventually give ComfyUI a try? Your advice and time are very much appreciated.
How do I remove sunglasses from a face and swap in a face from a sunglasses-free picture, showing the eyes? ReActor face swap isn't working properly on images where the subject is wearing sunglasses.
Hey, thank you for this tutorial. However, I'm a complete beginner, and from the third minute on I really don't know what you're talking about or what you're doing. Everything was so clear to me before, but later you do things I completely can't follow. Ehh.
Err, given how bad the reception to SD3 was, I think that needs to be held off until Stability fixes it (stability.ai/news/license-update). Instead, Flux, which was recently released, seems to be the much better successor, as it comes from many folks on the original Stable Diffusion team (github.com/black-forest-labs/flux).
Any way to use this with Flux? My workflow doesn't use KSampler but rather SamplerCustomAdvanced. I tried to route things based on the node names, but it seems to just ignore my input image and simply use the text prompt.
I've been away for a bit but just started taking a look at Flux this evening - super cool stuff! Still need to learn the basics before I'm ready to share anything here on the channel, but I suspect some new videos in the next couple of weeks or so. In the meantime, here's a thread that uses the same `Sampler Custom Advanced` node in an img2img flow that might help: www.reddit.com/r/StableDiffusion/comments/1eigdbk/img_2_img_with_flux/