I like this kind of discussion and your videos where you show establishing lighting and composition using noise gradients. I've done a lot of photography and have a good sense of composing using cropping and camera height, but I just make do with whatever lighting there is. Working with SD has actually made me more aware of this through what it does automatically. Especially if you say the person is small in a large scene, it will use lighting and line to draw attention to the subject, and this can even be a bit too much. And, well, there does have to be light for you to see anything. By default SD may tend to be indecisive, but it does create highlights and shadows, and also uses sharpness and detail to lead the eye. Do you have any ways of guiding the sharpness/detail in Comfy? I guess you could use masks. I noticed in a video on face detailers that one way you can go wrong is by sharpening up a peripheral face too much. That gives a very pasted-on look.
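The masked-sharpening idea mentioned above can be sketched in plain Python with Pillow and NumPy. This is only an illustration of the concept, not a ComfyUI node; the function name and parameters are my own invention, and it assumes an 8-bit RGB image with a same-sized grayscale mask (white = sharpen here).

```python
import numpy as np
from PIL import Image, ImageFilter

def masked_sharpen(image, mask, amount=1.5):
    """Sharpen only where `mask` is white, so added detail draws the eye
    to the subject while peripheral areas (e.g. background faces) stay
    soft. `mask` is a grayscale image the same size as `image`."""
    # Unsharp-mask the whole image once...
    sharp = image.filter(
        ImageFilter.UnsharpMask(radius=2, percent=int(amount * 100)))
    # ...then blend sharpened and original per-pixel using the mask.
    m = np.asarray(mask, dtype=np.float32)[..., None] / 255.0
    base = np.asarray(image, dtype=np.float32)
    out = base * (1.0 - m) + np.asarray(sharp, dtype=np.float32) * m
    return Image.fromarray(out.astype(np.uint8))
```

A soft (blurred) mask avoids a visible seam between the sharpened subject and its surroundings, which is exactly the "pasted" look described above.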
One's eye needs to be trained to form a visual, conceptual impression the way photographers or painters do, thinking about e.g. composition, harmony, the rule of thirds, the Fibonacci spiral. Most users don't get "educated" using only AI, because they aren't familiar with all the concepts you mentioned. They are not getting corrected by the AI, and image evaluations or discussions are not going to happen. There are already some positional helper tools available in ComfyUI, but a lot of users are still struggling with prompt issues. Everybody is happy if the main idea gets produced by the AI. The next issue is correcting imperfections, and lastly upscaling. Unfortunately, composition seems to be the very last topic most users think about. I've seen portfolios where users have produced dozens of rows of female portrait images, all looking the same in terms of composition. Nevertheless, thanks for your work, Rob. I always appreciate listening to your ideas and concepts. Keep up the good work!
Wonderful work as always, love your explanations. Do you happen to have the original upscaling workflow you did? The convoluted one. I'm curious about how you re-merged the tiles. I have a few use cases where I want to merge my own collage from tiles. I figured out how to break out the tiles into separate images using batch image nodes... but not how to recombine them.
Here's the original Node and Noodle version. It has the recombine on the right. The numbers are driven by Math nodes but you should be able to work out how the compositing works. Essentially pad the image for outpainting and drop a tile into the image extension. drive.google.com/file/d/1ynND0H9hFYirm3BHc0xyqxNhqZJxRKz6/view?usp=sharing
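For anyone who can't open the linked workflow, the pad-and-composite idea described above can be sketched in plain Python with Pillow. This is not Rob's actual node graph (his uses outpainting-style padding driven by Math nodes); it's just a minimal illustration of pasting equally sized tiles back onto one canvas, with the function name, grid assumption, and `overlap` parameter being my own.

```python
from PIL import Image

def recombine_tiles(tiles, cols, tile_w, tile_h, overlap=0):
    """Paste equally sized tiles back into one image, row-major order.
    `overlap` is how many pixels adjacent tiles share (0 = simple grid)."""
    rows = (len(tiles) + cols - 1) // cols
    step_x, step_y = tile_w - overlap, tile_h - overlap
    # Canvas is sized so the last row/column of tiles fits exactly.
    canvas = Image.new("RGB", (step_x * (cols - 1) + tile_w,
                               step_y * (rows - 1) + tile_h))
    for i, tile in enumerate(tiles):
        r, c = divmod(i, cols)
        # Each paste "extends" the composite, like dropping a tile
        # into the padded image extension.
        canvas.paste(tile, (c * step_x, r * step_y))
    return canvas
```

With a non-zero overlap you would normally also feather each tile's edges (an alpha mask passed to `paste`) so the seams blend rather than hard-cut.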
Thank you, Rob, for letting us ride along on your journey through the various PS/ComfyUI puzzle pieces and watch you put it all together into a final image. Very well presented, with an amazing result!
Hey, thanks for the vids, you cover a lot of good niche stuff. If you need a video idea: I have a really hard time with resolutions when I need to crop an odd-sized picture for inpainting, or anything really. Or do you just have any tips on how to size everything right all the way through a workflow? I get all twisted up about halfway through if I'm doing anything complicated involving inpainting, resampling, or upscaling. edit: and I have bad ADHD so you get about 5x views from me to take it all in 😆
@@robadams2451 The world is ready:) Thank you for all the fascinating videos; the insight, clarity and detail with which you cover any topic is invaluable.
Hello! New subscriber here. 🙂 This was actually wonderful! I'm trying to learn ComfyUI and it's so complicated, but you make it look fun. I appreciate the effort you put into these awesome videos! I'm following for more.
As I explain in the video, it is several modular workflows, not a single one; they are reconfigured during the process, so there's no one single flow! My other vids have all the individual workflows.
@Rob Adams I'm a bit behind the times, I'm just going through your Alpaca videos :) Thanks for making these. Question: in PS, when you enlarge a detail (e.g. a face) for detailing, you then somehow shrink it back and "snap" it exactly where it belongs. How is this shrinking/fitting done? Is it PS or Alpaca? I haven't used Alpaca, so I don't know.
I really like your approach of adjusting a near-final image before rendering (action vs. reaction/relighting)! Your videos are always very interesting and inspiring, with great-looking images; SD even got inspired and put a driver into your old-timer. :-) I'm usually faced with empty cars. Keep up the good work, Rob!
Heh, it's that default top left thing, I don't think it would be impossible to have negative values... or maybe it would offend Mr Python. I'd also like to be able to flip the arrows so left is left and right is right... I'm always clicking the wrong one!
I hope you will not become discouraged that more people aren't viewing these, and that you keep doing what you enjoy. The images you create are always worth stopping to enjoy, and the workflows just as much. My head is spinning with so many possible A.I. projects to become lost in (currently I'm tangled up in audio) that I haven't gotten around to implementing some of your techniques as I intend to just yet. But I thought I'd let you know your work is inspiring to me.
You set the image size for the generation in the New Light group: Constant NumberWidth and Constant NumberHeight. The background size in the depth map is set separately, but it doesn't affect the final output size, only the relative size of the subject. I.e. for a big figure, decrease the background size in the Depth group, and vice versa.
Thank you for always providing great lectures. I have one question: I would like to increase the background size in the output. Which numbers should I change?
I'm curious about the blurry base image. I assume you have done some pre-processing on it to noise it up? And if so, what is the extent to which that blur/noise is necessary to generate the batched ones? Is a non noisy image with a higher denoise equivalent or does it add something that you don't get that way?
I want the input image to decide mood and to some degree composition but I also want to add uncertainty, which gives the varied results. The noise helps the image change at lower denoise levels which gives more options. I make the image in Photoshop but there are nodes to do the same thing in Comfy.
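The noising-up step described above (done in Photoshop, though equivalent Comfy nodes exist) can be sketched as a simple blend of the base image with random noise. This is only an illustrative sketch, not Rob's exact recipe; the function name and the linear-blend formulation are assumptions, and it operates on a float image in [0, 1].

```python
import numpy as np

def noise_up(image, strength=0.25, seed=0):
    """Blend random noise into a float image in [0, 1] so that sampling
    at a lower denoise level still has room to reinterpret the scene.
    strength=0 returns the image unchanged; strength=1 is pure noise."""
    rng = np.random.default_rng(seed)
    noise = rng.random(image.shape)
    return np.clip((1.0 - strength) * image + strength * noise, 0.0, 1.0)
```

The point of the blend is that the input image still decides mood and rough composition, while the injected noise supplies the uncertainty that produces varied batch results at a given denoise setting.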
Rob, you are tackling the most important aspects of an image next to the subject... lighting and composition! Your tutorials are always very helpful, and it's always fun to listen to your explanations and to watch the node paths you've developed. Thank you very much for presenting your knowledge so transparently and streamlined. Btw: I had some problems with your accent, too, but only at the beginning! ;-) Keep up your great work!
Hello Rob. You enlightened us again 😊 Thanks for the workflow, but it would be great if we could also have the images you used. Your upscale technique is, as always, very interesting. 11:02: What do you mean by "kitchen girl"?
The images are your choice, so I don't put them on the drive. Ones similar to what you see in the workflow image will be fine. The prompt was missing a comma: kitchen, girl!
I always enjoy following your workflows when you are putting together all the pieces of a composition puzzle! Smart composition and lighting tutorial. Thank you very much, Rob! Btw, one question: if I want to get the size of a mask, is there a node for that, or do I always have to convert the mask to an image?
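For what it's worth, a ComfyUI mask is just a 2D tensor, so its dimensions (and the bounding box of its active region) can be read directly without any image conversion. The sketch below is a hypothetical helper in NumPy terms, not an existing node; the name, threshold, and return shape are my own choices.

```python
import numpy as np

def mask_size(mask, threshold=0.5):
    """Return (mask_w, mask_h, bbox_w, bbox_h) for a 2D mask array.
    bbox_* measure the tight bounding box of pixels above `threshold`,
    which is usually what you want when fitting a detail back in place."""
    ys, xs = np.nonzero(mask > threshold)
    if len(xs) == 0:                       # empty mask: no active region
        return mask.shape[1], mask.shape[0], 0, 0
    bbox_w = int(xs.max() - xs.min() + 1)
    bbox_h = int(ys.max() - ys.min() + 1)
    return mask.shape[1], mask.shape[0], bbox_w, bbox_h
```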
Check out Marigold depth estimation. It takes longer to run, but from what I have experienced it is very clean. The downside is that it was heavily trained on 768-sized images, so always rescale to that.
Thanks, Marigold is the best, I agree. Oddly, in this case I'm using the less good one deliberately! I don't want more than the general features to come through, so I want to give the model guidance but not too much. Sometimes I even need to blur the map to make it less defined.
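Both points above (rescale toward 768 for Marigold, then optionally blur the depth map so only broad shapes guide the model) can be sketched with Pillow. This is an illustrative helper, not anyone's actual node setup; the function name, the long-side-to-768 convention, and the default blur radius are assumptions.

```python
from PIL import Image, ImageFilter

def soften_depth_map(depth, target=768, blur_radius=4.0):
    """Rescale a grayscale depth map so its long side is `target` px
    (Marigold was heavily trained around 768), then Gaussian-blur it so
    only the general features come through as guidance, not fine detail."""
    scale = target / max(depth.size)
    new_size = (round(depth.size[0] * scale), round(depth.size[1] * scale))
    resized = depth.resize(new_size, Image.LANCZOS)
    return resized.filter(ImageFilter.GaussianBlur(blur_radius))
```

Setting `blur_radius=0` gives just the rescale, for the cases where you do want the crisp Marigold output.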
Phenomenal approach, thank you for sharing it so painstakingly. Have you thought of incorporating the new PCM loras? These cut down generation times significantly.
@@robadams2451 You should check out a model called gleipnir_v20BF16 in BF16 mode. It's terrific for upscaling/refining. I'm getting great results with it and 4x_NMKD-Siax_200k using your node.
This is really amazing, thank you! I have been struggling to achieve something like this for months; you are the first one I've come across who is achieving amazing results using simple nodes (better than all the others I have seen, in fact). Thank you for sharing this, truly.