Just watched this as an intro to the behind-the-scenes workings of text-to-image, and I'm so pleased with your coverage. From my own personal experience, the generated images fall way short of the concept, but you give sage advice on how to better guide this idiot (I will not say savant) software, since it doesn't know or understand anything! Thanks again. Will continue the journey - and hopefully pass the midpoint.
Thanks for educating us on ComfyUI. Things are moving extremely fast, but it seems ComfyUI is way more powerful than A1111 in terms of fine-tuning. I liked your advanced trainings about masking a lot. Do you plan to do the same for advanced prompts (wildcards and associated prompting that could be done via other nodes)? Thank you and have a great day!
Yes, I intend to cover syntax and some of the sneaky tricks you can do with prompting, and then I will move on to advanced use of prompt nodes like wildcards and randomizers, as well as BLIP, among other things.
Your content and tutorials have propelled me forward greatly! I appreciate it. Probably the most demystifying prompt explanation and demonstration I have seen. Thanks!
Please make more content. Your stuff is absolute top notch. I have been sitting in front of SD since it came out, literally thousands of hours, and never have I stumbled upon a tutorial that demystifies prompting like you just did. Sadly I have to go to sleep, but tomorrow it's binge time for your videos. Thank you.
Hi there! Awesome video. I have a question: I'm running an XL model, and I wanted to ask what sort of syntax is recommended (and perhaps this applies anywhere). In the video, you included this prompt: "A bearded man wearing armor and holding a flag". Given what you were saying about the way the AI "reads" this input, do you recommend using that style, or instead one that you see fairly often: "bearded man, armor, holding a flag"? Any help is super great!
My advice: important stuff gets a sentence, periphery stuff gets individual words. If individual words aren't cutting it, you might have to put them in more context with added words. Individual words generally carry less weight in SDXL - they still work, but due to the contextual nature of SDXL model tokenization they are applied in an uncontrolled way and can land on any part of the image based on probability. So yeah: sentences for stuff that's important, words for stuff that's not too important but whose influence you still want in the image. Different kinds of words also have different effects: visual themes apply over most of the image, objects apply to areas of the noise that resemble that object, and surface properties apply to the areas where they are most relevant.
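As an illustration of that split (the subject is just a made-up example, not from the video), a positive prompt following this advice might look like:

```text
A bearded man wearing armor and holding a flag, standing on a hill.
fog, banners, sunset
```

The first line is a full sentence for the important subject, so the tokens stay in context; the trailing comma-separated words are peripheral themes that are allowed to land wherever the model finds them most probable.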
@@ferniclestix I really like your ComfyUI tutorials; they are very useful for me. One suggestion: could you zoom in on the prompt box (the text encode node) next time when you are typing? It is not easy to read the contents of the prompt box right now. Thanks a lot. 😀
@@ferniclestix Please forgive my broken English. I mean that when you are recording a tutorial video, sometimes you should drag and zoom in on your recording screen so we can see it more clearly - just like a close-up camera shot.
Great videos! Well explained! Regarding the dark streets... I got some cool images with the prompts below. But as you mention... haha, SD really wants to add those lights! :D

pos: city streets at dark, lights out
neg: text, watermark, glowing shops, street lights, shopping boards, bright sky, light

Thanks for the content!
Yeah, that one really depends on the model. SDXL can do dark streets, but it still struggles. The trick, IMO, is always to pre-weight your generations before sampling using images and colors.
Please forgive my broken English. What I mean is a close-up camera effect - you can see an example here: ru-vid.comFJwcBTL_voc?feature=share&t=2356 This is ZOOM IN (just like a close-up camera); we need it to see things more clearly. I hope you release more prompting tutorials; they are very useful. 😁
If you have ComfyUI Manager, then the option is in the manager menu as a drop-down for preview render. If you don't have ComfyUI Manager, I think I explain how to do it manually in one of my basic tutorials.
@@ferniclestix Hey, thanks for getting back to me. I've had Comfy for a few days, but I was working with Auto1111 until now... I'm just beginning my Comfy explorations. I have that node but did not see the option... it may be a similar node? Looking for it now. Two more questions if you can help: --> Can Comfy also do 2D animations like Deforum? --> Is there a way to reveal the node text when we zoom out? Thanks again, great tutorials!
1- Make sure your ComfyUI and Manager are up to date, then open the manager window from your floating side menu. You should see Update, Install Custom Nodes, etc. On the left-hand side is a bunch of drop-down lists; you want "preview method". Pick something from that list and that should enable previews.
2- AnimateDiff can do animations like Deforum, but it requires a bunch of custom nodes and lots of work to make cool stuff. But yes, it can be set up like Deforum; unfortunately I don't have a good tutorial for this stuff yet.
3- Yes: go to your web browser's options menu and find the 'zoom' function. If you zoom out with this and then zoom in on your workflow, you will find that text is visible from further away, although it can lower your frame rate if your computer isn't great, and it will cause some issues with the double-click menu.