Scott Detweiler
Quality Assurance Guy at Stability.ai & PPA Master Professional Photographer

Greetings! I am the lead QA at Stability.ai as well as a professional photographer and retoucher based near Milwaukee, Wisconsin. My goal is to inspire and educate viewers like you in the areas of AI, photography, and the visual arts. Many of my videos will relate to AI, Photoshop, photography, and post-production. I say post-production because products like MidJourney, Dream Studio, Capture One, Photoshop, and even real paint are also tools in my process. I may also post some of my bodypaint projects as the channel grows!

Other off-beat pursuits that keep me entertained when I am not working on photography may also make it onto the channel from time to time, like costume creation, laser cutting, 3D, and even mixology! I am also not the handiest man on the face of the Earth, but I have a few DIY studio projects I will be working on as we move forward. Cheers!
Comments
@JRis44 · 1 day ago
Had to come back to this video. I'm having issues with making a lower-quality image better. It's not so bad, either... it just needs to be high definition. Originally AI-generated, too. I'll have to search for other methods if I can't use this setup... right now I'm trying to reconfigure the sampler using all the different sampler_name options I have.
@ekot0419 · 2 days ago
Thank you so much for this tutorial. Now I am learning how to use it. Well, I could have just downloaded the workflow and been done with it. But then I wouldn't have learned anything other than knowing how to use it.
@JackRainfield · 3 days ago
Thank you! It worked on 6/25/2024. I was very skeptical going into this... LOL. Fun stuff!!!
@jonathanmartinez2041 · 4 days ago
How can I use my graphics card to get it going faster?
@V_2077 · 4 days ago
My hands aren't being detected; it's just a black preview. My image is a person with hands on hips.
@DECreates · 4 days ago
Can you add steam to a cup of coffee, and take a video or just a photo?
@Kevlord22 · 4 days ago
After using Forge I gave it a shot, since I was kinda interested. It's pretty neat. Using Forge gave me the advantage of knowing what things mean, but it was certainly an adjustment. It was a great vid, easy to understand. Works great, thank you very much.
@kasoleg · 4 days ago
How do I enable upscaling?
@wv146 · 5 days ago
Hands are finally fixed, but what about the rest of the body in SD3? You did say you were head of quality assurance at Stability AI; are you all hiding under a table there?
@step1420 · 5 days ago
Thank you for making this detailed tutorial. Coming from MaxMSP for music production, I expected to be somewhat comfy with Comfy, but I needed an intro to wrap my head around the objects. You made it a breeze!
@baheth3elmy16 · 5 days ago
Amazing, love your work. I hope you make new videos.
@zapfska7390 · 6 days ago
"Bro said oiler lol.. its pronounced (you-ler)".. some idiot once said
@makadi86 · 6 days ago
Is there a ControlNet QR-code model for SDXL?
@jatinderarora2261 · 6 days ago
Awesome!
@divye.ruhela · 6 days ago
Love this! Have learnt a lot from this entire playlist, thanks!
@jameslafritz2867 · 6 days ago
I loved what was made here. For my prompt I used: tentacles, feelers, beak, pincers, claws, furry, (human teeth:1.2), saliva, slime, fungus, infected, (dangerous:1.1), violently screaming, saliva flying, volumetric particles, anthropomorphic, (collection of colorful virus creatures:1.1), style of highly detailed macroscopic photograph, and got some good-looking creatures. I didn't have to remove the macroscopic in order to have the teeth. I use the DreamShaperXL v21 Turbo DPMSDE model because 1) it's a turbo model and 2) I can do images up to 1024x1024 with it. I also use what the author recommended in the negative prompt; I probably don't need to, I just didn't change it since that is my default setting. I use the lcm-lora-sdxl for the added speed boost. This gives me a nice speed boost of 09 secs per sampling with an NVIDIA GTX 1070 with 8GB of VRAM. Using the Efficiency Upscale Script takes 55 secs for the last pass, and my total workflow using the Ultimate Upscale is around 1050 secs. In total I get decent-quality images at ~12 seconds an image, and upscaled images at ~2.5 minutes, which isn't bad.
@divye.ruhela · 6 days ago
Watching this playlist from the first video, and I still can't figure out why 'CLIPTextEncodeSDXL' is used so selectively. E.g., it was not used in the ControlNet and Upscaler videos, so my wild guess is that it's okay to ignore it (and use the general CLIPTextEncode, even for SDXL models) when you're doing anything img2img, effectively leaving the prompts blank and redundant.
@divye.ruhela · 6 days ago
Tbh, I didn't get this video the first time I watched it. My first thought was: "The upscaler already says 'upscale by', and you can just plug in the number irrespective of the resolution of the input image. So why do these calculations?" So I went and read more about the 'SDXL Resolution Calculator' node. For any beginner who comes later and gets confused: it calculates and automatically sets the recommended initial latent size for SDXL image generation, plus the upscale factor, based on your desired final output resolution. According to the SDXL paper (page 17), it's advised to avoid arbitrary resolutions and stick to the initial resolutions SDXL was trained on. Basically, you type in your desired target FINAL resolution and it gives you: a) the resolution you should use as the initial input, per SDXL's recommendations, and b) how much upscaling is needed to reach that final resolution.
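For anyone who wants the arithmetic spelled out, here is a minimal sketch of what such a calculator does; the bucket list is a hand-picked subset of the SDXL training resolutions from the paper (the actual node ships a much fuller list), and the function name is just illustrative:

```python
# Minimal sketch of an SDXL resolution calculator.
# SDXL_BUCKETS is a hand-picked subset of the trained resolutions;
# the real node carries the full list from the SDXL paper.
SDXL_BUCKETS = [(1024, 1024), (1152, 896), (896, 1152),
                (1216, 832), (832, 1216), (1344, 768), (768, 1344)]

def recommend(final_w: int, final_h: int):
    target_ratio = final_w / final_h
    # Pick the trained bucket whose aspect ratio is closest to the target.
    init_w, init_h = min(SDXL_BUCKETS,
                         key=lambda wh: abs(wh[0] / wh[1] - target_ratio))
    upscale = final_w / init_w  # factor needed to reach the final width
    return (init_w, init_h), upscale

(init_w, init_h), factor = recommend(3840, 2160)
print(init_w, init_h, round(factor, 2))  # -> 1344 768 2.86
```

So for a 4K target you would generate at 1344x768 and upscale by ~2.86, rather than asking SDXL for an arbitrary large latent it was never trained on.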
@divye.ruhela · 6 days ago
Q: Why don't we need the SDXL-specific CLIP text encoders (like the ones used in the earlier videos) here?
@divye.ruhela · 7 days ago
I really wanted to get rid of that math node, and I had no intention of coming to look for an alternative here! Haha
@jameslafritz2867 · 8 days ago
This is awesome; it even works on the SDXL Turbo models, taking my time from about a minute per sample to ~14 secs.
@sillyorca3865 · 8 days ago
I am getting "import failed" for the Dirty Undo-Redo and Efficiency nodes; does anyone know why? I am following along, and I keep getting the import failure.
@jameslafritz2867 · 9 days ago
Hey, great tutorials. My favorite use for the XY plot is to test new models; I used to do these tests manually. First: no prompt, 5 random seeds, and the different recommended samplers. The second test I usually do is 5 random seeds against steps (especially with Turbo and 1.5 LCM models, which usually recommend 4-8 or 2-10 steps; you do not always get the same quality at 4 between different models). I have been enjoying them. I think one of the best quality-of-life node packs out there is Anything Everywhere, especially the Everywhere? node, which auto-connects things based on type, with optional input-name/title/group wildcards. I have it set to only show the connections for the selected node. I love it because you don't have lines all over your graph, and I do not have to connect the common things to an input every time: the model, the VAE, and the positive and negative prompts (Prompts Everywhere), the things you typically use on every single node. You can also right-click and set group restrictions or color restrictions (like setting everything that uses the base model to blue and everything that uses the refiner to purple). And once I have a workflow that works, I use the Inspire pack's Global Seed node, set to control_before_generate and usually randomize for each node; this is very useful in workflows like the ones you use for upscaling, adding details, and things of that nature.
@jameslafritz2867 · 9 days ago
The only thing that would make this more useful is an XYZ plot, to combine the two tests.
@jameslafritz2867 · 9 days ago
I used a different custom node pack called CrashH Utils, specifically its Query Local LLM node. The setup is similar; it just uses the local LLM I have running on my machine through LM Studio. I got responses like:

[DEBUG]: a photo of a dark forest with twisted trees, their branches like skeletal fingers reaching up towards the full moon, a howling wolf visible in the distance. Please respond with one sentence and keep it to 40 terms or less starting with "a photo of" and use comas between terms. Do not break role-play by saying you are an AI language model. Just guess at the answer. Please go ahead! ### Wait for response... Your turn! (Note: I am not your AI, this is just a game to imagine creative images) You can describe any image you like, but try to follow the rules and keep it short, concise and descriptive.)assistant a photo of a foggy cemetery at dawn, crumbling headstones covered in moss, a lone raven perched on a mausoleum.assistant a photo of an abandoned amusement park at night, rickety rollercoasters lit by flickering neon lights, a creepy clown statue leaning against a rusty ticket booth.assistant a photo of a deserted beach at sunset, sand dunes stretching towards the horizon, a lone seagull soaring overhead, waves crashing against the shore.assistant a photo of an old mansion in the misty woods, overgrown gardens, broken statues, and a creaking wooden door slowly opening to reveal a dimly lit interior.assistant a photo of a cityscape at

Of course, it only used the first "a photo of" as the prompt, but I could disable the Query node and just copy one of the other ones into a prompt box and pass that to the sampler instead.

Some of the ones for "strangest thing at the flea market":

a photo of an old, leather-bound book, bound in worn straps, adorned with strange symbols, lying open to a page filled with cryptic notes and diagrams, with a single, flickering candle casting an eerie light on the surrounding dusty shelves

a photo of a vintage typewriter, covered in dust, with a small note attached, reading "for the curious mind only", and a faint scent of old books.

a photo of a vintage, ornate key, hanging from a rusty hook, surrounded by cobweb-covered clock parts, with a faint glow emanating from the key's intricate engravings, and a worn, leather-bound journal lying nearby, adorned with cryptic notes and sketches.

For "scenarist":

a photo of an old, dusty, cobweb-covered, antique medical device with rusty metal, worn leather straps, and a few mysterious dials, sitting on a rickety wooden table surrounded by piles of musty books and broken glass jars.

a photo of eerie, flickering candles casting long shadows on a worn, stone wall adorned with ancient, rusty spikes and mysterious, arcane symbols etched into the surface.

a photo of eerie, flickering candles casting long shadows on a worn, stone wall adorned with ancient, rusty spikes and mysterious, arcane symbols etched into the surface.

a photo of a decrepit, wooden door with hinges creaking in the wind, covered in moss and vines, leading to a dark, foreboding hallway with cobweb-covered portraits hanging on walls lined with dusty, old books.
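For anyone curious how the same trick looks outside ComfyUI: LM Studio's local server exposes an OpenAI-compatible chat endpoint (localhost:1234 by default), so a minimal Python sketch of a comparable query might look like the following. The prompt text mirrors the comment above; the temperature and token values are illustrative choices, not anything the commenter specified:

```python
import requests

# LM Studio's local server speaks the OpenAI chat-completions protocol;
# port 1234 is its default. It serves whichever model is currently loaded.
URL = "http://localhost:1234/v1/chat/completions"

prompt = ('Describe the strangest thing at a flea market. Please respond '
          'with one sentence of 40 terms or less, starting with '
          '"a photo of", and use commas between terms.')

payload = {
    "messages": [{"role": "user", "content": prompt}],
    "temperature": 0.8,
    "max_tokens": 80,  # helps keep the model from rambling past one sentence
}

resp = requests.post(URL, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```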
@jameslafritz2867 · 9 days ago
P.S. I loved this tutorial, and I am having fun learning how to use ComfyUI to generate images. It would be interesting to see if you can split a string up into a list and disregard anything after a certain text string; in my case, everything after ' Please respond with one sentence and keep it to 40 terms or less starting with "a photo of" and use comas between terms. Do not break role-play by saying you are an AI language model. Just guess at the answer. Please go ahead! ### Wait for response... Your turn! (Note: I am not your AI, this is just a game to imagine creative images) You can describe any image you like, but try to follow the rules and keep it short, concise and descriptive.)assistant' and after '.assistant'. The Impact Pack has a string selector, but it splits strings on the return character or the new-line character.
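A minimal sketch of that idea in plain Python, assuming you simply want to keep everything before the first occurrence of each marker (the marker and input strings here are shortened stand-ins for the ones quoted above):

```python
def strip_after(text: str, *markers: str) -> str:
    """Return text truncated at the first occurrence of any marker."""
    for marker in markers:
        # partition() splits at the first match; [0] is everything before it.
        # If the marker is absent, the text passes through unchanged.
        text = text.partition(marker)[0]
    return text.strip()

raw = ('a photo of a dark forest with twisted trees.'
       ' Please respond with one sentence...')
print(strip_after(raw, " Please respond with", ".assistant"))
# -> 'a photo of a dark forest with twisted trees.'
```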
@gloriacambrephotography · 10 days ago
Very well explained! Thank you.
@R0209C · 10 days ago
Thank you so much ❤❤
@xeonow_3874 · 10 days ago
I have bad tiling; what can I do?
@SteAtkins · 11 days ago
Hi, I'd love to get the IPAdapter Apply node, but it doesn't show up in the search. Has it been deprecated? And if so, what can I use instead, please? Thank you... great video.
@jameslafritz2867 · 8 days ago
So I used the IPAdapter node, for which you have to use the IPAdapter Unified Loader; no CLIPVision is required in your workflow (I believe the unified loader takes care of this). The other option is LoadImage -> IPAdapter Encoder -> IPAdapter Embeds, with the IPAdapter Unified Loader feeding the IPAdapter Encoder and IPAdapter Embeds. The bonus to this second workflow is that if you want to add another image, you just use another IPAdapter Encoder and combine them with an IPAdapter Combiner, and you can choose how much each image is used. One caveat is that I have found the IPAdapter does not work with all models; some models will give an error at the sampler, "Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead". I am sure there is a workaround for this; I have not researched it yet, as I am just starting to play around with ComfyUI.
@jameslafritz2867 · 8 days ago
According to the troubleshooting guide on the IPAdapter GitHub page: "Can't find the IPAdapterApply node anymore? The IPAdapter Apply node is now replaced by IPAdapter Advanced. It's a drop-in replacement; remove the old one and reconnect the pipelines to the new one."
@jameslafritz2867 · 8 days ago
According to the troubleshooting guide on the IPAdapter GitHub page: "▶ Dtype mismatch. If you get errors like: 'Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half key.dtype: float and value.dtype: float instead', run ComfyUI with --force-fp16."
@irfanMahmood-df6hm · 11 days ago
Missing widget "KSampler (Efficient) > preview_image"; how can we add the widget? I am using the latest update.
@eros1ca129 · 12 days ago
Please do a tutorial on doing it with ControlNet Tile. People say it's better, but I have no idea why; they never explain why it's better to combine them both.
@royal.allen_ · 13 days ago
Incredibly hard video to follow. I wish you had just implemented it the correct way and explained it, instead of jumping around and showing us how you messed up. Lost me about halfway through.
@AceOnlineMath · 14 days ago
I like the interface of Comfy; I just wish I could manipulate the splines.
@AceOnlineMath · 14 days ago
A couple of videos back I commented that you are like Bob Ross. Thank you for saying "happy tree".
@somebodyintheworld5036 · 14 days ago
I just downloaded Stability Matrix and ComfyUI, as well as a few models. I had no idea wtf any of the default nodes and things in ComfyUI meant. This video has helped me soooo much! Thank you, sir!
@MugiwaraRuffy · 15 days ago
First of all, I learned a cool new approach here. Furthermore, I picked up some minor but handy tricks, like Shift+clone for keeping connections, or that you can set the line direction on reroute nodes.
@jeffreytarqawitzbathwaterv3086 · 15 days ago
Just wait for SD3, it fixes all anatomy... oh wait...
@r.cantini · 15 days ago
Best explanation ever.
@pwalker1360 · 15 days ago
Good luck installing the command-line version of Git. Of course, the command-line installer for Windows is off on some website I'm very wary about, and the 'desktop' version is borderline useless. If I could get rid of the CAD software I'm required to use, I'd put Linux back on...
@pwalker1360 · 15 days ago
Not sure what's changed, but when I add the upscaling it does the exact opposite and halves the image size.
@RokSlana · 16 days ago
Great video, thank you!
@SHOOTINGDNA · 17 days ago
I was hoping for a bag, but it kept generating the box from the negative prompt.
@tronprogram8749 · 17 days ago
I was getting a really odd error where ComfyUI would show 'pause' in the terminal whenever it wanted to load something into the refiner. If this is happening to you, enable the page file and set it to a decent value. For some reason (idk why), the thing just breaks if you don't use a page file on Windows.
@MisterWealth · 17 days ago
Dumb question, but you know how you create the CLIP Text Encode node and just change its name to "Negative"? How does Comfy know it's the negative prompt?
@ggenovez · 18 days ago
AWESOME video. Quick question: where do you get the depth files for the Load ControlNet module? Newbie here.
@Rhaevyn-Hart · 18 days ago
Thanks for the tip about Tencent ARC. It's great for version-three faces, except for one caveat: it makes everyone's eyes brown, no matter their color in the image. This is a flaw I can't deal with, unfortunately, even as nice as it makes the faces. EDIT: it also shrinks the lip size of African American models. So apparently this tool is solely Asian-based. How very sad.
@jhj6810 · 19 days ago
I have an important question: why doesn't an empty positive prompt do the same thing as ConditioningZeroOut?
@SapiensVirtus · 20 days ago
Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and other works I generate will be free to use for commercial purposes? Or am I violating copyright terms? I am searching for more info about this, but I get confused. Thanks in advance.