Joe Conway
Digital photography, photo editing, video editing, ComfyUI, OBS Studio.
Hedra - Lip syncing a Poem in Hedra
17:34
1 day ago
ComfyUI - Pick & Mix 3
19:48
1 day ago
ComfyUI - Post Processing Nodes
19:06
21 days ago
ComfyUI - MaraScott AnyBus Node
16:55
1 month ago
ComfyUI - CLIP & Clip Skip
16:34
2 months ago
ComfyUI - The Negative Prompt
12:26
2 months ago
ComfyUI - Chibi Nodes
34:13
2 months ago
ComfyUI - Image Size/Resize in IMG2IMG
21:16
3 months ago
ComfyUI - Inpainting (Masking)
21:53
3 months ago
ComfyUI - Badges
4:32
4 months ago
ComfyUI - Embeddings
17:15
4 months ago
ComfyUI - Preview Method
3:19
4 months ago
ComfyUI - Aegisflow Utility Nodes
18:26
4 months ago
ComfyUI - Use Everywhere (UE) Nodes
14:43
4 months ago
ComfyUI - LoRa Demo
28:59
5 months ago
ComfyUI - Custom Scripts by Pythongosssss
20:55
5 months ago
ComfyUI - Crystools Resource Monitor & Nodes
18:53
5 months ago
ComfyUI - Comfyspace Workflow Management Utility
13:09
5 months ago
ComfyUI - LoRaInfo Node
13:02
5 months ago
Comments
@jenesice • 2 days ago
The noodle-like wires get tidied up very neatly. An excellent tutorial.
@KlausMingo • 2 days ago
I can't reproduce any of these connections. You should build the workflow from scratch while explaining things.
@databang • 6 days ago
Thanks for the tour. Good to know the tools and follow how technology transforms and manifests into different solutions. Not suspecting it was generated output, your photographic face had me fooled. The illustration is a nice test, and I can see a bit of softening of the painted texture. Good observation on the dread of committing to an output that might contain the unexpected, or need to be refined with iterations to meet expectations. But it's free for now, as you say. I'm curious about results on three-quarter perspectives for more extreme parallax. We will see. Sub'd.
@ZakariaNada • 9 days ago
I tried your workflow but the BilboX node won't load when restarting; I clicked "try fix" and it still didn't work.
@joeconway85 • 9 days ago
Hi, sorry to hear this didn't work for you. A couple of things worth trying if you haven't already:
1. In ComfyUI Manager, do an 'Update ComfyUI' and an 'Update All'.
2. In ComfyUI Manager, go to 'Installed Custom Nodes' and filter on installed nodes. Look for any conflict triangles; if you click on a triangle it will show you what is conflicting, so look for any reference to BilboX.
3. Rather than use my Dropbox workflow download, try adding the BilboX Photo Prompt to any workflow (but not mine) manually: double left-click the mouse, type 'bilbo' in the search box and select the Photo Prompt node. If you don't see the node in the search list, it hasn't installed, so try again to install 'BilboX's ComfyUI Custom Nodes' via the Install Custom Nodes option in ComfyUI Manager.
I hope this helps, and good luck.
@EuroTruck8k • 10 days ago
tks
@sikhostudio • 12 days ago
She came in through the bathroom window, Protected by a silver spoon. But now she sucks her thumb and wonders By the banks of her own lagoon. Didn't anybody tell her? Didn't anybody see? Sunday's on the phone to Monday, Tuesday's on the phone to me. She said she'd always been a dancer, She worked at fifteen clubs a day, And though she thought I knew the answer, Well, I knew what I could not say. And so I quit the p'lice department, And got myself a steady job. And though she tried her best to help me, She could steal, but she could not rob. Didn't anybody tell her? Didn't anybody see? Sunday's on the phone to Monday, Tuesday's on the phone to me, oh yeah
@jcparker500 • 12 days ago
I went to go check this out, then read their Privacy Policy while I was there. Holy crap! I couldn't get away from there fast enough. Not just all the information they collect about you, but that they flat out say they'll use it for third-party marketing and advertising purposes.
@MonkiLOST • 13 days ago
You explained how to download but not how to select the downloaded ... need to find another video.
@nguyentu1504 • 14 days ago
I am a newbie, I have searched all over the internet. Finally, I found this guide. You are amazing.
@joeconway85 • 13 days ago
Glad it helps!
@contrarian8870 • 14 days ago
They may be reluctant to give you a free preview, because then you could just screen-capture it and save it for free. Though the idea would work if the preview was lower res, B&W, etc.
@joeconway85 • 14 days ago
@contrarian8870 Good point - even a heavily watermarked preview would do, but a low-res one would be even better.
@hearttouching7986 • 15 days ago
❤❤❤
@bildatheventure • 16 days ago
It didn't work for me; nothing is working for me like it does for anyone else, I don't know why.
@joeconway85 • 13 days ago
Hi, sorry to hear it didn't work for you. A couple of things worth trying/thinking about: ComfyUI works best with nVidia GPUs/graphics. My GPU is AMD, so I have all sorts of problems regular ComfyUI users don't have. The ComfyUI folder should be installed onto your C: drive, not a NAS box or USB drive. Git must be installed first. Have a look at more recent ComfyUI installation videos in case there have been any changes to the installation process since my video. Good luck - I hope you get there!
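The checklist in this reply (Git available, install on the local C: drive) can be sketched as a small pre-flight script. This is a hypothetical helper written for illustration, not part of ComfyUI; the function name and default path are assumptions.

```python
import shutil
from pathlib import PureWindowsPath

def preflight(target="C:/ComfyUI"):
    """Check the install tips above: Git on PATH, folder on the local C: drive."""
    issues = []
    # 'Git must be installed first' - verify it is reachable from the shell.
    if shutil.which("git") is None:
        issues.append("git is not installed or not on PATH")
    # Install onto C:, not a NAS box or USB drive.
    if PureWindowsPath(target).drive.upper() != "C:":
        issues.append(f"{target} is not on the C: drive")
    return issues  # an empty list means both checks passed
```

Running `preflight("D:/ComfyUI")` would flag the drive, matching the advice above.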
@juraganposter • 16 days ago
Hi mate, great video, thank you for your explanation. If I have 50 prompts, is it possible for the generated images to be saved one by one, and not wait for the full batch of 50? It takes too long to see the images if I have more than 100 prompts to generate.
@joeconway85 • 13 days ago
Hi, thanks for your comments! From my testing I only get the images in one lot at the end of the generation, which is the same as if I generated a batch of, say, 10 images normally. So I think this is just the way ComfyUI works, but I guess there is a risk that if your ComfyUI bombs out due to GPU or memory issues on a large batch, you may lose all the images. As I have an AMD GPU, which is not popular with AI programs, I don't generate large numbers of images in one go, as it is more likely to bomb out for me. Hope this helps.
@promptgeek • 18 days ago
Amazing video tutorial Joe!
@bentontramell • 22 days ago
Since I can't use it with the Efficient Loaders, I just build out the prompt in the node and copy it over. 😅
@EZgaming2031 • 22 days ago
thanks
@evak2802 • 24 days ago
Thanks for the tip!! But it is not working. I refreshed the nodes after copying the embedding files, which were in safetensors format, but I get "warning, embedding:negfix does not exist, ignoring" in the console. Do I need to restart?
@joeconway85 • 13 days ago
Hi, sorry to hear this didn't work for you. I believe the file suffix for those embeddings was .pt, not .safetensors, and both embedding files should be saved into the Models\Embeddings folder. Also make sure there are no typos when you put the embedding details into your positive prompt as per my example. Hope this helps.
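As a sketch of what this reply describes, a hypothetical helper (not a ComfyUI API) can list the files in an embeddings folder that could be referenced as `embedding:<name>` in a prompt; `.pt` and `.safetensors` are the two formats mentioned above:

```python
from pathlib import Path

def find_embeddings(folder):
    """Return names usable as 'embedding:<name>': stems of .pt/.safetensors files."""
    return sorted(p.stem for p in Path(folder).iterdir()
                  if p.suffix in {".pt", ".safetensors"})
```

If `negfix` is not in the returned list, the console warning quoted in the question above is the expected result.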
@krakenunbound • 24 days ago
Tone down the bass in your voice. I have to turn up the volume to make out what you are saying and the bass tone is vibrating everything lol
@joeconway85 • 13 days ago
Hi, sorry about that! Audio is a bit of an ongoing nightmare for me. I have now reverted from a more expensive XLR mic to a basic USB mic, and hopefully my latest vids are less bassy. I did try all the usual filters in OBS Studio but I kept sounding like Barry White on my XLR mic! 🙂
@meadow-maker • 25 days ago
Thanks. I have Photoshop so I do most of my post-processing there, but there were quite a few nuggets in your video; the custom nodes and workflow manager will be useful.
@f33rox • 25 days ago
How do you detach the panel with all your settings and manager on the right of the screen?
@joeconway85 • 25 days ago
Hi, if you mean the ComfyUI menu panel on the far right, it does not detach, it's all one piece. Mine may look a little different as I have played with various different plug-ins, e.g. the 'Server' button, which I no longer want but haven't worked out how to remove yet. There is a setting in the ComfyUI menu that will anchor your menu panel so that it is in the same location on every workflow that you create. To do that, select the cog wheel icon at the top of the ComfyUI menu, then scroll down to and select the 'Save Menu Position' option, which is third from the bottom of the menu list. Hope this helps.
@f33rox • 24 days ago
@joeconway85 Thank you! I have found this setting!
@jeffg4686 • 25 days ago
Nice - Photoshop has a competitor now.
@rachinc • 27 days ago
I'm 7 minutes into this tutorial. How do you change the progress bar from percentage to time elapsed?
@joeconway85 • 25 days ago
Hi, on my computer I do not get a progress bar in the Resource Monitor, only a 'time taken' at the end of the task. I only get a progress bar in the KSampler while it's doing its thing. That said, what Resource Monitors each of us see may differ slightly, depending on your computer. I have an AMD GPU, which ComfyUI doesn't really like, so I do not get a monitor for the GPU, while those with nVidia GPUs do get a GPU monitor. Hope this helps.
@carlosmeza4478 • 28 days ago
Not sure if you figured out the size issue with the reference image. I ran an 'Upscale Image' node (can be an 'Upscale Image By') after the Load Image and before the VAE Encode: 'Load Image' -> 'Upscale Image' -> 'VAE Encode' -> 'KSampler' -> etc. That way I changed the reference from 1920x1920 to 512x512, and then the process ran as usual. It worked extremely well to change from photorealistic to cartoon style. Hope it helps. And also thanks, because this was my first time trying to use img2img and your video was a lifesaver XD
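The fix in this comment is just inserting one node into the chain. As a minimal sketch (the node names are the titles used above; `insert_before` is a hypothetical helper, not a ComfyUI function):

```python
def insert_before(chain, anchor, node):
    """Return a copy of chain with node placed just before anchor."""
    i = chain.index(anchor)
    return chain[:i] + [node] + chain[i:]

# The basic img2img chain, with 'Upscale Image' slotted in after 'Load Image'
# and before 'VAE Encode', as the comment describes.
base = ["Load Image", "VAE Encode", "KSampler", "VAE Decode", "Save Image"]
img2img = insert_before(base, "VAE Encode", "Upscale Image")
```

With the extra node, the oversized 1920x1920 reference is brought down to 512x512 before it ever reaches the VAE Encode step.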
@IreneAngeliaTumbel • 1 month ago
Hi Joe, great video! I'm really impressed with your content. I would love to discuss a potential collaboration with you. Could you please share your contact info or email so we can connect? Thanks!
@mylemonage • 1 month ago
Your voice is lovely thank you. Pleasure to listen to and helpful.
@AkshayKumar-vd5wn • 1 month ago
What's the learning curve of ComfyUI?
@rosetyler4801 • 1 month ago
Random question: I never use clip skip. I've tried it, but it didn't seem to make a difference. I see people still talk about it, though. My images look fine (some great, some not so), but it seems to have a lot more to do with prompting and sampler settings needing to be adjusted. Do you really think clip skip matters?
@joeconway85 • 1 month ago
Hi Rose, I have only been using CLIP/clip skip for the last few weeks, mostly because I couldn't find a default node for it. I was looking for something named like 'Load CLIP', but eventually found it was called 'CLIP Set Last Layer'! What I usually do now is create an image as usual with clip skip set to -1 (or disable the node). I then keep all the same KSampler/prompt settings but change clip skip to -2, then -3, sometimes down to -5, to see if I get any improved images. I find that although the changes may be subtle, they do quite often (but not always) improve the image, so I think it is worth experimenting. CLIP seems (to me) to be something talked about a lot by Python developers, but not so much by end users. I don't think I have seen any other YouTube videos other than mine where the 'CLIP Set Last Layer' node has even been used, but there are probably some vids out there, I am sure. Sometimes I go skip -24 (on XL models) just for a laugh, as the result can be quite entertaining! :-) Hope this helps, Joe
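The experiment Joe describes, re-running the same settings while stepping clip skip from -1 down to -5, can be sketched as a simple sweep. Here `generate` is only a stand-in for an actual ComfyUI run through a 'CLIP Set Last Layer' node, not a real API:

```python
def generate(clip_skip, seed=42):
    # Stand-in for a real generation with fixed seed/prompt/KSampler settings;
    # only the clip-skip value varies between runs.
    return f"image(seed={seed}, clip_skip={clip_skip})"

def clip_skip_sweep(lowest=-5):
    """Run the same generation at clip skip -1, -2, ..., lowest and keep each result."""
    return {s: generate(s) for s in range(-1, lowest - 1, -1)}
```

Comparing the five results side by side makes the often-subtle differences easy to judge.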
@xbad3d • 1 month ago
The 'git clone' command doesn't work.
@0123456789jad • 1 month ago
it didn't work out :(
@jacobjuul21 • 1 month ago
Thanks for this workflow. It really helps a lot. It's really easy now to make good portraits.
@ragemax8852 • 1 month ago
Thanks for making this workflow for us, I'm using it now and it's great. I feel like I can just use this to generate images from now on and not use Fooocus anymore. I love the presets for the properties, as that gives us a chance to choose from different photography styles and stuff, which helps generate the best style of picture. I've got a couple of questions, though. What node can we use for LoRAs and how can we connect it? Is there a way to generate just one person when I change the height and width to 1024x1024 or beyond? It always comes out generating two different people.
@joeconway85 • 1 month ago
Hi, thanks for your comments, much appreciated. To add a LoRA, just double left-click and type 'Load'; you will then get the option to select 'Load LoRA'. The Load LoRA node will need to sit between the 'Load Checkpoint' and the two prompt nodes so you can make the correct connections. For the other issue, I'm no expert, but it sounds to me like it may be just a prompt issue. Try adding (for example) '1girl' at the beginning of your positive prompt if you just want one girl (or boy). You can also add emphasis to this by adding brackets, e.g. ((1girl)). Make sure there is nothing else ambiguous in the positive prompt to make it think you want more than one person in your image. You could also try increasing your CFG a little to make ComfyUI pay more attention to your prompt. If you're still getting, say, two girls, try adding '2girl' as a negative prompt and see if that helps. It may also be worth changing your checkpoint model if the problem persists. For photorealistic images I prefer Dreamshaper XL or JuggernautXL; SDXL is not so good for people images, I think. Hope this helps.
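The doubled-parentheses emphasis in this reply follows the common A1111-style convention, where each pair of parentheses multiplies a token's weight by roughly 1.1 (ComfyUI also accepts the explicit `(text:1.2)` form). A hypothetical sketch of that convention, for illustration only:

```python
def emphasis_weight(token, base=1.1):
    """Strip surrounding parentheses and return (text, effective weight)."""
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]  # peel one pair of parentheses
        depth += 1
    return token, round(base ** depth, 3)
```

So `((1girl))` weights `1girl` at about 1.21x relative to the rest of the prompt.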
@ragemax8852 • 28 days ago
@joeconway85 Thank you so much for answering my questions, I really appreciate it. Your suggestions fixed the two girls appearing in one image; it was the prompt, and fixing the CFG levels helped as well. I agree, 1.5 still works best for people images. I've been using EpicRealism and it's working out pretty great, but Dreamshaper XL and JuggernautXL are pretty good as well. Again, thanks for everything, my friend!
@cXrisp • 1 month ago
I heard what you did there -- "I don't have a LUT of time". 😄
@joeconway85 • 1 month ago
@cXrisp Yeah, I don't think my one joke got a LUT of laughs... 🤠
@user-pn6ey5dn4y • 1 month ago
Why the Chibi Simple Resizer? What other resizer nodes have you tested? I'm always looking to compare and contrast the best nodes. Keen to hear your thoughts. Thank you.
@angelamason3760 • 1 month ago
How do I reset the password
@meadow-maker • 1 month ago
Oh, by the way, 'modal' is pronounced 'mow-dl'; I assume it's pertaining to modes.
@joeconway85 • 1 month ago
Ah, didn't know that - thanks!
@meadow-maker • 1 month ago
Thank you so much. I don't really get where the fox's head thing comes from. I don't want the fox's head per se, but I've never seen it in my setup. I really liked this, and I had been guilty of over-egging my CFG and steps; when I dialled them down I got much better results. You see other people using much higher numbers and getting good results, but that might just be in Pony. I saw 75 steps the other day with a nice image!
@joeconway85 • 1 month ago
Hi, the node icons are called badges in ComfyUI and are useful in identifying who originated the nodes. Badges can be toggled on or off by going into ComfyUI Manager; in the left pane, in the fourth dropdown, you will see badges. I selected 'Badge - Nickname' to display the icons. If a checkpoint model is well documented on Civitai, it will include suggested steps and CFG for your images. That's why I use DreamshaperXL a lot, as their page has good recommendations on what to use. If the model doesn't have recommended steps etc., I usually look at some of the example images at the bottom of the model page (on Civitai) and try to get a feel for a typical number of steps and CFG as my starting point. XL models usually require far fewer steps and lower CFG, but not always.
@meadow-maker • 1 month ago
@joeconway85 Oh, thanks, I must have mine toggled off. Thanks.
@MaraScottAI • 1 month ago
Thx Joe for the shout-out, it is always a pleasure to see a nice workflow like yours.
@joeconway85 • 1 month ago
Thank you for sharing your work with the ComfyUI user community, it is really interesting what you have done!
@MaraScottAI • 1 month ago
@joeconway85 Thank you very much. Have you tried McBoaty V3 for better upscaling? V4 is out, but V5 is under heavy dev, so I would wait for V5.
@phuket2753 • 1 month ago
Thanks for the video. I started off using these nodes by myself, but after 5 minutes or so, I knew I had to watch a video, and it would save me so much time. I was right, and your explanation was great. Now I am ready to tackle using them. Thanks Again.
@joeconway85 • 1 month ago
Thanks for your comments, I'm glad it was helpful!
@cXrisp • 1 month ago
Thank you very much! I've been looking for a solution to the drab colors for a while.
@junehuppatz3255 • 1 month ago
Thanks so much for this tutorial which explained scrolling text so clearly which I could not find in the other videos I watched. Now I will check out your other videos!
@dadekennedy9712 • 1 month ago
I wish that we could add the colors we pick to the list.
@joeconway85 • 1 month ago
Yeah that would have been a nice extra touch. We now have so much choice of colours thanks to this add-on that saving favourites would be a bonus, but now I just make a note of the colour codes I want to use on my Workflow Nodes.
@ajay.govind • 1 month ago
Can you make the text scroll up and highlight as per a karaoke track?
@contrarian8870 • 1 month ago
Can you make the cursor larger and make it white when you record? You often say "to this input" or "here", apparently using a cursor, but it's completely invisible to viewers if it's on a dark background.
@joeconway85 • 1 month ago
Hi, thanks for your comments. I will take that on board and will make the cursor more visible in future videos.
@royal.allen_ • 1 month ago
I learned nothing.
@MilesBellas • 2 months ago
via Pi

CLIP Skip is a technique used in ComfyUI to improve the quality of image generation by incorporating contrastive language-image pre-training (CLIP) into the generation process. This technique helps to better align the generated images with the text prompts provided by users. The "clip clip skip" phrase refers to the specific steps involved in using CLIP Skip with ComfyUI:
* CLIP: This step applies the CLIP model to encode both the generated image and the corresponding text prompt into a shared latent space, which helps to ensure that the image aligns with the prompt.
* Clip: In this step, the intermediate image generated after the first CLIP step is further refined to better match the text prompt.
* Skip: Finally, the refined image from the previous step is "skipped" back into the diffusion process, where it replaces one of the intermediate images generated by the model. This helps to guide the generation process towards producing images that more closely align with the provided text prompt.
By using CLIP Skip in ComfyUI, users can expect more accurate and consistent results when generating images based on textual descriptions or prompts.
@joeconway85 • 1 month ago
Hi, thank you for that very detailed description of CLIP and 'Skip'. I do find it strange that there are so few YouTube videos on this subject, as it can clearly have an important effect on an image generated in ComfyUI.
@MilesBellas • 1 month ago
@joeconway85 It was from a FREE application named Pi, which is more current on Android than Apple.

via Pi

Pi AI, also known as "Personal Intelligence," is an AI chatbot developed by Inflection AI, a company founded in early 2022 by Mustafa Suleyman, Reid Hoffman, and Karén Simonyan. Pi AI is designed to be a friendly, empathetic, and engaging conversational AI that users can interact with in a natural and human-like manner. It leverages advanced natural language processing (NLP) techniques to understand user input, generate contextually appropriate responses, and provide assistance on a wide range of topics. Some key features of Pi AI include:
* **Personalized Conversations:** Pi AI learns from each user interaction and adapts its responses accordingly, making each conversation unique and personalized.
* **Wide Range of Topics:** Pi AI can engage in discussions on a variety of topics, such as entertainment, news, science, history, and personal interests.
* **Empathetic and Engaging:** Pi AI aims to provide an emotionally intelligent and engaging conversation experience, recognizing user emotions and responding appropriately.
* **Task Assistance:** Pi AI can help users with various tasks, such as setting reminders, providing recommendations, answering questions, and offering suggestions.
@nocnestudio7845 • 2 months ago
Four nodes, but you can talk all day. Sorry, but I'm skipping your video. Really boring: 15 min for 4 nodes, man, too long. I can't imagine what happens when you use 10 nodes.
@MilesBellas • 2 months ago
"Koiboi : when to use CLIP" = faster
@xt-cj7jg • 2 months ago
Thank you! I was looking for a tutorial but everything was outdated.
@joeconway85 • 1 month ago
Thank you for your comments. I agree it is difficult to find much on YouTube etc. to show us CLIP in use in ComfyUI.
@user-wn9hl7qo9r • 2 months ago
Please be more audible in your next video. I did not hear a thing you said in this video; I only learnt this through the procedure you listed in the description.
@ozburn • 2 months ago
too low volume :/
@joeconway85 • 1 month ago
Hi, thanks for your comments, I will take that on board and try to improve on the audio quality.