It looks excellent! I haven't tried it yet; I'll let you know once I do. I really liked the image at 0:05 (it looks like a factory). Do you remember the prompt? Thank you very much, I'm subscribing right now.
@@kevinhu1136, Thank you very much for your answer! I don't have much experience with this. My tests in V2 are failing due to a missing node, even after updating everything with the Manager. The message when loading the workflow is: "When loading the graph, the following node types were not found: ClipTextEncodeBC. Nodes that have failed to load will show as red on the graph." Two nodes appear in red, the ones your tutorial calls CLIP Text Encode (BC.... I'll understand if you don't have the chance to answer, given how limited my knowledge is. In any case, congratulations on your work.
Hello again, I managed to get it working by replacing the nodes with ones I downloaded from chaosaiart-nodes. It generates magnificent images for me. Now I have two problems: 1) the four preview images are very blurry; 2) it only produces the low-resolution image, not the high-resolution one. As before, thank you very much even if you can't answer.
Thanks a lot for your videos on InstaMAT; they're an invaluable source of knowledge for this tool. I wonder, though, whether an Nvidia RTX card is supported in InstaMAT. My own 3080 is reported as "uncertified" for ray tracing. Thanks again!
I'm not entirely clear on this point either, but logically speaking, Nvidia's graphics cards shouldn't run into such issues. Moreover, the 3080 is by no means an outdated model.
@@kevinhu1136 Thank you. Do you think you can add a sketch as a third input in this setup with Controlnet? So you would go from some image references, some descriptive words and a sketch to a fully generated image.
How do you make an image seamless? Right now I can see the difference between the far right and far left edges of the picture, and in a 360° view the seam is visible. I use the sd_xl_base model; I also deleted one of the samplers because it didn't work for me.
Yes, that's absolutely right. English can indeed be a significant barrier for non-native speakers when it comes to writing SD prompts. This approach might help alleviate some of those challenges. I was also considering the possibility of using Gemini instead of Google Translate in the first step, given that Gemini's translation capabilities appear to be superior to Google Translate's.
Thank you for your amazing network! It's been very helpful. I tested both of your models, and I found that the original version (before the remaster) gave me more creative and interesting results for the 'image to image' function. I tried it by painting four blue squares in Photoshop and asking it to generate an ice tile, and it worked perfectly. The new version, however, is very faithful to the reference image, even when I adjust the IP adapter weight and add more noise. 🤔
Thank you for your test. IPAdapter Plus has recently undergone a major update. Many previous workflows are no longer usable. I will try to update and improve it in the future.
Some of these modules and their nodes have changed, and now the workflow no longer works. What a pity that the workflows aren't pinned to specific versions of the custom modules and of ComfyUI itself...
@@kevinhu1136 Thank you! Now it works well! Question: you use SD1.5/pytorch_model.bin for CLIP Vision in your workflow, but what is this file (what is CLIP Vision)? The pytorch_model.bin I found is incompatible with SDXL models (it gives an error about the resolution of the generated images not matching). Anyway, using a different CLIP Vision model works great. Thanks for your research and work! =)