When I watched this, you said to press Shift + Option. I'm using a Dell computer with no Option key, and you gave no alternate method, so now I'm just stuck.
Thanks for the video: after months of trying, I've finally managed to install Stable Diffusion on my Mac. Only one problem now: when I try to generate an image, this text pops up: AttributeError: 'NoneType' object has no attribute 'lowvram' Time taken: 0.0 sec. Do you know why this is happening and what can be done? Thanks!
Congrats! What a nice job! I have my own set of tests, but I think I will integrate your prompts. I've noticed that many models change drastically from one version to another (Juggernaut XL 9 to 10, for instance, or classic vs. Turbo/Lightning versions).
Thanks for this detailed video. One question: what minimum GPU is needed for the complete workflow, and which of the individual steps need the most GPU RAM?
7:57 I've been using Colab to familiarize myself with SD. I feel I know enough now and want to start generating locally. I don't want to invest too much initially, so I'm trying to figure out the best low-cost option with good enough performance, and I thought the RTX 3060 would be a good start, nothing less. Your tests suggest that too. Now I'm trying to decide between the RTX 3060, RX 6600, and Arc A580. Next I'm going to see if Arc can generate at all and, if so, with what results. A 5700 XT should be good too, but for SURE the power consumption will be crazy.
Fantastic job! Well done and thanks. But like you said, it is getting a little bit stale… I would really like a small update on your A and B models and maybe some other up-and-comers. I mostly use Juggernaut, but you have given me nice pointers to other, maybe better, models. Thanks again.
Thank you so much for this easy and detailed tutorial! I'm a total noob with computers and very new to Stable Diffusion, and until this video I was just confused and intimidated because I didn't know any of these words and couldn't follow along. But this time I made it and it worked, and tbh it was so much easier than expected... thanks a lot!!!
My only criticism of this ranking method is that it should have made multiple generations with specific seeds for each prompt (e.g. how many out of 10 were successful), and perhaps used the recommended settings for each checkpoint (like CLIP skip 2, etc.). You'd notice that it makes a huge impact on this specific type of ranking, and that a lot of models generate almost the same results (a specific seed will expose which base model each one is using).
This was a great video! I recently had to perform a fresh OS install on my MacBook Pro M1. Previously, I was able to efficiently run my Automatic1111 instance with the command PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half, allowing me to generate a 1024x1024 SDXL image in less than 10 minutes. I can now generate text-to-image even faster than before thanks to your command changes. However, I am running into an issue with image-to-image, with the error: RuntimeError: MPS backend out of memory (MPS allocated: 17.50 GB, other allocations: 18.05 MB, max allowed: 18.13 GB). Tried to allocate 1024.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure). Would you have any proposed changes?
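Not the author, but for what it's worth: the error message itself names one workaround (disabling the MPS allocator's upper limit), and A1111 also ships memory-saving flags. A sketch of possible launch commands, assuming the stock webui.sh launcher — whether --medvram helps on MPS in your case is a guess on my part, not something from the video:

```shell
# Option 1: disable the MPS memory cap entirely, as the error message
# suggests (note its own warning: this may cause system-wide memory pressure).
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 ./webui.sh --precision full --no-half

# Option 2: keep the 0.7 cap but trade speed for memory with A1111's
# --medvram flag, which loads/unloads model parts instead of keeping
# everything resident at once.
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.7 ./webui.sh --precision full --no-half --medvram
```

Img2img at high resolutions allocates more than txt2img for the same output size, so a setting that works for one can still OOM on the other.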
Those variables didn't matter much. He used 32 GB in two of the machines, 64 GB for the 4090 since it might be useful with such a powerful GPU, and the last one used Google Colab, so 16 GB is enough.
Bro... I just found the channel, and I can't do more than thank you and subscribe to keep learning from this. It's amazing the amount of effort, work, and hours you invested in this research, and above all, what I admire most is that you like to share it with everyone who watches you. It's an incredible job, thank you.
Loved the video. I would like to learn more about how SD processes text input. For example, we know early words get more weight, but does it take phrases into account, or does it take each word on its own merit? Stuff like that!
Thanks a lot! I've been asked by some totally non-technical friends and colleagues how this generative AI stuff works, so I put this video together and tried to make it as simple as possible, though not too shallow.
Yes, it took me ages to put it all together — glad you enjoyed it! Since my drawing skills are limited, I used Videoscribe for the animations, but a great deal still had to be done manually.
I love the comparison and pretty much agree with your conclusion. Having seen a previous comparison, I'd go with the Mac M3 Max as the top tier choice. The 4090 could just barely keep up with it.
Let's make it simpler with mocap for Blender, to make motion capture easier rather than being limited to Mixamo. Would you mind making a video about it? Toast
The problem with Mac is that, even now, PyTorch is still only partly supported, and speed is still as slow as you tested. I'm using an M2 Mac Studio with 32 GB of RAM.
For some reason, I get much better aesthetics on my RTX 3060 Ti than on my RTX 3070, even with all of the exact same settings. Can't figure out why for the life of me.
Does anyone know how to fix "Expected all tensors to be on the same device, but found at least two devices"? I followed the tutorial but always get this error.
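In case it helps: in PyTorch that error usually means the model weights and the input tensors ended up on different devices (e.g. model on the GPU, input still on the CPU). A minimal sketch of the usual fix in plain PyTorch — the model and tensor here are illustrative, not from the tutorial:

```python
import torch

# Pick one device up front and move BOTH the model and its inputs there.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)  # weights now live on `device`
x = torch.randn(1, 4).to(device)          # input moved to the SAME device

y = model(x)    # no "two devices" error once everything matches
print(y.shape)  # torch.Size([1, 2])
```

In A1111 specifically, this can also be triggered by an extension or a memory flag (--medvram/--lowvram) keeping part of the model on the CPU, so try disabling extensions or those flags if nothing in your own code mixes devices.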