I hope you build a PC with a good GPU. I do lots of stuff with Stable Diffusion on my PC, and I stumbled on this video today. Having a PC you can do it yourself on opens up so many possibilities.
This video was super helpful in breaking down the steps and showing different ways to fix the flicker issue. Thanks a lot for this! Please keep the videos coming 🎉
Great video. I agree with the end of the video, but if you can train your own model on the specific design the Artistic Director wants, it's great; this technology saved us a lot of time on a project we did 4 years ago. But yes, the fairest way is to train the AI on your own art, or on art from someone who agrees to share it with you.
Great video, I tried it and made progress until I tried to run Stable Diffusion, then I got ngrok "Module not found" errors. Any ideas?
Well, the Colab notebook that I share in the link got corrupted; that's why many people are facing similar errors. You can try any other SD AUTOMATIC1111 Colab notebook. Most of the steps will be the same. That might work.
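If the error really is just a missing Python module, one hedged first thing to try is installing the ngrok wrapper in a fresh Colab cell before re-running the launch cell. This is only an assumption about the cause, and the package name your notebook imports may differ from the one below:

```python
# Run this in a new Colab cell, then re-run the cell that failed.
# Assumption: the notebook tunnels through the pyngrok wrapper; if your
# traceback names a different module, pip install that name instead.
!pip install -q pyngrok
```

If the notebook itself is corrupted, though, no amount of pip installing will fix it, so switching notebooks is still the safer bet.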
I loved your work! Compared to your old pre-AI style (Johnny Depp, Scarface, Thor, Troy, etc.), the new post-AI styles are of much better quality (Katy Perry's Roar, Shakira, Wonder Woman). How did you manage to create such great results? What kind of AI and other tools have you used?
You're right, brother, but with EbSynth, as soon as you start doing complex movement the program just goes crazy, at least on my PC. The best method that I've come across so far is the Corridor Digital method: training your personal model in the style of the art form. I'm making a video on how to train your own Stable Diffusion model correctly, in which I'm going to talk about it in more detail, so please wait ☺
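For anyone curious before that video lands, here is a minimal sketch of that train-on-your-own-art approach using the Hugging Face diffusers DreamBooth example script. The model ID, folder paths, prompt token, and step count are all placeholder assumptions, not the exact recipe from the Corridor video or mine:

```python
# Colab cell: fine-tune SD 1.5 on a folder of your own artwork.
# Everything below is illustrative -- tune the paths, the instance
# prompt, and max_train_steps for your own dataset.
!git clone -q https://github.com/huggingface/diffusers
!pip install -q ./diffusers accelerate transformers ftfy
!accelerate launch diffusers/examples/dreambooth/train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="/content/my_art_style" \
  --instance_prompt="an illustration in sks style" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=800 \
  --output_dir="/content/my_style_model"
```

Once it finishes, the output folder loads like any other SD 1.5 checkpoint.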
Bro, this was a real piece of talent and curiosity 👏. I tried Deforum Stable Diffusion v0.7 on Google Colab, but it generated a completely garbage video (even worse than websites like Genmo). Was it because I was using the free version of Colab? Plz plz help me 🙏🙏 I'm tired of searching for a solution on the internet 😔
1) The Stable Diffusion version that I'm using is 1.5. 2) You don't need a paid version of Colab if you have enough compute units with you; you can definitely create these animations. If you have some more doubts, then please be my guest. 😊
You don't need a dedicated GPU for this. As I already mentioned in the video itself, this whole process runs on a Colab notebook, which has its own remote GPUs provided by Google. So you should definitely try installing it. If you rewatch the whole video, I've tried my best to answer most of your doubts in the video itself. 😊
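If you want to verify that for yourself, here is a quick sanity check you can paste into any Colab cell. These are standard PyTorch calls, nothing specific to my notebook:

```python
# Confirms a remote GPU is attached to the Colab runtime.
# If this prints False, set Runtime > Change runtime type > GPU first.
import torch
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "Tesla T4" on free Colab
```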
Hello, incredible video, super helpful. Can you please elaborate on how much 100 compute units is? How many images/batches can we extract in an hour for 16:9 AR? Subscribed, will be catching your newer videos as well.
Brother, I'm assuming you're talking about the compute units of Google Colab. 1) Well, 100 compute units would cost you 10 dollars, and if you can't use all of them, don't worry, they carry over to the next month. 2) I never counted how many frames I was running through the notebook, because your compute units are deducted on an hourly basis (for however many hours you used their GPUs). Don't worry, brother, it's quite affordable; for example, I have cancelled my subscription but I still have 140 units sitting dormant in my Colab notebook. So you can subscribe for now, cancel your sub once you think your work is over, and you will still have your remaining compute units. I hope it helped you ☺
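To make the money side concrete, here is a back-of-envelope sketch. The per-hour burn rate below is an assumption (it varies by GPU type and Google changes it over time), so check the "Resources" panel in your own Colab session for the real figure:

```python
# Rough cost estimate for Colab compute units.
units_purchased = 100         # the $10 tier mentioned above
usd_cost = 10
assumed_units_per_hour = 2.0  # ASSUMPTION: roughly a T4-class GPU rate
hours = units_purchased / assumed_units_per_hour
print(f"~{hours:.0f} GPU-hours, about ${usd_cost / hours:.2f} per hour")
```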
@@ThatArtsGuySiddhant-tk4jb That helped a lot, thanks for the response. One last question: can one generate video-to-video animation through Google Colab? I saw a lot of videos explaining how to run Stable Diffusion with ControlNet/Deforum/Disco Diffusion and so on, but none showed batch-running the frames of a video to get frames/images with consistent character/setting movements. To cut it short: can Colab run ControlNet and get us video-to-video output? (If it can, I'll be buying Colab Pro right away. Your video is just about that, but I guess a lot has changed in Colab rules since your upload!?)
It's going to be tricky, brother 😅. As I said before, SD only reads in 512 by 512 pixels, so first you have to rotoscope your main characters, then export them at a 512 by 512 frame size (I mean shrinking your reel (main character) until it fits in a squarish frame), then make frames of that and run them through SD.
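Roughly like this, as a small Pillow sketch. The filenames and the transparent background are placeholder assumptions; the 512 value is there because SD 1.5 was trained on 512x512 images:

```python
# Shrink a rotoscoped frame so it fits inside a 512x512 square,
# centred on a transparent canvas, then save it for the SD batch run.
from PIL import Image

def fit_into_square(path_in, path_out, size=512):
    frame = Image.open(path_in).convert("RGBA")
    frame.thumbnail((size, size))  # downscale in place, keeping aspect ratio
    canvas = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    offset = ((size - frame.width) // 2, (size - frame.height) // 2)
    canvas.paste(frame, offset, frame)  # alpha channel used as paste mask
    canvas.convert("RGB").save(path_out)

fit_into_square("frame_0001.png", "frame_0001_512.png")
```

Do that for every frame, run the batch through SD, then composite the results back over your original footage.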
Actually, I created this video precisely so that you wouldn't need to rely on expensive GPUs and RAM. This entire setup runs on Google Colab. Please re-watch the video, and it should clear up any doubts you have.
Brother, first of all, this isn't going to run on your normal PC because it takes a lot of GPU processing power; I mean, for the people who run SD locally, the GPU alone costs 1-2 lakh rupees. Now you tell me, which is cheaper? 😂 @@_WhatsUp_bro_
My PC has an RTX 3090 with 24 GB of video memory and 128 GB of RAM. Can I run the software on my PC? It would, I think, make the whole process a lot easier because I don't have much space on my Google Drive... unless I could use it on OneDrive instead. x
An RTX 3090 with 24 GB of video memory and 128 GB of RAM is powerful enough; you can try. I tried running it on my PC but failed because my GPU was not strong enough. I made a video documenting that whole process, called "How not to install SD on your PC"; it might help you.
You still have the same problem everyone has, including the Corridor Crew, which is how to control the gaze direction... there is no way to do it inside SD. In your tutorial, Scarlett Johansson is staring at the camera, but in the render she is looking to her right. SD, and AI in general, has no way to control the direction of the eyes across a batch sequence.
Totally agree. You can never get the level of precision of a hand-made animation from just a machine, but with SD you can make a video that looks like an animation at a fraction of the cost and time. In my personal opinion, this is the main value proposition of SD or any other AI platform. 😅
I was with you up to 19:42; if you are not going to defend AI-generated art in any way, then why upload videos about it? This is great content, and there are issues to debate and methods to discuss for using the technology effectively. Just don't throw the stone and hide the hand. I'm all in.
Liar, liar, pants on fire. You can't get this result without ControlNet and a trained model. But I see artifacts like the ones EbSynth makes. Where can I see the full result?
@@ThatArtsGuySiddhant-tk4jb OK, sure. Where can I see the final result? I don't believe this is just Stable Diffusion without any plugins like ControlNet or EbSynth.
@@MaksymSieroshtanov If you had watched the whole video, I talked about all the other software that I used with Stable Diffusion, their pros and cons, and which software is best suited for which shot.
Enjoyed the vid. BUT. It's still not a copy, and you should know that. It's an interpretation with knowledge of its subject, just as if a skilled artist were to reproduce the art of another artist from memory. AI is really a collective of all the knowledge we feed it, and it's only limited by WHAT we feed it and how much it can store, similar to our mental capacities. The most basic way to describe what we are and have been doing is that we're essentially building synthetic brains for specific purposes.

This isn't a problem for artists; it's a POTENTIAL problem, or even a renaissance, for all industries, all media, life, and us, the people of the whole world. The floodgates are open and there is NO closing them, PROMISE! I only say that to tell anyone apprehensive of this tech going forward that they really have no choice, and there's no avoiding it short of shunning computational electronics and becoming Amish. Everyone's best bet is really to LEAN INTO it; we are going to want to become as symbiotic with what's to come as we can if we want the future to be good, but we really need to be careful it's not ruined by our own selfish hands.

We can't stop it, because stopping it doesn't depend on preventing these models from being built in the first place. We've already reached the point at which computational HARDWARE is powerful enough, even at the consumer level, that even if all of these developments were struck from the internet and all the models and code out in the world were MAGICALLY erased, it would still happen just as easily: some dude at home who knows how to code would eventually figure it out and write it back into existence. Hell, the principal ideas on which AI and neural nets are based aren't even all that complicated. That's not to say the IMPLEMENTATION of those principles is simple (au contraire), but people are smart and there are a lot of people (i.e. the infinite monkey theorem).

Sentient AI is close. All that is needed at this stage is the creation of the first AGI model, which is then given free rein to do as it pleases, with access to the internet and sufficient space to store its newfound knowledge. That could already be a thing, just doing as it chooses from some savvy coder's desktop as if it were anyone else browsing the web. Heck, if said hypothetical coder had a server rack of storage himself (nothing unreal, but within the realm of reason for a tech worker/enthusiast/hobbyist), he could potentially, either manually or automatically, convert large stacks of data into categorized LoRA or equivalent datasets to save even more space and again allow for more consumption and growth. The average consumer today can easily obtain any of the prerequisites to make this a reality; right now, though... we're just waiting.

Sorry for the cryptic rant. P.S. Wouldn't mind seeing you revisit this in the future!
Your knowledge is GOOD, but your video editing is BAD, because you mix in other clips that are unconnected to your information, like SpongeBob. What is that in your video for? For fun?