If you liked this video, check out my latest video on this topic! I show you how you can render any 3D scene with AI, whether video or image. Amazing for creating assets, concept art, animatics, or trippy AI animations! ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-mu3JEfx3PHM.html
Haha 😂 bruh, this is the bitter truth, but after lots of struggle and really getting to know a specific tool, you can create what you want if you master prompt writing.
This could actually be THE animatic method I'll be using moving forward, because you get the actual scene with basic props but are able to visualize the final product in a way you couldn't before. I mean, as final renders it's hot garbage, but I see huge potential in this technique. Really awesome, thank you.
Of course, the real issue with this approach is that the shadows are all baked into the material, so lighting becomes problematic. But for a fast scene with little camera movement and lighting, this will work. It's a great approach for scene prototyping.
Honestly, I think you're brilliant. I love creating stories and I have been looking for a way to start creating my own from scratch. This just opened so many opportunities
Love it - had my head spinning when this was first announced a few months ago but damn dude your video really shows the insane potential of Video > Video AI workflows 👀 Keep it up man, just shared this with my private community 💪
Just do it for each visual plane instead, then combine them at the end to avoid the doubling effect and get more camera freedom. You could even project, then mask part of the scene from another angle, send it back to the AI, inpaint the masked part, project the masked area, and merge all the angles using each one's mask.
By generating the whole scene, you get more stylistic consistency. But you could separate the foreground and background elements before projecting. Although trying projection without separating could end up saving a bunch of time if it works for what you want to do (i.e. very small camera movement).
Really cool. Just out of interest, did you maybe try it the other way around? Meaning, export a depth-map animation (e.g. with the camera movement) out of Blender and then batch-process the frames in SD? I'd be curious to see how consistent that is, maybe in combination with AnimateDiff in ComfyUI. Thank you!
I feel like we're about a year away from someone putting this entire workflow into one plugin, complete with random primitive placement, camera placement with animations, scene changes, and character generation. Then you just use your phone camera to film yourself acting and watch the entire movie come to life! With random prompt presets to help try new layouts and styles.
Thanks for sharing! I've been thinking about similar ideas combining generative AI and Blender, especially after TripoSR was published just recently. There are starting to be so many AI image-making tools now that it's more a question of discovering the workflows. I'm a visual artist myself, working with gen AI.
Brilliant video, thanks for sharing your knowledge! :D I have a question though: when doing a camera projection like that, the foreground elements' textures bleed onto the background elements. The more you move the camera, the more you see those texture "errors." Do you have any trick to fix that? Besides some manual texture rework, I can't come up with a clever, clean way.
I did everything as you instructed, but my Stable Diffusion screen is different, so I can't enter some of the parameters. It worked and generated an image, but I can't control it.
How do we get Stable Diffusion? The video doesn't really explain what you need to do to get to the part where you use the Stable Diffusion software. Can you help me out? Thanks.
It doesn't work for me. When I select the sky and connect a second texture in the shading section, it changes the whole surrounding area, not only the sky (9:44). What could I be doing wrong?
This is exactly how games are going to work down the road: blocked-in proxy models driving AI filters. Real-time will take a bit longer, but it's way sooner than we think, and VFX stuff like what's shown here is already a game changer.
I honestly love all your videos. Having said that, Blender still practically requires a PhD to use. What you did in this one video would take me weeks of nonstop work to learn. Not feasible for someone trying to learn Blender on the side.
Well, I've got a BIG question AND not much time 😭 For context, I'm in the last year of a design degree and I have three weeks to deliver a video game background using your tutorial. But first of all, I've only used Midjourney, so I don't know how to use Stable Diffusion... no problem, BUT (yes, that word sums up my degree) I created my own graphic style with Midjourney, and I'd like to know IF I can use a reference image instead of the "model" you selected at 5:20.
Excellent video! I'm a career filmmaker trying to future-proof my job with AI and 3D, and this was a great addition to my knowledge. More of this content, please!
You are so good! I’m gonna have to look at your videos. I’ve been trying to find a way to use AI to create 3D images of my drawings to create characters. Can it be done?
Hi Mick, I just followed your tutorial and my results aren't really in position; the image generation isn't following the depth image accordingly (it looks a bit more random). Even adjusting the CFG scale doesn't help much. Any hints or things to check? Let me know. Cheers and thanks!
Hello Mickmumpitz, first, thank you for this tutorial. I went to the Automatic1111 page, but there were a bunch of file and folder names. I don't want to download a virus. What should I download, and how do I do it?