Learn A.I. Art beyond just the prompt! Subscribe to see the tools and workflows that get you consistent, professional results for games, videos, books, album covers and much more.
I’m Albert Bozesan, a video producer, writer and voice actor from Munich. With a few years of experience doing creative work the normal way, I’ve learned that Open Source software and A.I. can superpower my workflow - and want to show you how!
German legal stuff starts here.
Legal notice (Impressum) pursuant to § 5 TMG:
Registered office: Peak State Entertainment GmbH, Am Werfersee 8, 83339 Chieming
Represented by: Managing Director Albert Bozesan
Commercial register entry: Amtsgericht Traunstein, HRB 27020. VAT ID: DE318467792
Nice video dude! I am stuck on something though. Each time you generate a texture, the add-on gives you almost exactly what you're looking for, for example at 6:05. For me, however, it doesn't really do that; it gives a very zoomed-in texture.
Great tutorial. By the way, do you have any idea how to create a visualization of a particular house on a photo of an empty parcel? What workflow would you adopt?
If you don’t need a different angle of the house, use a Canny ControlNet and run img2img on the photo with a medium denoise strength. Make sure your prompt is very descriptive. Then inpaint the parcel.
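If you ever want to script that instead of clicking through a UI, here is a rough sketch of the same idea with the diffusers library. The model names, the 0.5 strength, and the file names are my own assumptions for the example, not something from the video:

    import torch
    import cv2
    import numpy as np
    from PIL import Image
    from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline
    from diffusers.utils import load_image

    # Build a Canny edge map of the parcel photo to lock the composition.
    photo = load_image("parcel_photo.png")  # hypothetical file name
    edges = cv2.Canny(np.array(photo), 100, 200)
    canny = Image.fromarray(np.stack([edges] * 3, axis=-1))

    # Canny ControlNet + img2img: the edge map keeps the layout,
    # strength=0.5 is the "medium denoise" that still lets the prompt repaint things.
    controlnet = ControlNetModel.from_pretrained(
        "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")  # assumes an NVIDIA GPU

    result = pipe(
        prompt="modern single-family house on an empty parcel, photorealistic, overcast daylight",
        image=photo,          # img2img source photo
        control_image=canny,  # Canny edges of the same photo
        strength=0.5,         # medium denoise
    ).images[0]
    result.save("house_visualization.png")

From there you would inpaint the parcel as described above, either in the UI or with an inpainting pipeline.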
I came here after finding one of Storybook Studios' shorts and wanted to learn about ControlNets and how they work. Excellent demo - I was able to recreate what you taught quickly.
Question about the two images at 12:00 and 12:06: how do you ensure the wall texture behind the guy is precisely the same texture as the one on the wall in the first image?
Been looking into Blender and possibly turning some of our short films into animations. For copyright and trademark purposes, how safe is it, being Open Source? 🤔
Blender is used by massive commercial studios. It’s safe. Just make sure you download the official version from blender.org - there are some fakes out there.
Hello, good tutorial! I was wondering if I can take the generated space image and somehow base the next image on the previous one so that the details remain the same... for example, if I want the same room but viewed from the side of the couch.
Hi Mr. Albert. I’ve been following you for a long time; I just couldn’t bring myself to cold message you on LinkedIn. I found a company (selling guitar courses) based in Helsinki that is looking for an AI video creator intern. I know you are not an intern, but I thought you might be interested in reaching out to them - maybe you can collab in the future. I am not affiliated with them in any way, shape or form; I just saw the ad. Cheers!
Hi Cosmin! Thanks for reaching out and sharing this - don’t worry, feel free to connect on LinkedIn anytime! I’m very happy as Creative Director at Storybook Studios, but I’ll push this comment. Maybe somebody in this community will find it interesting!
I got an error while running the webui-user.bat file: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check." Does anyone know how to fix this? Thanks
That’s a classic error - it happens to a lot of people. Unfortunately there’s no single clear fix: try reinstalling torch a couple of times, and worst case reinstall the UI. A quick Google or Reddit search with your exact error message will guide you through.
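If you want to narrow it down before reinstalling, you can check whether torch itself sees your GPU with a quick sketch like this, run in the web UI's Python environment. The comments are rules of thumb on my part, not an official diagnosis:

    import torch

    # False here is exactly the "Torch is not able to use GPU" situation the web UI complains about.
    print("CUDA available:", torch.cuda.is_available())

    # None usually means a CPU-only torch build ended up in this environment.
    print("Torch CUDA build:", torch.version.cuda)

    if torch.cuda.is_available():
        # Confirms the driver and the torch build agree on an actual device.
        print("Device:", torch.cuda.get_device_name(0))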
Ease of use for viewers! Comfy scares beginners away from AI and can be frustrating even for experienced users if you just want to do something quick and simple. That said, I have a ton of ComfyUI content coming soon 👀
Thanks! Glad it was somewhat helpful. As the title of the video indicates, the plugin is probably pretty old at this point - I recommend checking out StableProjectorZ for stuff like this.
Albert and others: I have a problem with Forge and alpha transparency and I can't find an answer. Please point me to one if you can, or (hopefully) give me some pointers in a reply.

Everything works well when I create fantastic images with NO BACKGROUND - the images show the light/dark gray checkerboard that appears to be an alpha-channel transparent background. But when I download the PNG and load it into my video editor or into GIMP, the file turns out to be a flat bitmapped image only: the checkered background is just colored pixels, not actual transparency.

Can anyone help me? Did I save the image incorrectly from SD Forge? Are there any tricks to getting a true alpha layer / transparent background in the downloaded file? In GIMP I added an alpha channel, but the checkered background is not in that layer; it is just baked-in graphics, and it takes me several minutes to erase it by hand. Also, is there any way to salvage my already-saved images and convert them to a true transparent background?
Hey, I finally followed this tutorial, but when I dragged the window and couch into the room, they positioned themselves on top of the room or outside of the cube. I could not get them to sit at the 3D cursor, i.e. on the floor. I have to press G and choose an axis to move them into position. Any ideas?
@albertbozesan wrote: "Is “snapping” on at the top of your viewport?" No, it is not on (the magnet icon). It's not such a biggie. I follow other tutorials and then I forget what I did.
Works like a charm! My idea of using the depth map to drive an animation in Stable Diffusion did not work out that well though, so maybe I need to make animations in Blender and only use generated textures from SD? 🤔 We'll see...
It’s a perfectly valid idea! You can steer animations using depth, it just needs to be a rather complex AnimateDiff workflow in ComfyUI. I’ll have a course up semi-soon that includes something like that.
I am likely to wait until there is a prompt app to generate .blend files. I am also likely to wait until the fucking nerds stop trying to make me learn more complicated shit to do shit nowadays! NO CODE SOLUTIONS! PROMPT TO COMPLETE OUTPUTS ONLY!
Great tutorial. I've followed every step, but at the render stage the image is a depth map that's just a black-to-white gradient; it does not capture the depth of the meshes. I can't find the solution. It renders a flat image.
Maybe your camera is outside the room? In that case it would be rendering the outside wall - the “backface culling” I set up at the beginning is only for the viewport preview, not the render, unfortunately.
@albertbozesan It changes the shape of the fall-off. If I'm not mistaken, it's more effective at keeping background elements that are far away from the camera than the original equation. The curve has a bend rather than a straight line on the graph.
I have found that stacking depth, line art, normal, and other ControlNets in Krita's Stable Diffusion plugin, each referencing the appropriate Blender render pass, is a good way to go. I have made several videos about this on my channel.
Glad you like it! You can find the model on Xinsir's Hugging Face page. Download diffusion_pytorch_model.safetensors and name it how you like. huggingface.co/xinsir/controlnet-depth-sdxl-1.0
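If you'd rather script the download than click around the site, here is a minimal sketch with the huggingface_hub library; the local_dir is just an example, point it wherever your UI keeps ControlNet models:

    from huggingface_hub import hf_hub_download

    # Grabs diffusion_pytorch_model.safetensors from Xinsir's depth ControlNet repo.
    path = hf_hub_download(
        repo_id="xinsir/controlnet-depth-sdxl-1.0",
        filename="diffusion_pytorch_model.safetensors",
        local_dir="models/ControlNet",  # example folder - adjust to your setup
    )
    print("Saved to:", path)

Rename the file afterwards if you want something more descriptive, e.g. controlnet-depth-sdxl-1.0.safetensors.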
Thanks for keeping it simple. I'm saving this tutorial. I saw Space Vets and thought it was really well done. I'm currently working on my own animated movie with AI, so it's good to see others doing this.
@albertbozesan Thank you. It's called Escape From Planet Omega-12. It's more adult-oriented sci-fi (think old-school stuff like Heavy Metal or Fire and Ice), and it's on my YouTube page as the starter video. I would love for you to check it out. I've been doing art and film for a long time, and although I'm by no means a technician, I'm very excited about the new era of AI filmmaking. I see people like you as pioneers, making movie history, so if I can carve out a small part for myself in all of this, I'll be very happy. Please stay in touch. Cheers.
@albertbozesan Thanks. Yeah, it's on my channel, titled Escape From Planet Omega-12. Although I'll say that what I'm doing right now has already gone way beyond what I've posted. I'll be updating soon. Cheers.
File "C:\Users\shear\Documents\stable-diffusion-webui\modules\sd_models.py", line 234, in select_checkpoint raise FileNotFoundError(error_message) FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at: - file C:\Users\shear\Documents\stable-diffusion-webui\model.ckpt - directory C:\Users\shear\Documents\stable-diffusion-webui\models\Stable-diffusionCan't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations. Stable diffusion model failed to load it opens a browser but all these errors
As an artist, I really liked how you used the AI tech as a tool to add intricate levels of detail to your art. AI art will open up new possibilities, or even art genres that weren't possible before, by making it easy to experiment and play around with details and textures, like you did. I'm really considering using AI tools when I make art.
Let me ask, if I have 3 in-game item images (guns, armor, medical boxes...) with completely different styles, how can I make Stable Diffusion synchronize the styles? Thank you
I answered this elsewhere in the comments but here’s the gist of my suggestion: I think you’re best off upscaling the classic way and then using the original PNG as a mask in Photoshop or similar.
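If a scripted version helps, here is a minimal Pillow sketch of that idea. It assumes "using the original PNG as a mask" means re-applying the original's alpha channel to the upscaled image, and the file names and plain Lanczos resize stand in for whatever "the classic way" is in your pipeline:

    from PIL import Image

    # Original render with transparency.
    original = Image.open("item_original.png").convert("RGBA")  # hypothetical file name

    # Stand-in upscale: most upscalers work on flat RGB and drop the alpha,
    # so we convert to RGB first (swap this resize for your real upscaler).
    scale = 4
    upscaled_rgb = original.convert("RGB").resize(
        (original.width * scale, original.height * scale), Image.LANCZOS
    )

    # Re-apply the original alpha channel, scaled to match, as the mask -
    # the "original PNG as a mask" step, done in Pillow instead of Photoshop.
    alpha = original.split()[3].resize(upscaled_rgb.size, Image.LANCZOS)
    result = upscaled_rgb.convert("RGBA")
    result.putalpha(alpha)
    result.save("item_upscaled_masked.png")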