Albert Bozesan
Learn A.I. Art beyond just the prompt! Subscribe to see the tools and workflows that get you consistent, professional results for games, videos, books, album covers and much more.

I’m Albert Bozesan, a video producer, writer and voice actor from Munich. With a few years of experience doing creative work the normal way, I’ve learned that open-source software and A.I. can superpower my workflow - and I want to show you how!

German legal stuff starts here.

Impressum (legal notice) per § 5 TMG:

Registered office:
Peak State Entertainment GmbH
Am Werfersee 8
83339 Chieming

Represented by:
Managing director Albert Bozesan

Commercial register entry:
Amtsgericht Traunstein
HRB 27020
VAT ID: DE318467792

Contact:
team@peak-state.com

More info at peak-state.com/impressum
Comments
@_NguyenThanhat-vu7vv · 1 day ago
What is the name of the checkpoint you are using? Can I have the download link?
@jasonoy3142 · 3 days ago
Nice video, dude! I'm stuck on something though. Each time you generate a texture, the add-on almost perfectly gives you what you're looking for (for example at 6:05), but for me it doesn't really do that; it gives a very zoomed-in texture.
@albertbozesan · 3 days ago
This video is super old, I assume the addon is outdated. I renamed the tutorial for that reason a while back.
@jasonoy3142 · 3 days ago
@@albertbozesan Idk how I missed the "Outdated" part lmao. Are you planning on making an update video?
@plejra · 6 days ago
Great tutorial. By the way, do you have any idea how to create a visualization of a particular house on a photo of an empty parcel? What workflow would you adopt?
@albertbozesan · 6 days ago
If you don’t need a different angle of the house, use a Canny ControlNet, img2img of the photo with a medium denoise. Make sure your prompt is very descriptive. Then inpaint the parcel.
@anonymous-pr2sy · 10 days ago
thx dood big help
@albertbozesan · 8 days ago
Happy to help!
@MikeBurnsArrangedAccidents · 14 days ago
Flux didn't care much for my reference image no matter how much I prioritized the ControlNet.
@albertbozesan · 14 days ago
I prefer controlnet with SDXL, more reliable at the moment. You could use Flux as img2img in a final pass?
@MikeBurnsArrangedAccidents · 13 days ago
@@albertbozesan Thanks! I won't bang my head against the wall looking for other workarounds.
@MikeBurnsArrangedAccidents · 14 days ago
Don't forget to render in EEVEE folks. Save yourself time.
@char_art · 18 days ago
It's very bad, plus your polyflow is rubbish... I suggest you go to 3D school first to master the basics...
@char_art · 18 days ago
Fortunately we don't have monkeys like you in our industry 😅
@visualdrip.official · 21 days ago
the blender startup file is no longer available :(
@albertbozesan · 20 days ago
Ugh, that’s a bummer. I have an updated version of this vid on my list but it could be a while. Working on a longer course right now.
@mjsavage711 · 24 days ago
I came here after finding one of Storybook Studios' shorts and wanted to learn about ControlNets and how they work. Excellent demo; I was able to recreate what you taught quickly.
@albertbozesan · 24 days ago
Thank you! Glad it was helpful :) more coming soon.
@EternalAI-v9b · 25 days ago
You wrote "old video"? OK, do you have an update?
@albertbozesan · 25 days ago
I have a few hours of content coming up in partnership with an education platform!
@EternalAI-v9b · 25 days ago
Question about the two images at 12:00 and 12:06: how do you ensure the wall texture behind the guy is precisely the same texture as the one on the wall in the first image?
@albertbozesan · 25 days ago
We don’t. We prompt and curate the results carefully.
@EternalAI-v9b · 25 days ago
@@albertbozesan Thanks for the answer!
@LightWarriors4Life · 26 days ago
Been looking into Blender and possibly turning some of our short films into animation types. For copyright and trademark purposes, how safe is it being Open Source? 🤔
@albertbozesan · 25 days ago
Blender is used by massive commercial studios. It’s safe. Just make sure you download the official version off blender.org, there are some fakes out there.
@taavetmalkov3295 · 28 days ago
Hello, good tutorial! I was wondering if I can take the generated space image and somehow base the next image on the previous one so that the details remain the same... for example, if I want the same room but a view from the side of the couch.
@albertbozesan · 27 days ago
Not precisely, but you can check out my latest tutorial to get more control in rooms. You can also research “IPadapter” to get similar images.
@qoph1988 · 29 days ago
One of the best teachers for AI visual art stuff, thanks for helping me keep abreast of these tools as the industry changes
@albertbozesan · 27 days ago
That means a lot, thank you! A big project is coming up with more AI lessons, stay tuned :)
@minuandtoyo · 29 days ago
Hi Mr. Albert. I've been following you for a long time; I just couldn't bring myself to cold-message you on LinkedIn. I found a company (selling guitar courses) based in Helsinki that is looking for an AI video creator intern. I know you're not an intern, but I thought you might be interested in reaching out to them; maybe you can collab in the future. I'm not affiliated with them in any way, shape or form, just saw the ad. Cheers!
@albertbozesan · 29 days ago
Hi Cosmin! Thanks for reaching out and sharing this - don’t worry, feel free to connect on LinkedIn anytime! I’m very happy as Creative Director at Storybook Studios, but I’ll push this comment. Maybe somebody in this community will find it interesting!
@minuandtoyo · 29 days ago
@@albertbozesan OK, thank you for the reply. I'll forward the job listing in a PM then; maybe your studio might be interested.
@vbag42 · 1 month ago
I got an error while running the webui-user.bat file: "RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check." Does anyone know how to fix this? Thx.
@albertbozesan · 29 days ago
That’s a classic error, happens to many all the time. Unfortunately there’s no super clear fix - you have to try to reinstall torch a couple of times, worst case reinstall the UI. A quick google or Reddit search with your exact error message will guide you through.
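For reference, the flag the error message itself suggests goes into webui-user.bat (a hypothetical minimal edit; note that skipping the CUDA test makes the UI fall back to CPU, which is very slow, so this is a workaround rather than a fix):

```
rem webui-user.bat -- pass the flag from the error message to disable the GPU check
set COMMANDLINE_ARGS=--skip-torch-cuda-test

call webui.bat
```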
@stableArtAI · 1 month ago
Skimmed it, but looking forward to watching and following along more in depth with this technique. Great stuff.
@ravenoftheredsky · 1 month ago
I've seen this video before. I will watch it again. It's so satisfying
@PipDude13 · 1 month ago
Brilliant! P.S. the git clone way didn't work for me, so I downloaded the Stable Diffusion release and it worked like a charm.
@albertbozesan · 1 month ago
Glad it worked for you! Yeah, the one click installers are super reliable at this point. If I made this video again I would just recommend that.
@PipDude13 · 1 month ago
@@albertbozesan I was breaking my head over an inconsistent-hash error I kept getting until I found that install method :) appreciate your content!
@goldenlotus8968 · 1 month ago
Very useful!
@albertbozesan · 1 month ago
Glad you think so!
@NerdyTrends · 1 month ago
any reason why you don't use comfyui instead?
@albertbozesan · 1 month ago
Ease of use for viewers! Comfy scares beginners away from AI and can be frustrating even for experienced users if you just want to do something quick and simple. That said, I have a ton of ComfyUI content coming soon 👀
@idengifafict9737 · 1 month ago
This was really good. I really felt like the add-on wasn't very useful, but I was obviously using it wrong. I can't wait to play with it again.
@albertbozesan · 1 month ago
Thanks! Glad it was somewhat helpful. As the title of the video indicates, the plugin is probably pretty old at this point - I recommend checking out StableProjectorZ for stuff like this.
@bobfrank7055 · 1 month ago
Albert and others, I have a problem with Forge and Alpha T and I can't find an answer. Please point me to one if you can, or (hopefully) give me some pointers in a reply.

All is working well when I create fantastic images with NO BACKGROUND: the images have the light/dark gray checkerboard that appears to be an alpha-channel transparent background. I download the PNG and load it into my video editor and also into GIMP. However, the PNG appears to be a flat bitmapped image only; the checkerboard background is just colored pixels, not a transparent background.

Can anyone help me? Did I save the image incorrectly from SD Forge? Are there any tricks to getting a true alpha layer / transparent background in the downloaded file? In GIMP I added an alpha channel, but the checkerboard background is not in that layer; it's just graphics, and it takes me several minutes per image to erase it by hand. Also, is there any way to salvage my already-saved images and convert them to a true transparent background?
@fm3d-k1w · 1 month ago
Good job
@rowerowynomada2158 · 1 month ago
Albert, the code isn't working.
@albertbozesan · 1 month ago
Like the title says, this is outdated. It remains up because of sponsor obligations but I recommend checking out my newer vids.
@ZergRadio · 1 month ago
Hey, I finally followed this tut, but when I dragged the window and couch into the room, they positioned themselves on top of the room or outside of the cube. I couldn't get them to sit at the 3D cursor, i.e. on the floor; I have to use G and choose an axis to move them into position. Any ideas?
@albertbozesan · 1 month ago
Is “snapping” on at the top of your viewport?
@ZergRadio · 1 month ago
@@albertbozesan No, "snapping" (the magnet) is not on. It's not such a biggie; I follow other tuts and then I forget what I did.
@sirmeon1231 · 1 month ago
Works like a charm! My idea of using the depth map to drive an animation in Stable Diffusion didn't work out that well though, so maybe I need to make the animations in Blender and only use generated textures from SD? 🤔 We'll see...
@albertbozesan · 1 month ago
It’s a perfectly valid idea! You can steer animations using depth, it just needs to be a rather complex AnimateDiff workflow in ComfyUI. I’ll have a course up semi-soon that includes something like that.
@phu320 · 1 month ago
I am likely to wait until there is a prompt app to generate .blend files. I am also likely to wait until the fucking nerds stop trying to make me learn more complicated shit to do shit nowadays !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! NO CODE SOLUTIONS !!!!!!!!!!!!!!!!!!!! PROMPT TO COMPLETE OUTPUTS ONLY !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
@albertbozesan · 1 month ago
And this is why the nerds get to make cool stuff first 🤓
@phu320 · 1 month ago
@@albertbozesan NERDS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
@thibaudherbert3144 · 1 month ago
Great tutorial. I've followed every step, but at the render stage the image is a depth map that is just a black-to-white gradient; it doesn't recognize the depth of the meshes and renders a flat image. I can't find the solution.
@albertbozesan · 1 month ago
Maybe your camera is outside the room? In that case it would be rendering the outside wall - the “backface culling” I set up at the beginning is only for the viewport preview, not the render, unfortunately.
@thibaudherbert3144 · 1 month ago
@@albertbozesan Yes, that fixed it! Thank you =)
@AZTECMAN · 1 month ago
Great job. One thing I would do differently is use y = 1 / (depth+1) (nodes for addition and division; no need to invert or colorize)
@albertbozesan · 1 month ago
Thanks! I don't quite follow - is this not just a different node for the same result? How does it save effort?
@AZTECMAN · 1 month ago
@@albertbozesan It changes the shape of the fall-off. If I'm not mistaken, it's more effective than the original equation at keeping background elements that are far away from the camera. The curve has a bend rather than a straight line on the graph.
@albertbozesan · 1 month ago
@@AZTECMAN I see! Thanks for the tip!
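The two mappings discussed in this thread can be compared in a few lines of Python (a sketch; `linear_inverted` and its `max_depth` parameter are assumptions standing in for the video's invert/normalize node setup, not taken from it):

```python
def linear_inverted(depth, max_depth=20.0):
    """Linear falloff: scale depth by max_depth, invert, clamp.
    Everything at or beyond max_depth collapses to 0 (pure black)."""
    return max(0.0, 1.0 - depth / max_depth)

def reciprocal(depth):
    """The y = 1 / (depth + 1) mapping suggested above.
    Curved falloff: distant objects approach 0 but never reach it,
    so far-away background elements keep some separation."""
    return 1.0 / (depth + 1.0)

# Compare both falloffs at a few distances
for d in [0.0, 1.0, 5.0, 20.0, 50.0]:
    print(d, round(linear_inverted(d), 3), round(reciprocal(d), 3))
```

At depth 50 the linear map has long since clipped to black, while the reciprocal map still returns a small positive value, which matches the "bend rather than a straight line" description.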
@fx_node · 1 month ago
I have found that stacking depth, line art, normal, and other ControlNets in Krita's Stable Diffusion plugin, each referencing the appropriate Blender render pass, is a good way to go. I have made several videos about this on my channel.
@Rock-Of-3y3 · 1 month ago
Exactly what I needed. Thank you.
@albertbozesan · 1 month ago
You're welcome!
@GiancarloBombardieri · 1 month ago
Great!! I didn't know it was possible!
@GiancarloBombardieri · 1 month ago
Great tutorial! Thanks!! Where can I download the Depth Xinsir ControlNet model that you use?
@albertbozesan · 1 month ago
Glad you like it! You can find the model on Xinsir’s huggingface. Download diffusion_pytorch_model.safetensors and name it how you like. huggingface.co/xinsir/controlnet-depth-sdxl-1.0
@albertbozesan · 1 month ago
Here you go! huggingface.co/xinsir/controlnet-depth-sdxl-1.0/tree/main Rename diffusion_pytorch_model.safetensors
@christophermoonlightproduction · 1 month ago
Thanks for keeping it simple. I'm saving this tutorial. I saw Space Vets and thought it was really well done. I'm currently working on my own animated movie with AI right now, so it's good to see others doing this.
@albertbozesan · 1 month ago
Thank you!! Feel free to share your project if you want to, I’m very curious 😄
@christophermoonlightproduction · 1 month ago
@@albertbozesan Thank you. It's called Escape From Planet Omega-12. It's more adult-oriented sci-fi (think old-school stuff like Heavy Metal or Fire and Ice), but it's the starter video on my YouTube page. I would love for you to check it out. I've been doing art and film for a long time, and although I'm by no means a technician, I'm very excited about the new era of AI filmmaking. I see people like you as pioneers, making movie history, so if I can carve out a small part for myself in all of this, I'll be very happy. Please stay in touch. Cheers.
@christophermoonlightproduction · 1 month ago
@@albertbozesan Thanks. Yeah, it's on my channel, titled Escape From Planet Omega-12. Although I'll say that what I'm doing right now has already gone way beyond what I've posted. I'll be updating soon. Cheers.
@seans4018 · 1 month ago
Do you have tutorials on how you did the Space Vets art??? Looks great!
@albertbozesan · 1 month ago
Thank you! Check out the making-of vid on the website, and the CivitAI talk linked in the description for more info 😄
@kdrude123 · 1 month ago
Excellent video... I'm a Blender newbie and I could follow along. I appreciate all the tips and the conciseness!
@albertbozesan · 1 month ago
Thank you for letting me know!! Clarity was super important to me, I know how tricky Blender can get.
@davtech · 1 month ago
YouTube recommended this, and I subscribed. Looking forward to watching future and past videos.
@albertbozesan · 1 month ago
That’s great to hear! Thanks.
@Techne89 · 1 month ago
Nice to see you back, mate.
@albertbozesan · 1 month ago
Thank you for sticking around!
@albertwang5974 · 1 month ago
What a nice trick!
@albertbozesan · 1 month ago
Glad you like it!
@_tmh · 1 month ago
Actually so much more than a trick. This tech is key for any filmmaker who wants to do more than just trailers! Thanks for the awesome explanation!
@ZergRadio · 1 month ago
Very, very nicely done. I love this method. As you said, this gives one much more control over what one wants!
@albertbozesan · 1 month ago
Thank you! Glad you like it 😄
@OtakuOrigins · 1 month ago
When I use LoRAs, it gives completely different pictures compared to the model alone.
@albertbozesan · 1 month ago
I don't know if LayerDiffuse plays well with LoRAs.
@shearak1421 · 1 month ago
It opens a browser, but I get all these errors:

File "C:\Users\shear\Documents\stable-diffusion-webui\modules\sd_models.py", line 234, in select_checkpoint
    raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
 - file C:\Users\shear\Documents\stable-diffusion-webui\model.ckpt
 - directory C:\Users\shear\Documents\stable-diffusion-webui\models\Stable-diffusion
Can't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
Stable diffusion model failed to load
@albertbozesan · 1 month ago
Did you download any models?
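For anyone hitting the same error: it lists exactly where the UI looks for models. A downloaded checkpoint just needs to land in one of those locations (the filename below is only a hypothetical example; any .ckpt or .safetensors file works):

```
stable-diffusion-webui\
└─ models\
   └─ Stable-diffusion\
      └─ v1-5-pruned-emaonly.safetensors   <-- downloaded checkpoint goes here
```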
@yorkewu9946 · 2 months ago
Awesome video! Can I achieve the same thing using Unreal Engine? Any idea? Thanks a lot.
@albertbozesan · 1 month ago
I’m sure that’s easily done, but I’m no expert in that engine.
@yorkewu9946 · 1 month ago
@@albertbozesan Thank you~😇
@squitist · 2 months ago
As an artist, I really liked how you took advantage of the AI tech as a tool to add intricate levels of detail to your art. AI art will open up new possibilities, or even art genres that weren't possible before, by making it easy to experiment and play around with details and textures the way you did. I'm really considering using AI tools when I make art.
@albertbozesan · 2 months ago
Absolutely check AI out! It may or may not work well for you and your process, but trying is always a good thing when it comes to new tools!
@hoangpham9930 · 2 months ago
Let me ask: if I have 3 in-game item images (guns, armor, medical boxes...) with completely different styles, how can I make Stable Diffusion synchronize the styles? Thank you.
@albertbozesan · 2 months ago
Yes! Try the "IPAdapter" controlnet. I explain ControlNet in my newer videos.
@Artazar777 · 2 months ago
Great video, thank you. Can you tell me how to upscale an image without a background? When I upscale such an image, a background gets added to it.
@albertbozesan · 1 month ago
I answered this elsewhere in the comments but here’s the gist of my suggestion: I think you’re best off upscaling the classic way and then using the original PNG as a mask in Photoshop or similar.
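The mask idea can be sketched with plain NumPy arrays (a hypothetical nearest-neighbour 2x upscale standing in for a real upscaler; the principle matches the Photoshop masking suggested above: upscale the RGB, upscale the original alpha separately, then recombine):

```python
import numpy as np

def upscale_nn(img, factor=2):
    """Nearest-neighbour upscale of an H x W x C array."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def upscale_keep_alpha(rgba, factor=2):
    """Upscale an RGBA image while preserving true transparency:
    upscale RGB and the original alpha channel separately, then restack."""
    rgb, alpha = rgba[..., :3], rgba[..., 3:]
    return np.concatenate([upscale_nn(rgb, factor), upscale_nn(alpha, factor)], axis=-1)

# Tiny demo: a 2x2 RGBA image with a single opaque red pixel
src = np.zeros((2, 2, 4), dtype=np.uint8)
src[0, 0] = [255, 0, 0, 255]
big = upscale_keep_alpha(src)
print(big.shape)  # (4, 4, 4)
```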
@mooripo · 2 months ago
Thank you! It even works on my AMD 7800 XT. There is an error at the start, but when you google it you'll find a solution quickly.
@Dane_Riazer · 2 months ago
Awesome tutorial! This will be one of my standard videos for mapping with Dream Textures for my animation short. Thank you again!
@albertbozesan · 2 months ago
This vid is pretty old - check out stableprojectorz.com for a better free option to texture assets!