This Will Change EVERYTHING in Architectural Visualization FOREVER! 

Design Input
102K views
Published: Oct 27, 2024

Comments: 162
@mkemaladro5942 6 months ago
Very nice work. I'm a student trying to learn this and just stumbled on your video. It's constructive and informative, keep up the good work, sir!!!
@ilaydakaratas1957 1 year ago
Such useful tools!! I will definitely try it out! Thank you for the video!! Also, that was an interesting pavilion model.
@designinput 1 year ago
Hey there, thanks for your support and lovely comment ❤❤ I hope you liked the pavilion :)
@B-water 1 year ago
A gift from heaven... a million thanks 😃😃😃
@designinput 1 year ago
Thanks for your great comments!
@nickp8094 1 year ago
I think it's really good, and as a visualiser I can see the evolution of it in my head. It feels like one day you will really be able to load in custom pre-written scripts that perform very specific functions, to make it match the experience of working for a client even more closely. Basically, visualisation will become a bit like computer programming: not necessarily quicker or easier.
@designinput 1 year ago
Hi, thanks for your comment. Well said, totally agree!
@mukondeleliratshilavhi5634 1 year ago
GPT auto
@peterpanic7019 1 year ago
Thanks for your great-quality videos. I just watched the latest ones about AI and image generation, and I can't wait to try them out. Hope your channel grows :)
@designinput 1 year ago
Hey, thank you so much for your support ❤ Please let us know what you think after you try it out :)
@mockingbird1128 1 year ago
What did you watch? I'm new to this.
@armannasr3681 1 year ago
@mockingbird1128 Try Stable Diffusion + ControlNet.
@sherifamr4160 1 year ago
Love the way you explained it: to the point and easy to follow. I do have a question, and hopefully you will read my comment. If you already have materials on your pavilion, would that somehow steer the rendering process toward what we want, acting as extra parameters? I hope I am making sense. Again, thank you so much; I love that you are sharing your knowledge with us, it shows how amazing you are as a person.
@designinput 1 year ago
Hey, thanks a lot for your lovely comment! Unfortunately, it is not possible to use materials as a parameter at the moment, but I am sure we will soon be able to have more control over this workflow. Thanks a lot for your kind words.
@reflections191 1 year ago
Very well explained. Thanks for the great video!
@designinput 1 year ago
Hey, thanks for your lovely comment!
@firatgunesbalci2743 1 year ago
Great videos 👍🏻👍🏻👍🏻 Can you explain a SketchUp workflow as well?
@designinput 1 year ago
Thanks a lot!
@NicoChin 1 year ago
Tell the client that the last picture is man-made, then tell them the same picture was created by AI. If the client's attitude does not change, then AI will really change the world.
@phgduo3256 1 year ago
I have become a fan of your work. I will spend the next holidays this month on these AI series. Thanks!
@designinput 1 year ago
Hi, thanks a lot for your lovely comment and feedback!
@pranayyalamuri3127 1 year ago
Thanks for the content ❤
@designinput 1 year ago
Hey, thanks a lot for your great comment and support!
@tatianagavrilova2252 1 year ago
It looks like magic! Thanks a lot!
@designinput 1 year ago
Hi, thanks a lot, glad you liked it! You are very welcome!
@MDLEUA 1 year ago
Great tutorial. I followed Ambrosini's videos, but I like this format more!
@designinput 1 year ago
Hey, thank you! Glad to hear that you liked it! Did you have a chance to try it?
@niirceollae2 1 year ago
Wow... that is insane. I have to try it now.
@designinput 1 year ago
Hey, thanks for your lovely comment! Please share your experience after you try it out, and feel free to ask if you have any problems.
@dkn822 1 year ago
Thank you for all this amazing information and these resources; I will definitely use this for my projects. Subscribed and eager to watch your upcoming videos! Keep it up!
@designinput 1 year ago
Hey, thanks a lot for your lovely comment and support! I am happy to hear that you liked it! Please share your experience with me once you try it out!
@HannesGrebin 1 year ago
Wizard! Thank you so much for your concise introduction and other videos. I just came along from the Parametric Architecture course of Arturo Tedeschi, who you might know (the Grasshopper guy).
@designinput 1 year ago
Hi, thanks a lot for your lovely comment and feedback!
@emekachime1089 1 year ago
Looking forward to your next video of CLASSICAL RENDER VS AI RENDER 👍
@designinput 1 year ago
Hey, thanks a lot for your lovely comment! It will be out soon :)
@JJSnel-uh3by 1 year ago
I love the setup, but the voice is just too funny xD
@designinput 1 year ago
:)
@dianaallaham2801 10 months ago
Since your video there has been an update to the Ambrosinus toolkit, and for some reason I cannot get the port to be available. Do you happen to know what inputs should go into LaunchSD, as it has many more inputs now?
@amazingsound63 1 year ago
Scary for future job opportunities.
@ilhan1936 1 year ago
That's really great, thanks for the video! Eline sağlık, arkadaşım ("well done, my friend") :)
@designinput 1 year ago
Hi Ilhan, thanks a lot for your lovely comment :)) ❤❤
@azimbekibraev1249 6 months ago
Selam aleykum, Omer! Ambrosinus has been updated and your sample GH file no longer works. Could you please share an updated version, if this workflow is still relevant? Thank you in advance.
@william0916 1 year ago
Thank you for sharing this fabulous workflow!! I am about to try it out, and I'm wondering if there are any newer extensions or developments you would suggest we use (since this video is from April, I'm not sure if there's anything new in these three months!). Thank you in advance and have a nice day :)
@designinput 1 year ago
Hey, thanks a lot for the feedback! Of course, there are lots of new developments happening every day; I am trying to stay updated as much as I can and share what I learn. As for this specific workflow, there have been major new updates to both Stable Diffusion and the Grasshopper extension, but both should still work fine!
@borchzhang2211 1 year ago
How should I handle the parameter settings for indoor views so they better align with the model?
@designinput 1 year ago
Hey, thanks for your comment! For indoor views, you can try the Depth model too. Is there any specific parameter you want to ask about? Maybe I can help better with that one :)
@zafiriszafiropoulos5346 1 year ago
Hi there. I only have Rhino 6, and the Ambrosinus tool is only available for Rhino 7. Is there another way?
@MertMert-g7c 8 months ago
I get a "no data" problem when I connect the 2.24 LaunchSD component to a panel. How can I solve it?
@Masoud.Ansari 1 year ago
Thank you for sharing, this is awesome 👌
@designinput 1 year ago
Hey, thanks a lot! Glad to hear that you liked it :)
@Masoud.Ansari 1 year ago
@designinput You're welcome, bro.
@darkrider897 1 year ago
Hi sir, I got stuck at 2:28, where you clicked on the administrator window. I tried right-clicking webui-user.bat and choosing "Run as administrator", but it just flashes and nothing happens. How do I solve this?
@designinput 1 year ago
Hey, you don't need to run the webui-user.bat file as administrator; you need to run Rhino as administrator. Also make sure to add the --api argument to the .bat file. If you can't start Stable Diffusion from inside Grasshopper, you can just run it manually, and if you have the --api argument set, it should automatically connect to the Grasshopper plugin.
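For reference, a quick way to confirm that the web UI really started with --api is to query one of its REST endpoints. A minimal Python sketch, assuming the default local address http://127.0.0.1:7860 (adjust the port if you changed it):

    import requests

    # Ask the AUTOMATIC1111 web UI API for its installed checkpoints.
    # If this fails, the server is not running or was started without --api.
    resp = requests.get("http://127.0.0.1:7860/sdapi/v1/sd-models", timeout=10)
    resp.raise_for_status()
    print([m["title"] for m in resp.json()])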
@DannoHung 1 year ago
Backing the rendered image out to a textured and lit scene is probably the next step, hah!
@designinput 1 year ago
👍
@mrezaforoozandeh520 10 months ago
Thanks, but when I click the Start button, webui-user.bat won't run with --api. I edit the .bat file, but after clicking Start it isn't run that way, and the .bat file changes back to the original.
@kedarundale972 1 year ago
Thank you for the wonderful video. I had one question: everything in the script works perfectly on my computer, but when I connect a value list to Mode, I get an error. Do you know why this could be? Basically, Mode doesn't take any input apart from 0, which is T2I Basic. In my Stable Diffusion I do see the other models, but I am not sure what the error is. The same thing happens with the sampler model: it doesn't take any input apart from Euler A. Any suggestions would be helpful. Thank you.
@designinput 1 year ago
Hey, thanks for your comment. I am not sure why you can't see the other modes. There has been a new update to the Ambrosinus toolkit plugin since I published the video; maybe you need to update it for it to work. I will check the file and upload an updated version soon. Let me know if you are still having problems with it. Thank you!
@firatgunesbalci2743 1 year ago
When I first saw the teaser, I thought you had used ArkoAI.
@designinput 1 year ago
Hey, haha, yes, that's the most "popular" one nowadays, but I feel like you don't have much control over it. I will share a video soon comparing different AI render alternatives. Thanks for your comments!
@motivizer5395 1 year ago
Amazing video. Can you make a video about this process for SketchUp as well?
@designinput 1 year ago
Hi, thanks for your comment and suggestion! I will definitely try it out and share the results!
@전형욱-e1w 1 year ago
Hi. What are your Rhino and Ladybug versions? Ladybug is not working in my Rhino.
@designinput 1 year ago
Hey, I was using version 1.6; you can download it here: www.food4rhino.com/en/app/ladybug-tools But even if Ladybug doesn't work, you can still use this workflow; you just won't be able to see the images directly inside Grasshopper.
@arv3ryn 1 year ago
Great video. Also, what are your computer specs? I have a basic laptop and I'm wondering whether I can run this.
@designinput 1 year ago
Hey, thanks a lot for your lovely feedback! I am using a laptop with an RTX 3060 (6 GB VRAM) and a 12th-gen Intel Core i7-12700H CPU. Of course, for this process the most important part is the GPU. I will share another workflow for using Stable Diffusion without your own computer in a couple of days.
@adel.419 1 year ago
I followed everything in the video, but when I tried my own model and hit the generate button, the AleNG-Ioc component turned red and didn't generate anything, and the panel connected to the info output says "No data was collected", even though the viewport appears in the LB image viewer.
@lawrencenathan351 1 year ago
Quick question: do I just add this on top of SketchUp, or is there a simple tutorial I can follow on combining AI with SketchUp? Thanks.
@designinput 1 year ago
Hi, this workflow doesn't work with SketchUp at the moment, but you can try platforms like VerasAI. Thanks for your comment!
@moaazaldahan1175 1 year ago
Thank you very much!
@designinput 1 year ago
Hey, you are very welcome!
@METTI1986LA 1 year ago
It's actually good, but I'd rather have control over the textures and put them where I want them; it's really not that hard. Of course it takes a bit more time, but why would you need 1,000 renders just to get overwhelmed by the choices you have?
@hopperblue934 1 year ago
Great, bro 💖💖💖
@designinput 1 year ago
Hi, thanks a lot for the lovely feedback!
@jelisperez7968 1 year ago
Thank you for sharing this amazing tutorial. Is it still working? I am having an issue with the ControlNet updates: "ControlNet warning: Guess Mode is removed since 1.1.136. Please use Control Mode instead." If I choose CN v1.1.X in the Ambrosinus tool, the resulting image differs completely from the original image. I also changed the directory to point directly to the ControlNet path. Any hint? Is there a way to choose the SD model? Best.
@jelisperez7968 1 year ago
I figured out that with the update, the ControlNet Depth modes work as expected, but Canny mode does not. I've posted the bug on Food4Rhino. Many thanks again.
@designinput 1 year ago
Hey, good to hear that it's working :) For me, it was working without any issues. Thanks for your comment!
@Albert_Riseal 1 year ago
Awesome! I like it, thanks. Please make a tutorial using Blender, if possible.
@designinput 1 year ago
Hey, thanks a lot!
@shinndin 1 year ago
Amazing
@designinput 1 year ago
Hi Dina, thanks a lot for your excellent feedback ❤❤
@user-ae5pa 1 year ago
Soooooo good
@designinput 1 year ago
Hi, thanks a lot for your great comment! ❤
@Peter-hn9yv 1 year ago
I got an error in Grasshopper saying "index was out of range". Have you encountered this issue before?
@diegovazquezdesantos4667 1 year ago
Thank you so much for the clear explanation. I tried to follow this video with the new update of Ambrosinus but was not able to. When I installed v1.1.9, I was able to use your definition, although the SeeOut output (LA_SeeOut) throws an "index was out of range" error. Any ideas on how to fix this?
@designinput 1 year ago
Hey, thanks a lot! I think you just need to generate an image first; after that you will be able to see it and the error will disappear.
1 year ago
Hi, thanks for the video. I checked other videos and got somewhere, until I got stuck at the web UI part. My webui-user file looks different from yours: there are "--xformers" and "git pull" lines in yours, but I don't have them, and unfortunately just copying yours doesn't work :) I don't know what is missing, but I can say it is a pretty overwhelming setup for sure.
@designinput 1 year ago
Hey Cankat, thanks for your comment. "--xformers" is an optional flag you can add if you have an RTX 30- or 40-series GPU; it speeds up the generation process. The "git pull" command automatically checks for new updates when you run SD. So you don't have to have either of them; the only must-have is "--api", which gives access directly inside the Grasshopper file. Since this is an early experimental workflow, you are right that it is not so user-friendly. But it will surely develop, and I will share newer versions very soon. Thank you!
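For reference, a webui-user.bat edited as described above typically looks like the sketch below. Only the git pull line and the COMMANDLINE_ARGS value are user additions to the stock AUTOMATIC1111 launcher (--api is required for the Grasshopper connection; --xformers is the optional RTX 30/40-series speedup):

    @echo off

    git pull

    set PYTHON=
    set GIT=
    set VENV_DIR=
    set COMMANDLINE_ARGS=--api --xformers

    call webui.bat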
@infographie 1 year ago
Excellent
@designinput 1 year ago
Hi, thank you!
@sirousghaffari9556 1 year ago
Hello, thank you very much for your good lessons. In the third minute of the tutorial, you say that you put the Grasshopper definition in the description section, but unfortunately I can't find it. Could you guide me?
@designinput 1 year ago
Hi, thanks for the feedback! You can find all the resources mentioned in the video here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/ And you can download the file here: www.notion.so/designinputs/AI-Render-Engine-Template-File-02d34b595f824ca6a9f1339470fb1387?pvs=4
@wido.daniel 1 year ago
Thank you man, this is SO good! To your knowledge, would it be possible to use this in Revit through Dynamo?
@designinput 1 year ago
Hey, thanks a lot for the feedback ❤ Hmm, I am not completely sure, but I believe there is no extension for that yet. I am, however, experimenting with connecting Revit to this same workflow with Rhino.Inside.Revit. I will share it as soon as it's ready :)
@wido.daniel 1 year ago
@designinput That would be awesome!
@soitalwaysgoes 1 year ago
Hello! I checked out your Instagram, and I would die for a tutorial on how to do those veil textures you made!
@designinput 1 year ago
Hi, oh, thank you for your lovely feedback; happy that you liked them ❤ I created them with Midjourney v5. Sure, I will do a video about it soon!
@youssefdaadoush8755 1 year ago
Thanks a lot for the video, it's really incredible. I just have a question: I did everything exactly the same, but the generated results ignore my base image. What could be the problem? Otherwise it works fine directly in the Stable Diffusion web window.
@designinput 1 year ago
Hey Youssef, thanks for your great comment! It sounds like there is a problem with ControlNet. Did you enable it?
@韩鹏坤 1 year ago
The Ambrosinus toolkit cannot be installed in my Rhino 7. Which version should I download?
@designinput 1 year ago
Hey, I am also using Rhino 7 and was able to use the latest version of the Ambrosinus toolkit without any issues. If you are still having problems, you may want to contact the developer.
@oof1498 1 year ago
Great! What if I want to use the same material in the same place but from a different perspective?
@designinput 1 year ago
Hey, thanks for your feedback ❤ You can keep the same seed number across the different views to get similar results. Still, it is not so easy to generate precisely the same materials and textures every time. If I figure out something for more consistent results, I will share it :)
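For reference, this is what "keeping the seed" looks like at the API level. A minimal Python sketch, assuming the AUTOMATIC1111 web UI is running locally with --api; the file names and prompt are placeholders:

    import base64
    import requests

    URL = "http://127.0.0.1:7860"

    # Base image: a capture of the current Rhino viewport (placeholder path).
    with open("viewport.png", "rb") as f:
        init_image = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [init_image],
        "prompt": "timber pavilion in a park, photorealistic, golden hour",
        "seed": 1234567890,         # reuse this exact value for every camera view
        "denoising_strength": 0.6,  # lower values stay closer to the base geometry
        "steps": 25,
    }
    r = requests.post(URL + "/sdapi/v1/img2img", json=payload, timeout=300)
    r.raise_for_status()
    with open("render_out.png", "wb") as f:
        f.write(base64.b64decode(r.json()["images"][0]))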
@alexanderaggersbjerg5187 1 year ago
Thanks for the great explanation! Got everything up and running :) One quick question: I am having issues with the depth ControlNet. I downloaded the previous ControlNet versions (aside from the new ControlNet v1.1 versions), but the depth and Canny masks are very low quality. This is only an issue when I use ControlNets from Grasshopper. Any idea what the problem may be?
@simongobel2709 1 year ago
I have the same problem, unfortunately... any answer yet?
@firatgunesbalci2743 1 year ago
Hi, what is your computer hardware configuration?
@designinput 1 year ago
Hey Fırat, I am using a laptop with an RTX 3060 (6 GB VRAM) and a 12th-gen Intel Core i7-12700H CPU.
@sirousghaffari9556 1 year ago
In the fourth minute, when you press the Start button, it renders without any problem, but for me the SeeOut component turns red and gives this error: "Solution exception: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index". Can you help?
@11Bashar 1 year ago
Have you found a solution yet?
@sirousghaffari9556 1 year ago
@11Bashar Unfortunately, I gave up on connecting it to Grasshopper, because I can't make sense of its errors and there is no explanation for them anywhere.
@lorenzoguadagnucci-e1q 1 year ago
Thank you so much!! I'm just having issues with the resolution of the depth image it creates: it's really low, and because of that I can't use my models. Can I increase it? Thanks anyway, this tool is amazing 👍
@lorenzoguadagnucci-e1q 1 year ago
To be more precise, I probably have a problem with the preprocessor; I can't change it, so it doesn't generate the correct depth image.
@designinput 1 year ago
Hey, thanks for the comment! If the image resolution from the viewport is low, you can try printing a view from Rhino at a custom resolution and using it in Stable Diffusion directly. It may help, but don't go larger than 1024x1024; that will slow down the process dramatically. Once you like one of the views, you can upscale the image later. I hope I understood your question correctly. Let me know if you have any other issues.
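For reference, the custom-resolution capture can also be scripted instead of done through the Print dialog. A minimal sketch using Rhino's built-in ViewCaptureToFile command from a Python script; the output path is a placeholder, and the option names may vary slightly between Rhino versions:

    import rhinoscriptsyntax as rs

    # Capture the active viewport to a 1024x1024 PNG to use as the
    # Stable Diffusion base/depth input.
    rs.Command('-_ViewCaptureToFile Width=1024 Height=1024 '
               '"C:\\renders\\viewport_1024.png" _Enter')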
@ezzathakimi2201 1 year ago
Please make a video on how to use it with 3ds Max + Corona.
@mukondeleliratshilavhi5634 1 year ago
I think it's a great tool for rapid prototyping with fewer images. It unlocks more possibilities and gives us and the client more variety with less time and energy. The biggest hope is that we arrive at a final image we might not even have thought possible before. But for a final image, I think the old method is still king. Who knows, this time next year it might be a different story. Will I use it for my next project? Oh yes, but the Blender version; it's always best to get in early with new technology.
@designinput 1 year ago
Hey, thanks for your comment; I totally agree! Hmm, that's interesting; why do you prefer Blender specifically?
@mukondeleliratshilavhi5634 1 year ago
@designinput There are a few reasons. 1) Being open source, it offers easy access without restrictions, so it's worth investing time and resources in; I'm a freelancer/business owner, and it is important that I run as lean as possible. 2) Rapid development: it can do a lot of things and is ever expanding its reach, so I'm able to complete a project in one piece of software without having to hop to another. Yes, it's not as strong as Rhino or Max, but it gives great quality. 3) The community: they drive the development and education of the software, so it's sort of owned by us; look at the number of tutorials, add-ons, and stores available. There is more, but let me stop here.
@Peter-hn9yv 1 year ago
Does this workflow save the viewport and the dimensions of the image?
@designinput 1 year ago
Hey, yes, it saves the image at exactly the viewport size and uses the same aspect ratio for the new image. Thanks for your comment!
@sossiopalmiero3582 1 year ago
Where can I find the Grasshopper file?
@designinput 1 year ago
Hey, you can find all the resources here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/
@danr9277 1 year ago
This is great! How is the speed of the rendering? It seems very fast.
@designinput 1 year ago
Hey, thanks for your comment! It mostly depends on your GPU. I am using an RTX 3060 with 6 GB VRAM, and I can generate a 1024x1024 image in 1-2 minutes.
@Macora3251 1 year ago
Can you get the same result twice if the client wants the exact same render with just the column material changed, for example?
@designinput 1 year ago
Hey, thanks for your comment! Generating exactly the same image twice can be challenging, but if you want to change one part of it, you can use inpainting to edit that area.
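For reference, a minimal sketch of what that inpainting call can look like: it is the same img2img endpoint with a black-and-white mask added, where white marks the region to regenerate (file names and prompt are placeholders):

    import base64
    import requests

    def b64(path):
        # Read an image file and return it base64-encoded for the API.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [b64("render_final.png")],
        "mask": b64("column_mask.png"),   # white = area to repaint
        "prompt": "marble column, photorealistic",
        "inpainting_fill": 1,             # start from the original content
        "denoising_strength": 0.75,
    }
    r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img",
                      json=payload, timeout=300)
    r.raise_for_status()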
@韩鹏坤 1 year ago
"2023-07-01 22:55:51,129 - ControlNet - WARNING - Guess Mode is removed since 1.1.136. Please use Control Mode instead." What should I do?
@designinput 1 year ago
Hello, I think it should still work, but if it doesn't, update your ControlNet extension and that should solve the issue. Thank you!
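For reference, in API-based setups the same change shows up as a renamed per-unit ControlNet parameter. A minimal Python sketch of the post-1.1.136 payload shape; the module and model names are illustrative and must match what is actually installed:

    # One ControlNet unit as sent to the web UI API. "control_mode" replaces
    # the removed Guess Mode flag: 0 = Balanced, 1 = My prompt is more
    # important, 2 = ControlNet is more important.
    controlnet_unit = {
        "enabled": True,
        "module": "depth_midas",               # preprocessor
        "model": "control_v11f1p_sd15_depth",  # must match an installed model
        "weight": 1.0,
        "control_mode": 0,
    }
    payload = {
        "prompt": "timber pavilion, photorealistic",
        "alwayson_scripts": {"controlnet": {"args": [controlnet_unit]}},
    }
    # POST this to /sdapi/v1/txt2img or /sdapi/v1/img2img, as in the earlier sketch.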
@NMPrecedent 1 year ago
Can Stable Diffusion elaborate the model further, so that you can maintain the same materials and facades across different views?
@designinput 1 year ago
Hey, thanks for your feedback ❤ You can keep the same seed number across the different views to get similar results. Still, it is not so easy to generate precisely the same materials and textures every time. But I am sure we will see developments on this very soon!
@cgimadesimple 1 year ago
Cool :)
@designinput 1 year ago
@RakanEid-d5w 1 year ago
Hi, it looks amazing, thank you for that. But I tried it with the same parameters and unfortunately it generates a completely different image, not the image of the pavilion; it changes it completely. I don't know what I did wrong. If you could help, thank you again.
@designinput 1 year ago
Hey, thanks for your comment! There was probably a problem with ControlNet. Do you have the ControlNet models installed locally?
@RakanEid-d5w 1 year ago
@designinput Hi, thank you for replying. Yes, I already downloaded them, but ControlNet doesn't work in Rhino; it only works in the browser. No idea why.
@СтепанКаштанов-в2с
I need a plugin that can give a million likes to this video 👍👍👍
@designinput 1 year ago
Hey, thanks a lot for your lovely comment!
@ABCDEFGH-bi5tk 1 year ago
Does this work with 3ds Max as well?
@designinput 1 year ago
Hey, not with this exact workflow, but it may be possible with an extension. I am not using 3ds Max myself, which is why I haven't experimented with it. Let me know if you try it :)
@mockingbird1128 1 year ago
Would this work with Revit too?
@designinput 1 year ago
Hey, it might work with Rhino.Inside.Revit, but I haven't tested it. You can always take a screenshot and use SD + ControlNet on it, though.
@bixp2k3 1 year ago
How much does it cost?
@designinput 1 year ago
Hey, it doesn't cost anything if you already have Rhino, because Stable Diffusion runs locally on your computer.
@pedorthicart1201 1 year ago
I feel it is great and will help me with the visualization of orthopedic footwear designed through #Pedorthic Information Modeling! Waiting to have time to explore it! Thank you!
@designinput 1 year ago
Hey, thanks for your comment! I will share a video specifically about product photography and how to use AI for it. Thank you!
@pedorthicart1201 1 year ago
@designinput Waiting for it! Thanks!
@abdulmelikyetkin9721 1 year ago
#DesignInput Can you do this with SketchUp?
@designinput 1 year ago
Hey, thanks for your comment! Technically yes, but I had some issues creating this custom workflow in SketchUp; when I figure it out, I will share it :) Meanwhile, you can try extensions like VerasAI and ArkoAI.
@sabaahmed1261 1 year ago
Does it work with Revit?
@GRUMPNUGS 1 year ago
I know Revit currently has one called Veras.
@designinput 1 year ago
Hi, I am currently experimenting with implementing this workflow in Revit. I will share a video about it soon :) Thanks for the comment!
@riccia888 1 year ago
This is the most confusing software ever.
@borchzhang2211 1 year ago
Success! It worked.
@designinput 1 year ago
@remyleblanc8778 1 year ago
Nice! I wish it was 1,000 times simpler.
@designinput 1 year ago
Hey, thanks! Haha, I feel you.
@iaspace6737 1 year ago
I NEED SD + SU
@motassem85 1 year ago
Looks too complicated for me; I still prefer 3ds Max with V-Ray, or Lumion 😂
@designinput 1 year ago
Haha, totally understandable :) But we will surely see much easier user interfaces soon!
@shiryu7101 1 year ago
Hi! Could you tell me why it says "Input image doesn't exist or is not a supported format" even though I provide a PNG file? Thank you!
@oof1498 1 year ago
Great! What if I want to use the same material in the same place but from a different perspective?
@designinput 1 year ago
Hey, thanks for your feedback ❤ You can keep the same seed number across the different views to get similar results. Still, it is not so easy to generate precisely the same materials and textures every time. If I figure out something for more consistent results, I will share it :)
@oof1498 1 year ago
@designinput Thanks, bro, appreciate your effort :)