I think it's really good, and I can see the evolution of it in my head as a visualiser. Feels like one day you will really be able to custom-load pre-written scripts that perform very specific functions, to make it even more tailored to the experience of working for a client. Basically, visualisation will become a bit like computer programming, not necessarily quicker or easier.
Thanks for your great-quality videos. I just watched the latest ones about AI and image generation, and I can't wait to try them out. Hope your channel grows :)
Love the way you explained it, to the point and easy to follow. I do have a question, hopefully you will read my comment: if you already have materials on your pavilion, would that somehow steer the rendering process toward what we want, acting as extra parameters? ... I hope I am making sense in my comment. Again, thank you so much, I love that you are sharing your knowledge with us; it shows how amazing you are as a person.
Hey, thanks a lot for your lovely comment! Unfortunately, it is not possible to use materials as a parameter at the moment, but I am sure soon we will be able to have more control over this workflow. Thanks a lot for your kind words
If you tell the client that the last picture is man-made, and then tell them the same picture was created by AI, and the client's attitude does not change, then AI will really change the world.
Thank you for all this amazing information and resources, I will definitely use this for my projects. Subscribed and eager to watch your upcoming videos! Keep it up!
Hey, thanks a lot for your lovely comment and support! I am happy to hear that you liked it! Please share your experiences with me once you try it out!
Wizard! Thank you so much for your concise introduction and other videos. I just came here from the Parametric Architecture course of Arturo Tedeschi, who you might know (the Grasshopper guy).
Since your video, there has been an update to Ambrosinus, and for some reason I cannot get the port to be available. Do you happen to know what inputs should go into LaunchSD, as it has many more inputs now?
Selam aleykum, Omer! Ambrosinus has been updated and your sample GH file no longer works. Could you please share the updated version, if this workflow is still relevant? Thank you in advance.
Thank you for sharing this fabulous workflow!! I am about to try it out, and I'm wondering if there are any newer extensions and developments you would suggest we use (since this video is from April, I'm not sure if there's anything new in these 3 months!). Thank you in advance and have a nice day :)
Hey, thanks a lot for the feedback! Of course, there are lots of new developments happening every day; I am trying to stay updated as much as I can and share what I learn. In terms of this specific workflow, there have been major new updates for both Stable Diffusion and the Grasshopper extensions, but both should still work fine!
Hey, thanks for your comment! For indoor views, you can try the Depth Model too. Is there any specific parameter you want to ask? Maybe I can help better with that one :)
Hi sir, I was stuck at 2:28 when you clicked on the administrator window. I tried to do it by right-clicking webui-user.bat, then clicking Run as administrator. However, it just flashes and nothing happens. How do I solve the problem?
Hey, you don't need to run the webui-user.bat file as administrator; you need to run Rhino as administrator. And make sure to add the --api parameter to the .bat file. If you can't start Stable Diffusion inside Grasshopper, you can just run it manually, and if you have the --api command, it should automatically connect to the Grasshopper plugin.
Thanks, but when clicking the Start button, webui-user.bat won't run with --api. I edited the .bat file, but after clicking Start it won't run that way and the .bat file changes back to the original.
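For reference, a minimal webui-user.bat sketch with the --api flag added (this assumes a standard AUTOMATIC1111 web UI install; your file may contain extra lines such as git pull or --xformers, and paths may differ). If a launcher keeps reverting the file, you can start this .bat directly instead of using the launcher's Start button:

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
rem --api exposes the local API so the Grasshopper plugin can connect
set COMMANDLINE_ARGS=--api

call webui.bat
```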
Thank you for the wonderful video. I had one question: everything in the script works perfectly on my computer, but when I connect a value list to Mode, I get an error. Do you know why this could be? Basically, Mode doesn't take any input apart from 0, which is the T2I Basic. In my Stable Diffusion I do see the other models, but I am not sure what the error is. The same thing is happening with the SAMPLER MODEL; it does not take any input apart from Euler A. Any suggestions would be helpful. Thank you.
Hey, thanks for your comment. I am not sure why you can't see the other modes. There was a new update to the Ambrosinus toolkit plugin since I published the video; maybe you need to update it for it to work. I will check the file and upload an updated version soon. Let me know if you are still having problems with it. Thank you!
Hey, haha, yes, that's the most "popular" one nowadays, but I feel like you don't have much control over it. I will share a video soon to compare different AI Render alternatives. Thanks for your comments!
Hey, I was using version 1.6; you can download it here: www.food4rhino.com/en/app/ladybug-tools But even if Ladybug doesn't work, you can still use this workflow; you just won't be able to see the images directly inside Grasshopper.
Hey, thanks a lot for your lovely feedback! I am using a laptop with an RTX 3060 (6GB VRAM) and a 12th Gen Intel(R) Core(TM) i7-12700H CPU. Of course, for this process, the most important one is the GPU. I will share another workflow on how you can use Stable Diffusion without a powerful computer in a couple of days.
I have followed everything in the video, but when I tried my own model and hit the generate button, the AleNG-Ioc component turned red and doesn't generate anything, and the panel connected to the info output says "No data was collected," even though the viewport appears in the LB image viewer.
It's actually good, but I'd rather have control over the textures and put them where I want them; it's really not that hard... Of course it takes a bit more time, but why would you need 1000 renders just to get overwhelmed by the choices you have?
Thank you for sharing this amazing tutorial. Is it still working? I am having this issue with the ControlNet updates: "ControlNet warning: Guess Mode is removed since 1.1.136. Please use Control Mode instead." If I choose the CN v1.1.X option in the Ambrosinus tool, the result image differs completely from the original image. I also changed the directory to point directly to the CNet path. Any hints? Is there a way to choose the SD model? Best
Thank you so much for the clear explanation. I tried to follow this video with the new update of Ambrosinus, but I was not able to. When I installed v1.1.9 I was able to use your code, although at the SeeOut output (LA_SeeOut) an error occurs: "index was out of range." Any ideas on how to fix this error?
Hey, thanks a lot! I think you just need to generate an image first; after that you will be able to see it and the error will disappear.
Hi, thanks for the video. I checked other videos and got somewhere until I got stuck at the webui part. My webui-user file looks different from yours; there are "--xformers" and "git pull" lines in yours, but I don't have them. Unfortunately, just copying yours doesn't work :). I don't know what is missing, but I can say that it is a pretty overwhelming setup for sure.
Hey Cankat, thanks for your comment. "--xformers" is an additional argument that you can use if you have an RTX 30- or 40-series GPU; it will speed up the generation process. And the "git pull" line automatically checks for new updates when you run SD. So you don't have to have them to use it; the only must is "--api", which gives access directly inside the Grasshopper file. Since it is an early experimental workflow, you are right that it is not so user-friendly. But it will surely develop, and I will share the newer versions very soon. Thank you!
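If you are unsure whether --api took effect, here is a minimal sketch to test it from outside Grasshopper (assuming the web UI is running locally on the default 127.0.0.1:7860 address; adjust the URL if yours differs):

```python
import requests

# If the web UI was started with --api, this endpoint lists the available checkpoints.
# A connection error or 404 usually means --api is missing or the port is different.
resp = requests.get("http://127.0.0.1:7860/sdapi/v1/sd-models", timeout=10)
resp.raise_for_status()
print([m["model_name"] for m in resp.json()])
```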
Hello, thank you very much for your good lessons. In the 3rd minute of the tutorial, you say that you put the Grasshopper definition in the description section for us, but unfortunately I can't find it. Could you guide me to it?
Hi, thanks for the feedback! You can find all the resources mentioned in the video here: designinputstudio.com/this-will-change-everything-in-architectural-visualization-forever/ And you can download the file here: www.notion.so/designinputs/AI-Render-Engine-Template-File-02d34b595f824ca6a9f1339470fb1387?pvs=4
Hey, thanks a lot for the feedback. ❤ Hmm, I am not super sure, but I believe there is no extension for that yet. But I am experimenting with connecting Revit to this same workflow with Rhino.Inside.Revit. I will share it as soon as it's ready :)
Thanks a lot for the video, it's really incredible. I just have a question: I did everything exactly the same, but the generation produces results regardless of my base image. What could be the problem? Otherwise, it works when I use Stable Diffusion directly in the web window.
Hey, I am also using Rhino 7 and was able to use it without any issues with the latest version of the Ambrosinus toolkit; if you are still having issues, you may contact the developer.
Hey, thanks for your feedback! ❤ You can keep the same seed number for the different views to have similar results. But still, it is not so easy to generate precisely the same materials and textures all the time. If I figure out something for more consistent results, I will share it :)
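As a rough illustration of the fixed-seed idea, here is a minimal sketch that calls the web UI API directly with a constant seed (assuming a local AUTOMATIC1111 install started with --api on the default port; the prompt, sizes, and output path are just examples):

```python
import base64
import requests

# Keeping "seed" constant across runs makes repeated generations (e.g. different
# views of the same design) more consistent, though materials can still drift.
payload = {
    "prompt": "modern timber pavilion in a park, photorealistic",
    "seed": 123456789,
    "steps": 25,
    "width": 768,
    "height": 512,
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=300)
resp.raise_for_status()
with open("pavilion_view.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```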
Thanks for the great explanation! Got everything up and running :) One quick question: I am having issues working with the depth ControlNet. I have downloaded the previous ControlNet versions (aside from the new ControlNet v1.1 versions), but the depth and canny masks are very bad quality. This is only an issue for me when I use ControlNets in Grasshopper. Any ideas what the problem may be?
In the 4th minute, when you press the Start button, it renders without any problem, but for me the SeeOut component is red and it gives this error ("Solution exception: Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index"). Can you help?
@11Bashar Unfortunately, connecting it to Grasshopper was a disappointment for me, because I can't tell what causes its errors and there is no explanation about them anywhere.
Thank you so much!! I'm just having issues with the resolution of the "depth image" that it creates; it's really low, and because of it I can't use my models. Can I increase it? Thank you anyway, this tool is amazing 👍
Hey, thanks for the comment! If the image resolution is low from the viewport, you can try printing a view from Rhino with a custom resolution and using it in Stable Diffusion directly. It may help, but don't go larger than 1024x1024, as it will slow down the process dramatically; once you like one of the views, you can upscale the image later. Hope I understood your question correctly. Let me know if you have any other issues.
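If you prefer to script the capture instead of using the Print dialog, a small sketch along these lines should work from Rhino's Python editor or a GhPython component (assuming Rhino 7's IronPython; the resolution and output path are just examples):

```python
import clr
clr.AddReference("System.Drawing")
from System.Drawing import Size

import Rhino

# Capture the active viewport at a fixed 1024x1024 resolution, independent of screen size.
view = Rhino.RhinoDoc.ActiveDoc.Views.ActiveView
bitmap = view.CaptureToBitmap(Size(1024, 1024))
bitmap.Save(r"C:\temp\viewport_1024.png")
```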
I think it's a great tool for rapid prototyping with fewer images. It unlocks more possibilities and gives us and the client more variety with less time and energy. The biggest hope is that we arrive at a final image we might not even have thought possible before. But for a final image, I think the old method is still king. Who knows, this time next year it might be a different story. Will I use it for my next project? Oh yes, but the Blender version; it's always best to get in early with new technology.
@designinput There are a few reasons. 1) Being open source, it was easy to access without restrictions and to invest time and resources in it. I'm a freelancer/business owner, so it is important I run as lean as possible. 2) Rapid development: it can do a lot of things, and it's ever expanding its reach. I'm able to complete a project in one software without having to hop to another. Yes, it's not as strong as Rhino or Max, but it gives great quality. 3) The community: they drive the development and education of the software; it's sort of owned by us. Look at the amount of tutorials, add-ons, and stores available. There is more, but let me park it here.
Hey, thanks for your comment! Generating exactly the same image twice can be challenging. But if you want to change a part of it, you can use inpainting to edit it.
Hey, thanks for your feedback! ❤ You can keep the same seed number for the different views to have similar results. But still, it is not so easy to generate precisely the same materials and textures all the time. But I am sure we will see some developments about this very soon!
Hi, it looks amazing, thank you for that. But I tried it and used the same parameters, and unfortunately it generates a different image, not the image of the pavilion; it changes it completely. I don't know what I did wrong. If you could help me, thank you again.
Hey, not with the exact workflow but it can be possible to use it with an extension. I am not using 3ds Max myself, that's why I haven't experimented with that one. Let me know if you try it :)
Hey, maybe it could work with the Rhino.Inside.Revit, but I haven't tested it. But you can always take a screenshot and use the SD + ControlNet additionally.
I feel it is great and will help me with the visualization of orthopedic footwear designed through #Pedorthic Information Modeling! Waiting to have time to explore it! Thank you
Hey, thanks for your comment! Technically yes; I had some issues creating this custom workflow in SketchUp, and when I figure it out, I will share it :) Meanwhile, you can try extensions like VerasAI and ArkoAI.
Hey, thanks for your feedback! ❤ You can keep the same seed number for the different views to have similar results. But still, it is not so easy to generate precisely the same materials and textures all the time. If I figure out something for more consistent results, I will share it :)