The results are terrible. The thing transformed the cat into a pile of dirty clothes, the image on the TV screen is suddenly interpreted as an American flag, the holes in the Mac case suddenly have a different diameter... It's literally mind-blowingly bad. Topaz AI, on the other hand, can upscale and reduce noise and artifacts in renders without hallucinating and creating weird results.
Yes, Krea AI is very good for post-production, but if you put people at a very far distance, they look scary and the results are terrible. Some items get mixed together and end up different from the original image too.
Thanks for sharing. Even before watching the video, I already know what you are talking about. It's impressive. However, my concern is that now anyone can create similar images without much expertise. As experts in 3D visualization, we might face challenges due to this accessibility. The increased accessibility of creating such images could have both positive and negative impacts on the industry. On the positive side, democratizing access allows more people to experiment and innovate. However, it might also saturate the market with similar content, making it harder for experts to stand out. Additionally, quality control and originality could become challenges.
Such a good perspective! I'm on your side: the technology provides high-quality output, but in reality, building to the same standard as that output makes it even more challenging to control quality on site.
This use of AI for enhancing renders is amazing! You should look into Stable Diffusion. It will be a whole new world for you. By the way, your work is always superb!
@@m.nurfathier.s3896 It is free to try, but you need to pay to use it. I felt it was worth it, as I run an interior design and execution company and most of my time goes into production and assembly (cut lists, etc.). I have a small team. I use SketchUp to design, take a screenshot of the image, and render it using AI.
@@Stan-Ard Image-to-video and photo generation: ComfyUI + Stable Video, AnimateDiff, Flux, SD 1.5; music: Suno AI; photo and video upscaling: Topaz Labs; voiceovers: Applio RVC fork; offline GPT: LM Studio; photo editing: Photoshop, Luminar AI, Polarr; audio mastering: iZotope Ozone 11, etc. I also test a lot of online services for AI video generation.
I also have to say, the AI version is not any better than the original. I make these kinds of images professionally, and usually all the "guessing" the AI does goes wrong. On a small screen (like the one where the end customer sees the image), it just looks as if someone had applied clarity or an unsharp mask to the image. It makes the image more "juicy" but less elegant.
Not yet. Some materials in AI-enhanced images need to be replaced with the original material or other textures in post-production, and some other things change too, like light effects, mirrors, backgrounds, etc.
This is Stable Diffusion. It improves the photorealism of details, but if you zoom out, the image is almost the same at a client's quick glance. I'm watching the video on a small screen and the difference is minimal: a bit sharper, but the details look AI-generated. It can be used as a way to improve the 3D model and render (it's especially good for colours and materials), to spark ideas for new models, to change small elements, or to produce small miniature-size renders.
@@MelosAzemi Thank you! How did you make it in this video? It looks like while you were working in Enscape real-time rendering, there were no rectangular light reflections on the TV screen.