To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/AlbertBozesan/. The first 200 of you will get 20% off Brilliant's annual premium subscription.
I could see this working really well with environments and scenes created based on a novel, with the audiobook accompanying the user. Basically, listening to an audiobook while in VR, with a visual environment based on the novel.
Just a suggestion to make things better: skip the depth map creation initially. Upscale the image and make the final tweaks, then create a smaller version of it for the depth map. This way you have more room for tweaking.
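The suggested order could be sketched roughly like this with Pillow (filenames and sizes are hypothetical; it assumes a depth estimator, e.g. MiDaS, that is fine with smaller inputs):

```python
from PIL import Image

# Stand-in for the panorama AFTER upscaling and final tweaks.
# In a real workflow you would load it, e.g. Image.open("pano_final.png").
pano = Image.new("RGB", (8192, 4096))

# Only now derive a smaller copy to feed the depth model; keeping the
# 2:1 equirectangular aspect ratio is important for 360 images.
depth_input = pano.resize((2048, 1024), Image.LANCZOS)
depth_input.save("pano_for_depth.png")
```

This keeps all the detail work on the full-resolution panorama, and the depth map is generated last from a downscaled copy.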
Wow, I've been generating skyboxes for a little VR indie game using Blockade Labs, and I had no idea there was a 360 LoRA available for SD! This will make fine-tuning so much easier. 😃
BlockadeLabs is excellent, I just feel like figuring it out and teaching people how to do it independently on their own computer is more valuable in the long run 😄
Hello, good tutorial! I was wondering if I can take the generated space image and somehow base the next image on the previous one so that the details remain the same... for example, if I want the same room but a view from the side of the couch.
For okay performance you will need a Windows PC with at least 8 GB RAM, an NVIDIA GPU with at least 8 GB VRAM, and as much hard drive space as you can spare (the models are quite large).
Great video! Can you recommend the steps it might take to convert this environment into one you can use in Unreal Engine, so that a player could navigate within it?
Hey, thanks again for the great tutorial! How would I send this file to someone so they can try it out on their Quest without having to build the environment themselves? Is that possible?
LoRA means additional parameters that come from finetuning on top of a model, so they cannot be used with just any model, right? Just the one they were finetuned on top of.
LoRAs can be used with any checkpoint as long as the base model is the same (SD 1.5 vs SDXL, for example). But it's correct that the checkpoint a LoRA was trained on is the one it works best with. This is why older LoRAs generally work worse and worse on newer checkpoints.
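The compatibility rule boils down to matching base architectures; as a toy illustration (not a real API, just the logic spelled out):

```python
def lora_compatible(lora_base: str, checkpoint_base: str) -> bool:
    """A LoRA only loads cleanly onto checkpoints that share its base architecture."""
    return lora_base == checkpoint_base

# Any SD 1.5 checkpoint can load an SD 1.5 LoRA...
print(lora_compatible("SD 1.5", "SD 1.5"))  # True
# ...but an SDXL checkpoint cannot, since the layer shapes differ.
print(lora_compatible("SD 1.5", "SDXL"))    # False
```

How *well* it works is a separate question: results are best on the exact checkpoint the LoRA was trained against.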
Excellent video! For some reason my panorama viewer doesn't allow me to rotate the screen. It just shows the generated photo, but I'm not able to pan. Any clue what I did wrong? Everything else works fine.
Hello! I really like your video and tried to follow the process, and I have one question. When creating a skybox image as scenery, the tree is too large and appears to shrink in the upper part of the image. Is there any way to fix this problem? I used Asymmetric Tiling + the 360 LoRA. Thank you :)
Thanks, glad you enjoyed the video! The way to fix that at the moment is manual editing in Photoshop, using Generative Fill and classic image editing techniques. There's no shortcut quite yet; the top and bottom of the 360 image are just bad for now.
I have a brilliant idea. How about creating an AI program where you type in a paragraph describing the environment you want, and the AI creates a VR environment, just like how the AI art generator works in the Wombo Dream app.
I need help! I think I must have a setting wrong. All my images look like a bad oil painting. I have the LoRA and the model you suggested, and the sampling method DPM++ 2M Karras. What am I doing wrong? I want ultra-realistic-looking interiors.
Sounds like a more general Stable Diffusion problem unrelated to this specific workflow. Maybe something with a VAE, or using an SDXL model with an SD 1.5 resolution. You could post a screenshot on Reddit? It’s easier to help there.
Thanks for this tutorial! I have one problem: when I click Build and Install Environment, it says that the APK build finished, but it doesn't automatically launch on my Quest. I can find the app in Unknown Sources, but nothing happens when I click on it either. Any idea how to proceed?
@albertbozesan Perfect! It works now :D Also, I tried to use SD upscale in img2img because it usually does a much better job than the ones in the Extras tab, but it created a really weird image with a bunch of connected tiles. I guess it's not meant to be used for equirectangular images. ESRGAN didn't really seem to fix the resolution much. Any idea about alternative upscaling methods?
Any tips on how to create a video like this (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-dbp8tdWjBB8.html) so we can experience multiple AI-created rooms using YouTube VR on Meta? Thank you for posting these informative videos!
Easier than the process I show here, actually. You don't even need all the depth steps. Edit your 360/180 images one after another in any video editing program, and make sure you set the YouTube settings to 360. Any 360 video tutorial should help.
Seriously @albertbozesan, it's hard to believe someone can be so out of touch that they're hating on VR and the Matrix while on a VR tutorial. Who are these lost souls with nothing better to do but spread negativity?