THIS IS WAYYY TOO INTERESTING TO MISS OUT ON. And Faraz casually being confidently wrong about 1980 being 33 years ago (when it's actually 43) is another highlight.
You mentioned that the video was super sharp despite being so old. That's because those movies were shot on film! If you can get your hands on the actual film of a movie, you can basically remaster it in 4K or higher with near-perfect quality. That's because film is an analog medium: there are no pixels in film, just grain. Those movies were projected on huge theater screens back then; the only reason we associate old movies with low resolution is that they were transferred for old TVs that couldn't display HD. That's why you can now find old movies, music videos, etc. in super crisp 4K. In my opinion they look even better than modern digital movies; there's just something about film that modern cameras can't replicate. And by the way, amazing use of this technology! I knew this existed, but I hadn't thought about using it to digitize movie scenes. That's genius!
@@eddiej.l.christian6754 #confidentlyincorrect moment. I think you just misread because they never said digital was sharper than film. It was an explanation of how old film can (depending on film stock used) be as sharp as a modern digital movie.
Imagine how cool it could be with 4D Gaussians! It's a recent paper about Gaussian splats, but with motion. Yes, you can recreate dynamic 3D scenes from videos with this technique!
@@irfanadamm5819 It's basically better for filmmakers in post. Once the fidelity gets really good, you're eliminating a lot of the DP's job outside of lighting, if you're able to change the focal length, camera movement and composition in post.
@@pva5142 Why would they even want to do that? It's completely inorganic to do that, and the "fix it in post" mentality is gradually being forced out, thankfully. Also "outside of lighting"... so only the most important element of the job.
This is awfully similar to lucid dreams, if you've had them. Very similar restrictions: you can't go beyond a certain point, where it becomes like an invisible border of less detail, or something obstructing your view.
Yeah dude, it had to be done, we were so curious!!! Also remember when you said try it in VR? We just did, yesterday... it was soo good! Still needs work but definitely exciting!
"That movie was made in the 80s, that was 33 years ago" what. But then again, you can tell these guys aren't the sharpest tools in the shed. They are marveling at something your phone has been able to do for a decade now
You guys are so excited about this concept that it's infectious😁 and even I would love doing the exact same stuff you showed here. What a mind-blowing tech 3D Gaussian Splatting is. I'll have to upgrade my PC just to play with this in UE.
Hahahaha, it's obvious we are having fun, isn't it? I mean, it really is exciting being able to create 3D scenes from movies that were shot ages ago. Where does this go next? We've already found videos about 4D Gaussian Splatting, which is basically the same tech but in motion, so think Gaussians but in video form. Where we are heading is absolutely insane, much like video games!
@@badxstudio Oh man! That would be crazy. I still can't process the excitement after seeing this 3D Gaussian splat itself, and now you say 4D. Imagine those awesome movie scenes of that golden era recreated through our vision. Perfecto!!🤟
I wonder how good these results would be if you exactly mirrored the actual video camera path and tracking, and then created 3d models from the point-in-time gaussian splat viewpoints that intersect across all frames.
It'll be interesting to see where this goes. I'm wondering how long until we see animated Gaussian splats: scenes recorded in 360 from multiple synced cameras that you can "play" like a video, or drop directly into your 3D environment, etc.
I actually made a 360 video that could be played and stopped while in VR. I have a YouTube video of it on my channel somewhere. It is The Cable Center VR Virtual Archive on STEAM. In one part of the building I have a 360 video section. You place the bubble of the video onto your face and you are in the video. The controller changes and allows stop and play. No fast forward or rewind though; that would have been cool. The exhibit was made in 2016 so it is a bit dated.
In the Trinity shot, did you notice the doubling was gone? It was also squashed; it's like the 3D point cloud stage undid the doubling by merging the doubled points, which squashed the whole scene horizontally.
@@badxstudio Thx man, you are always welcome in my city Adana, the home of kebab. You guys should make a video about render problems in Unreal Engine after 5.1.1: Movie Render Queue isn't working well. I'm handling it somehow, but many people can't.
Imagine taking one of those YouTube videos where someone filmed San Francisco or LA in 1920 and, from the forward motion, creating a 3D splat or 4D splat. Somehow the forward-moving video would have to be interpreted as side views for at least a 180° view.
I think static, long views of fully CG scenes are probably the best bets for getting anything out of this. Which is at least interesting for the idea that you could take old, somewhat subpar CG shots and recreate the scenes with the same camera movement, at least until it's possible to account for the moving elements completely. Video games are cool and all as a use case, but most of them are already 3D and can have their assets ripped, so the only real use case there is putting funny things inside the scenes.
Imagine if they had held off doing The Matrix 4 for just a few more years; they could have SHOT for the use of this technology, and it would have been an incredible revolution over the previous version of bullet time. These tests are amazing, but imagine it with a Hollywood budget... someone out there is going to do something insane with it.
First thought in my head too. I think the Neo bullet scene was the second thing I tried after I installed the software. It did not come out so well either. Gaussian Splatting can be used from now on for scenes like this, to better effect.
That shot from The Shining was taken at the top of the Mt. Hood ski resort in Oregon. There is a glacier there where you can ski even in the summer. When I skied there, I instantly recognized the lodge as the one used in all the exterior shots in The Shining.
@@badxstudio I trained there with the Swedish and Korean national teams during the summer of 1991. Glacier skiing in the summer isn't the best compared to normal winter powder skiing, but it's good for off-season training.
What's the biggest advantage of using GSplats for video games? I'm just wondering if generating a 3D scene from an image sequence taken directly from a video game, by panning the view with the joystick, would have any practical benefits.
This is a game-changer 4sure! Lovely experiment guys, keep em comin! (plus what great news to find out that, since i was born in 1983, according to ur math, I've only just turned 30 this year! Thank God, i thought i was 40 years old, **phew** such a relief, i feel 10yrs younger!) 🎉🎂🥳
Thanks for trying out movies. I was so excited to see how that would turn out. It's a little trickier than I thought it would be, even with some pretty perfect scenes (limited movement within the scene and rotating camera). Video games into VR is a great next idea, since a rotating camera is as simple as moving the right joystick in most cases. I'd also like to see if you can figure out how to edit the capture, delete unnecessary artifacts, etc. Is that possible?
In our UE video, we showed how you can crop these scenes, but we still don't have a precision tool for deleting each ellipsoid in UE (devs have already made one for Unity).
12:35 isn't "tricks" for motion blur; you've just got what looks like a telecined copy, and your player isn't set to detelecine, hence the resulting combing effect being present on only certain frames. If you pull a direct copy from the Blu-ray (or find a better uploaded copy) it won't have this artifacting.
One more thing. Say I want to scan a building with a drone. Could I take interior shots and combine them for an outside and inside model? I assume once loaded into Unity or whatever you could piece the separate splats together.
Imagine crime scene data funnelled into this... They could just photo the room and step into the GSplat room in VR Goggles to explore the space from all angles and sizes.
The weirdness you get from Trinity in The Matrix is framerate/telecine conversion error. A properly inverse-telecined DVD or a non-rate-adjusted 23.976 fps Blu-ray source would be the ideal source for this shot.
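To add a sketch of what that cadence actually does (illustrative Python; the frame letters and function names are just labels, not from any real tool): in 3:2 pulldown, each 23.976 fps film frame alternately fills 3 then 2 interlaced fields, so 2 out of every 5 video frames pair fields from two different film frames. Those are the "combed" frames people are seeing.

```python
def pulldown_32(film_frames):
    """Expand film frames into fields using the 3:2 telecine cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        # alternate 3 fields, 2 fields, 3 fields, 2 fields, ...
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

def video_frames(fields):
    """Pair consecutive fields into interlaced video frames."""
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields) - 1, 2)]

# 4 film frames -> 10 fields -> 5 interlaced video frames
frames = video_frames(pulldown_32(["A", "B", "C", "D"]))
combed = [f for f in frames if f[0] != f[1]]

print(frames)  # [('A', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
print(combed)  # [('A', 'B'), ('B', 'C')] -- the two mixed-field "doubled" frames
```

Inverse telecine just detects that repeating pattern and reassembles the original 4 film frames, which is why a properly detelecined source feeds the splat training clean frames.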
What's kind of cool is that even though AI is used to generate these, you could use some other AI to replace the missing "footage", for instance the trees and mountain behind. (Sure, it wouldn't necessarily be exactly what's actually behind the hotel, but it could generate the lost information to give you a reasonable facsimile of what might have been behind it.)
We were talking about it after watching Adobe MAX; it would be dope to try the video inpainting feature. This is definitely something that will eventually be part of the process.
This method gives view directions that were impossible in the original footage; I'm truly impressed! I suggest converting some bullet time scenes from YouTube. That should look amazing!
If I were going to film a talk with a host and a guest, I assume I would do a 360 of them in their chairs first, before starting the 2D video interview, in order to create a 4D splat of the 2D video? How would you do this?
Would it work for scenes where the camera moves sideways or remains stationary? I'm thinking of using this method to help colorize black-and-white footage, so I'm just wondering.
The camera needs to be moving to reveal clear depth! As of now, this Gaussian Splatting tech works by matching points of interest across views, and if you are not moving your camera the right way, your training might fail!
@@badxstudio All I need really is for the scenes to be rendered exactly how they are for what I want to do, as I'd like to edit each object individually
@@thevfxmancolorizationvfxex4051 I think there are other methods you could use for recolorizing old footage. I don't think gsplats will be the right way to do so, especially with a stationary camera: it can't figure out the depth of a scene without multiple viewpoints. It doesn't create any data that isn't already in the video.
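To put a number on the "multiple viewpoints" point: depth only falls out of parallax. A toy sketch (all values made up for illustration) using the standard stereo relation depth = focal_length × baseline / disparity:

```python
def depth_from_parallax(focal_px, baseline_m, disparity_px):
    """Depth of a point from how far it shifts between two viewpoints.

    focal_px     -- focal length in pixels
    baseline_m   -- distance the camera moved between the two shots
    disparity_px -- how many pixels the point shifted in the image
    """
    if disparity_px == 0:
        # stationary camera -> zero parallax -> depth is unrecoverable
        raise ValueError("no parallax, depth cannot be recovered")
    return focal_px * baseline_m / disparity_px

# A point that shifts 50 px between two shots taken 0.1 m apart,
# with a 1000 px focal length, sits about 2 m from the camera.
print(depth_from_parallax(1000, 0.1, 50))  # 2.0
```

With a locked-off camera the baseline is zero, every point has zero disparity, and the division is undefined; that is exactly why the training fails without camera movement.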
can the gsplats cast and/or receive shadows? like if you put a cube in the middle of the table in that one scene and cast a light, will the table receive a shadow?
Imagine this thing being able to reconstruct every frame properly (instead of a few), so you could literally pause at any of them and look around from any angle without losing detail. If anything, they should be able to fill in the blanks with AI.
Yeah, that's the thought! We like to believe that very soon AI will be able to help generate the missing detail and create a complex scene that looks stunning!
If you saw the Adobe sneak peek, you know what you must do next, right? Break the videos down into images and use the new feature to create the data that would be behind objects, so when you put the images back together you will have multiple videos that each complement any data missing in one but present in the other. I think it's called a clean plate? (idk). Actually, check out the sneak peek, because there are more things you could do with it.
In the viewer you get the best results by retracing the footage's camera path. Press V in the SIBR viewer and play a sequence along the original camera path; the result is always good. I have a question: do you get the same quality in Unreal as in the SIBR viewer? Thanks for sharing your knowledge!! You guys are awesome!🤟
Hey, can you please try this on Apollo 11 or other missions? I tried it with Apollo 11, but only with the photos, and they don't overlap enough :( Is it possible somehow to tell the 3DGS where each photo was taken, and at what angle? Would that help?
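From what I understand, yes, to an extent: the COLMAP step that 3DGS pipelines use for pose estimation can be fed known poses instead of estimating them. Roughly, you write an `images.txt` where each image gets two lines, its world-to-camera rotation as a quaternion plus a translation, followed by a (possibly empty) 2D-points line; all IDs, pose values and filenames below are placeholders, not real mission data:

```
# IMAGE_ID, QW, QX, QY, QZ, TX, TY, TZ, CAMERA_ID, NAME
# POINTS2D[] as (X, Y, POINT3D_ID) -- may be left empty
1 0.851 0.011 0.524 -0.005 -0.12 0.33 0.54 1 photo_001.jpg

2 0.843 0.019 0.537 -0.012 -0.10 0.32 0.55 1 photo_002.jpg

```

You would pair this with a matching `cameras.txt` and an empty `points3D.txt`, then triangulate with the poses held fixed. Known poses help COLMAP place the cameras, but they can't conjure matches: if the photos barely overlap, the splat will still have big holes.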
This technique would be great for an overhead ring of cameras, or a partial ring of synchronized cameras similar to the Matrix shots: not to take the single output video as you did, per se, but so filmmakers could relight the scene in post.
Well, with photogrammetry you get geometry with actual polygons, which can be useful for things such as collision! There are other aspects to consider too, of course :)