As part of a so-called "proof of concept", a scene for a new Ghostbusters film was produced using real-time rendering. Check out the article here for more details: www.photografix-magazin.de/so...
There are also scenes that were recorded but not added to the game due to a lack of budget, time, and technology; it would be amazing to have them added to the game.
A lot of the people saying "it doesn't look real" don't get it. This is a new technology that allows a director to see the post-production CGI product, in a lower quality than the final version, but in real time on set during filming. This could have major effects on the use of these technologies when they can be integrated in real time with a person's performance. Not to mention an actor being able to really see where their animated or CGI characters are and what they're doing, making for a better performance from all the actors and a better ability for a director to see what, until now, couldn't be seen until wrap and months of post-production work afterwards. That delay is often the actual cause of some of the bad CGI, I think: just trying to make sub-par shots "just work".
I thought I was crazy reading these comments. It's like no one even watched the video or understood the point of the presentation. This isn't actually the effects for the movie; it's creating moving concepts for what it could look like, without taking up the time and resources involved in initiating the production of the film.
@@wilmcl9209 Nah, you're right, let's go back to filming the whole movie, adding CGI to the raw footage, and then, when any issues arise, they can just "make it work" with lots of sloppy CGI to force the footage to fit a new concept for the scene, spending potentially millions on reshoots if they can't use existing footage. Or adding to scenes already shot, then learning in post-production that it just doesn't work like they imagined, ending with scrapped script sequences. Not all technological advancements are negative. Maybe, just maybe, seeing a snapshot of what a scene will look like DURING the initial filming process, and not afterwards in post-production, could not only prevent costly reshoots but also help marry the practical and the CGI like never before, giving any on-camera actors a direct visual representation of what the production is attempting. CGI isn't something that should be wholly removed from movies, because without it we wouldn't get believable flying, or more style-crossing films like "Cool World" or "Who Framed Roger Rabbit", or even a movie that uses hand-drawn animation and computer software to insert a cartoon character, like the cartoon cat cop from "Last Action Hero". It would allow an on-set actor to "see" their digital co-star reacting with them in "real time", instead of the co-star being added afterwards while the actor talks to a ball on a stick for hours and hours of filming with no real idea of how it will look. Too much CGI is an issue, but to deny it has any place in movies is as idiotic as someone from the late 1920s complaining about too much long-winded dialogue in movies: "Movies don't need all that talking, just pop a dialogue text card up and get back to the action." That's probably something your great-grandfather would have said.
True enough; however, people want to see a bit more polished, production-style quality in the result when the title has "movie" in it. As it stands, it's no better than any standard video game. The motion, physics, volumetric lighting, all of it screams video game, and very little to none of it screams cinema.
People trying to compare this to regular CGI are missing the point. Most CGI takes hours per frame; this is playing out in real time, like a real set would during a take. For the actual final shots they would move this into a rendering pipeline that renders at far better quality, but the idea is that they can now plan out scenes this way very economically. They can iterate faster, save more on physical shots that might get cut, and deliver movies faster this way.
Where was that stated? Never once was that said. They literally are talking about real time rendering. Btw, they actually already do what you're talking about, that isn't new. You're clearly missing the point here.
@captaincaptain2128 If you look closely, the video at the beginning has text at the bottom of the frame saying that this is a real-time visual effects proof-of-concept video. It is a game changer.
@@MrGreenAKAguci00 Not really. The technology isn't quite powerful enough to make real time rendering look as good as traditional CGI. Give it a decade.
@captaincaptain2128 It doesn't have to be that good if it's only used during production, to allow the crew to work more efficiently and be better informed about what the end result will look like. A single She-Hulk episode, after all the CGI changes they had to make due to the director's indecision and constant script changes, ended up costing around 25 million USD. $25 million per episode, mostly because of VFX. That show had many more issues, with the writers' room and other things, but what Sony shows here makes integrating VFX way easier than before, because changes can be implemented and verified in real time. Then, after everyone is happy, they will render it for the big screen with all the bells and whistles. What you saw here was not the final release.
@@MrGreenAKAguci00 What you're describing already exists and has existed for nearly two decades. This isn't that, though. I suggest you do a little more research on this technology instead of just watching one video; there are dozens of articles and such about it. You clearly misunderstand the purpose of this tech. It's meant to replace the post-production CGI work by essentially using digital puppets and rendering it all in real time. It's not meant to be a reference for the director, because that technology already exists and allows them to do just that. This tech is meant to replace post-production CGI and allow directors to edit the final shots that day, such as adjusting the lighting. I understand not everyone knows how filmmaking works, but you must know that the amount of time and effort that went into this test alone is in the millions of dollars, right? The assets need to be made prior to filming, which can take hundreds of hours and millions of dollars, and for what? To be a reference for the director and just get scrapped, so that the CGI team can spend hundreds more hours and millions more dollars recreating the same stuff at a slightly higher quality? You understand how crazy that is, right!? It's called real-time rendering for a reason: it's rendering in real time. Ultimately it's going to replace traditional CGI eventually, but the tech isn't there yet. I suggest you read up on it more, as it's very fascinating tech, but you also clearly don't know how it works or its purpose.
Just came from a website where they said there are upcoming games & animated projects in the works for Ghostbusters, so this could very well be what they will be like.
This isn't about this footage being the "final product." This is about a filmmaker being on set, looking through the camera, and being able to adjust everything based on real-time feedback from the monitor(s). This, combined with the StageCraft technology recently used on The Mandalorian and other shows, is pretty huge. We're getting to the point where directors can direct scenes organically and in person, the way they would on location... but this time it's virtual.
@@maymayman0 What you see in the video is entirely rendered in the computer, and the director directed it by standing in a warehouse-sized room with lots of equipment, computers, etc. He had a tablet-like device, sort of a "camera equivalent," but it wasn't a camera. On the screen of that device, and on other monitors around the room, he and other crew members could see what the "3D camera" was pointing at. The director could move this tablet around and see what the computer was showing in real time. However, no actual cameras were used to make this short film. Everything you see was created without a video camera recording any sort of live picture. That's why it's called cameraless.
@@andysmith1996 I don't know, dude. I'm not someone who worked on this. My guess is that this was a test, and the final renders for an actual movie would be "full pixel quality."
I worked in the VFX industry for 5 years and I totally agree (I actually briefly worked on Ghostbusters Afterlife). Practical effects mixed with today's digital comping abilities is a match made in heaven, yet the default is still always CG. It's a sad world we live in today.
@@heckensteiner4713 What's wild to me is that studios are always justifying doing all-CG because they claim it's cheaper, while those same studios will say they can't afford to do anything because CG is so expensive. So which is it? 😅
Heaven forbid storytelling ever become accessible to the common person. Filmmaking should cost millions of dollars and only be accessible to the elite.
I love it. They should use this as one of the previews for the new Ghostbusters movie, shown before the other trailers, to get fans excited for the new movie coming out. I know I can't wait to see it.
Because it was made with software called Softimage. That software was so good that it almost destroyed the 3D software industry. That is why Autodesk bought them and killed it.
A lot of it was a mix of practical effects and CGI. What makes the effects believable to this day is that they knew how to mix the two together, playing to their strengths.
@@hopperhelp1 100%. The pure-CG elements, like the wide Brachiosaurus shot and the herds of running dinos, look kind of dated, but the hybrid approach was the best!
Jurassic Park did everything right. They also made use of the dinos in the dark, and mixed in practical effects. I hope Hollywood soon realizes they need to go back to doing both.
@@user-lz5vh9bb5w People have been saying that for decades. Nothing beats the weight, the tactile nature, the REALITY of practical effects. Even the best digital approximation is still only an approximation, and diminishing returns mean that getting it to perfect will be prohibitively costly.
@@alpinion323 100%. I remember video game designers telling me that their best tech would replace movies within two to three years. And that was 14 years ago.
@@user-lz5vh9bb5w Maybe. But even if it could, would you want it to? What's more impressive: seeing an acrobat do a backflip into a glass of water, or seeing an animation? There used to be a sense of "wow, how did they do that?" But even if you're right, that will be replaced with "AI, make all my eyeball food. Nom nom nom."
Oh come on, people comparing this with Jurassic Park, seriously? Understand that this is not a finished product. It's a kind of sketch of what can be done, still to be worked on and developed, and the foundation for future great and amazing things in the industry. In my opinion this looks and feels AMAZING: the render concept, the immersive setting (NYC streets), the clearly remastered soundtrack, the timeline setting with the rusted Ecto-1 from Ghostbusters 3 fiercely facing the Marshmallow Man, the amazing sound effects. I mean, this is actually very enjoyable. Thank you Ghost Corps, Pixomondo, PlayStation Studios, and Epic Games for this incredible project, which you can be sure will be rewarded in the future with so much support from the Ghostbusters fans, whatever your final product(s) may be.
Okay, I think everyone is missing the point here. The important part of this is "real-time". They're able to get high-quality effects in camera on the sound stage without waiting for a render, which can take days or weeks depending on the length of the shot and can still require compositing. That said, I don't think they're going to use those real-time effects in the movie. First of all, I don't think you're seeing the real-time effects here: if you look at the screens on the sound stage, they seem just a little rougher than what we were shown at the start. I suspect this was put through a little render pass to sweeten the textures, effects, and lighting. Even then, it isn't a finished scene yet. They'd need to put real people in the scene; those digital stick figures just ain't gonna fly on the big screen. There are very few particle effects, with little dust, paper, or debris flying. It's just not a movie-quality scene. That said, real-time graphics have appeared on screen more than a few times, because real-time rendering is used in the Volume, which was created for The Mandalorian and has been used in other shows as well. But those are for more or less static backgrounds, and they didn't take up the whole scene, as there was usually a human actor and physical set pieces in front of them. Is it possible that real-time graphics could get real enough to make up a final scene? One day, maybe. But it would have to be a carefully constructed scene, one that doesn't betray the unreal-ness of what you're seeing. And I mean "unreal" both in the sense of the technology and in the uncanniness.
Wow. I remember a few papers about the real-time rendering concept from 15 years ago. They thought the technology for doing it this well was a couple of decades away. It got here a few years early. Nice job.
This is a proof of concept rendering for Jason and the creators to work from. This is not the finished final edited cut of the scene. I think there will be so much more to film than just chases thru city streets.
So this is basically some form of HD layout? Or test-running an effects package in a real-time environment? I checked the link and it's in German; for whatever reason I don't have translation options available. There's a video embedded that explains the Stay Puft effect in detail, but I'm still a bit confused about what else is real time. Maybe some of the camera effects, and I guess the vehicle rig was moved around in real time? The video's name is "SCC - Visual effects using real time technology". I'm still a bit puzzled, because real-time rig setups to preview mocap have been used many times before. But I'm guessing the "tech demo" stuff here is the traffic sim, lighting, and fluid FX in real time? In the sizzle reel, there's still an artist on scene making adjustments to animation, just like on any regular mograph stage, so that stuff still takes "time" even if it's closer to real time. I'm not really sure how that would be used; I guess you'd use all this as some kind of springboard for the final product? In the demo it... doesn't look very dynamic, especially the camera angles, so I'm just surprised they're trying to pass it off as some big, interesting innovation. BASICALLY, any mocap you see on screen is highly, HIGHLY adjusted by animators, to the point of the mocap basically becoming a base for the final product after many, many iterations. Real-time performance capture is already a thing, so I still don't get why they did all this just to have the demo have... nice lighting? And some fluid that doesn't really look too great?
So, I should note that a) this wasn’t for a new Ghostbusters movie at all… it was a stand-alone proof of concept. And b) the output of this process is never what would end up on screen. This is just a way to get a faster, on-set VFX capture that can be fully visualized as it is shot. Once all that was done, it would still go to post-production for performance, animation and timing tweaks, as well as some additional compositing, then would get a full render with ray-tracing (which Unreal’s live render engine is currently incapable of). The shots would then be mixed with (and matched to) actual shot footage, not ever delivered as a full, humanless scene like this. In short, there’s no fear that movies will ever look like this. It’s just a method for making the VFX pipeline faster and more connected to principal photography.
Yeah, but they're not pitching this as previz or volume-capture material. As a proof of concept, it's clearly intended to say: this is what we can do, final pixel in real time. I work with a lot of volume work, and they don't put this much effort into the output, lol.
Incredible that it is real time; CGI takes are now possible. The title is confusing, though, as you still need to get the shots with a camera, even if it is a virtual one.
THIS IS WHAT WE WANT!!! A fully realized, sandbox style game where we can drive around NYC, NJ, maybe even upstate and PA as the Ghostbusters. Taking jobs in businesses (jewelry stores), subways, restaurants, haunted mansions, Central Park, amusement parks (Coney Island), and sports arenas/stadiums would be wild. Even constructing and eventually piloting Ecto-2 would be epic.
Excellent work by all involved. I wonder if human characters were avoided to prevent the uncanny valley effect. Some parts appeared AI-generated, but in real time it's all amazing. I wonder what the result would be if they put all the settings on Epic and rendered non-real-time to a file in Unreal.
This is what is called pre-vis. It is intended only for the director to see, so they can line up the shots they will require. Pre-vis is the modern-day version of storyboards. This was never meant to be enjoyed as an actual finished product.
@@FablestoneSeries Wow, IDK. I know what pre-vis is, of course, although no production at the level I've ever worked on had a budget for it. What I didn't know is that they did so much work and spent so much just for pre-vis alone; the budget for the movie must be limitless. But this explains leaving out the humans for now. Still curious about the Epic setting and using Unreal to do non-real-time rendering. Because hey, why not.
Why is everyone using the same Unreal Matrix demo city scene? I appreciate the work and art direction, but this wouldn't hold up as a sequence cut into a live-action film; it still has video game quality. Just because you can render the whole sequence in a day doesn't mean you should. All thoughts aside, still a fun short sequence, though :)
3D animated series have been made for at least the last 25 years. After Pixar's Toy Story, everyone went into 3D animation. I have been using Unreal and Unity for the last 10 years, and I started with 3D in 1990. The question is what the production is aiming for. Today I work entirely in real time; I am not rendering anymore. The possibilities that RT engines like Unreal provide are huge, because you can work almost like in the real world, but in a virtual one, without waiting hours for some usable results. But what makes me think here is the description: is this about VFX, or about a "new" production workflow for whole films and series?
I thought there had been a mix-up, as it looked like a computer game. The Stay Puft Marshmallow Man looked great, probably the only thing that was "cinema quality". But it is only a "proof of concept".
@@jess_n_atx It's good for real time, I guess this is more for the artists and actors making the film and less about what people will see in the final product. Making on the fly changes easier and all that
I don't know which needs more time or money: realizing a "proof of concept" like this, or a traditional one plus a portion of imagination about what the film/shot could look like...
This is the thrill-a-minute, action-packed, special effects extravaganza I expect from Ghostbusters, without all the boring dialogue and outdated attempts at comedy. You've done it again, Sony.
Still a long way to go with the effects, but it's getting there. For me it was Stay Puft: in the original film, when he walked, you had reactive effects like fire hydrants exploding or crack lines along the road as he stepped, giving weight to the character. That seemed missing in this sequence.
We are so close to photorealistic. This is still slightly uncanny (not taking anything away from it; it's technically the best I've ever seen). It's going to be scary when we can't distinguish real life from fake. Scary times ahead, I'm afraid.
This doesn't look photorealistic, and even if it did, it just isn't as cool knowing it was all digital and not recorded for real; it's simply more engaging to know it was all filmed for real.