
3D Gaussian Splatting - Why Graphics Will Never Be The Same 

IndividualKex
76K subscribers · 1.7M views

3D Gaussian Splatting explained
original research paper: huggingface.co/papers/2308.04079
twitter (more research): / dylan_ebert_
tiktok (more games/tutorials): / individualkex
tags: 3d graphics, rasterization, gaussian splatting, 3d gaussian splatting

Published: Aug 4, 2024

Comments: 2.8K
@Johnnyjawbone · 11 months ago
No nonsense. No filler. No BS. Just pure information. Cheers dude.
@NeuralSensei · 11 months ago
Editing probably took hours
@budgetarms · 10 months ago
Yes yes yes
@realtimestatic · 10 months ago
Not enough explanation for me but pretty short, cut down and barebones although I’d like to understand the math a bit better
@CrizzyEyes · 10 months ago
Only problem is he seems to ignore the fact that this method requires exhaustive amounts of image/photo input, limiting its application especially for stylized scenes/games, and uh... Doom didn't have shadows so I have no idea what he's smoking.
@pazu_513 · 10 months ago
probably the best-format youtube video I've ever seen.
@Jeracraft · 10 months ago
I've learned so much, but so little at the same time 😂
@jakekeltoncrafts · 10 months ago
My feelings exactly!
@AlexanderHuzar · 10 months ago
"The more you know, the more you realize how much you don't know." -Einstein
@Corvx · 10 months ago
Yeah, that's the problem with zoomer attention spans. Content/information needs to be shortened nowadays into minuscule clips. There are other, lengthier videos about this topic, but sadly they get fewer views because of said problem.
@imatreebelieveme6094 · 10 months ago
If you want to actually understand this stuff you have to sit down with several papers full of university-level math for a few hours, and that's if you already have an education in this level of math you can draw on. Generally if you feel like a popsci video explains a lot without really explaining anything the reason is that they skipped the math. TL;DR: Learn advanced maths if you want to understand sciency shit.
@ShuffleboardJerk · 10 months ago
@@imatreebelieveme6094 There's a spectrum. On one end, there are 2 minute videos like this, and on the other end is what you are talking about. I think there can be a happy medium, with medium to long form videos that explain enough about a topic to understand it at a basic level.
@william_williams · 10 months ago
That "unlimited graphics" company Euclideon was working with tech like this at least a decade ago. I think the biggest pitfall with this tech right now is that none of it is really dynamic; we don't have player models or entities or animations. It's a dollhouse without the dolls. That's why this tech is usually used in architecture and surveying, not video games. I'm excited to see where this technique can go if we have people working to fix its shortcomings.
@comeontars · 10 months ago
Yeah I was gonna say this magic tech sounded familiar
@shloop. · 10 months ago
I wonder how easily it can be combined with a traditional pipeline. It would be kind of like those old games with pre-rendered backgrounds except that the backgrounds can be rendered in real time from any angle.
@DuringDark · 10 months ago
@@shloop. I wondered whether you could convert a splatted scene into traditional 3D, but I realised mapping any of this to actual, individual textures and meshes would probably be a nightmare. Maybe you could convert animated models to gaussians in realtime for rendering, and manually create scene geometry for physics? For lighting, I imagine RT would be impossible, as each ray intersection would involve several gaussian probabilities. As a dilettante I think it's too radical for game engines, and traditional path-tracing is too close to photorealism for it to make an impact here
@der.Schtefan · 10 months ago
Not everything needs to be a game. There are people dying of Cholera RIGHT NOW!
@ntz752 · 10 months ago
@@der.Schtefan This won't help with that though
@wankertosseroath · 10 months ago
It could work really well for spatial film making, but for 3D interactive game-engine based applications, it might be an optimisation nightmare to real-time move a bush somewhere when it's made of 3 million gaussians.
@the-coop · 8 months ago
Tools like Blender and After Effects already solve this. Why does the editor need to move 3 million gaussians? It can process that separately; it only needs the instruction.
@javebjorkman · 1 month ago
Real time ray tracing but for whatever you just said
@weirdsoupcartoonsyeah · 1 month ago
when have you ever seen a bush move somewhere in a game?
@Exsulator2 · 1 month ago
Feels like this technique would simply be the result after building the game, so the editing and creation of the game itself would work as normal... maybe?
@WeirdBrainGoo · 1 month ago
Wouldn't wanting to move a bush somewhere mean physically moving it and having to retake all the photos of the scene?
@x1expert1x · 11 months ago
this man can condense a 2 hour lecture into 2 minutes. subbed
@trombonemunroe · 10 months ago
Same
@ronmka8931 · 10 months ago
Yeah and i didn’t understand a thing
@gigiopincio5006 · 10 months ago
it's in the description, where it says "original paper"
@acasccseea4434 · 10 months ago
he missed out the most important part, you need to train the model for every scene
@SolarScion · 10 months ago
@@acasccseea4434 It's implicit. He started with the fact that you "take a bunch of photos" of a scene. The brevity relies on maximum extrapolation by the viewer. The only reason I understood this (~90+%) was because I'm familiar with graphics rendering pipelines.
@Q_20 · 11 months ago
this requires scene-specific training with precomputed ground truths. If this can be used independently for realtime rasterization, that could be a big breakthrough in the history of computer graphics and light transport.
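The per-scene fit this comment describes is, at heart, gradient descent on gaussian parameters against the captured photos. A toy, self-contained sketch of that loop follows (2D isotropic blobs, additive blending, plain L1 loss; the real method uses 3D anisotropic gaussians, depth-sorted alpha blending, and adds a D-SSIM term to the loss):

```python
import torch

# Stand-in "ground truth photo": a 64x64 half-black/half-white image.
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
target = (xs > 0.5).float().unsqueeze(-1).repeat(1, 1, 3)

N = 200                                               # number of gaussians (arbitrary)
means  = torch.rand(N, 2, requires_grad=True)         # blob centers in [0,1]^2
log_s  = torch.full((N,), -3.0, requires_grad=True)   # log radius per blob
colors = torch.rand(N, 3, requires_grad=True)
opac   = torch.zeros(N, requires_grad=True)           # pre-sigmoid opacity

opt = torch.optim.Adam([means, log_s, colors, opac], lr=5e-2)
for step in range(300):
    d2 = (xs[..., None] - means[:, 0])**2 + (ys[..., None] - means[:, 1])**2  # (H,W,N)
    w = torch.sigmoid(opac) * torch.exp(-d2 / (2 * torch.exp(log_s)**2))      # falloff
    img = (w[..., None] * colors).sum(dim=2).clamp(0, 1)   # splat all blobs onto pixels
    loss = (img - target).abs().mean()                     # L1 against the "photo"
    opt.zero_grad(); loss.backward(); opt.step()
```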
@astronemir · 10 months ago
Yeah but imagine photorealistic video games, they could be made from real scenes created in a studio, or from miniature scenes..
@xormak3935 · 10 months ago
@@astronemir Miniature scenes ... we improved virtual rendering of scenes to escape the need for physical representations of settings, now we go all the way back around and build our miniature sceneries to virtualize them. Wild.
@noobandfriends2420 · 10 months ago
@@xormak3935 Or train on virtual scenes.
@StitchTheFox · 10 months ago
@@xormak3935 it wont always be like that. We are still developing these technologies. This kind of stuff didnt exist 2 years ago. And many thought AI was just a dream until less than 8 years ago. By 2030 im sure we will be playing video games that look like real recordings from real life
@DoctorMandible · 10 months ago
Train it on a digital scene
@AuraMaster7 · 10 months ago
If this takes off and improves I could see it being used to create VR movies, where you can walk around the scene as it happens
@SupHapCak · 10 months ago
So kind of like eavesdropping but the main characters won’t notice and beat you for it
@theragerghost9733 · 10 months ago
I mean... braindance?
@uku4171 · 10 months ago
@@theragerghost9733 brooo
@ianallen738 · 10 months ago
So porn is going to be revolutionized..
@thesenamesaretaken · 10 months ago
@@ianallen738 what a time to be alive
@turnipslop3822 · 6 months ago
Man this was so good, please do more stuff. This content scratches an itch in my brain I didn't know I had. So so good.
@DonCrafts1 · 11 months ago
Concept is cool, but your editing is really great! Reminds me of Bill Wurtz :D
@MotionlessBottle · 11 months ago
Exactly.
@paulwilson4594 · 11 months ago
Damn beat me to it! Imagine the collaboration of these two
@I3ladeDragon · 11 months ago
First thing I thought!
@Fasteroid · 11 months ago
YUP.
@wedontexist369 · 11 months ago
You mean it originally came from Bill Wurtz
@MenkoDany · 10 months ago
This technique is an evolution, one could say, from point clouds. The thing most analyses I've seen/read are missing is that the main reason this exists now is that we finally have GPUs fast enough to do it. It's not like they're the first people who looked at point clouds and thought "hey, why can't we fill the spaces *between* the points?"
EDIT: I thought I watched to the end of the video, but I didn't; the author addresses this at the end :) It's not just VRAM though! It's rasterization + alpha blending performance.
EDIT2: You know what I realised after reading all the new things coming out about gaussian splatting: I think this technique will most likely first be used for backgrounds/skyboxes in a hybrid approach.
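For readers wondering what "rasterization + alpha blending performance" means concretely: per frame, every projected gaussian that touches a pixel must be depth-sorted and composited front-to-back. A schematic single-pixel version in plain Python (illustrative only; real implementations do this per tile on the GPU):

```python
def composite(splats):
    """splats: list of (depth, rgb, alpha) tuples covering one pixel."""
    splats.sort(key=lambda s: s[0])          # the per-frame depth sort that costs GPU time
    color, transmittance = [0.0, 0.0, 0.0], 1.0
    for _, rgb, alpha in splats:             # front-to-back alpha blending
        for c in range(3):
            color[c] += transmittance * alpha * rgb[c]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:             # early exit once the pixel is opaque
            break
    return color

# A red splat at depth 1 mostly hides the green one behind it.
print(composite([(2.0, (0.0, 1.0, 0.0), 0.5), (1.0, (1.0, 0.0, 0.0), 0.6)]))
```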
@zane49er51 · 10 months ago
I was one of those people and ran into overdraw issues. I can't imagine how vram is the limiting factor rather than the sort and alpha blend.
@MenkoDany · 10 months ago
@@zane49er51 How long do you think before we have sensible techniques for animation, physics, lighting w/ gaussian splatting/NeRF-alikes?
@TheRealNightShot · 10 months ago
Just because we have gpus that are powerful enough now, doesn’t mean devs should go and cram this feature in mindlessly like they are doing for all other features recently, completely forgetting about optimization and completely crippling our frame rate.
@imlivinlikelarry6672 · 10 months ago
How could one fit this kind of rasterization into a game, if possible? This whole gaussian thing is going completely over my head, but would it even be possible for an engine to use this kind of rasterization while having objects in a scene, that can be interactive and dynamic? Or where an object itself can change and evolve? So far everything is static with the requirement of still photos...
@zane49er51 · 10 months ago
@@MenkoDany I have not worked for any AAA companies and was investigating the technique for indie stylized painterly renders (similar to 11-11 Memories Retold) If you build the point clouds procedurally instead of training on real images, it is possible to get blotchy brush-stroke like effects from each point. This method is also fantastic for LODs with certain environments because the cloud can be sampled less densely at far away locations, resulting in fewer, larger brush strokes. In the experiment I was working on I got a grass patch and a few flowery bushes looking pretty good before I gave up because of exponential overdraw issues. Culling interior points of the bushes and adding traditional meshes under the grass where everything below was culled helped a bit but then it increased the feasible range to like a 15m sphere around the camera that could be rendered well.
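The LOD idea in this comment, sampling the cloud less densely far away and fattening the surviving strokes, can be sketched in a few lines. Hedged: `full_density_radius` and the inverse-square falloff are made-up knobs, and a real renderer would hash point indices instead of calling `random` so the selection stays stable across frames:

```python
import math
import random

def lod_points(points, cam_pos, full_density_radius=5.0):
    """points: list of {'pos': (x,y,z), 'radius': float}; returns a thinned copy."""
    kept = []
    for p in points:
        d = math.dist(p["pos"], cam_pos)
        keep_prob = min(1.0, (full_density_radius / max(d, 1e-6)) ** 2)
        if random.random() < keep_prob:                       # sparser with distance
            q = dict(p)
            q["radius"] = p["radius"] / math.sqrt(keep_prob)  # bigger "brush strokes"
            kept.append(q)
    return kept
```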
@SteveAcomb · 10 months ago
Perfect video on an absolutely insane topic. I’d love more bite-sized summaries like this on graphics tech news!
@johnnykyr3403 · 1 month ago
i love how condensed and straight to the point this is
@why_i_game · 11 months ago
Would be great for something like Myst/Riven with premade areas and light levels. It doesn't sound like this method offers much for dynamic interactivity (which is my favourite thing in gaming). It would be great for VR movies/games with a fixed scene that you can look around in.
@iamlordstarbuilder5595 · 10 months ago
I just realized it's also probably not great for rendering unreal scenes like a particular one I have trapped in my head.
@ZackMathissa · 10 months ago
@@iamlordstarbuilder5595 Yeah, no dynamic lighting :(
@peoplez129 · 10 months ago
You don't really need to render different lighting, you can merely shift an area of the gaussian to become brighter or darker or of a different hue, based on a light source. So flashlights would behave more like a photo filter. Since the gaussians are sorted by depth, you already have a simple way to simulate shadows from light sources.
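What this commenter proposes is not something the paper does; it amounts to a per-gaussian tint pass. A rough sketch, with the caveat the next reply raises (no occlusion, no specularity, just distance falloff):

```python
import math

def relight(gaussians, light_pos, light_rgb=(1.0, 0.9, 0.7), strength=2.0):
    """gaussians: list of {'pos': (x,y,z), 'rgb': [r,g,b]}; tints them in place."""
    for g in gaussians:
        d = math.dist(g["pos"], light_pos)
        f = strength / (1.0 + d * d)          # inverse-square-style falloff
        g["rgb"] = [min(1.0, c * (1.0 + f * lc)) for c, lc in zip(g["rgb"], light_rgb)]
    return gaussians
```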
@theteddychannel8529 · 10 months ago
@@peoplez129 hmmm, but that's not how light works. It doesn't just make an area go towards white, there's lots of complex interactions to take into account.
@archivethearchives · 10 months ago
I was just thinking this. Wouldn’t it be nice if computer gaming went full circle with adventure games like this becoming a living genre again?
@barchuk422 · 11 months ago
the highly technical rapid fire is just what i need. the editing and the graphic design is just great, keep it coming i love it!
@Igzilee · 10 months ago
While being way too technical for any normal person to understand. He immediately alienates the majority of people by failing to explain 90% of what is actually happening.
@cosmic_gate476 · 7 months ago
Has to be the cleanest and most efficient way I got tech news on youtube. Keep it up my dude
@oneveryfishyboi331 · 1 month ago
I feel like I just took a shot of information; this is the espresso of informative content on YouTube
@MitchellPontius · 10 months ago
Wow. Never seen someone fit so much information in such a short timeframe while keeping it accurate and especially easy to take in. Way to go!
@pavelkalugin4537 · 10 months ago
Two minute papers is close
@xhbirohx2214 · 10 months ago
fireship is close
@pvic6959 · 10 months ago
hes the bill wurtz of graphics LOL
@philheathslegalteam · 10 months ago
We have done this for 12 years already. It’s not applicable to games. Everyone trashed Euclideon when this was initially announced, and now because someone wrote a paper everyone thinks it’s a new invention…
@MinerDiner · 10 months ago
Clearly you haven't seen "history of the entire world, i guess" by Bill Wurtz. This video feels heavily inspired by it.
@jrodd13 · 10 months ago
This dude is the pinnacle and culmination of gen z losing their attention span
@neocolors · 9 months ago
I've watched it on double speed
@ayebraine · 9 months ago
I'm 40, and I hate watching videos instead of reading a text. Even if it's a 2 minute video on how to open the battery compartment (which is, frankly, a good use case for video). I really don't want to wait until someone gets to the point, talks through a segue, etc. This is closer to reading, very structured and fast. Wouldn't equate it with short attention span.
@wcjerky · 9 months ago
Ahh yes, another case of the older generation hating on the younger generation. I remember the exact same said about Millennials and Gen Y. A missed opportunity for unity and imparting useful lessons. Please see past your hubris and use the experience to create.
@jrodd13 · 9 months ago
@wcjerky bro I am the younger generation hating on the younger generation 💀
@Woodside235 · 9 months ago
I'm not sure I agree. This video to me is like an abstract high level of the concept. It gets to the point. This is in stark contrast to a bunch of tech videos that stretch to the 10 minute mark just for ads, barely ever making a point. It's a good summarization. Saving time when trying to sift through information does not necessarily equate to short attention span.
@tubetomarcato · 9 months ago
most compelling way to present, my recall is much higher from the way you make it so entertaining. Kudos!
@BigJemu · 9 months ago
this is the content i need in my life, no filler. thanks
@rodrigoff7456 · 10 months ago
I like the calm and paced approach to explaining the technique.
@GotYourWallet · 10 months ago
In the current implementation it reminds me of prerendered backgrounds. They looked great but their use was often limited to scenes that didn't require interaction like in Zelda: OOT or Final Fantasy VII.
@robotba89 · 10 months ago
My thoughts exactly. Ok, well now do the devs have to build a 3D model to lay "under" this "image" so we can interact with stuff? And what happens when you pick up something or walk through a bush? How well can you actually build a game with this tech?
@Nerex7 · 10 months ago
The funniest thing about Ocarina of Time was: there was no background. At least, not really. What they did was create a small bubble around the player that shows this background. There is a way to go outside of the bubble and see the world without it. I bet there are some videos on that; it's very fun and interesting.
@gumbaholic · 10 months ago
@@Nerex7 Are you talking about the 3D part of the OOT world or do you refer specifically to the market place in the castle with its fixed camera angles?
@Nerex7 · 10 months ago
I'm talking about the background of the world, outside of the map (as well as the sky). It's all around the player only. @@gumbaholic It's referred to as a skybox, iirc.
@gumbaholic · 10 months ago
@@Nerex7 I see. And sure OOT has a skybox. But that's something different than pre-rendered backgrounds. It's like the backgrounds from the first Resident Evil games. It's the same for the castle court and the front of the Temple of Time. Those are different from the skybox :)
@NOVAScOoT · 10 months ago
Never thought id see the video version of an abstract before, very well done though. really does feel like ive just stepped into a researchers room right when they're about to finish up their 6 years of research and put it all together in one weekend without sleeping on 18 cups of extra caffeine coffee.
@Kazumo · 8 months ago
Please be the 2MinutesPaper we deserve (without fillers, making a weird voice on purpose and exaggerating the papers). Good stuff, really liked the video.
@SAGERUNE · 10 months ago
When people begin to do this on a larger scale, and with animated elements, perhaps video, ill pay attention. If they can train the trees to move and the grass to sway, that will be extremely impressive, the next step is reactivity which will blow my mind the most. I dont see it happening for a long time.
@yuriythebest · 10 months ago
exactly. these techniques are great for static scenes/cgi, but these scenes will be 100% static with not even a leaf moving, unless each item is captured individually or some new fancy AI can separate them, but the "AI will just solve it" trope can be said about pretty much anything, so for now it's a cool demo
@drdca8263 · 10 months ago
@@yuriythebest Is there any major obstacle to, like, doing this process on two objects separately, and then taking the unions of the point clouds from the two objects, and varying the displacements?
@somusz159 · 10 months ago
@@yuriythebest Yeah, and the wishful thinking exhibited in that cliche is likely really bad for ML. Overshilling always holds AI back at some point; think of the 80s.
@theteddychannel8529 · 10 months ago
@@drdca8263 if i'm thinking about this in my head, one thing i can think of is that the program has no idea which two points are supposed to correspond, so stuff would squeeze and shift while moving.
@Rroff2 · 10 months ago
Yup - as soon as any object or source of light moves you'll need ray tracing (or similar) to correctly light the scene and that is when a lot of the realism starts to break down.
@Koscum · 10 months ago
Very much a niche and limited method that will become more practical and usable once it gets integrated into a more traditional rendering pipeline, in a similar way that path tracing is still very much impractical for full-scene rendering but becomes a great tool if some scope limitations are applied and it gets used to augment the existing rendering model instead of replacing it.
@StarHorder · 10 months ago
yeah, this doesn't look useful for anything that is stylized.
@NapalmNarcissus · 8 months ago
You deserve every view and sub you got from this. Amazing editing, quick and to the point.
@dekatonkheir · 1 month ago
Jeesus, it's been so long since I've seen something so direct and un-bloated that I almost got whiplash. Cheers very much
@ThereIsNoRoot · 10 months ago
Please continue to make videos like this that are engaging but also technical. From a software engineer and math enthusiast.
@gigabit6226 · 10 months ago
I recognize that default apple profile icon!
@user-on6uf6om7s · 11 months ago
At the moment, photogrammetry seems a lot more applicable as the resulting output is a mesh that any engine can use (though optimization/retopology is always a concern) whereas using this in games seems like it requires a lot of fundamental rethinking but has the potential to achieve a higher level of realism.
@fusseldieb · 11 months ago
Like I saw in another video (I believe it was from Corridor?), this technique is better applied to re-rendering a camera-recorded 2D path, giving you new footage without all the shakiness of your real 2D recording. Kinda sucked to explain it, but I hope you got it.
@spooky6oo · 10 months ago
Quick and informative and I love your editing style
@Ano_Niemand · 10 months ago
video explanation is packed, couldn't even finish my 3d gaussian splatting on the toilet
@kunstigsmart · 11 months ago
the man turned 7 minutes' worth of video into a 2 minute barrage of info - I like it
@user-co3nl9co5g · 11 months ago
This looks a lot like en.wikipedia.org/wiki/Volume_rendering#Splatting from 1991; I wonder if there is any big difference apart from the training part. Also, I know everybody has said the same, but your editing is so cool. It's so dynamic, yet it manages to not be exhausting at all
@francoislecomte4340 · 10 months ago
It is close but the new technique optimizes the Gaussians (both the number of gaussians and the parameters) to fit volumetric data while the other one doesn’t, leading to a loss of fidelity. Please correct me if I’m wrong, I haven’t actually read the old paper.
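The "optimizes the number of gaussians" part is the paper's adaptive density control: prune nearly transparent gaussians, and where the positional gradient stays large (an under-reconstructed region), clone small gaussians or split big ones. A schematic version, with made-up threshold values:

```python
def densify_and_prune(gaussians, grad_thresh=2e-4, alpha_thresh=0.005, big_scale=0.01):
    """gaussians: list of {'alpha', 'scale', 'grad', ...}; returns the new population."""
    out = []
    for g in gaussians:
        if g["alpha"] < alpha_thresh:
            continue                          # prune: contributes almost nothing
        out.append(g)
        if g["grad"] > grad_thresh:           # optimizer keeps pushing this blob around
            twin = dict(g)
            if g["scale"] > big_scale:        # split a big blob into two smaller ones
                g["scale"] = twin["scale"] = g["scale"] / 1.6
            out.append(twin)                  # otherwise clone it as-is
    return out
```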
@user-co3nl9co5g · 10 months ago
​@@francoislecomte4340 You are totally correct :)
@lunarluxe9832 · 10 months ago
im impressed youtube let you put a link in the comments
@sethbyrd7861 · 10 months ago
Love your editing and humor!
@joshuascholar3220 · 10 months ago
You left out every useful detail. If a gaussian is just a blob sprite, how do you do lighting, shadow, reflection, etc.?
@WaleighWallace · 10 months ago
I absolutely love this style of informative video. Usually I have to have videos set to 1.5x because they just drag everything out. But not here! Love it.
@Psythik · 10 months ago
Honestly, this video is *perfect* for people like me with ADHD and irritability. No stupid filler, no "hey guys", no "like and subscribe". Just the facts, stated as quickly and concisely as possible. 10/10 video.
@grantlauzon5237 · 10 months ago
This could be used for film reshoots if a set was destroyed, but in a video game the player and NPCs would still need to have some sort of lighting/shading.
@kamimaza · 10 months ago
This is a great application for Google Street View as opposed to the 3D they have now...
@fernandojosesampaio9173 · 10 months ago
HDRI is a lighting map that can be applied to any object in a virtual space; you just take a 360° photo of the environment.
@MrRedstoneready · 10 months ago
lighting/shading can be figured out from the environment. Movie CGI is lit using a 360° image sphere of the set
@KD-_- · 10 months ago
Plants also won't be moving so no wind
@NoLongo · 10 months ago
I watched a video about AI that confirmed this is already a thing. Not at this fidelity but tools already exist to reconstruct scenes from existing footage, reconstruct voices, generate dialog. Nothing in the future will be real.
@alexhorlak3241 · 10 days ago
I wish all information was formatted like this. My new favorite channel
@BryGuy418 · 18 days ago
Why did this lock my attention in so well? Keep up the great work!
@ScibbieGames · 11 months ago
It's more niche than photogrammetry because there's no 3D model to put into something else. But with a bit more work I'd love to see this be a feature on a smartphone.
@EvanBoldt · 11 months ago
Perhaps the process could be repeated with a 360 degree FOV to create an environment map for the inserted 3D model. Casting new shadows seems impossible though.
@fusseldieb · 11 months ago
@@EvanBoldt "Casting new shadows seems impossible though." => It really depends. If your footage already has shadows, it'll be difficult. However, if your footage DOESN'T contain shadows, just add some skybox/HDRI, tweak some objects (holes, etc.) and voilà.
@EZhurst · 10 months ago
photogrammetry really isn’t that niche considering it’s used pretty heavily in both AAA video games and film
@noobandfriends2420 · 10 months ago
You can create depth fields from images which can be used to create 3D objects. So it should be possible to integrate it into a pipeline.
@randoguy7488 · 10 months ago
@@fusseldieb If you think realistic graphics require only "some skybox/HDRi and tweaking some objects" You have a lot to learn, especially when it comes to required textures.
@isaacroberts9089 · 9 months ago
Congrats on making like a solidly informationally dense video, we like this!
@rommix0 · 1 month ago
hehe... dense. as in dense layers. get it? lol
@charlie891 · 10 months ago
within a couple of years we could have photo-realistic video games, which is amazing
@Gameboi834 · 10 months ago
This is the first time I've seen an editing style similar to Bill Wurtz that not only DIDN'T make me wanna gouge my eyes out, but also worked incredibly well and complemented the contents of the video. Nice!
@pigmentpeddler5811 · 10 months ago
so true bestie
@ratastic · 10 months ago
i did want to kill myself a little bit though just a little
@thefakepie1126 · 10 months ago
bill wurtz without the thing that makes bill wurtz bill wurtz
@hadrux4643 · 10 months ago
bill wurtz explaining something complicated, kinda goes in one ear and out the other
@pigmentpeddler5811 · 10 months ago
@@thefakepie1126 yeah, without the cringe
@onerimeuse · 11 months ago
This is the Powerthirst of super complex graphics algorithms, and I'm totally here for it.
@MrZhampi · 8 months ago
I've never learned so much about a subject I'd never heard about before in such a short amount of time. Incredible!
@XrL72 · 10 months ago
This is like if Vsauce and Bill Wurtz had a lovechild. Nice video!
@culpritdesign · 11 months ago
This is the most concise explanation of gaussian splatting I have stumbled across so far. Subscription achieved.
@eafortson · 10 months ago
I cannot overstate how much I appreciate this approach to delivering information concisely. Thank you sir.
@Brian_Sauve · 9 months ago
I literally don't care about the subject, but the way you did this... possibly the greatest YT video of all time. 2 minutes flat. No nonsense. Somehow hilarious. Kudos!
@iaial0 · 1 month ago
I had to watch this with subtitles because I'm at a restaurant and it feels like I just came out of a washing machine
@MikeMorrisonPhD · 11 months ago
So fun to watch. I want all research papers explained this way, even the ones in less visual fields. Subscribed!
@Critters · 11 months ago
I wonder how we'll get dynamic content into the 'scene'. Will it be like Alone in the Dark, where the env is one tech (for that game, 2D pre-rendered) and characters are another (for them, 3D polys)? Or will we create some pipeline for injecting/merging these fields so you have pre-computed characters (like Mortal Kombat's video of people)? Could look janky. Also, I don't see this working for environments that need to be modified or even react to dynamic light, but this is early days.
@edenem · 11 months ago
well, it could already work in its current state for VFX and 3D work. Even though you can't (to my knowledge) inject models or lighting into the scene, you could still technically take thousands of screenshots and convert the NeRF into a photoscanned environment, then use that photoscanned environment as a shadow and light catcher in traditional 3D software, and then use a depth map from the photoscan to take advantage of the NeRF for an incredibly realistic 3D render. That way you can put things behind other things and control the camera and lighting, while still taking advantage of the reflections and realistic lighting NeRFs provide
@Danuxsy · 11 months ago
Yes the issue here is that these scenes are not interactable because the things in them are not 3D objects, they are mere representations from your particular perspective. Dunno how they would solve those problems (which is probably why we won't see it in games anytime soon if ever)
@z0rgMeister · 10 months ago
I didn't understand a single thing but I think you're passionate about it so I'm going to do the dad thing and fully support you.
@bilboboi · 9 months ago
Short form information blast. Just long enough to scratch my ADD parts, i love it
@porrasm · 10 months ago
For now it’s niche. I imagine it could be used in games blended with traditional rendering pipelines. E.g. use this new method for certain areas that need a light level of detail.
@judahgrayson7953 · 10 months ago
more than just that niche - video production could utilize this in rendering effects
@agedisnuts · 11 months ago
welcome back to two minute papers. i’m your host, bill wurtz
@DreamingInSlowMotion · 9 months ago
Omfg, the crossover we didn't know we needed
@bofuuu · 1 month ago
lmao my thoughts exactly
@ezdeezytube · 10 months ago
Video topic aside, your style of delivering info is top notch mate!
@smallcheesebread6531 · 27 days ago
Very good video, I love how much info you packed into such a short amount of time
@yaelm631 · 11 months ago
Your video is spot on! HTC Vive/The Lab were the reasons why I got into VR. I loved the photogrammetry environments so much that capturing scenes is now my hobby. The Google Light Fields demos were a glimpse of the future, but blurry. These high quality NeRF breakthroughs are coming much earlier than I thought they would. We will be able to capture and share memories, places... it's going to be awesome! I don't know if Apple Vision Pro can only do stereoscopic souvenir capture or if it can do 6dof, but I hope it's the latter :,D
@orangehatmusic225 · 11 months ago
We all know what you use VR for... you might want to not use a black light around your VR goggles huh.
@mixer0014 · 11 months ago
The best thing about this technique is that it is not a NeRF! It is fully hand-crafted and that's why it beats the best of NeRFs tenfold when it comes to speed.
@orangehatmusic225 · 11 months ago
@@mixer0014 Using AI doesn't make it "hand crafted"... so you are confused.
@mixer0014 · 11 months ago
@@orangehatmusic225 There is no AI in that new tech, just good ol' maths and human ingenuity. TwoMinutePapers has a great explanation, but if you don't have time to watch it now, I hope a quote from the paper can convince you: "The unstructured, explicit GPU-friendly 3D Gaussians we use achieve faster rendering speed and better quality without neural components."
@NickJerrison · 11 months ago
@@orangehatmusic225 Ah yes, the only and primary reason people throw hundreds of dollars into VR equipment is to jerk off, so viciously in fact that it would splatter the headset itself. For sure, man, for sure.
@bogsbinny7124 · 11 months ago
this could be done using images from hyperrealistic renders instead of irl photos too, right? to move around in a disney cgi level environment that wouldn't be possible in realtime normally
@sebastianblatter7718 · 11 months ago
cool idea.
@Lazyguy22 · 11 months ago
You'd need a large number of those renders, and wouldn't be able to interact with anything.
@chrisallen9743 · 10 months ago
So long as the renders basically functioned as they are required (as in, enough of them, from enough different points) and can be converted into whatever format is used to create this...i dont see why not.
@chrisallen9743 · 10 months ago
You could use something that uses Ray Tracing to create a scene, and once the dots fill in to 100%, you have your screenshot/picture, so then you move the camera to the next position, and allow the dots to fill in. Rinse repeat, and then you'll have RTX fidelity.
@jcudejko · 10 months ago
@@Lazyguy22 Yes, but you could add in bits redrawn as polygons with their own textures that could be interactable I'm thinking like the composite scenes from the late 90s fmv games
@alphahurricane7957 · 9 months ago
whoah this was fast, i think that if this dude makes a 2 hour essay video i may even learn real happiness
@anuragparcha4483 · 6 months ago
This is exactly what I wanted. Keep these videos up, subbing now!
@thejontao · 10 months ago
Interesting!!! I always think back to when I was doing my degree in the 90s, and Ray Tracing was this high end thing PhD students did with super expensive Silicon Graphics servers, and it was always a still image of a very reflective metal sphere on a chess board with some kind of cones or polyhedra thrown in for kicks. It took days and weeks of render time. About 25 years passed between when I first heard of ray tracing and when I played a game with ray tracing in it. I might not be 70 when the first game using a Gaussian engine is released, but I wouldn’t imagine it happens before I’m 60. Still very interesting, though!!!
@dddaaa6965 · 10 months ago
I don't think it'll be used at all personally, but I'm stupid so we'll see. I don't see how this is better than digital scanning if you have to do everything yourself to add lighting and collision. Someone said it could be used for skyboxes and I could see that.
@richbuckingham · 8 months ago
I remember exactly the ray-tracing program you're talking about, in 1993/4 I think a 640x480 image render took about 20 hours.
@memitim171 · 7 months ago
Ray tracing was also popular on the Amiga, these chess boards, metal spheres etc would crop up regularly in Amiga Format, (I've no idea how long it took to render one on the humble Amiga) some of them were a bit more imaginative though and I remember thinking how cool they looked and wondering if games would ever look like that, tbh I'm a bit surprised it actually happened...I'm not that convinced I'm seeing the same thing here though, how does any of this get animated? It's already kinda sad that it's 2023 and interactivity and physics have hardly moved an inch since Half-Life 2, the last thing we need is more "pre-rendered" backgrounds.
@niallrussell7184 · 7 months ago
You didn't need an SGI. I bought a 287 co-processor for an IBM PC to do raytracing in late 80s. Started with Vivid and then POV raytracers. By mid 90s we were using SGI's for VR.
@MrEricGuerin · 11 months ago
the issue is it does not have the 'logic' - so no dynamic light, no way to identify a gameobject directly, like: 'hey, do a transform of vector3 to the bush there' => no 'information' about such a bush. Let's see where it will bring us of course, but it looks more like a fancy way to represent a static scene. Probably it can be used in FX for movies, where you do something with 3D stuff inside this static 'scene', I do not know...
@fusseldieb · 11 months ago
Isolate the object, tweak it and export it. Should be doable...
@charlotte80389 · 10 months ago
@@fusseldieb the lighting wouldn't change when you move it tho
@DKLHensen · 10 months ago
Subscribed: because you can condense quality information in 2 minutes, no bs, just to the point.
@jacquesbroquard · 3 months ago
This was amazing. Thanks for the humorous take. Keep going!
@BluesM18A1 · 10 months ago
I'd like to see more research done into this to see how to superimpose dynamic objects into a scene like this before it has any sort of practical use in video games but for VR and other sorts of things, this could have lots of potential if you want to have cinema-quality raytraced renders of a scene displayed in realtime. Doesn't have to be limited to real photos.
@bentweedle3018 · 8 months ago
I mean it'd be pretty simple to do, use a simplified mesh of the scene to mask out the dynamic objects then overlay them when they should be visible. The challenge is more making the dynamic objects look like they belong.
@briannamorgan4313 · 10 months ago
I was in college back when deferred shading was just being talked about as a viable technique for lighting complex scenes in real-time. I even did my dissertation on the technique. Back then GPUs didn't have even close to the memory needed to do it at a playable resolution, but now pretty much every game uses it. I can see the same thing happening with 3D Gaussian Splatting.
@PartOfTheGame · 2 months ago
That was the best 2(ish) minutes of listening to someone describe a new thing I've ever spent in my life. Also, this would be amazing if it could be implemented into games/VR/AR.
@reallyWyrd · 10 months ago
I appreciate that the video took the minimum amount of time possible.
@Klaster_1 · 11 months ago
Great video, do you plan more like this? Terse, technical, about 3D graphics or AI. Basically, 2 minute papers, but with less fluff.
@myusernamegotstollen · 11 months ago
I don’t know much about rendering but this sounds so smart. You take a video, separate the frames, do the other steps, and now you have this video as a 3d environment
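That is essentially the standard capture pipeline. A sketch of the tooling side, assuming the common ffmpeg + COLMAP setup (the paper itself gets camera poses from COLMAP); all paths and the frame rate here are placeholders:

```python
import os
import subprocess

os.makedirs("frames", exist_ok=True)
os.makedirs("colmap_out", exist_ok=True)

# 1. Split the video into frames (2 per second here, arbitrary).
subprocess.run(["ffmpeg", "-i", "walkthrough.mp4",
                "-vf", "fps=2", "frames/frame_%04d.jpg"], check=True)

# 2. Structure-from-motion: recover each frame's camera pose plus a sparse
#    point cloud, which seeds the initial gaussians.
subprocess.run(["colmap", "automatic_reconstructor",
                "--workspace_path", "colmap_out",
                "--image_path", "frames"], check=True)

# 3. A gaussian-splatting trainer then optimizes against these posed frames.
```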
@redcrafterlppa303 · 11 months ago
This would be an amazing thing for AR, as you could use that detailed 3D model of an arbitrary room or place you are in and augment it with virtual elements. Or to have photorealistic environments in VR without needing an AAA production team. Imagine taking a house tour abroad in VR where the visuals are rendered photorealistically in real time
@fusseldieb · 11 months ago
@@redcrafterlppa303 "Or to have photorealistic environments in VR without needing an AAA production team" -> This already exists. Some people use plain old photogrammetry for that
@AerialWaviator · 25 days ago
Beyond the Gaussian Splatting, great Audio Splatting with the QnA style format to answer all the "ya but (key points)". Such an exceptional but concise rendering of edited snippets.
@IndividualKex · 25 days ago
i love the term “audio splatting”, thank you
@Milkjiest · 8 months ago
i like your editing style, it keeps my adhd brain entertained
@rich1051414 · 11 months ago
Replacing computation with caching. Clever caching, but it is what it is. This is the very first solution a programmer will reach for when computations take too long: can we cache it instead? Gaussian splatting is a way to cache rendering for use later. But it takes *all of the RAM* to do it.
@DrCranium · 11 months ago
So, in a sense this is an “optimization technique”: currently - for photogrammetry, but give it some traction and spotlight (like with “async reprojection outside of VR”) - and that’ll find its way into game engines.
@rich1051414 · 11 months ago
@@DrCranium Imagine the world as being full of 3D pixels of 0 size. How large can you actually make those pixels before you start to notice? This is basically what splatting is. The smaller the pixels, the better things look. But since those 'pixels' are in 3D space, you don't have to update them as long as nothing else has changed. It's less useful in dynamic scenes, though, which is where the technique starts to fall short. But I suppose filters could be stacked on top, or lower resolution realtime rendering, with prerendered splatting used as hinting to upscale the fidelity.
@Smesp · 10 months ago
I've seen other, much longer, videos on this and wondered how it works. Now I know. After 2:11. This is the first time I pay money to show my gratitude. Great work. Keep it up! Liked and Subscribed. THANKS.
@bud389 · 10 months ago
It'll be used for specific applications only. If you want to use that with video games you'll still have to give geometry to all the objects and environments, meaning it will still need to render all of that with polygons.
@GabrielLima-pi4kw · 10 months ago
It probably can be used in most FPS games for things that won't have to move, interact, or have collisions, and for far-away or non-interactable scenery.
@PuppetMasterdaath144 · 10 months ago
yes, then it's the same thing as that scam all those years ago; it only renders a scene, no objects
@DuringDark · 10 months ago
@@GabrielLima-pi4kw It's just not worth it. For a city block a la Dust 2 you'd need a physical set, any changes to the scene would require a reshoot which would require the same weather or you'd end up with different lighting, it would clash stylistically with trad. 3D assets, there'd be much less vfx or lighting available, you'd need _two_ rendering pipelines ballooning dev time and render time, you'd need a trad. 3D substitute anyway if you want users with slower systems, shooting at gaussians would give no audiovisual feedback...
@ZeroX252 · 10 months ago
Actually, you can use the point cloud rendered objects for the visuals entirely and then rough low-poly objects for collision mapping and skeletal work exclusively. The bigger problem is that this technology needs to know where the camera is and then render the data for that camera. You would have to render extra frames in every direction that the camera could move, otherwise the rendered viewport would always be playing "catchup" (You'd get an unclean, not-yet-decided face until the render catches up - think texture pop-in in games that use texture streaming.)
@ZeroX252 · 10 months ago
You can see this prominently in this video - look at the lack of depth for the curvature of the vase or the edges of the table. You can see that as the camera pans the "sides" of those objects blit into existence after the camera has moved. It's very obvious on the bicycle tires as well.
@ragnarlothbrok6240 · 10 months ago
I wish everything on YouTube was this concise.
@SameBasicRiff · 10 months ago
well, how much VRAM do you need? and could Nvidia's new compression algos for regular texture streaming (a 4x savings on VRAM) help at all?
@thibaultghesquiere · 10 months ago
I love how short and concise this was. Well done mate! No BS, cut to the chase
@IMAComedy · 10 months ago
I've never been at a higher peak of the Dunning-Kruger effect in my life.
@todayonthebench · 10 months ago
I saw something similar years ago with similarly impressive results (also just a point cloud being brute forced into a picture). However, the downside of this type of technique is its memory requirement, making it fairly niche and hard to use in practice. For smaller scenes it works fine; for anything large it starts to fall apart. Beyond this we also have to consider that these renders are of a static world. And given the fairly huge number of "particles" making up even a relatively small object, the challenge of "just moving" a few as part of an animation becomes a bit insane in practice. Far from impossible, just going to eat into that frame rate by a lot. Most 3D content is far more than just taking world data and turning it into a picture. And a lot of the "non graphics" related work (that graphics has to wait for, else we don't know what to actually render) is not an inconsequential amount to work with as is. Moving a few tens of thousands of polygons around as a character model walks by isn't trivial work. Change those tens of thousands of polygons into millions of points (to get similar visual fidelity) and that animation step is suddenly a lot more compute intensive. So in the end, that is my opinion: works nicely as a 3D picture, but dynamic content is a challenge. Same for memory utilization, something that makes it infeasible for 3D pictures too.
@krallopian · 11 months ago
It looks awesome, but my first thought is, how does real-time lighting match up in a scene like this? What if I shine a flashlight, or shoot a bright projectile, does my in-game prop - vehicle, person, gun - accept lighting?
@AfonsodelCB · 11 months ago
what you're looking at is basically a video. the technique generates new frames of video depending on the angle you're trying to view things from, based on the data that was captured in the real world. there are no 3D models, no materials, no lighting, it's just some program spitting out frames based on the angle you're approaching it from. think 360 video but you can move a little bit within a confined area. for traditional rendering methodologies to play nice with this, lots of development has to happen still, and it might never be done with this particular approach
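More precisely, there is stored 3D data (millions of gaussians), but no meshes or materials; each frame is produced by projecting every gaussian into the new view. The core of that per-view projection, pushing a 3D covariance through the camera transform into a 2D screen-space ellipse, looks roughly like this (numpy, toy pinhole camera with assumed focal lengths):

```python
import numpy as np

def project_gaussian(mean, cov3d, R, t, fx=500.0, fy=500.0):
    """R, t: world-to-camera rotation/translation. Returns 2D mean and covariance."""
    m = R @ mean + t                          # gaussian center in camera space
    x, y, z = m
    u, v = fx * x / z, fy * y / z             # pinhole projection of the center
    J = np.array([[fx / z, 0.0, -fx * x / z**2],
                  [0.0, fy / z, -fy * y / z**2]])   # Jacobian of the projection at m
    cov2d = J @ R @ cov3d @ R.T @ J.T         # 3D ellipsoid -> 2D screen ellipse
    return np.array([u, v]), cov2d

# A small blob 5 units in front of an identity camera:
mu, cov = project_gaussian(np.array([0.0, 0.0, 5.0]), np.eye(3) * 0.01,
                           np.eye(3), np.zeros(3))
```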
@sirens3237 · 8 months ago
This was amazingly well put together
@IgorFranca · 4 months ago
Thank you for your objective presentation. Very interesting indeed. I'll check later for more.
@LeonTalksALot · 10 months ago
10/10 intro, literally perfect in every way. I immediately got what I clicked for and found myself interested from 0:02 onwards.
@ibbles · 11 months ago
Reminds me of Euclideon ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-iVsyB938ovY.html in the sense that animations seem difficult to do. Would you need to train the Gaussian splats for every bone / skeleton configuration? Do they interpolate well?
@theohallenius8882 · 11 months ago
Only, compared to Euclideon, this is something that isn't proprietary and that everyone can benefit from.
@ScibbieGames · 11 months ago
Unlike this method, Euclideon's was done with voxels
@spilledcoffeeandbeans · 2 months ago
This video has "entire history of the world, i guess" vibes and I love it
@isaacgary6801 · 9 months ago
This guy is too good for this level of subs... Subbed right away
@Zangettsu_ZA · 10 months ago
Why can't ALL videos be as informative as this one in such a short span of time? Well done!
@LoLaSn · 10 months ago
Because they want ad money
@womp47 · 10 months ago
get a longer attention span and you'll be able to watch longer videos like this and actually learn stuff
@LoLaSn · 10 months ago
@@womp47 You think the problem with 20 minute long videos where maybe a quarter is about the topic is people's attention span? Funniest shit I've read in a while
@AndrewARitz · 10 months ago
because this video isn't informative.
@womp47 · 10 months ago
@@LoLaSn when did i say videos "where a quarter is about the topic"
@ToxicNeon · 10 months ago
I wonder if something like this could eventually be combined with some level of animation and interactivity - in games that is. That would be wicked.
@sun_beams · 10 months ago
You could walk through it. It's not actually lit and it's not geo so there's nothing to rig. It's basically 3D footage. I would think this could be very useful for compositors when they need to fill in background data when vfx builds a brand new camera angle that wasn't filmed. This is almost useless as cg data because there's just nothing you can do with it except move through it. This isn't like normal maps, which are a way to utilize lighting and simulate new additional things on stuff that has been created. You can't create these fields, you can only capture them. I take that back, you might be able to create them but you'd need to have modeled, textured, and lit your scene so that you can capture the field from your scene. Again, could be useful to compositors but more of a nuisance for cg. It's cool though
@dddaaa6965 · 10 months ago
PROBABLY NOT, that's why I kept thinking this is virtually useless (videogame-wise); there is no interaction besides looking at a static environment.
@ddenozor · 10 months ago
You can maybe do great skyboxes or non-interactable background objects, but is it worth doing for those?
@MarineBoyGame · 7 months ago
Wow, a techie faster talker who didn't lose me along the way. Excellent video!
@sapphyrus · 8 months ago
Now this is that Enemy of the State "rotate the camera and zoom to the bag" surveillance scene.
@theDragoon007yaboiCJ · 10 months ago
I've always wondered about this. All my life since i started getting into videogames graphics from when I was a kid. I knew it had to be possible and watching this is like the biggest closure of my life.
@Sc0pee · 10 months ago
But this tech is not to be used in games/movies/3D printing, because these are not mesh 3D models. These are 100% static, non-interactive models. Games need mesh models that can be interacted with, lit, animated, etc. It's also very demanding on the GPU. It's something that e.g. Google Maps could use in the future.
@Suckassloser · 10 months ago
@@Sc0pee Surely there'd be ways to make them non-static? Maybe a lot more computationally demanding and beyond current consumer hardware, but for example you could develop some sort of weighted bone system that acts on the points of a cloud model in a similar way as is done on 3D meshes? And I imagine ray/path tracing could be used to simulate lighting, etc. I'll admit I don't have a strong grasp on this technology so I could be completely off base, but I feel these are only static because the means to animate, light and add interactability are yet to be realised (along with suitable hardware to support them), just like they once weren't for 3D meshes.
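The bone idea above is essentially linear blend skinning applied to gaussian centers instead of mesh vertices; a minimal numpy sketch (rotating each gaussian's covariance and fixing up the baked-in lighting are the hard parts this skips):

```python
import numpy as np

def skin_points(rest_pos, weights, bone_mats):
    """rest_pos: (N,3) centers; weights: (N,B); bone_mats: (B,4,4) rest-to-posed."""
    homo = np.concatenate([rest_pos, np.ones((len(rest_pos), 1))], axis=1)  # (N,4)
    per_bone = np.einsum("bij,nj->nbi", bone_mats, homo)  # position under each bone
    posed = np.einsum("nb,nbi->ni", weights, per_bone)    # blend by bone weight
    return posed[:, :3]
```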
@zombeaver69 · 10 months ago
​​​@@Sc0pee if this technique generates 3d objects, that is usually what I would call a mesh. Static meshes and images both are used in games often, and we have editing capabilities for more complex requirements like animation. You're definitely right about the GPU thing tho
@oBCHANo · 9 months ago
Somehow, based on this comment, I doubt you know literally anything about computer graphics.
@Dodanos1 · 10 months ago
WHY DOESN'T EVERYBODY MAKE INFORMATIVE VIDEOS IN THIS FORMAT? Fast, clear, to the point, no filler stuff, just pure info in the shortest amount of time.
@imveryangryitsnotbutter · 10 months ago
You should watch "history of the entire world, i guess" if you haven't already. It's 20 minutes of this kind of rapid-fire semi-educational delivery.
@n_tas · 10 months ago
5 second clip of something that happens later in the video "HEY WHAT'S GOING ON GUYS today we are doing a thing but first I want to thank this channel's sponsor NordVPN...."
@mikaelsjodin · 10 months ago
this is the first video from this guy I've watched and I'm hooked. Like, damn son.
@JoshLange3D · 10 months ago
Love this approach to explaining. Very fun to take in.
@chrispysaid · 9 months ago
You're like Bill Wurtz without all the fun jazz
@BurgerDan · 10 months ago
This video gave all the information necessary in the shortest time possible while still containing entertaining editing. Bravo sir!😂
@BlunderMunchkin · 7 months ago
If it provided all the information necessary you could sit down and write the code without looking at any other resource.