
coding a really fast De-Noising algorithm 

8AAFFF
7K subscribers
45K views

in this video, I coded a denoiser for raytracers.
It is really fast because all it does is blur an image (with a few extra steps).
GitHub repo (improvements are welcome :D)
github.com/mar...
music:
1 - Hotline Miami OST - Inner Animal - Scattle
2 - Hotline Miami OST - Blizzard - Light Club
3 - Throttle Up - Dynatron
mentions:
coding adventures guy:
• Coding Adventure: Ray ...
prob the best raytracing explanation ever:
• How Ray Tracing (Moder...
another good video that helped me:
• I made a better Ray-Tr...
NVIDIA comparison from:
• Ray Tracing Essentials...
thx for watching :)

Published: Sep 28, 2024

Comments: 126
@peppidesu · 7 months ago
there is also a technique called manifold exploration, if you really hate your sanity.
@drdesten · 7 months ago
lol. Is it still correct that no one except the researchers has implemented it?
@PolyRocketMatt · 2 months ago
@@drdesten Probably... The main reason being there are just way better alternatives available these days, since that original paper dates back to 2012... Remember, the original Metropolis algorithm also wasn't implemented by anyone except the original authors until Kelemen introduced primary sample spaces... It's all a matter of perspective...
@blacklistnr1 · 7 months ago
Cool! An interesting thing to try:
- Render just the edges of an image (or do a highpass in Photoshop/Krita to extract the edges)
- Run this to fill the cells for a cool-looking filter
@Pockeywn · 7 months ago
omg i need to see this i might have to try this myself
@blacklistnr1 · 7 months ago
@@Pockeywn Please do, I'll watch your video too :)) I expect it to be somewhere between a median filter and some voronoi cells (like a stained glass filter with varying sizes), but I am curious about its specific artifacts and look
@griffinschreiber6867 · 7 months ago
I really like where this channel is going! Edit: training a neural network to do the denoising might be interesting.
@gorgolyt · 7 months ago
That's what Nvidia does.
@griffinschreiber6867 · 7 months ago
@gorgolyt I know, I just thought it might be an interesting project.
@gazehound · 7 months ago
I love "funny Eve Online moments" as a description for intergalactic alien space war
@starplatinum3305 · 7 months ago
bro made me cry bc this video's good af 😭😭😭
@UCFc1XDsWoHaZmXom2KVxvuA · 7 months ago
Brooo this video looks and sounds mesmerizing 😵‍💫😵‍💫 loove it
@MrTomyCJ · 7 months ago
The look produced by this algorithm reminded me of how shadows look on rtx games. That made me wonder if the denoising algorithm in some real time raytracing applications is somewhat similar to this one.
@chaosminecraft3399 · 7 months ago
Damn, that is quite the denoising work you did 😳
@FractalIND · 7 months ago
thanks a lot for the explanation, I'm working on my own 3D renderer without an API like OpenGL or Vulkan. I searched a lot for an idea for an algorithm, but every website explains it in words normal people can understand, with no exact step-by-step guide to how it actually works
@mauriciodanielromano7001 · 3 months ago
Go go gadget pixel enhancer
@meinlet5103 · 7 months ago
now I know why image sensors in dark places are noisy
@ThankYouESM · 16 days ago
I'm creating a Python raytracer that simply uses PIL to generate photorealistic images at AAA processing speed. However... looks like I will soon need AI to create the precalculations, since I also have to work on a whole lot of other Python projects by contract. Basically it generates the images by double-layering images... the first-buffer RGB images are 128x128 tiles, PIL-blurred at 50, of 9 main colors... and the 2nd buffer is RGBA, exactly like the first buffer except resized to 32x32 to sharpen the 1st-buffer image... each combination pre-rendered and sorted into a Python dict() to be laid out by Wave Function Collapse.
@hugomatijascic5778 · 7 months ago
Hello, really interesting approach! Maybe you could correct the unwanted blurring effect by applying a sharpening kernel convolution to the noisy patch areas after the denoising algorithm? Idk if that would help get better results...
@8AAFFF · 7 months ago
That might work. I can even try modifying the kernel responsible for the blurring to also try preserving edges, so it's not just a normal gaussian blur. Cool idea tho :)
@accueil750 · 7 months ago
Ahh my ULTRAKILL neurons are firing
@jakemeyer8188 · 7 months ago
I wanted to fork this a month ago, but got tied up with an emergency work project. I'm not sure if you're still working on it, but I definitely want to have a look at the code and see if I can contribute.
@bbrainstormer2036 · 7 months ago
It looks almost dreamlike. It could be used in a stylistic way, rather than in a realistic one. Also, it wouldn't have been difficult to generate some noisy images with blender, and I'm kind of curious to see how well it works "in the field"
@Johnsonwingus · 7 months ago
actually now all you need is a sharpening algorithm and then you'll have comparable quality to the nvidia dev channel
@Y1001 · 2 months ago
Are you applying the blur on top of everything? You should only have it fill in the dark pixels in an additive manner.
@kipchickensout · 7 months ago
until we have AI filling in the missing pixels
@gorgolyt · 7 months ago
Literally how denoisers work already.
@kipchickensout · 7 months ago
@@gorgolyt Oh, I mean more of that tho, with levels of SUPIR
@sandded7962 · 7 months ago
I am edging to this rn. I was never this close to Bussin... Looking forward to the continuation, that edging session would be wonderful
@vandelayindustries2971 · 7 months ago
Awesome video! Maybe some feedback: the audio volume is really low :)
@8AAFFF · 7 months ago
thanks :D ill increase it next time
@ThatOneUnityGamedev · 7 months ago
I've used Scratch for 3 years and it's VERY slow. But yes, it's a lot faster at certain things than Python is
@Codefan321 · 7 months ago
I can confirm that Scratch can indeed often outperform python
@mikkelens · 7 months ago
Not a very difficult feat for most programming languages. Scratch (javascript) is pretty fast with V8.
@madghostek3026 · 7 months ago
Cool video. I wonder what would happen if the raytracing engine, instead of rejecting a ray that ran out of bounces, took the colour it gathered but at lower intensity; maybe use that as a base for the blur filter so the black islands aren't so black. After all, it carries some information.
@8AAFFF · 7 months ago
yeah i think that's also how they add ambient glow :)
@AntonioNoack · 7 months ago
That introduces a systematic error, and therefore is a biased algorithm. Photorealism tries to stay unbiased though.
@kerojey4442 · 7 months ago
Thanks, that was very educational.
@drdca8263 · 7 months ago
Edit: I should have watched until the end of the video before commenting. Silly me. You even said “before you ask”! [strikethrough]Something I wonder if might make sense, is, if the rays that time out and don’t reach a light source, instead of being black, are instead a not-a-color value? Because like, that way you can distinguish between pixels that are black because showing a completely absorbing surface, and pixels that ran out of bounces.[/strikethrough] Another idea: what if each surface could be treated as a light source, but only if running out of bounces, and where the light-source properties of the surface was based on how it was generally illuminated? (Like, maybe based on the total brightness of the rays that hit that surface first in a previous frame, divided by its surface area?)
@jayrony69 · 7 months ago
the voice is so quiet
@fayenotfaye · 7 months ago
Couldn’t you use an anti aliasing style method?
@xorlop · 7 months ago
I didn't catch it, how long does the final algorithm take per image?
@8AAFFF · 7 months ago
depends on the resolution and how bad the noise is (worse noise takes more steps to denoise), but I calculated that on a 1920x1080 image it can run at around 250-300 FPS on an RTX 2060
@laurensvanhelvoort3921 · 7 months ago
Cool!
@Kyoz · 7 months ago
🤍
@Idiot354 · 7 months ago
holy shit youtube fkd the volume of this video
@AntonioNoack · 7 months ago
YouTube doesn't adjust the volume of videos afaik.
@Looki2000 · 7 months ago
The problem is that black pixels are not the only cause of noise in path tracers. Path tracers sample multiple rays per pixel. Some of these rays find their way to the light and some don't. This results in pixel colors calculated from multiple averaged ray samples being sometimes dimmer and sometimes brighter than neighboring pixels. They may look like black pixels in some circumstances, but most of the time they actually aren't. Just look at the sides of the spheres where they are well lit by the big light.
@8AAFFF · 7 months ago
Yeah ur right. If I could somehow mark those out-of-place pixels for denoising, then it would probably work on this type of noise. But yes, probably an even bigger challenge is actually identifying the pixels that need to be smoothed out
@shadamethyst1258 · 7 months ago
@@8AAFFF It's unfortunately *really* hard to mark those out-of-place pixels; common tools like Blender provide heuristics to cut off rays that would be too bright, which does help in reducing the number of fireflies (pixels that are overly bright compared to the ground truth), but it does change how the image looks. With any Monte Carlo technique, we haven't yet found a generic, simple and efficient algorithmic approach to denoise the output. The best we have is to split the output into as many channels as possible, feed all of that information to a neural network that guesses the ground truth, and then apply some filters on top of the NN's result to clean up the image and account for different amounts of samples per pixel.
@locinolacolino1302 · 7 months ago
@@8AAFFF If you still plan to implement a denoiser with a Path Tracer, another interesting approach is Spectral denoising: view the problem less as an image manipulation problem and more like a signal processing problem, having the wavelengths of rays in the pathtracer as the input for the denoiser as opposed to pixel colours, temporal stability should also be better this way.
@Antagon666 · 1 month ago
@@shadamethyst1258 Wavelet denoising in SVGF is pretty good. The main benefit is leveraging other information like albedo, normals etc. to better estimate the noise.
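The firefly-clamping heuristic mentioned in this thread can be sketched in a couple of lines. This is a toy illustration, not anything from the video: the `max_radiance` threshold is an arbitrary made-up value.

```python
import numpy as np

def clamped_pixel(samples, max_radiance=10.0):
    """Average per-pixel ray samples after clamping outliers.

    Clamping introduces bias (the pixel ends up darker than the ground
    truth), but it suppresses fireflies, as described in the thread above."""
    return float(np.minimum(samples, max_radiance).mean())

# One firefly sample would dominate the plain average...
samples = np.array([1.0, 1.0, 1000.0])
plain = float(samples.mean())      # 334.0 — ruined by the outlier
clamped = clamped_pixel(samples)   # (1 + 1 + 10) / 3 = 4.0
```

The trade-off is exactly the one described above: the clamped estimate is no longer unbiased, but the variance collapses.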
@Landee · 7 months ago
I'm really hyped for the ULTRAKILL bot
@jeffreyliu2289 · 7 months ago
the what?
@jimmyhirr5773 · 7 months ago
The raw raytracing output looks like salt-and-pepper noise. A common way to remove salt-and-pepper noise is with a median filter: read N pixels around a center pixel and output the median pixel. Wikipedia also says that when there is only "pepper noise" (that is, random black pixels), it can be removed with a contraharmonic mean filter.
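The median-filter idea above can be sketched in a few lines of numpy; this hand-rolled 3x3 version is a minimal stand-in for library routines like `scipy.ndimage.median_filter`, which do the same thing faster.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge padding: classic salt-and-pepper removal."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views of the image and take the per-pixel median
    stack = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

# A bright image with a single "pepper" pixel: the median ignores the outlier
img = np.ones((5, 5))
img[2, 2] = 0.0
clean = median_filter3(img)
```

Because the median picks the middle of the sorted neighborhood rather than a weighted sum, an isolated black pixel vanishes without blurring the surrounding values.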
@wilsonwilson3674 · 7 months ago
0:10 a common optimization in modern path tracers is Next Event Estimation, which means that a light source is sampled at every ray intersection point before proceeding to the next bounce. The idea is that, in most cases, a point on a surface will either 1) be exposed to at least one light source, or 2) be indirectly lit by a nearby surface that is. It's in a class of techniques called Multiple Importance Sampling, and it's a deeeeep rabbit hole if you wanna fall into it at some point lmao. Not sure how pertinent/interesting this info is to you but I figured I'd toss it your way.
@SomeRandomPiggo · 7 months ago
Wow, this was so much better than I expected it to turn out!
@fbiofusa3986 · 7 months ago
Next video you should train a CNN to take the image and output the denoised image. Training would be really simple, generate a bunch of scenes with noise, save the image, then shoot more and more rays to denoise. Then train a CNN on the low vs high amount of noise. You can do it all on the GPU as it’s just kernel functions and dot products!
@superyu1337 · 7 months ago
That’s what NVIDIA's OptiX Denoiser does afaik
@cinderwolf32 · 7 months ago
Interesting that Nvidia's denoiser was able to completely change the image with the horse!
@stio_studio · 7 months ago
Next time you can use something called Voronoin't. Does what you do on the first iteration but only goes thru the image once
@8AAFFF · 7 months ago
i looked it up and yes its pretty much a better version of the first implementation
@shadamethyst1258 · 7 months ago
Hmm, what you're describing sounds a lot like the Voronoi cell pattern over the nonzero pixels, with the Manhattan/taxicab metric. Reading OpenCV's documentation, what you've been trying to implement can be done with the `dilate` operation, together with some simple masking. Alternatively, you could have taken any blurring convolution filter, then computed `image = image + blur(image) / blur(mask) * mask`, where `mask[x, y]` is 1 when the pixel is black. In both cases you wouldn't need to then normalize the entire image, which I believe is the cause of the weird artifacts you were getting: a black pixel surrounded by white pixels anywhere in the picture would cause the normalization step to divide the brightness of the entire convolved image by 8.
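For reference, here is a numpy-only sketch of the masked-blur variant described above. A plain 3x3 box blur stands in for OpenCV's `dilate` or a Gaussian, and the normalization divides by the blurred count of lit neighbors, so lit pixels pass through untouched; treat the exact formula as one possible reading of the comment, not the author's method.

```python
import numpy as np

def box_blur3(a):
    """3x3 box blur via shifted copies (zero padding at the border)."""
    h, w = a.shape
    p = np.pad(a, 1)
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

def fill_black(image):
    """Replace zero (black) pixels with the weighted average of lit neighbors."""
    mask = (image == 0).astype(float)    # 1 where the pixel needs filling
    weights = box_blur3(1.0 - mask)      # how much lit signal is nearby
    blurred = box_blur3(image)
    filled = np.divide(blurred, weights,
                       out=np.zeros_like(image), where=weights > 0)
    return image + filled * mask         # lit pixels pass through untouched

img = np.ones((5, 5))
img[2, 2] = 0.0                          # one "dead" ray
out = fill_black(img)
```

No global normalization pass is needed, which avoids exactly the whole-image dimming artifact described in the comment.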
@FreakyWavesMusic · 7 months ago
interesting approach, but your gain is too low, please bring your voice up to at least -3 dB
@Tobiky · 7 months ago
looks sick, thanks dude
@cameronkhanpour3002 · 7 months ago
Great video! Cool to see someone making their own denoising algorithm instead of doing the shortcut of importing scikit :). Maybe try running IQA metrics like MSE/PSNR or SSIM to help quantify to the viewers how good your image enhancement is.
@WalnutOW · 7 months ago
Cool. This is kind of like morphological dilation
@Beatsbasteln · 7 months ago
that was extremely fascinating. you're great at visualizing the concepts you wanna describe. however, your voice sounded pretty dull compared to the sound effects. if i were you i'd consider slapping a fast compressor or an exciter on those vocals to bring more speech intelligibility out of the high end
@8AAFFF · 7 months ago
thanks :) i checked out your channel, u have some great audio tips i'll def take into account
@cube2fox · 7 months ago
This gives me an idea: Modern generative image models like Stable Diffusion support "inpainting". That is, they can complete missing parts of a given image. This suggests the diffusion models could simply inpaint all the missing (black) pixels from the noisy image. This would be quite slow but the resulting quality should be very high.
@somdudewillson · 7 months ago
It's generally wayyy more effective to use a much smaller, specialized denoising neural network. However, you are technically correct - generative image models like Stable Diffusion are actually denoising networks that are so absurdly good at their job that they can 'denoise' a high-quality image from literal pure noise.
@cube2fox · 7 months ago
@@somdudewillson Yeah diffusion models should be able to handle much heavier noise than specialized models.
@lyagva · 7 months ago
As I remember, noise appears for a different reason. IRL when light bounces off a mirror it has the same angle before and after. But every other rough object works a bit differently: light bounces with the same angle, but the angle is measured from a rough (really slightly curved) surface, which on approximation makes light bounce in a random direction (the randomness is relative to the object's roughness). As for Ray Tracing/Path Tracing, rendering mirrors is a piece of cake as it requires only one ray shot per pixel. But things get pretty hard when working with rough materials: we have to shoot many, many rays, randomize their bounce angle every time, then find the average color of all rays to get the output. We get the noise exactly because IRL light comes from the light source at pretty high resolution, but in RTX we shoot rays from the camera and can't calculate all of the light falling on an object, so we have to simulate many bounces and approximate the color. (Correct me if I'm wrong)
@sloppycee · 7 months ago
With ray tracing, illuminated points should never be black, since at each bounce you need to calculate each light source's direct contribution by shooting rays at each light. Black noise is typically seen in path tracing, where the stochastic nature of light source direction can result in some points just randomly not receiving a ray from the light source.
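The averaging behavior described in the two comments above can be demonstrated with a toy Monte Carlo pixel. The 50% "ray reaches the light" event is made up for illustration; the point is only that per-pixel noise shrinks roughly as 1/sqrt(rays).

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_estimate(n_rays, hit_probability=0.5):
    """One pixel's Monte Carlo estimate: fraction of rays reaching the light."""
    return (rng.random(n_rays) < hit_probability).mean()

# Pixel-to-pixel noise (std-dev of the estimate across many pixels) shrinks
# roughly as 1/sqrt(n_rays) — which is why low sample counts look grainy.
few  = float(np.std([pixel_estimate(4)   for _ in range(2000)]))
many = float(np.std([pixel_estimate(256) for _ in range(2000)]))
```

With 4 rays the standard deviation is around 0.25; with 256 rays it drops by roughly a factor of 8, matching the square-root law.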
@lyagva · 7 months ago
Dynatron - Throttle Up. I wasn't expecting this song to play in... Well... Any video...
@squirrelcarla · 6 months ago
really amazing, i learned so much from this video, thank you
@AlisterChowdhuryX · 7 months ago
This looks a lot like pull-push (OpenImageIO calls it push-pull), a reasonably common algorithm from 1999, used for denoising and filling holes in textures. You create a mipchain down to 1x1, then merge under until you get back to your original resolution (unpremulting the alpha along the way), the idea being you preserve detail where you have it and fill in the missing data with blurred neighbours.
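A rough 1-D sketch of the pull-push idea described above. It assumes a power-of-two length and an explicit validity mask; real implementations work on 2-D mip levels and carry alpha as the weight instead of a separate mask.

```python
import numpy as np

def pull_push(img, valid):
    """1-D pull-push sketch (length must be a power of two).

    Pull: average valid samples into half-resolution levels down to one pixel.
    Push: fill each invalid pixel from the next-coarser level, preserving
    detail wherever the fine level already has data."""
    if img.size == 1:
        return img
    v = valid.astype(float)
    # Pull: weighted average of each pair, counting only valid members
    pair_w = v[0::2] + v[1::2]
    coarse = np.divide(img[0::2] * v[0::2] + img[1::2] * v[1::2], pair_w,
                       out=np.zeros_like(pair_w), where=pair_w > 0)
    coarse = pull_push(coarse, pair_w > 0)
    # Push: upsample the coarse level, using it only where data was missing
    return np.where(valid, img, np.repeat(coarse, 2))

img   = np.array([1.0, 0.0, 3.0, 3.0])   # second sample is a hole
valid = np.array([True, False, True, True])
out = pull_push(img, valid)               # hole filled from its valid neighbor
```

Valid samples pass through unchanged; the hole is filled from progressively blurrier levels, which is exactly the detail-preserving behavior the comment describes.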
@owencmyk · 7 months ago
Using a shader instead of a convolution would fix the problems with it not looking right. Also can't wait for the ULTRAKILL bot
@FishSticker · 6 months ago
Okay is scratch ACTUALLY FASTER THAN PYTHON or are you fucking with me
@CharlesVanNoland · 7 months ago
It's normal for dark areas (dark because of a lack of light, not because of material color) to be blurrier because they will have less ray intersections. This is the situation with all denoisers.
@kuklama0706 · 6 months ago
Try applying a minimum and then a maximum filter, that's faster than a median.
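A quick numpy sketch of the min/max-filter idea. One caveat: for purely dark ("pepper") pixels it is the max-then-min order (a morphological closing) that removes the specks; min-then-max would remove bright specks instead.

```python
import numpy as np

def _neighborhood3(img, op):
    """Apply a 3x3 reduction (np.min or np.max) around every pixel."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return op(stack, axis=0)

def close3(img):
    """Max filter then min filter (morphological closing): removes dark specks."""
    return _neighborhood3(_neighborhood3(img, np.max), np.min)

img = np.ones((5, 5))
img[2, 2] = 0.0        # isolated dark speck
out = close3(img)
```

Each pass is a simple neighborhood reduction rather than a sort, which is why this tends to be cheaper than a true median filter.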
@memetech- · 7 months ago
Can’t you just force alpha max post blur / use average of non-black neighbours?
@notapplicable7292 · 6 months ago
Hand-written techniques are great, but this is one of the few times AI techniques are genuinely unparalleled. I highly recommend looking into even the most basic AI denoising techniques; they are absurdly effective
@mikkelens · 7 months ago
write it in scratch/js if that's so much faster lol. Even Lua would be a giant leap for making it "really fast". You can do an order of magnitude more work in the same time/less memory, especially in a compiled language. If you're using indices for access/mutation of arrays then Rust is a decent choice for this bc it looks similar to Python (and can interface easily with your actual scripts).
@mehvix · 7 months ago
v solid video. small nit: render matplotlib w/o axes
@theshuman100 · 3 months ago
love the video, but i feel like someone who has no experience with the 3D pipeline shouldn't be optimising it. you get weird assumptions, like that noise in raytracing only comes in black
@raconvid6521 · 7 months ago
0:21 From experience this might not be the case, since noise can still be seen with ray-marching without any reflections. My theory is that the noise is actually caused by the bit limit essentially giving objects a rough surface, so some rays get stuck in tiny crevices. I haven't looked into Blender's source code specifically, so I'd take this with a grain of salt.
@that_guy1211 · 7 months ago
ah yes, there are images that are made so that they "poison" AI image generators, now with this, we can noise, and then de-noise images to un-poison our AI image gens! Great coming 8AAFFF!!!
@gorgolyt · 7 months ago
Nvidia's denoisers use deep learning trained on pairs of noisy images and original images; you ain't gonna outperform those. Your account of raytracing noise seems somewhat erroneous or incomplete (to my limited understanding). The noise usually comes from the finite random sampling used to scatter rays. It's not about failing to hit a light source. In fact, if you fail to hit a light source, that's information you want to use rather than try to repair.
@deeepanshhh · 7 months ago
30 seconds into the video and i liked and subscribed at the same time, great video 🙌....
@Povilaz · 7 months ago
Very interesting!
@davutsauze8319 · 6 months ago
8AAFFF: I fear no man, but that thing... *whatever demon is editing their videos* it scares me.
@honichi1 · 7 months ago
i mean it probably can't keep up with Nvidia's denoisers in Blender, but this looks better than a lot of other stuff i've seen
@mysticdraguns · 7 months ago
Awesome video, we love you, Workin
@pax5072 · 7 months ago
Nvidia might be exaggerating their image, they're known for doing that.
@mirabilis · 6 months ago
My eyes see noise in dark areas IRL.
@rodrigoqteixeira · 7 months ago
Is it my phone or does the video have no sound??
7 months ago
now in a video
@TeamDman · 7 months ago
Nice sfx!
@besusbb · 7 months ago
lol
@adansmith5299 · 7 months ago
"raytracing lore" lmao
@xskii · 6 months ago
1:42 tbh idk either editor
@legreg · 6 months ago
How not to make a denoiser :D
@ThylineTheGay · 7 months ago
Amazing 'neighbours having a party at 11pm' vibes to the music 😅
@8AAFFF · 7 months ago
Yeah, if you played Buckshot Roulette it's also similar
@bubbleboy821 · 7 months ago
I wish you would have gone into convolution at 3:27! Maybe make a separate video on those?
@int16_t · 7 months ago
How do we deal with fireflies though?
@8AAFFF · 7 months ago
if i could detect them (maybe using some algorithm to detect very rapid brightness changes) i could add them to the "mask" and they would be filled in. ur right tho, i didn't think about them
@hwstar9416 · 7 months ago
why are you even using python?
@8AAFFF · 7 months ago
Thru PyTorch with GPU, so it's technically running on C++
@marcellonovak7271 · 7 months ago
give your editor a raise
@8AAFFF · 7 months ago
i am the editor thanks XD
@Fatherlake · 6 months ago
i like the results, the slight blur makes it look dreamy
@Leo_Aqua · 7 months ago
Very nice video. I might try this too
@im-nassinger · 7 months ago
so cool
@roborogue_ · 7 months ago
this looks so cool
@roborogue_ · 7 months ago
this happens to be a random interest of mine and it’s very cool to come across it being covered so thank you
@tonas3843 · 7 months ago
i read the title as "coding is a really fast de-noising algorithm" and it kinda makes sense
@adicsbtw · 7 months ago
I imagine it's already been implemented, as it would just make sense to my brain, but could you not take rays that bounce off into the void and perform some check to see what percentage of light hitting that spot would bounce toward the last surface it hit, in the direction the ray came from? Or is that an operation that's just too expensive to compute in a reasonable amount of time? Perhaps if you could bake some information, it might be easier to perform, and could help with realtime raytracers by accepting a bit of extra error in exchange for possibly significant quality improvements at similar sample counts. Also, most raytracing denoisers have access to far more data than just the color: for example, they usually have access to a normal map of the entire camera view in tangent space (relative to the camera's perspective), so you know roughly where the edges of objects should be, which helps make object differentiation much cleaner. There's also usually some depth information which helps with this as well
@VioletGiraffe · 7 months ago
This is amazing, your 3rd version works so well, I'd never guess such a simple algorithm (conceptually) would be so good. Of course, nowadays image generation with neural networks is all the rage in tasks like this.
@alexdefoc6919 · 7 months ago
Question : What if we cast shadows and invert the colors? Basically light everywhere and inverse?
@AntonioNoack · 7 months ago
You can happily try that, but you'd need the maximum value for light at a point. Unfortunately, you can imagine a theoretical focusing lens around every single point individually, with a set of mirrors if it's on the backside. Every pixel (on its own) could be extremely bright. That would always make the image way too bright, and we'd have to apply denoising in reverse: reducing too-bright pixels.
@KX36 · 7 months ago
you invented CSI's "enhance" function