
OpenGL - deferred rendering 

Brian Will
32K views

Code samples derived from work by Joey de Vries, @joeydevries, author of learnopengl.com/
All code samples, unless explicitly stated otherwise, are licensed under the terms of the CC BY-NC 4.0 license as published by Creative Commons, either version 4 of the License, or (at your option) any later version.

Published: Oct 5, 2024

Comments: 27
@krytharn · 5 years ago
Good video. Quick note: to solve the issue of rendering the light volume when the camera is inside it (7:55), the standard solution is to only render the back faces (and cull the front faces).
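A minimal sketch of that fix, assuming the light volumes are drawn as closed meshes (the draw call here is hypothetical, not from the video):

```cpp
// Hedged sketch of the back-face trick: culling front faces keeps the
// light volume's pixels covered even when the camera is inside the volume.
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);        // rasterize only the volume's back faces
drawLightVolumeSphere();     // hypothetical: draw one light's bounding mesh
glCullFace(GL_BACK);         // restore the default for regular geometry
```

Depth testing usually needs adjusting for this pass as well (e.g. glDepthFunc(GL_GEQUAL) or disabling the depth test), since the volume's back faces can sit behind scene geometry.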
@CodeParticles · 2 years ago
@Brian Will, with all due respect, I apologize for being 3+ years too late to comment on this terrific video. But according to Joey de Vries, one of the disadvantages of deferred shading is that "deferred shading forces us to use the same lighting algorithm for our scene's lighting" (mentioned about halfway down the disadvantages section of his deferred shading chapter). However, it can be alleviated by including more material-specific data in the G-buffer. I'm running into this tricky situation: I have a ground object that I don't want affected by specular lighting, but all my teapot objects are fine with specular. I'm not sure how to allow specular on only the teapots and not the ground in the final buffer...
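One way to read the "more material-specific data" suggestion, as a hedged sketch: write a per-material specular mask into a spare G-buffer channel during the geometry pass, then scale the specular term by it in the light pass. The uniform and texture names here are hypothetical, not from the video or learnopengl.com:

```glsl
// Geometry pass fragment shader (sketch): uSpecularEnabled = 0.0 for the
// ground material, 1.0 for the teapots, so the mask rides in gAlbedoSpec.a.
layout (location = 2) out vec4 gAlbedoSpec;
in vec2 TexCoords;
uniform sampler2D uAlbedoMap;
uniform sampler2D uSpecularMap;
uniform float uSpecularEnabled;   // hypothetical per-material switch

void main() {
    gAlbedoSpec.rgb = texture(uAlbedoMap, TexCoords).rgb;
    gAlbedoSpec.a   = uSpecularEnabled * texture(uSpecularMap, TexCoords).r;
}
```

In the light pass, multiplying the specular term by gAlbedoSpec.a then zeroes out specular on the ground while leaving the teapots untouched, with no branching.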
@arsnakehert · 2 years ago
Damn, this was a great video. I like how you show every relevant part of the code in its due time; very useful even as a kind of reference.
@ramoncf7 · 3 years ago
Thank you for the whole openGL series, you've helped me a lot to understand many concepts.
@jeroen3648 · 3 years ago
Thank you for this video, it really helped me understand the differences between deferred rendering and forward rendering
@Mcs1v · 5 years ago
Nice and detailed video! ;) You can save a lot of memory and memory bandwidth if you don't store a separate "position layer" in your G-buffer, because you can reconstruct the positions from the depth buffer (which you write and use anyway).
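For reference, a sketch of the reconstruction @Mcs1v describes, done in the light-pass shader; the uniform names are assumptions, and it presumes the default [0, 1] depth range:

```glsl
// Rebuild view-space position from the hardware depth buffer instead of
// sampling a dedicated 32F position texture.
uniform sampler2D uDepthTex;     // the G-buffer's depth attachment
uniform mat4 uInvProjection;     // inverse of the camera's projection matrix
in vec2 TexCoords;               // fullscreen-quad UVs in [0, 1]

vec3 viewPositionFromDepth(vec2 uv) {
    float depth = texture(uDepthTex, uv).r;                    // [0, 1]
    vec4 ndc  = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);  // to [-1, 1]
    vec4 view = uInvProjection * ndc;                          // unproject
    return view.xyz / view.w;                                  // perspective divide
}
```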
@briantwill · 5 years ago
Does the extra fragment work outweigh the bandwidth savings? I'd think for high-end rendering on higher-end hardware, the computation cost would outweigh the bandwidth savings.
@Mcs1v · 5 years ago
@@briantwill The main problem with deferred rendering is the memory bandwidth cost, which is huge. Doing some math on the GPU usually costs less than hammering memory (there are exceptions, ofc ;)). Reconstructing the position from depth is just one matrix multiplication, and it's faster than sampling a 32F texture. On the other hand, you need to write the texture (which costs more than a sample), and for position this implementation does it twice (once for the depth buffer and once for the position buffer). You can do the same with the normals: convert them to screen space and save more bandwidth. With screen-space normals you only need the X/Y values, so you can drop the Z channel and gain extra precision; the trade-off is that you lose the ability to store normals for polygons that face away from the camera.
@eddek6141 · 4 years ago
@@Mcs1v nice!! Thx
@user-dh8oi2mk4f · 3 years ago
@@Mcs1v how can you store a normal with only 2 values?
@Mcs1v · 3 years ago
@@user-dh8oi2mk4f Hey! You can convert it to screen space (in screen space you only need the vertical and horizontal components), and you can convert it back to a 3D normal afterwards.
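A sketch of that encoding for view-space normals: store X and Y, rebuild Z from the unit-length constraint. As @Mcs1v notes above, the sign of Z is lost, so normals pointing away from the camera can't be represented:

```glsl
// Geometry pass: write only two channels of the normalized view-space normal.
vec2 encodeNormal(vec3 n) {
    return n.xy;                   // z is implied by length(n) == 1.0
}

// Light pass: reconstruct z, assuming the surface faces the camera.
vec3 decodeNormal(vec2 enc) {
    float z = sqrt(max(0.0, 1.0 - dot(enc, enc)));
    return vec3(enc, z);           // z >= 0: back-facing normals are lost
}
```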
@kampkrieger · 2 years ago
Oh my god. This video is so crisp and clean-cut, all raw information. True genius!
@movax20h · 5 years ago
Just a question about 5:40 (I am not an OpenGL expert): does it make a difference to use glBlitFramebuffer here (with GL_NEAREST and identical source/destination dimensions, which basically disables resizing), versus glCopyTexSubImage2D or glCopyImageSubData? I think the primary intent of glBlitFramebuffer is resizing buffers and converting texture formats. I know for a fact that on some older hardware and older drivers glBlitFramebuffer can be slower.
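For context, the two paths being compared would look roughly like this (the buffer and texture names are assumed); glCopyImageSubData is core since OpenGL 4.3 and, unlike the blit, cannot target the default framebuffer. Which one wins is driver-dependent, as the comment says:

```cpp
// Option A: 1:1 blit of the depth attachment; GL_NEAREST plus identical
// rectangles means no scaling or filtering actually happens.
glBindFramebuffer(GL_READ_FRAMEBUFFER, gBufferFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);    // default framebuffer
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_DEPTH_BUFFER_BIT, GL_NEAREST);

// Option B (GL 4.3+): raw copy between two depth textures of identical
// format; works only texture-to-texture, not to the default framebuffer.
glCopyImageSubData(gDepthTex,   GL_TEXTURE_2D, 0, 0, 0, 0,
                   dstDepthTex, GL_TEXTURE_2D, 0, 0, 0, 0,
                   width, height, 1);
```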
@keptleroymg6877 · 1 year ago
Because it's hard to find what I need
@raghul1208 · 2 years ago
Excellent
@isaacyoungyxt · 28 days ago
Well explained! Thank you so much!
@franesustic988 · 4 years ago
Amazing video! I do have a funny hypothetical question, though. Let's take a fixed-camera system with 2D backgrounds (such as RE2 or FF9). How would someone go about having a pre-rendered G-buffer, so that the first pass skips the static elements and only updates the buffer where dynamic 3D objects are found?
@oonmm · 3 years ago
Sorry for a very late answer, and also for the fact that I have never done this. But you could simply render the static background to the screen buffer, then render the 3D objects to a texture, and finally draw that texture to the screen buffer on top of the background.
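A rough sketch of that compositing order, with every buffer and helper name hypothetical; for correct occlusion, the dynamic pass would also need the background's pre-rendered depth seeded into its depth buffer:

```cpp
// 1. Background: draw the pre-rendered image straight to the screen.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawFullscreenQuad(backgroundColorTex);            // hypothetical helper

// 2. Dynamic layer: render only the moving 3D objects into an offscreen
//    FBO whose depth buffer was seeded with the pre-rendered scene depth.
glBindFramebuffer(GL_FRAMEBUFFER, dynamicFBO);
glClear(GL_COLOR_BUFFER_BIT);                      // keep the seeded depth
drawDynamicObjects();                              // hypothetical helper

// 3. Composite: alpha-blend the dynamic layer over the background.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(dynamicColorTex);
glDisable(GL_BLEND);
```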
@gnorts_mr_alien · 2 years ago
How would light occlusion work with this? If there is a model between the light and the target model, that light's contribution might be zero, but there is no way to find that out from this setup, I presume. So is that the job of a separate shadow pass? Thank you for the series, by the way; amazing content.
@Mazarhan · 4 years ago
excellent
@andreafasano4755 · 5 years ago
Does it make any sense to have another G-buffer texture that stores values indicating which shader to use for each pixel? That way it would be possible to use multiple fragment shaders, right?
@briantwill · 5 years ago
Yeah, you might put more pixel info in the G-buffer, including a value that governs which shader should process each pixel. You wouldn't necessarily need a separate G-buffer, but rather an added attachment on the same G-buffer, or an added channel on an existing attachment. Keep in mind, though, that skipping over code with a branch doesn't really spare us the work on the GPU except when all 64 cores in a group happen to skip the code; if just one core doesn't skip it, the other 63 cores have to wait. So your idea is doable, but it requires branching in the light-pass shader and so has this performance drag. It generally won't be quite as expensive as processing all pixels with all N of your light-pass shaders, but it'll often be close.
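As a concrete sketch of what that reply describes (all names hypothetical): a material ID written to a spare channel in the geometry pass, branched on in the light pass. Any 64-wide core group whose pixels mix IDs pays for both paths:

```glsl
// Light-pass fragment shader sketch: choose a lighting model per pixel.
uniform sampler2D gMaterial;   // hypothetical attachment; ID in red channel
in vec2 TexCoords;
out vec4 FragColor;

void main() {
    int materialId = int(texture(gMaterial, TexCoords).r * 255.0 + 0.5);
    vec3 color;
    if (materialId == 0)
        color = shadeBlinnPhong(TexCoords);   // hypothetical helper
    else
        color = shadeToon(TexCoords);         // hypothetical helper
    FragColor = vec4(color, 1.0);
}
```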
@pytchoun140 · 3 years ago
Hello, can you share the source code?
@tezza48 · 5 years ago
Captions look like this. Good video though :)
@zentyrant · 3 years ago
Came from annie's video
@Cheesecannon25 · 4 years ago
9:30 Did you just mistake something 2D for 3D?