The editing in this video is so subtly incredible. The bent pins on the CPU and never ending GPU killed me. Well done Philip. Really a masterclass in sneaking jokes and fun details into the video.
Thank you for secretly releasing probably the most easily understandable guide to AA in graphics, and how to judge the options by actual tangible results and not data on a spreadsheet.
More people need to see that bottleneck graph section. My god it's irritating how many people treat bottlenecking like it's literally Satan himself, when the reality is your system is pretty much bottlenecking most of the time, and it just depends on what you're doing/playing.
A chain is only as strong as its weakest link - if that weakest link is 4x the strength you will ever need, it's still technically the weak point, but it's down to personality how much that bothers you 👌
Most of the time the system is not bottlenecked but all resources are underutilized. The graph shown is not a very good example of bottlenecking, because there is literally no perceivable difference between let's say steady 144fps and 500fps (for many the lower bound is even 60fps). The system is doing pointless extra work just because it can. When people talk about bottlenecking, it refers to a perceivable effect caused by the disparity in performance levels between different components.
@@romaliop Strongly disagree. Bottlenecking is completely separate from what you personally deem to be perceivable. That's a ridiculous statement and has never been true.
Having read about AA over the last 20 years, this is the best overview video of the current tech. The only thing missing is talking about the effect on moving objects (image stability)
I never get a chance to post a comment so early. So I'm going to use this moment to say I really appreciate all your work Phil. I don't even play CS:GO anymore, but I'll still watch your videos for their informative and well presented design 😌
I second this. I follow for Phil, almost anything he makes. His voice is kinda comforting. Not to mention his Helluvalot and sleek yermom jokes :) Cheers Philip! Wish you all the best!
Yep, it's cool that you can get nicer-looking YT videos on a 1080p screen by selecting 1440p. I already knew that, but great that Phil noted it at the beginning of the video! PS: why the fuck have you gotten a 3090 but not a better monitor for it... 1080p 144Hz ain't bad, but paired with a 3090...
I've known this trick for a while. I've been playing a handful of games at 1.75x resolution on my 1440p screen and the difference is astonishing. Goodbye TAA - DLSS + DSR is awesome and doesn't blur anything.
@@jesse291 Good question, real-time live-rendered ray tracing in a beta animation program is the answer (also because it was the only 30 series card being sold locally) you can pull about 10fps at 480p which is adequate, it takes a second when you stop moving or make a change for the scene to get a clear image. The monitor can wait till I can save up again since I already have one.
Easy to understand and factually incorrect on several counts. You can't do MSAA with 2 samples per pixel, it's only 4, 16, 64, etc. YouTube looks better in 2160p than in 1080p on a 1080p screen because the bandwidth YouTube uses for 1080p is too low to produce decent image quality at that resolution. Streaming in 2160p and then downscaling gives better image quality because the bitrate for 2160p streams is higher, not because downsampling magically makes the quality better. This is made by someone who barely understands how to use a computer, for people who barely understand how to use a computer.
@@mopilok okay but then why make this whole video instead of simply saying that 4x MSAA gives better image quality than no anti aliasing, which is common knowledge
One thing to note about the "sweet spot" is GPU latency. When approaching 100% GPU usage, the GPU gets a bit clumsy and input latency goes up. So even though the sweet spot has the same FPS, it's actually a less responsive experience that can have a negative impact on your skills in a competitive game.
Yes, I experienced the same. Not too good for competitive shooters, but fine for pretty much anything else. I'm actually using it for pretty much any other game my system can handle at 4K on a 1440p screen. For me, this was most visible playing Mafia 3. The game looks super 💩 at 1080p and 1440p but stunningly good at 4K, even on a 1440p screen. It's worth trying on many engines if the system can handle it. Mostly it's also worth lowering other graphical settings, as the better edges from downsampling are worth more than e.g. higher texture quality, and you can almost always lower or even get rid of any AA setting.
CS:GO does not have a sweet spot on higher-end PCs right now. I don't have an fps cap and I can't get more than 50% GPU and 20% CPU usage. The Source engine just can't utilise the full potential of modern hardware.
Been running above native res for a long time. I do wish LCDs could act like CRTs where display resolution didn't matter, though. Fancy upscaling wasn't ever necessary on CRTs because most resolutions looked equally as "good" in terms of how the display handled it.
once you have an 8K screen the pixel canvas will be so dense that you will effectively be able to use any resolution with very little loss in quality. It won't be as perfect as CRTs were, but good enough that you won't notice the blurriness.
BOAH ... this talk about CRTs is wrong on so many levels! Please work on your attention span and watch more than the first 3 minutes of the video.
1. NONE of the issues super-resolution/downsampling/supersampling fix can be addressed by a CRT!
2. If your CRT is blurring the picture it is simply either a very crappy model or running at the ragged edge of its pixel clock ... if not outright overclocked (been there, done that, killed a 17" CRT doing so ... refresh rate was more important to me than sharpness back in the day ... everything below 100Hz while gaming gave me dizziness ... everything below 85Hz gave me downright vomit reflexes ... like a bad VR setup).
Back at the end of the CRT era there were very good 19" CRT monitors from Samsung (and Iiyama and probably others, which were even more expensive ... outside my budget at the time) that were laser sharp at 1280x1024 at 85Hz. Running the same monitor at 1280x1024 at 100Hz, by contrast, produced a blurry mess which couldn't even hold its geometry over the whole screen and had to be degaussed every couple of hours when run that close to the edge. I ran that CRT at 1024x768 at 100Hz and used the freed GPU power for 2x or 4x supersampling AA (which in effect IS downsampling ... done since 1999 ... MSAA wasn't even a thing back then) ... no dizziness and WAY better image quality than 1280x1024. On a CRT you get the same jagged edges, shimmering/interrupted small geometry and blurred-out alpha details on distant textures or at steep viewing angles! If you want the effect of a CRT that was overclocked until it almost shits itself, use FXAA ...
Also the fact that 4K is a higher bitrate than 1080p on YouTube, so that's probably the main factor in it looking much better than YouTube 1080p on a 1080p display.
@@3kliksphilip Incredibly, I'm also having this issue, but I think it's an issue with the Revision mod rather than the engine. The same frame drops occur in the final area of Area 51, when you're looking at Bob Page from the opposite side of where you entered the room, on a balcony. It's only in that specific area, looking in that specific direction, where I get nearly 10 FPS, which makes me think there's some specific game object being rendered from that point which is causing the frame drops, and the same is probably true in the MJ12 ocean lab level as well.
@@3kliksphilip Sounds like a shader is doing heavy per-pixel calculations over the surface (a water shader maybe?) and, as the game is old, it isn't using modern optimisations to render fast enough, so the performance hit grows steeply when you bump the resolution (and with it the number of pixels to render).
@@3kliksphilip Well, I'd also say that old game engines use old graphics APIs such as DirectX 8 or even 7, and older OpenGL versions. Not counting that most old games take advantage of only one CPU core (CPU frequency matters more than core and thread count), which is why people want source ports of old games, so they can take advantage of new graphics technologies and run better. Plus probably loads of other code that wasn't written back then because it just wasn't needed, or nobody thought it would make a difference in the future. Not to mention how CS:GO's performance would look if it used the DirectX 11 API instead of 9. Then there are certain games and engines that are mostly CPU dependent (CS:GO is a good example) unless you set a resolution high enough to force them to use your GPU power, which is pretty much explained in the graph you showed in this video.
One of my favourite things in Titanfall 2 was the addition of dynamic temporal supersampling based on GPU load and frametimes, and I still wonder why more games don't add such an amazing feature.
@@shakal_ it was the dynamic resolution setting games have already had, except it also downsampled when you had frame-rate headroom - literally the normal process, but the slider can go past 100%
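To make that concrete, here's a rough sketch of how such a controller could work - the target frame time, damping factor and scale limits are made-up numbers, and this is definitely not Respawn's actual code, just the "let the slider go past 100% when there's frame-time headroom" idea:

```python
# Hypothetical dynamic-resolution controller: not any game's real implementation.
TARGET_FRAME_MS = 1000.0 / 144   # assumed target refresh, e.g. 144 Hz
MIN_SCALE, MAX_SCALE = 0.7, 2.0  # allow supersampling above 1.0 when there's headroom

def update_render_scale(scale: float, gpu_frame_ms: float) -> float:
    """Nudge the resolution scale toward whatever keeps the GPU at the frame budget."""
    headroom = TARGET_FRAME_MS / max(gpu_frame_ms, 0.1)  # >1 means the GPU finished early
    # Move gently toward the scale the headroom suggests (damped to avoid oscillation).
    suggested = scale * headroom ** 0.25
    return min(MAX_SCALE, max(MIN_SCALE, suggested))

# Example: plenty of headroom at 4 ms/frame -> scale creeps above 1.0 (downsampling),
# a heavy scene at 10 ms/frame -> scale drops back toward native or below.
scale = 1.0
for frame_ms in [4.0, 4.0, 4.0, 10.0, 10.0]:
    scale = update_render_scale(scale, frame_ms)
    print(round(scale, 3))
```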
OMG! you just changed my life! So, I just recently bought the bendable $2,000 Corsair OLED gaming monitor and was considering sending it back because the text in windows was very pixelated. It's amazing in games and for HDR video, but in windows the text just doesn't look good. After watching this video I enabled VSR and then changed my resolution from 1440p to 2160p and the text looks so much better! This is amazing. Thank you so much for making these types of videos.
I clicked on the 1080p quality option and it turned your video into a slideshow. Such an amazing contrast! I suddenly had time to really appreciate every single one of the frames; it really improved my enjoyment of this content :D
19:20 - at this point I'll add a bit of information: the tree becomes more solid at longer distances because of how the in-game shader works with mipmaps. A lower mip (a lower-resolution alpha mask for the leaves, the same as lowering the resolution of any texture map) means the alpha mask becomes more solid and you can't see through it any more. This happens in every game - even now we always use mipmaps to save resources at a distance. That, plus the LoD of the mesh and the render resolution, gives this kind of blocky mess. Good video btw
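A toy illustration of that mipmap point - nothing to do with the actual game's shader, just the averaging-plus-threshold idea with a made-up 1D leaf mask:

```python
# Toy model of why alpha-tested leaves turn into a solid blob at a distance.
# 1 = leaf texel, 0 = gap you can see through.
mask = [1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0] * 4

def next_mip(level):
    # Each mip level averages pairs of texels from the level above.
    return [(level[i] + level[i + 1]) / 2 for i in range(0, len(level), 2)]

def alpha_test(level, threshold=0.5):
    # The cheap cutout shader: at or above the threshold = fully opaque.
    return [1 if a >= threshold else 0 for a in level]

level = mask
for mip in range(1, 5):
    level = next_mip(level)
    print(f"mip {mip}: coverage={alpha_test(level)}")
# Mip 0 still has see-through gaps, but from mip 1 onward every averaged alpha
# lands at or above the 0.5 threshold, so the whole tree passes the test and reads as solid.
```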
11:08 a few people have done some fairly scientific latency testing in a number of games and generally found the best performance when GPU load is kept to around 80% or below; above 80%, latency tends to go up faster than FPS goes down. ensuring that the GPU does not spend a lot of time at or above 80% will also greatly improve noise and thermals.
Nvidia Reflex and AMD Anti-Lag aim to reduce or remove the increase in latency when you're closing in on max GPU usage. I have used and tested NV Reflex and can say it works in the supported games. I don't know much about how well AMD's solution works (though they were the ones who popularised the idea first).
No such thing. There's a ton of variables: the specific GPU, CPU and rest of the PC used, the specific game (and version, software), the case used, fans, room temperature, etc. And even then, what do you mean by "best performance"? Yeah, no such thing.
They meant best performance as in highest FPS/lowest latency, as there are certain games where the latency increases when you're GPU bound. Maybe their choice of wording wasn't ideal, but the overall context makes sense.
This was interesting and educational. That said, it does seem only really relevant/useful for people still on 1080p monitors, or maybe 1440p. I have a 4K monitor and can't see myself ever using this, since downscaling from 8K just seems impractical for current GPUs even if you wanted to - 4K is already hard enough. Might be relevant in the future though.
Even for you, Nvidia's Deep Learning DSR (DLDSR) could help, since it can take your 4K (or slightly-below-4K) resolution and pump it up to 6K to downscale from, at fairly little performance impact. :)
I wish I'd had this information and my current PC 10 years ago - better late than never! I stopped playing CS a long time ago, but I still watch your content for entertainment purposes. Your videos are so well thought out, you never fail to impress.
WOW! How did I not know this? The image quality looked much better - like running a game at native. Thank you! YT vids look better now. I suppose I should have known, though; it's the same principle as recording and mixing audio at a higher bit depth and rate only to downsample the master. I also want to add that one reason for going with lower-quality graphics in FPSs is to reduce the amount of distractions.
The amount of work you put in your videos is as amazing as always. Glad to hear the old MVP anthem for the sponsor though, can't wait for the next case opening millionaire episode and valve adding your music kit.
Man, this reminds me of the PS3/360 era of gaming where PC hardware was so advanced that you could run modern games with supersampling/downsampling. I played the original dishonored at basically 4k (Using Sparse Grid SuperSampling) and it looked absolutely incredible. Honestly it probably looked better on that 1200p panel than it does playing at native 4k on my TV lol.
CSAA was actually a more optimized form of MSAA: it stored a reduced number of color samples compared to coverage samples (coverage being how much each color sample affects the pixel), so you could have much smoother edges. And even though it could only have something like 4 color samples max on 16x CSAA, for example (I don't remember the exact number, unfortunately), it yielded performance comparable to 8x MSAA. Unfortunately, support was dropped when NVIDIA's Pascal architecture came around (GTX 1080 Ti included). I remember playing around with CSAA in Half-Life 2, then being confused about why the option was gone when I got my 1080 Ti :(
It doesn't play nice with the architecture. You *can* still enable it through Nvidia Inspector, but it'll crash without a compatible GPU. It also doesn't offer tangible benefits over what we see as 8x MSAA on a current-gen GPU, which is actually not the same algorithm as what 8x MSAA used to be (and this goes for all GPU vendors).
I really appreciate these long, well explained, well edited, and well written videos. This one in particular is such a quality video. I never really knew what bottlenecking was either, so your explanation really helped me finally understand it. Thanks for the great lunchtime video!
I don't do a lot of gaming but I make digital drawings; when I make an illustration that is meant to be in 1080p, I still make the canvas in 8K and then export it in 1080p because I thought it looked nicer. Glad to see I am not crazy
The thing is that what you seem to be looking for is simply texture multisampling. Supersampling is pretty much considered a sin for real-time renderers. Instead, anisotropic filtering does exactly what you are looking for, and really, really well. But devs usually implement the alpha part the cheap way: with thresholding there's no need to mess with blending fragments in the correct order. This obviously results in aliasing near these texture holes. Now, by ray-tracing the primitives containing transparent parts on top of the opaque (rasterized) ones, we obtain the same level of quality as supersampling (texture-wise) for much, much cheaper. Ray-tracing is quite a fancy & easy solution to alpha blending. There is some rewrite potential in these parts of legacy renderers, but gamers need to actively ask for it. The superior stage is shader multisampling altogether, which is pretty much indistinguishable from supersampling. There is some work done on locally increasing the sample count, but at the end of the day this is really similar to running the same shader code multiple times per pixel.
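To show why the thresholded alpha aliases while blended texture samples don't, here's a deliberately tiny sketch of one pixel sitting on a leaf edge - the 30% coverage and the 8-sample pattern are hypothetical numbers, not any engine's code:

```python
# Toy pixel whose footprint is 30% leaf, 70% hole in the alpha texture.
def alpha(u: float) -> float:
    # stand-in texture lookup along the pixel footprint
    return 1.0 if u < 0.30 else 0.0

# The cheap way: one lookup at the pixel centre, hard threshold -> all or nothing.
thresholded = 1.0 if alpha(0.5) >= 0.5 else 0.0

# "Texture multisampling": several lookups across the footprint, blended together.
samples = [alpha((i + 0.5) / 8) for i in range(8)]
blended = sum(samples) / len(samples)

print(thresholded)  # 0.0  -> the leaf edge vanishes from this pixel entirely (jaggies)
print(blended)      # 0.25 -> a partially covered, smoothly shaded edge
```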
I often return to this video, and I don't really play CS:GO; this video explains AA and downscaling so well that anything else sounds confusing and I need THIS video to remind me how they work.
I've been playing Dead Space 2 on PC again, and the AA method used back in 2011 is terrible, while forcing it in the control panel hits the frames way too hard. I ended up just enabling super resolution, and I'm rendering the game at 4K and downscaling to 1080p. It doesn't look as crisp as forcing 8x MSAA, but it looks much better than native and still holds 165fps.
@@myztklk3v I'm on the opposite side of that. It's been about a decade since I played with a friend (I don't remember if we did 1 together, or if it's just single player until 2), so I'm holding off until I see the remake. Just like with Bloodborne's remake (or it could be Bloodborne 2, but considering it's being made by Bluepoint and most likely coming to PC, I think it's a remake), I don't want to complete the game right before the new one comes out, so that I don't immediately know what to do and have a bit of time to remember/wander around.
I make use of the downsampling effect for JPEG images too. I realized that when targeting the same image file size it is better to a) use a higher image resolution with lower quality than b) a lower resolution with higher quality, because the scaling algorithms used when presenting the image in a viewport smaller than the image usually yield better results than B. Only recently I discovered that A can give even better results when a sharpening filter is applied prior to saving.
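If anyone wants to try the comparison themselves, something like this works with Pillow - the filename, viewport size and quality values below are placeholders you'd tune until both JPEGs land at roughly the same file size:

```python
# Rough sketch of the A/B test using Pillow (pip install Pillow).
from PIL import Image, ImageFilter

src = Image.open("photo.png").convert("RGB")  # placeholder input image
view = (1280, 720)                            # the size the image is actually displayed at

# a) full resolution, low JPEG quality, sharpened before saving
a = src.filter(ImageFilter.UnsharpMask(radius=2, percent=80))
a.save("a_highres_lowq.jpg", quality=55)

# b) downscaled to the viewport, higher JPEG quality
src.resize(view, Image.LANCZOS).save("b_lowres_highq.jpg", quality=90)

# Compare what the viewer ends up showing: both at viewport size.
Image.open("a_highres_lowq.jpg").resize(view, Image.LANCZOS).save("a_displayed.png")
```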
Testing my settings was a hell of a journey, as my game crashed every single time I changed the resolution, as well as sometimes when I changed MSAA settings. I don't have a potato PC though.
Tried to explain this to my friend and he called me a loony conspiracy theorist. He was an IT specialist with experience in game development; goes to show a degree doesn't give you intelligence. (CPU/GPU limited part)
An "IT specialist with experience in game development" should definitely understand what a bottleneck is, maybe you two had some misunderstanding going on..
I have also noticed that DLDSR also helps get rid of shimmering effects as well. I have been sort of using it as an alternative to TAA in some games where it is super blurry and has ghosting.
The thing that got me to appreciate just how good downscaling can be is VR. I upgraded the hell out of my PC so it could handle VR - I went overboard and can now play VR games at 1.9x the internal resolution of the headset. If you've got the extra frames above your refresh rate to spare, then definitely try squeezing some more quality out.
So much research and effort in one video. It's great to see people being excited about such niche things. My favorite video of yours so far man, keep up the good work!
I think the future is in combining: downscaling for geometry and textures, upscaling for ray tracing, SSAO and so on. Also, 4K and 5K displays will solve this problem by running at native resolution.
I imagine what DLDSR is doing is rendering at the resolution they claim it's rendering at, upscaling using their AI magic to the resolution they say it "looks like", and then downsampling back to your monitor's resolution.
Mr kliks man, at this point you make a lot of us realize what we can do with our PCs, but I think you're at the point where you should develop (or partner on) a piece of software that tells us what quality settings to play each game with. You could start with the easy settings, and as you pin down the more important qualities you could add those to the software. I can help to a certain extent, but most of your fans trust you way more than most creators, and I think you have a future in this technology.
Probably buried, but I just wanna say I appreciate this video even though I don't play CS:GO. I have been using your method to play competitive Rocket League downscaled from 4K to 1080p for almost a year now. As Rocket League is light to run, it is very easy to still push 300 fps with this method. The game looks so much better and I feel I have a significantly better understanding of downfield opponents' orientations and movements. You're the goat, ty!
Wow, this explains why I always had the impression that watching videos in 4k DOES look better on my 1440p monitor than watching them in 1440p. I always thought this has to be a placebo effect, though, so it's nice to know it's real.
A good 1440p video will be created from the downscaled 4K version though, and hopefully with a downscaling algorithm that is at least as high quality as what a realtime downscaler would use. If you are talking about YouTube videos though... maybe YouTube doesn't use a very high quality downscaler. They have to process massive amounts of videos, so who knows.
@@keithwilliams2353 the video is not actually being downsampled when you play it above your monitor's resolution. It just squashes the video into whatever window size it's given, and then it's usually not filtered at all unless the video driver has something specifically designed for that. If you play a video above 1080p on a 720p display, thin lines with harsh contrast are usually going to be massively aliased. It's especially noticeable when somebody records a CRT monitor in a video: in most cases it'll look great and perfectly crisp, but then there's a black line that's 1-3 pixels thin and it suddenly looks like you're playing Shadow of Chernobyl with AA turned off.
@@Shpoovy I'm pretty sure my hardware-accelerated video playback is using something more complicated than a point filter to downscale a video that is higher res than my monitor. My GPU offers realtime color correction and denoising - like they wouldn't be bothered to at least implement a bilinear filter?
@@Shpoovy Sure, but I'm pretty sure almost all modern PC discrete and integrated graphics solutions have support for hardware-accelerated output scaling. Maybe you won't get it on a Chromebook, an old tablet, or a device that is misconfigured?
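For the thin-line case mentioned above, here's a tiny sketch of the difference between an unfiltered squash and an area-averaged downscale - toy numbers, not how any particular driver or player actually implements its scaler:

```python
# Toy 1D downscale of a video row containing a 1-pixel dark line on a bright background.
row = [255] * 32
row[13] = 0  # the thin black line

def nearest(src, out_w):
    # roughly what an unfiltered "squash to window size" does: pick one source pixel
    return [src[int(i * len(src) / out_w)] for i in range(out_w)]

def area(src, out_w):
    # average every source pixel that falls into each output pixel
    step = len(src) / out_w
    out = []
    for i in range(out_w):
        chunk = src[int(i * step):int((i + 1) * step)]
        out.append(round(sum(chunk) / len(chunk)))
    return out

print(nearest(row, 12))  # the line is either dropped entirely or stays fully black,
                         # depending on where it lands -> flicker/aliasing as it moves
print(area(row, 12))     # the line survives as a consistent grey pixel
```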
I had an entire blog dedicated to high quality screenshots from games I've taken, and they were all done with downsampling :) There used to be a great PC screenshot thread on NeoGAF with tons of amazing quality screenshots doing the same. Some people even mod old-ass games from the 90s to force support for 8K-ish resolutions and get beautiful screenshots that nobody else can get unless they do the same. I actually love it so much. Very highly suggest taking a lil sightseeing tour through Karnaca in Dishonored 2 while downsampling to the maximum of your computer's abilities.
Okay, here's a quick history of anti-aliasing.

In the beginning, there was AA. This was produced by rendering to a buffer several times (2x or 4x) greater in size than the native display resolution, then downscaling it to native. It was horribly inefficient, and took twice (or four times) as much work as rendering natively, but it looked fantastic.

Then came "Multisampling AA". This worked by rendering at native resolution, then mixing small parts of surrounding pixels into each pixel to smooth jaggies. It worked, and was highly efficient as it was only rendering at native resolution, but it made everything blurry, and couldn't add back in pixels that were lost when they were rendered. As companies played with it, different sampling patterns came about (including Quincunx - 5 samples in a diamond pattern, the simplest sampling you could get), with different numbers of pixels averaged. Original AA was still around at this time, and was renamed "Super-Sampling AA".

Then Matrox came out with a fantastic system called "Edge AA". This rendered the whole scene at native, then used the depth buffer to detect edges of objects and re-render them with SSAA (up to 16x!). Because it was only working on the edges, it wasn't as intensive as full-screen SSAA, and it didn't make everything blurry like MSAA. It didn't work on objects with alpha channels, like chainlink fences and leaves for example, but where it did work, the effect was astounding. All the 3D companies copied this and put it into their MSAA engines for improved speed and visual fidelity (leaving flat textures largely unblurred).

Next, NVidia worked out that if you ran an edge-detect Photoshop filter followed by a blur filter on those edges, you could get cheap AA using just texture shaders. This was quickly adopted as it made things look slightly better for "free", and was called FXAA (and/or TXAA when they started using previous frames to help with the edge detection, T standing for "Temporal"). This was also adopted in games with their own implementations, and in AMD's drivers too. All these systems still used native resolution, and so couldn't reconstruct data that was missing between the pixels like original SSAA did.

NVidia introduced DSR, which was basically a way of dynamically rendering at higher-than-native resolution for quality (or lower-than-native for performance, a trick borrowed from consoles). This allowed super-ish sampling (as it wasn't a whole multiple of the native resolution like SSAA) again, at the cost of having to render higher resolution screens.

Now NVidia has taken their DLSS system, which uses AI/machine learning to try and fill in data where there wasn't any, to create a version of super-sampling. In the same way it (for performance) renders at a lower resolution then upscales with AI, this new system renders at (or above) native, upscales it with AI, then downsamples with smoothing like SSAA used to, to construct an anti-aliased image - and they called it DLDSR. If the AI upscaler recognizes objects like leaves and upscales them accurately, this will work well. Unfortunately, it seems that it doesn't, so it's not much better than MSAA.

In short, for best visual quality use SSAA or DSR (which will incur a heavy performance hit), followed by DLDSR if your system supports it and can't do SSAA or DSR, followed by MSAA, followed by FXAA if you have no other option.
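For reference, the "original AA" described at the top of that history is simple enough to sketch in a few lines - this is only the generic render-big-then-average idea, not any vendor's actual resolve:

```python
# Minimal 2x2 ordered-grid supersampling resolve: render at twice the width and
# height, then average each 2x2 block of samples into one display pixel.
def ssaa_resolve(hi_res, factor=2):
    h, w = len(hi_res), len(hi_res[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [hi_res[y + dy][x + dx] for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard diagonal edge "rendered" at 4x4 becomes a smoothed 2x2 image.
hi = [[1, 1, 1, 1],
      [0, 1, 1, 1],
      [0, 0, 1, 1],
      [0, 0, 0, 1]]
print(ssaa_resolve(hi))  # [[0.75, 1.0], [0.0, 0.75]] - the edge gets intermediate shades
```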
2 questions: 1. How do I find the framerate that my CPU and GPU can run at separately? 2. Should downscaling be used with SSAA/DSR to provide better visual quality/detail, or should it be used by itself?
I am using a 3200G Vega 8 machine and a 1366x768 monitor output but I render the desktop at 1600x900 resolution with AMD's VSR (Virtual Super Resolution). It enables me to render the UI of my decade old games at a more comfortable resolution than my monitor's native resolution. I can also push it to 1440p but it is heavier to render on my vega 8 and makes text less readable on my monitor.
I have the same rig with the same monitor and I've always struggled with sharp edges since anti-aliasing is so demanding. I will try VSR though, I hope it won't lag too much.
What splits everyone's views on whether upscaling or downscaling matters is how much people care about different parts of the gameplay experience. For example, if someone cares more about the story, they won't care as much about how fast the screen updates; if they care about the gameplay, they'll want a good image either way. Everyone has their own way of wanting to experience a game - no need to argue that someone should experience what you experience.
Every time I watch one of your videos, I just think of the old days on CS:CZ and CS:S. 10 odd years now. Great to see you're still around and hey! 1m+ subs now! Congratulations!
"Better" is a very ominous statement. MSAA is exponentially faster than downscaling, not to mention the computational pressure to render sub-pixel geometries that just look nicer without being crucial to the game itself. This statement also advocates for upscaling+AA when gaming on a 8K monitor.
I finally get how this works. It's not anti-aliasing so much as it's saying "see that palm tree? It's only rendering at 320x240", but your display is capable of showing more pixels of detail at that location, so downsampling allows you to get more information out of a lower resolution base. Running a game at 1080p isn't actually 1080p for every object on screen. But running at 4K downsampled means all the smaller elements on screen that lost detail in the renderer get that detail back.
I loved this video. Fwiw the big problem with aliasing for me is when the camera moves and the aliasing artifacts make it look like something is moving when it's not, which is really bad in competitive games. Or worse, leaves in the distance moving causing a flashing artifact.
Valve could have made the tree leaves look correct with anti-aliasing - they have a variable in materials named $allowalphatocoverage that they don't seem to ever use, but it makes $alphatest materials able to be anti-aliased.
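For anyone wondering what alpha-to-coverage actually does, here's a conceptual sketch - a toy model of the general technique, not Source's implementation of that material flag:

```python
# Conceptual alpha-to-coverage: turn a fragment's alpha into an MSAA sample mask.
MSAA_SAMPLES = 4

def alpha_test(alpha, threshold=0.5):
    # $alphatest-style cutout: every sample in the pixel is either all on or all off.
    return MSAA_SAMPLES if alpha >= threshold else 0

def alpha_to_coverage(alpha):
    # Cover a proportional number of the pixel's MSAA samples instead.
    return round(alpha * MSAA_SAMPLES)

for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(a, alpha_test(a), alpha_to_coverage(a))
# With a plain alpha test the edge of a leaf snaps between 0/4 and 4/4 covered samples;
# with alpha-to-coverage it passes through 1/4, 2/4, 3/4, so the MSAA resolve anti-aliases it.
```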
A few things: MSAA can affect foliage if the alpha cutouts are forwarded to its coverage, and there's also a reason textures look better at higher res - the mipmap selection is biased towards sharper mips at the higher resolution.
You are missing supersampling - it will apply that 8x MSAA to textures with alpha too. It can be forced in the Nvidia control panel, and some games will be compatible, like Fallout 3 (and in general anything on APIs older than DX10).
DSR does not work correctly without smoothing at non-integer scaling factors, so 33% smoothing should be a good compromise for all resolutions. DLDSR, on the other hand, works correctly with all levels of smoothing. Smoothing in DLDSR is actually deep-learning sharpening - the lower the slider, the more aggressive the sharpening. Therefore, I would argue 66% is about the optimum with DLDSR. You want the neural network to add some details but not get into "painterly" territory.
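A sketch of that mental model - treating the smoothness slider as the width of a Gaussian-ish downscale filter. This is an assumption about how plain DSR behaves, not NVIDIA's published algorithm, and the sigma mapping is made up:

```python
# Gaussian-weighted downscale taps where the filter width tracks a "smoothness"
# percentage (an assumed model of the DSR slider, not NVIDIA's actual code).
import math

def gaussian_weights(taps: int, smoothness: float):
    # smoothness 0..1 -> wider blur kernel; near 0 it degenerates to point sampling
    sigma = 0.1 + smoothness * 1.5
    w = [math.exp(-(i - taps // 2) ** 2 / (2 * sigma ** 2)) for i in range(taps)]
    total = sum(w)
    return [x / total for x in w]

for s in (0.0, 0.33, 1.0):
    print(f"{int(s * 100)}%:", [round(x, 3) for x in gaussian_weights(5, s)])
# 0%   -> nearly all weight on one tap: sharp, but aliased at non-integer scaling factors
# 33%  -> weight spread over the neighbours: hides the uneven pixel grid
# 100% -> very wide: noticeably blurry
```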
I'm not sure why but this video is endlessly rewatchable and I love it. Would be very interested in seeing you SS more games to reveal "hidden details".
GREAT VIDEO. AA & MSAA & FSAA make me remember the beginning, when enabling them made any game struggle to run... fast forward to 2022 and you feel nothing if you enable them... the same story repeats now with DLSS & ray tracing... in 20 years from now it will just be a feature that's on by default :)
Meh - no matter how hard the original image is to render, the AA itself doesn't cost more, so in modern games (which literally couldn't run on hardware from that time anyway, even forgetting driver support) it's quite painless, especially at lower resolutions.
This was the best downsampling (or downscaling) video I've seen yet. I especially liked the Nice Tree 1 bit; you really showed that DLDSR is NOT getting more detail than the resolution it's based off. I didn't know if DLDSR got more detail and I hadn't thought of testing it like this. Super cool! What do you think of downsampling (e.g. 1.78x DLDSR) and then downscaling (e.g. 80% resolution scale) to get better image quality without as significant a performance impact as 1.78x DLDSR alone? I tried this exact thing in Hunt: Showdown but found that the image was actually MORE JAGGY with the aforementioned scaling soup than at native 1080p. I wonder why that is 🤔... I'd expect the opposite, as the net resolution is still higher than 1080p.
My guess (I never played it, nor am I some sort of expert) is that possibly the downscaling wasn't perfectly even, or it's something with the engine itself.
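For what it's worth, the pixel counts in that 1.78x DLDSR + 80% render scale combination work out like this - assuming 1.78x means 1.78x the pixel count (2560x1440 from 1080p) and that the 80% slider scales each axis, which is how many engines interpret it:

```python
# Back-of-the-envelope check of the "net resolution" claim.
native = (1920, 1080)
dldsr = (2560, 1440)                           # ~1.78x the pixels of 1080p
scaled = (int(dldsr[0] * 0.8), int(dldsr[1] * 0.8))

def pixels(res):
    return res[0] * res[1]

print(scaled, round(pixels(scaled) / pixels(native), 2))
# (2048, 1152) 1.14 -> still ~14% more pixels than native, so in theory it shouldn't
# look more jagged; if it does, the extra resample pass is the likely culprit.
```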
I get called a freak of nature by my friends for preferring playing FPS games with low resolutions and very bad aliasing. The reason is that I always find it extremely easy to spot moving targets or distant enemies when visually they're so obtrusive. I honestly do believe that it is significantly easier to see opponents. The only times that I'm held back by my resolution are in cases with things like bars or bushes, and in my opinion, competitive FPS games shouldn't be bloating their maps with unnecessary details like that anyway, or at least they shouldn't be doing it in places where enemies can be.
Very good video. I've always known that watching a YouTube video, for instance, in 4K on a 1080p screen will look much better than watching it at 1080p, but I hadn't realised quite how much of a difference it could make until I watched this video. The same goes for running CS:GO at a higher resolution than your native screen too, although with that being said I play on 1280x1024 stretched lol.
Phenomenal video as always, Philip! Just wanted to chime in regarding the clarity of alpha textures because I literally just faced this issue today while learning Unreal Engine 4 - game engines heavily rely on mip-maps (a chain of progressively lower-resolution copies of a texture) that they switch to as the texture gets further away. Their primary purpose is to avoid shimmering on distant textures that can't be stably resolved at your current resolution, meaning at higher resolutions you use higher-res mips. While generally imperceptible by design, it's impossible to miss on alpha textures as once-clear leaves morph into a blobby mess. This can be mitigated by adjusting the LOD bias setting for your 3D application via Nvidia Control Panel or AMD's equivalent, forcing it to use higher res mips at the cost of potentially increased VRAM usage. Definitely something to keep in mind when employing upscaling and especially downsampling in the future!
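The textbook version of the mip selection that the LOD bias setting shifts looks roughly like this - a simplification (real hardware works per-axis on the texture-coordinate derivatives), with made-up numbers for the distant-tree case:

```python
# Simplified mip selection: level = log2(texels covered per pixel along one axis) + bias.
# A negative LOD bias (the Control Panel / engine setting) pushes toward sharper mips;
# rendering at a higher resolution does the same thing implicitly, because each
# screen pixel then covers fewer texels.
import math

def mip_level(texels_per_pixel: float, lod_bias: float = 0.0) -> float:
    return max(0.0, math.log2(texels_per_pixel) + lod_bias)

# A distant tree where one screen pixel spans 8 texels of the leaf mask:
print(mip_level(8))          # 3.0 -> a blobby low-res mip at native resolution
print(mip_level(8, -1.0))    # 2.0 -> LOD bias forces a sharper mip (more shimmer risk)
print(mip_level(8 / 2))      # 2.0 -> doubling the render resolution per axis has the same effect
```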
I'm CPU capped in Death Stranding at about 60fps, so I boosted my game resolution to 7K and downscaled it to my monitor, and hot damn does it look PRETTY now 😍. Objects in the distance are way crisper and easier to see, same with grass and foliage. Downscaling ftw
I bought a 3080 for my bday some months ago, but I play on a 1080p 60Hz monitor, so I use downsampling in every game I play. It just looks insanely sharp, it's awesome.
@@vahek.6187 Yes, I know, but it's very hard to get a monitor where I live - a decent 1440p monitor costs almost twice what I paid for the 3080, so I guess I'll wait for now.
If a game has a render resolution setting (e.g. Red Dead Redemption 2, Mirror's Edge Catalyst, Warzone), does it make sense to use the control panel DSR (at 12:39), or is it better to use the in-game setting? I'd assume it would end up giving the same result, but maybe Nvidia's Control Panel gives better performance or something.