There's a way to add AMD's FidelityFX to a lot of Steam games as well. A YouTube channel I follow did a video on it: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-DpFvpUViJag.html He focuses on VR content, but there's no reason the mod wouldn't work on non-VR games. If you happen to have an older Nvidia card that doesn't support DLSS, FidelityFX should still work for you. Try it on multiple games; apparently some games don't see large FPS improvements. You should probably avoid modifying games that have anti-cheat systems, though. If you still want to try, at least research it first to see whether someone else has done it, and what problems they ran into.
@@tr1gger810 Doing this in Yakuza Kiwami 2 atm and got a roughly 40% performance increase from upscaling 1080p to 1440p with sharpening set to 1. Could hardly see any quality loss, even in side-by-side images.
@@noahleach7690 Exactly. It's time to accept that this is the new minimum. These crybabies want better games but won't or can't invest in their systems. If you want to play a game at higher graphics quality, you need hardware with more features. Simple logic.
He's much more pleasant to listen to compared to Linus. Feels like I'm watching an adult instead of a 14 year old who just slammed a monster energy drink.
pro-gamer: "I literally cannot function without this update" also pro-gamer: "I need to take screenshots and compare side by side to notice the difference"
@@mryellow6918 Not really. At a professional level you want to maximize framerate to keep up with high-refresh-rate monitors; if your game isn't running as many frames as possible, you lose the advantage of a high-Hz monitor, which makes it pointless. Name one game where maxing out the settings to make it look as good as possible, and lowering your framerate as a result, gives you a competitive advantage at a professional level.
@@kyler247 Many games have graphics options, like shadows, that are useful to keep on. Or CS:GO, where you get better visibility through mollies on medium rather than low.
@@TheGameBoyss Depends what you have to work with. It might get the font wrong, but making something completely unreadable readable? I can see that happening.
@@TheGameBoyss You can, however, "guess" which data would most likely be there, which in the case of video games can give results that look good enough. Especially with common textures.
@@asmosisyup2557 no, you're wrong and the other guy is right. You simply can't add data to an image. What you see is what you get. At most you can get the image a little more refined but that's it. Whatever you're talking about doesn't make logical sense.
@@TheScrubmuffin69 Did you watch the video? The whole point is that they can add the missing data (with an AI) to make a higher-resolution image; that is the entire point of DLSS. Please go watch some videos of what AIs can do nowadays, it'll blow your mind. Also, "you simply can't add data to an image" is just wrong. Ever heard of photo editing or Photoshop? The crazy thing is you can actually add data to an image, and you don't even need anything fancy, just a PC with Paint on it. Edit: I'm assuming you actually mean you can't add the "true real-life data" to an image of real life, but even that is slowly becoming wrong as more research is done.
DLSS should already be built into games in a way that always uses the most recent version: install DLSS with the graphics driver, and have the game just reference those files.
I just wish we could run our own games through the ai algorithm to make our own dlss profiles. Useful for unsupported games or games with modded textures. Granted, it might take a month on one rtx card, but the option would be cool.
@@choppag1984 Not cringe, it's a bot. Report it for sexual content or just ignore it, the same way YouTube does with all the reports I've made in the past. I've reported this particular bot multiple times in the last few weeks and nothing happens. Well played, YouTube, indeed.
Can we all take a moment to appreciate our boy Anthony on his weight loss journey? He's looking smaller these days and I know he's been working at it for a while. Keep going bud!
"how to upgrade your graphics for free" *Only if you already have an RTX card ..... Edit: now the title is "This easy mode makes your GPU faster". That's even worse, it doesn't make your GPU faster at all.
The DLSS options should have full steam integration, as well as any other digital service. It's just such an incredible feature that it needs a spotlight position
I want to see FidelityFX get implemented into any game with just an app. It's a dream most likely, but it would be one of the biggest things for PC gaming if that happened, for sure.
The fact I watched the whole thing and found it fascinating, before remembering I have a GTX 10 series card so can't use any of it, reminds me how engaging Anthony truly is.
I would love to see what the performance hit for enabling DLSS on a non-RTX card would be, since we have that option with RTX itself. Would 1080p Ultra Performance even run in realtime?
@@maciejjabonski833 I don’t think you understand how this works: you NEED tensor cores. And if it would drastically worsen performance, why would you even use it when its purpose is to do the exact opposite?
@@maciejjabonski833 Just a matter of time; ALL of Nvidia's proprietary stuff eventually loses to an open version from competitors. Remember PhysX? LOL. G-Sync already went this route as well. Nvidia needs to improve image sharpening too; AMD's version is so much better. It does wonders in tons of games I play.
Would be really interesting, yes, I've wondered this too haha. Of course, the CUDA cores aren't super fast at that one operation, but there are a ton of them, albeit without acceleration for machine learning, so yeah, it could be possible!
That's all great and all but it does require you to have a brand new and very expensive GPU to even make use of DLSS..... Guess my 980Ti isn't going to get any faster any time soon
The real-time AI upscale tech already exists, per Oculus; look at their "Neural Supersampling for Real-time Rendering" paper. While a traditional renderer still produces the graphics, the input image was only 960x540 (the PS Vita's internal res, btw) and the output was 3840×2160 (4K), and you cannot discern the difference. Granted, this was made for VR, so some information enrichment may come from the disparity map generated specifically for the VR render output, but that most likely still originates in the depth buffer. What this means is that it's entirely possible that at some point we'll be able to run games at insanely low resolutions and upscale via, say, tensor cores, freeing the GPU to process insane effects: realistic cloth/hair/fluid/soft-body physics, and probably much faster ray/path tracing performance.
A traditional game engine will still be needed for governing content like basic 3D structures for level design, etc. But once AI filters become mature, traditional methods like textures and maybe even 3D objects could be replaced with tags simply saying "this surface is grass, concrete, steel, etc." that the AI part uses to fill in content.
The difference between 1.0 and 2.0 is some sort of recurrent convolutional network architecture that incorporates the temporal dimension. RCNNs are already SOTA for most video applications, unless Nvidia starts playing with adapting vision transformers. But the number of model parameters gets pretty big pretty fast with those. So I’m guessing DLSS 3.0 will come with a need for larger GPUs or some revolutionary improvement in model architecture/efficiency.
I might have asked this one before, but: is there a podcast done by Anthony? I would love a PC news podcast or something like that with Anthony. I just love his voice.
"How long until AI does the bulk of the rendering? Those are questions we can't answer yet." Two Minute Papers: "Just 1 or 2 more papers down the line!" There's actually a lot of work on a final realism post-process that basically takes care of the last steps. Tesla is starting to use it to create training data that is mostly photo-real. But there is also other work where scene textures and details don't have to be drawn out, they can simply be described in rough terms, and the AI generates the textures and details on-the-fly. It has a bit to go, but it would be a totally different world of graphics.
So the cool thing here, and in the last comments Anthony made, is that a massive area in AI right now is using simulated worlds to train models that will work in the real world, meaning a whole second industry is getting into the real-time graphics market, mainly because in a game you have perfectly labelled data to train on. I don't know where this will take us, but I expect DLSS will see constant development as the field works on producing the most realistic graphics possible for training models that transfer to the real world. Look at the GTA5 AI work to get a sense of where this might go: they took data from driving around European cities plus images from GTA and attempted to convert the GTA scene into a photorealistic image.
There are a few devs actually implementing DLSS of their own accord, and then Nvidia makes a blog post to brag about it. Pretty much free advertising. But honestly, I'd rather have Nvidia-sponsored titles than AMD ones, since with the latter you get weak ray-traced effects so that AMD GPUs can run them without shitting themselves, and no DLSS at all.
I can't wait for retrospective AI upscaling. I know modders and devs have been doing it; I wonder if anyone will make a general program that can upscale the textures in the install files, or run alongside the game to store the upscaled textures in VRAM.
I'm guessing the 560p part of the video that's upscaled is the very zoomed-in part around 1:18. If you were going to do it, it would need to be zoomed in so as not to lose detail in the face, since the facial area would be only about 100px wide in the framing used for most of the video. It's a very short segment of the video, so people are less likely to notice on a first watch. It has no fine detail like text in focus that would make it easy to tell if something was off. And since the 560p line was recorded, you knew about it while shooting the video, and that smirk makes me think something was up there.
If DLSS requires each game to have its own data set to be trained on, what would happen if you took one game's data set and applied it to a different game? Makes me wonder if you could use DLSS to create a psychedelic hallucination effect with a similar feel to the AI-generated oil paintings. Imagine if you could somehow merge one of those sets with a game's DLSS set.
With War Thunder specifically, it has file-modification anti-cheat. It won't *always* ban you for changing files (depending on what you actually change); however, if the hash on a file doesn't match up, it will disable it.
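For anyone unfamiliar with how that kind of check works, here's a tiny illustration (my own sketch, not War Thunder's actual code, and the file name is made up): the game keeps a known-good hash for each file and refuses to use any file whose current hash no longer matches.

```shell
# Stand-in for a shipped game file with a known-good hash.
echo "original texture data" > textures.pak
expected=$(sha256sum textures.pak | awk '{print $1}')

# A mod changes the file's contents...
echo "modded texture data" > textures.pak
actual=$(sha256sum textures.pak | awk '{print $1}')

# ...so the stored hash no longer matches and the game can react.
if [ "$actual" != "$expected" ]; then
  echo "hash mismatch: modified file detected, feature disabled"
fi
```

Even a one-byte change flips the hash completely, which is why "depending on what you change" matters: files the game doesn't hash can slide by, hashed ones can't.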
Basically, Nvidia needs to separate DLSS from its games a little bit. Rather than having it installed inside each game's core files, install it within Nvidia's files. Then let people download DLSS separately if they want, with an installer that by default installs the latest DLL for each major revision, and a "customize install" option letting you manually select which version you want per major revision. Bonus points if they link to "Upgrade Notes" when you click a [?] beside the version name in the installer.
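A hypothetical sketch of what that per-game version picker would boil down to. The central store and the game paths here are made up, but `nvngx_dlss.dll` is the real file games bundle, and "upgrading" a game today is literally just overwriting its bundled copy with a newer one:

```shell
# Imagined central install location managed by the hypothetical installer.
DLSS_STORE="$HOME/.nvidia-dlss"
# A game directory that ships its own bundled DLSS DLL.
GAME_DIR="$HOME/games/SomeGame"
# The revision the user picked in the "customize install" dialog.
VERSION="2.2.16"

mkdir -p "$DLSS_STORE/$VERSION" "$GAME_DIR"
touch "$DLSS_STORE/$VERSION/nvngx_dlss.dll"   # placeholder for the real DLL

# The "apply this version to this game" step: copy the chosen DLL over.
cp "$DLSS_STORE/$VERSION/nvngx_dlss.dll" "$GAME_DIR/nvngx_dlss.dll"
```

Keeping every revision side by side in the store is what would make per-game selection (and rollback after a bad revision) cheap.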
So on the opposite side of the field: the fact that you can use FSR on pretty much every game in Linux has blown me away. Playing Yakuza Kiwami 2 atm with FSR sharpening set to 1, upscaling 1080p to 1440p, and even in side-by-side images I could hardly see a difference between native 1440p and FSR upscaled. An amazing performance uplift with no loss in quality, although I've heard that in some games it can make UIs look bad, since the effect is applied after UI scaling and post-processing.
Have been thinking: instead of a purely Linux-focused channel, wouldn't it be better for Anthony to have an all-round "hacky" channel? One where he could tinker with and hack all kinds of things, from Linux to old consoles, making devices do things beyond their initial purpose. It would be a slightly more technical channel, but I'm sure Anthony could share the workload with Alex or other members; the main question is whether they can make it "fun" enough for the average member of the community.
I know, it's ridiculous haha. The problem isn't so much hardware, as hardware has progressed much faster than games; it's the ever-increasing framerate and resolution expectations. Just as consoles managed good graphics at 1080p 60fps, the demand instantly became 4K 120fps, and now people want 240fps and 8K. You can't chase that superficiality and still have game substance. That obsession with two superficial parameters was never an issue before, so graphics progressed in balance with gameplay substance like Euphoria, physics, ballistics, animation, particle effects, etc. Now most of the resources are spent just on framerate and resolution. You couldn't make another GTA4, because any potential substance within the gameplay would make it impossible to hit those fps and resolution expectations. That's partly why GTA5 was so basic and lacking in comparison, with simplified physics like Euphoria and the vehicles, because they had to achieve 60fps lol. That's why it looked so trash on last-gen consoles! Personally I'd prefer gaming to move toward dynamic ballistics and localized damage, fully destructible environments, advanced AI, communication with AI NPCs, better dynamic animation (Euphoria), proper choice-based narrative gameplay, etc. But instead we have 4K 120fps on two-decade-old game mechanics, lmfao.
I guess there's also room for such a video, but for FSR this time. It's integrated into Proton on Linux. To actually get it, you'll need to build the needed version of Wine yourself, but still: this basically adds FSR to ANY game on ANY GPU. Compatibility is a question, obviously, but if it's of interest, I could maybe attach a few how-to articles.
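For anyone curious what enabling it looks like: with a Wine/Proton build that carries the FSR patch (Proton-GE is the common one; plain upstream builds ignore these variables), it comes down to a couple of environment variables:

```shell
# Assumes a Wine/Proton build with the fullscreen-FSR patch (e.g. Proton-GE);
# unpatched builds silently ignore both variables.
export WINE_FULLSCREEN_FSR=1            # enable FSR upscaling of the fullscreen surface
export WINE_FULLSCREEN_FSR_STRENGTH=2   # sharpening: 0 = strongest, 5 = weakest, 2 = default
# As a per-game Steam launch option, the same thing reads:
#   WINE_FULLSCREEN_FSR=1 %command%
# Then pick an in-game fullscreen resolution below native (say 1080p on a
# 1440p display) and the patched Wine upscales the output with FSR.
```

Because it hooks the fullscreen surface, it works regardless of what the game itself supports, which is why it applies to basically any title and any GPU.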
Just started playing Death Stranding, and the AA in the Director's Cut didn't work. I tried everything to get AA working, even overriding it in the Nvidia control panel, but everything was flickering. Finally I tried DLSS, and it worked like a charm, except the ghosting was insane. So then I used DLDSR together with DLSS, and suddenly the game had no ghosting, aliasing, or anything unpleasant. I'm new to RTX, but oh man is it awesome. I upped the DSR factor to 4.0x with DLSS on Quality on a 1080p monitor, and now the game looks insane: super crisp with no flicker or ghosting in sight. My GPU usage is around 80%, but the wattage is only 80W at 60fps; that's still lower than my GTX 970 at 30fps.
3:30 This part is useful for getting good frames and a good image: if you run, say, 900p on a 1080p monitor it looks bad in Windows, but setting it to 720p, 540p, or 360p looks better because those scale to the panel more cleanly.
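The arithmetic behind that (my own numbers, not from the video): 540p and 360p divide a 1080-pixel-tall panel exactly, and 720p lands on a simple 1.5x ratio, while 900p forces an awkward 1.2x stretch that smears every pixel across fractional positions:

```shell
# Scale factor each render height needs to fill a 1080p panel.
# Whole-number scales (2.00x, 3.00x) map each source pixel to an exact
# block of screen pixels; awkward ratios like 1.20x interpolate everywhere.
for h in 900 720 540 360; do
  scale=$(awk -v h="$h" 'BEGIN { printf "%.2f", 1080 / h }')
  echo "${h}p -> 1080p: ${scale}x"
done
```

So counterintuitively, dropping further to an evenly dividing resolution can look less mushy than a "higher" one that scales badly.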
I wonder if a future revision could solve the motion vectors issue like in F1 by employing a hybrid of native and upscaled rendering. Upscale the parts it's confident will look correct, but render natively the elements which are most prone to artefacting. We've already seen DLSS tend to be pretty great with almost everything so 99% of the frame would probably still be able to be rendered at the reduced resolution.
Thanks Anthony, always appreciate your videos, you always deliver quality content. I bet all of your previous employers are very sad now for letting you go man
Dang title fooled me again. I thought the video would be about auto-overclocking. How does super sampling make your gpu faster? 9:22 "We also can't answer.....who our sponsor spot is! Or, rather, we can!" Such a missed opportunity.
Been thinking lately, looking at the enormous 3090 (in videos, not in person), and I reckon it won't be long before something that huge is a relic. I'm hoping physical GPU tech will downsize and scaling software will take over. It wouldn't happen quickly, but that's my hope. I mean, every other bit of tech has gotten smaller over the years, innit.
GPU tech seems to accelerate much faster than other avenues of computer tech. I could see that happening within the next 2 or 3 GPU generations. About 3 to 5 years I guess
It's honestly not that big in person. Mine isn't ~that~ much bigger than the 1060 it replaced. And the main reason for the size is cooling, not the actual board. Just a lot of fins and fan. Plus, as long as something small performs well, people will always make something bigger that performs better. Especially for home workstations, where size is a nonissue.
--- My main game is Warzone and I LOVE that I can get high FPS at 1440p with DLSS. The game is very CPU-bound and I only have a 2060 Super, so I love that I can give my GPU headroom to work on HD textures while the AI does the upscaling, and I get the advantage of 1440p's 1.33x linear resolution over 1080p. --- I don't notice any of the ghosting you're talking about in either Warzone or the Cold War campaign, even in my recordings (editing/recording software interpolating frames doesn't count), but this is, once again, at a slight (2-5ms/frame) CPU bottleneck (5600X btw) and in a first-person shooter with no motion blur, not a racing game. --- I think AMD's SAM, or Resizable BAR support in general, could be the next best thing. If all games were coded to fully bypass the OS and run entirely through the game files and hardware it could be huge, even if it was only on RTX cards (here's me hoping they bring it to the 2000 series lol). Game Mode on Windows still seems hit or miss, and input lag is inconsistent between games and drives and CPU/RAM usage.