nvidia is sadly very greedy with VRAM, they use it as a crutch to push buyers into higher tier cards if they want more vram. If they went the AMD route and just gave people 16gb on mid-range cards already I think it would really benefit nvidia's reputation.
I find it funny that NVIDIA purposely designs them with just enough VRAM that, when they panic, the design only allows for doubling the amount: so 10GB (not enough) becomes 20GB (plenty), 8GB becomes 16GB, 6GB becomes 12GB
You can always get someone who knows how to solder to upgrade it by removing the 10x 1GB memory modules and soldering on 10x 2GB memory modules to get 20GB of VRAM.
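That doubling pattern falls straight out of module math: each GDDR6X module sits on its own 32-bit channel, and modules of that era only came in 1GB or 2GB densities, so the only drop-in capacity change is an exact doubling. A quick sketch (function name is just for illustration):

```python
# Why Ampere VRAM "upgrades" come only in exact doublings: one module per
# 32-bit channel, and GDDR6X modules came in 1GB or 2GB densities.

def vram_capacity_gb(bus_width_bits: int, module_density_gb: int) -> int:
    """Total VRAM given the memory bus width and per-module density."""
    modules = bus_width_bits // 32   # one module per 32-bit channel
    return modules * module_density_gb

print(vram_capacity_gb(320, 1))  # RTX 3080 launch config: 10 modules x 1GB = 10
print(vram_capacity_gb(320, 2))  # same bus, denser modules: 20
print(vram_capacity_gb(384, 1))  # the 3080 12GB instead widened the bus: 12
```

Note the actual 3080 12GB went the other route, widening the bus to 384-bit rather than using 2GB modules.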
@@pozytywniezakrecony151 By sheer stroke of luck I scored _two_ Radeon VII cards for $200. Needed some TLC in the form of dusting and replacing fans, true (~$40 for two sets, just to be certain). More surprised they still had their original warranty stickers intact. Though not as much as by the still impressive idle temps. Those same crypto bros still want $300 for a 3060. And unfortunately those listings crowd the marketplace and eBay algorithms like all those baity gaspy videos clogged the YouTube algorithm for much of 2021 and '22.
The 12GB one is a hidden gem. 90-95% of the performance of an RTX 3060 for 2/3 the price (and it's quite a bit younger than the normal 2060, which makes the dying-VRAM-chip issue virtually non-existent)
@@GewelReal The extra VRAM also means it can use frame generation. Older cards with extra VRAM are still capable as long as you don't mind DLSS + frame generation.
@@Kenobi-vu9mb Meh. It hurt Intel, but what it really screwed over was the consumer. A lot of games are still single threaded, or threaded across maybe a couple of cores 😅
How do you see the 10GB version outperform the 12GB version and not go: "hmm.... That's actually impossible, since the other card is better in every single metric."
The 10gb EVGA card actually has higher base and boost clocks than the RTX 3080 12gb despite having fewer cores. It varies from game to game but some engines prefer faster clock speeds to more cores.
I am thinking it's possible that it's down to the cooler on the 12gb card or even contact between the GPU and the cooler, had that issue with my new Zotac 4070Ti Super 16GB, hot spot was high. Took the cooler off and applied thermal grizzly and dropped the temps 12C, not only that but the fan speed was also lower. So cooler running and quieter, win win in my books. Still it was a brand new card I shouldn't really have to do that.
absolutely. With my 4070ti which has 12GB, I regularly get it up to 11GB+ at 1440p with full RT applied. With VRAM you won't often notice average fps changes but you will notice stutters as it swaps between shared memory and GPU memory. Or with texture swaps in certain games (like Control).
At 1440p/Ultra/RT/FG you do need 12GB. That is why 4070/Ti has 12GB. Nvidia is smart and they gave players enough VRAM for their games, but not more than enough to use these so-called "midrange" GPUs for some other VRAM-heavy tasks (AI modelling, etc.).
I'd just like to point out Control specifically has known texture issues, particularly with long texture load times, even with the semi-official raytracing patch/mod by one of the devs that lets you boost them by up to 150%. But it is also VRAM hungry. I pulled it out and dusted it off for Spooky Month this year and I couldn't run high textures at all, with or without raytracing, on my 10 GB card just because it got close to the limit. Even dropping the resolution and other settings didn't help: it would load them in, but it would drop them to low after the first cutscene or loading screen, and it wouldn't even bother to load the textures for the shelter doors at all, they looked like a PS1 environmental texture lol. Fortunately Control's medium is most other games' high, especially with a little RT on top.
I have an RTX 3060 12GB, and I'd quite happily swap it for the 10GB version of the 3080. VRAM isn't the be-all and end-all of what a graphics card can do; it's just one of many factors. More and more people are falling back into the mindset that a game isn't worth playing if you can't run it at, or very near, maximum settings. On the flip side, more game companies are making games for PC gamers that mostly don't exist, targeting the highest end and relying on upscaling and frame generation to fill in the gaps. The GTX 970 lasted me near a decade of use; I expect the 3060 will do the same. Don't be afraid of lowering the settings.
people could learn a lot from using a lower-end rig for a period of time. i used a 12100f + rx 6600 after owning a 5800x3d + 3080 for a good while, and you start to realise at some point that games are totally playable when they're not totally maxed out!
"More and more people are falling back into the the mindset of a game isn't worth playing if you can't run it at, or very near, maximum settings" If we were talking about budget cards, that opinion would hold weight. We're not. These are 80 class cards. Not low end.
@@Kenobi-vu9mb a game looking good at max and the scaling ending there < a game looking good at high/medium and having the scaling go higher for future hardware
@@Kenobi-vu9mb The 3080 is now four years old and has been replaced as the latest and greatest, not only by the 12GB version but by a whole new generation of cards. Despite coming out two years later that 12GB version is still four year old tech. Is it then a surprise that they can't run games at 4k High? Perhaps it's time to lower the settings a bit.
They are doing the same thing with the 5070, it will launch with 12GB and then an 18GB version will be released after the idiots…I mean customers spend 700 on a 12 gig card yet again
Question.. what game right now uses over 12gb of vram at 1080p-1440p. I have never seen anything over 10gb without dlss. Hell I still have an 8gb 3060ti and I do struggle maxing out some games but for the most part since I only run 1440p I really wonder what game will use more than 12 if I'm doing semi ok with 8.
@@ivann7214 Ratchet & Clank does, for example. Playing it at 1440p, native res, high settings, it sits at 11.3GB of VRAM for me, tho I use an RX 7700 XT and AMD tends to utilize more VRAM than Nvidia for some reason, probably the way the card handles image rendering, cuz it's different than for Nvidia.
@@quinncamargo My home theater PC is equipped with an AMD Ryzen 5 3600X and an AMD RX 5700 XT. But it has only 16 gigabytes of DDR4 RAM.
Even if equal in performance any of the 12gb versions found used on ebay should at least have less wear and tear from mining since they were released later when it became less profitable.
Wear and tear come from poor conditions during usage and lack of maintenance. Electronics like this don't really have a finite "lifespan" in the usual sense, given proper care. So it heavily depends on how well the previous owner wanted the card to look/perform when they finally decided to get rid of it.
3080 10GB owner here, and the card is still delivering for me at 1440p. I also split my play time between older games from before 2020 and newer games up to 2024. The first tier of games I can run maxed out at high refresh. The newer ones I may have to kick on DLSS for, but I'm totally fine with that. Still a great experience. I'll see next year if there's anything worth upgrading to, but I typically look for a 100% performance uplift to justify upgrading, and anything at that level will cost a kidney.
It's an Ngreedia giving us scraps moment. They wanted you to buy the 3090 if you wanted more VRAM. Don't expect anything less than such tactics. YES some of us DO need that VRAM: non-gaming workloads.
There are always gives and takes with GPUs. AMD skimps on hardware too: they skimped on dedicated RT and AI cores, basic hardware that an RTX 2060 and Arc A310 have. Of the past 12 games that TechPowerUp has done performance reviews on, only one uses over 11GB at 4K ultra (it uses 12GB at 4K ultra). 6 of them have RT always on. 8 run noticeably better on Nvidia GPUs than the AMD counterparts that are usually equal, like the 4070S vs the GRE. In 2 they tie, and 2 slightly favor AMD. All had DLSS Quality looking better than "native" TAA. Both companies skimp on something. Just hope what your card skimped on isn't what modern games are being developed around.
I use a 5k monitor and could definitely use another 4gb of vram. (3080 12gb owner) I bought it over the 10gb version back in the day because I figured I’d need the extra vram and I’d already like more. There are games like ratchet and clank that run fine at 5k with medium textures but if I turn it up to high I max out the vram.
In PCVR, 16GB is the only way to go, especially when you are trying to hit native output on high-res headsets. I have a Reverb G2, which basically requires 6000x3000 to get proper 1:1 output... Even if games use dynamic or fixed resolution scaling, I often hit 13.5GB or above. And it's so nice to not have any stutters in VR. It's a shame high performing GPUs like these have 10 and 12GB.
Was about to say that. PCVR ATM is really hard to get into. I got a A770 LE just for the VRAM/cost years ago, thinking it was temporary. It's 2 years later and there's STILL barely any >12gb cards under $600 or so, especially Nvidia cards (NVENC VR). Waiting on RX8000.
@@koolkidgamr6260 amd is a good alternative, i can play cyberpunk in vr with a 6800 xt on my quest 2 with headtracking and all and id recommend 7800 xt for the improved raytracing or wait for 8000 series. Nvidia stagnating rn because lack of competition the last 2 generations.
@@DragonOfTheMortalKombat This. Most people freak out about allocated memory because it can max out the buffer. But it’s really the utilization that matters.
@@puffyips This is what happens when you watch numbers you don't understand instead of playing the game. Your brain starts to rot and you hallucinate a story. Many such cases!
@@snoweh1 Yes, many gamers spend way too much time looking at performance numbers and far too little just enjoying games. That said, my 16GB 6800XT never causes textures to randomly disappear or characters to look like something out of Minecraft. I just enjoy the game maxed out and don't ever worry about such things.
When I used to work for CEX, I upgraded from my 8GB RTX 3070 to a 12GB 3080 just because the former was struggling at 4K ultra due to its limited VRAM... Funnily enough, I'm now finding certain titles where I have to drop to 1440p at ultra due to 12GB not being enough, LOL. Performance-wise though, it definitely does hold up.
If you're gaming on a 4K display, it would probably look better if you gamed at 1080p instead of 1440p because of the 1/4 scaling. Meaning 1080p goes into 4K (4x) evenly, where 1440p does not.
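For what it's worth, the integer-scaling point checks out arithmetically; a quick sketch (helper name is hypothetical):

```python
# 4K (3840x2160) is exactly 2x 1080p per axis, so each 1080p pixel maps to a
# clean 2x2 block of panel pixels. 1440p gives a 1.5x factor, which forces
# interpolation (and the softer look the comment is describing).

def scale_factor(panel_height: int, render_height: int) -> float:
    """Per-axis scale factor from render resolution to panel resolution."""
    return panel_height / render_height

print(scale_factor(2160, 1080))  # 2.0  -> integer, pixel-perfect scaling
print(scale_factor(2160, 1440))  # 1.5  -> non-integer, must interpolate
```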
I'm using an RTX 3080. It's not that bad. All you need to do is switch to FSR + frame generation. Sure, it's not as sharp as DLSS, but it helps boost the fps by almost double, especially in 4K. Thanks though, Iceberg, for the detailed review.
@@redline589 I meant for the same price, like the 3070 ti also should have more than 8gb, but we know it’s nvidia’s way to make their cards obsolete faster
I snagged my ftw3 3080 12 gig right after the mining boom went bust and I've been using it to play at 4k and run an LLM on my machine. Finally got a game, Space Marine 2, where I had to take a few settings off ultra to get a stable 60 fps, otherwise it was a solid 48. It's a beast of a card that I plan to have for a few more years, and overclocking is still on the table to boot. I can easily get it to match 3080ti performance.
I just got my hands on a Vega 64, upgrading from an R9 Nano. 8 gigs is definitely still enough to play games made by devs that actually care about players
tbh what I'm getting from this is that newer dev companies are being sloppier with optimization, using excess assets for minimal fidelity gains while hiding behind "building for the future" promises. While games like Black Myth: Wukong and supposedly Star Wars Outlaws are heralded as intense visual experiences, I find it difficult to argue why they are more demanding to run than a game like Cyberpunk 2077, which still has amazing visuals and beautiful ray tracing within an open world environment, and came out almost 4 years ago. On a 3080 10GB, Cyberpunk can run at max settings 1080p close to 100 FPS without ray tracing, and still around 60 FPS with custom ray tracing settings, depending on the rest of your hardware. While I would give Wukong a pass, as it was developed by a smaller company from China that's more accustomed to mobile games, Star Wars Outlaws came from one of the biggest game development companies in the industry (despite its shortcomings). After a little digging around, it really just seems like dev companies are spreading their teams too thin across story, mechanics, and setting development, not investing enough in quality and optimization control, and trying to shove the responsibility off onto hardware developments. With Epic releasing Unreal 5.5 with much better shader optimizations, I'm hoping the industry starts moving towards 3rd party game engines rather than forcing devs to do virtually everything from the ground up. I am a bit worried that Epic has basically all the power in the premium game engine space with these recent developments, but if Valve and Ubisoft can shift gears, I wouldn't be surprised if they came up from the dust as well.
Im perfectly happy with my 6800xt compared to anything from nvidia, i got it at an insane price, and frankly I dont want to support nvidia at this point.
Both were available when i got my 3080. I never thought the 2gb made the difference, but the bus width was more attractive... $100 difference between a zotac 10gb and msi 12gb... seemed like an easy choice in oct 2022.
Depends on what you mean by redeemed. I paid $630 for my RTX 3080 FE in January 2021 (I had a Best Buy discount), and it's served me very well for almost 4 years now. There's almost no chance I would have gotten a 12GB model anywhere near that price by the time it came out, and I haven't been limited by the 10GB since I mainly play Elden Ring. That said, there's almost no denying that you'd see a tangible benefit in many games from the extra 2GB using high resolutions and more demanding settings.
@@randysalsman6992 Try Hogwarts Legacy or God of War Ragnarok at ultra settings; your fps will be unstable after a few minutes of gaming, or when you get to a new place in the map
@@randysalsman6992 There's nothing wrong with his build. It's called poor memory management in both the OS and the game. I can verify it as I've also tested the 3080 10gb
The only game I've run into VRAM issues on with a 10GB 3080 was Ratchet & Clank: Rift Apart. Sometimes it runs just fine in 4K DLSS Quality with RT reflections, AO and high textures, while other times it struggles even without RT and on low textures. TLoU used to be problematic too, but ever since they patched it it's all fine. Biggest surprise would be Silent Hill 2: this thing runs it as fast as a 7900 XT... crazy. My favorite GPU right after the 1080 Ti
The significant difference in some games that don't even max out the 10gb vram is due to the increase of bandwidth (320bit vs 384bit bus). The difference will only increase from here, in 3 years we'll easily see 15-20% difference in lots of games at 1440p.
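For context, the bandwidth gap that comment mentions works out like this (both cards run 19 Gbps GDDR6X; the helper name is just illustrative):

```python
# Peak memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
# The 3080 10GB has a 320-bit bus, the 12GB model a 384-bit bus, so the
# 12GB card gets ~20% more bandwidth from the wider bus alone.

def bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given data rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(19, 320))  # 760.0 GB/s -> RTX 3080 10GB
print(bandwidth_gbs(19, 384))  # 912.0 GB/s -> RTX 3080 12GB
```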
Best use scenario for these types of cards: Gaming at 1440p, AI, 3D Rendering for small or personal or fragmented projects, Testing, future proofing for high-end media. Bitmining and other misc.
In addition to my gaming hobby I'm a 3D modeler/renderer, and this summer I bought, at a bargain price, a build with an i9 9900K and an RTX 3090. While I know this is really dated hardware, I can't complain about anything: I can play everything. In some cases I use Lossless Scaling to generate frames, but in general the 24GB of VRAM lets me push post processing and resources much harder. In Blender and 3ds Max (my favorite 3D suites) the 24GB of VRAM makes me feel like I'm breathing again after a long immersion in rendering... I'm coming from a 6GB 2060. I honestly think you don't need to buy the newest card; you should buy what works for your needs and purposes. If I needed a card only for gaming, I think I would have bought an RTX 4070 or a 3080.
I found a solid used 3080 12gb FTW3 for $370 last year and with it being the highest end sku along with the strix and no more EVGA (RIP) knew I had to have it, replaced a 3080 10gb EVGA XC3 and sold it to a friend for $300. The combined additions of VRAM, cooling capacity and power limit allows this card to outshine the XC3 10gb by ~10% tune v tune. Pairs very nicely with an Odyssey G7 (1440p 240hz) 32".
The facts are that the majority of people still play at 1080p. The jump to 4K is a huge markup in hardware. As for ray tracing, if you actually play the game, it's really not noticeable. Everyone still-frames 4K with ray tracing and says, "Ooooh, look how pretty," but if you actually play, it makes little to no difference. The push for 4K and ray tracing is a corporate agenda to make us spend more, when in reality our hardware is still enjoyable for the games we play.
My god, I remember owning a used 3090 FE for about a year, and seeing the power spike up to 400 watts in-game was crazy. Nvidia really did let these cards loose on the Samsung node. A few reasons why I replaced mine with a 4080S FE: the GDDR6X on the backplate side of the FE 3090 was cooking itself to death even with the card undervolted and below 65C (mem temps were regularly reaching 88-95C). The coil whine was so unbearable that it stopped me from using my studio monitors for gaming, because the whine was leaking into the speakers even through a DAC. The 40 series cooler design blows my old FE away with how cool and quiet it is, while sipping less power (200 watts on average at 1440p maxed vs 400 watts in BG3). The fans were quieter because the memory wasn't hitting 100C in extended sessions, and the best part: no coil whine! Mine wasn't the best undervolter, but my V/F curve at the time for my 3090 FE was 1785MHz at 800mV and it ran flawlessly until I got my hands on the 4080S FE. OC doesn't scale well with these cards unless you push into 450 watt+ territory, so keeping clocks locked at around 1800-1860 was the sweet spot between temps, power, and not hitting the power limit. That cuts out the most important thing that plagues the 3080/3090 series as a whole unless you flash a new vBIOS: hitting the power limit. Hitting the power limit means the card slows down to get back under it, which is felt in-game as a stutter because the card downclocks, i.e. the 3080 at 320W and the 3090 at 350W at 100% power slider in Afterburner. I distinctly remember the 30 series boosting algorithm being that once the card hits 52C, if there is power and cooling headroom, it will add another 15MHz to the boost clock as long as it doesn't hit 62-65C. So if you set a V/F curve at 1800MHz at 800mV and the card hits 52C, it will go to 1815MHz if the card has power headroom.
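The boost behavior described in that last part can be sketched as a toy model. To be clear, this is the commenter's simplified description, not Nvidia's actual GPU Boost algorithm; the 52C/62C thresholds and 15MHz bin size are taken from the comment above:

```python
# Toy model of 30-series boost binning as described in this comment (a
# simplification, not the real GPU Boost algorithm): starting from a locked
# V/F point, add +15 MHz bins while warm-but-with-headroom, stop past ~62C.

def boosted_clock_mhz(base_mhz: int, temp_c: float, headroom_bins: int) -> int:
    """Effective clock given a V/F base point, temperature, and headroom."""
    if temp_c < 52:    # below the first threshold: no extra binning yet
        return base_mhz
    if temp_c >= 62:   # too hot: hold the base point (real cards shed clocks)
        return base_mhz
    return base_mhz + 15 * headroom_bins  # warm with headroom: bin up

print(boosted_clock_mhz(1800, 55, 1))  # 1815 -> the commenter's example
print(boosted_clock_mhz(1800, 65, 1))  # 1800 -> no binning when hot
```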
This video is very timely. I just picked up two 3080 cards (10GB and 12GB) for $340 each. Going to pair them with a set of 5800X CPUs I got for $150. Going to replace my computers with the 1700X and 1080 Ti from 2017. I play mostly old games at 1080p, so no regrets here :)
Thank you for this. As an early adopter of the 3080 10GB, I was looking for this for a while. Still glad I didn't choose the 3070 back then, but I still think the 3080 deserved more VRAM.
i love my 3080 12 GB, and with my 5600X i haven’t really had any issues outside of the PL update to CBP2077 being a lot harder on my CPU. i feel like i’ll be keeping this GPU for a few more years, but will prolly upgrade to a 5700X3D for GTA VI
Price. 10GB models were significantly cheaper than the newer 4070/70 Ti at the end of the crypto boom. Getting a used one in good condition at the beginning of 2023 like I did for $350-400 was an absolute steal. The only things you need to deal with are the higher power draw (though 8-pin PCIe means it works with an older PSU) and the lack of DLSS 3 FG (which is redundant if the game supports FSR 3.1). The 3080 is definitely up there with the 1080 Ti as one of the GOAT GPUs.
I remember when Jensen pulled these out of his oven, and I laughed hysterically when I saw that the RTX 3080 only had 10GB of VRAM. Then Red October came and I grabbed an RX 6800 XT when I saw the performance and the 16GB of VRAM. You see, back in the day, during the first mining craze, cards like the GTX 1060 and RX 580 were near $800CAD. Then something inexplicable happened... Out of nowhere, Newegg suddenly had a bunch of brand-new Sapphire R9 Furies. These cards had been released two years prior, and I was admittedly shocked because they were less than $400CAD. I was hesitant because I couldn't remember how the R9 Fury performed, so I started reading reviews and discovered that the R9 Fury, despite being two years old, was faster than the RX 580. I quickly discovered that the R9 Fury was a monster when it was released, faster than the GTX 980. The card had so much GPU horsepower at the time that it could literally play anything at 1440p Ultra at 70+ FPS. Unfortunately, ATi's experiment with HBM gave the R9 Fury an Achilles' heel, the same Achilles' heel that Ampere cards have. nVIDIA made the choice to use more expensive GDDR6X VRAM, which meant they had to include less of it on their GeForce cards to be even somewhat competitive with Radeon. nVIDIA also knew that most gamers aren't smart enough (or are just too lazy) to actually research their purchases and tend to buy nVIDIA by default. Admittedly, nVIDIA was 100% correct in that assessment, so they didn't worry too much about it. Just like the aforementioned R9 Fury, having fewer GB of more expensive higher-speed VRAM instead of more GB of more economical, slower VRAM was proven to be a mistake on the R9 Fury and will prove the same on Ampere cards. Some people like to talk about how "superior" GDDR6X is compared to GDDR6, but it just hasn't been shown to make any real difference. If you want to talk about superior VRAM, HBM was in a league of its own with a colossal 4096-bit bus width.
Compare that to the 384-bit bus width found on the RTX 4090 and RX 7900 XTX cards of today. I am willing to bet that if you were to take a pair of RX 580s and somehow graft the 16GB of GDDR5 that those two cards have onto something like an RTX 3070 Ti, those 16GB of GDDR5 would out-perform the 8GB of GDDR6X in modern titles and give the RTX 3070 Ti a new lease on life. Sure, the R9 Fury's HBM was impressive, especially when it could run Unigine Superposition at 4K Optimised despite a warning that it didn't have enough VRAM to run the test correctly. Unigine clearly hadn't considered that 4096MB of VRAM on a 4096-bit bus could do things that 4GB had no business being able to do, but despite this, HBM isn't magic and could MAYBE behave like 6GB of GDDR5 because of its incredible speed. This means that 8GB of GDDR5 was better than 4GB of HBM for gaming. The myth that a lot of GeForce owners fell for (and they do seem to fall for a lot of myths) is that GDDR6X is somehow going to make your GeForce cards superior to a Radeon that "only has the inferior GDDR6". I'm pretty sure the truth is more like AMD bought some GDDR6X from Micron and sent it to ATi in Markham to play with. After considerable testing, ATi would have discovered that the difference in performance and efficiency between GDDR6 and GDDR6X was minimal at best and not worth the extra cost. ATi knows its market, and Radeon owners aren't dazzled by frills; we want maximum performance-per-dollar (which, really, ANY user should want). Micron is the exclusive manufacturer of GDDR6X (and probably GDDR7X), while standard GDDR6 is made by Micron, Samsung and SK Hynix. VRAM is a commodity, and the more competition you have in the marketplace, the better the price will be. Since Micron has no competition for X-rated VRAM, their price remains high.
Since GeForce owners have no issue getting fleeced for useless frills, nVIDIA, also knowing their market like ATi does, chose to get more profit from the use of GDDR6X, and who can blame them? The proof is in the pudding however, as the use of GDDR6X has not translated into any real performance advantages for GeForce cards over their Radeon rivals. Let's take a look at the rankings, shall we?
1st Place - GeForce RTX 4090 with GDDR6X
2nd Place - Radeon RX 7900 XTX with GDDR6
3rd Place - GeForce RTX 4080 Super with GDDR6X
4th Place - Radeon RX 7900 XT with GDDR6
5th Place - GeForce RTX 4070 Ti Super with GDDR6X
6th Place - Radeon RX 7900 GRE with GDDR6
7th Place - GeForce RTX 4070 Super with GDDR6X
8th Place - Radeon RX 7800 XT with GDDR6
9th Place - Radeon RX 7700 XT with GDDR6
10th Place - GeForce RTX 4060 Ti with GDDR6
We can see from this chart that it has been an almost perfect competition stack, going back and forth from place to place, with both red and green holding five of the top ten. It's also interesting to note that while nVIDIA has the most performant card in the top ten with the RTX 4090, they also have the least performant card in the top ten with the RTX 4060 Ti. Also interesting: the RTX 4070 Super is faster than the RX 7800 XT. This is because the RX 7800 XT is faster than the original RTX 4070, and while the RTX 4070 Super uses GDDR6X VRAM, so too did the RTX 4070. All of this just goes to show that having fewer GB of faster X-rated VRAM doesn't translate into any real performance advantage, but having less of it can (and will) become a serious hindrance to what your card will be able to achieve in the future. People like to talk about bottlenecks, and this is no different: my R9 Fury was held back by its lack of VRAM, and its incredible GPU horsepower (for the time) was relegated to high-FPS 1080p gaming far too soon.
I bought it because it was half the price of the RX 580 during the first mining craze (because it wasn't efficient enough to mine with) and so I could forgive myself for taking a 4GB card in 2017. After all, in 2015, when the R9 Fury came out, 6GB was considered to be high-end, similar to how 16GB is looked at today. However, I feel sorry for the dumb schmucks who bought the RTX 3070 Ti only to discover shortly after that they had only purchased an overpriced high-FPS 1080p card while those who bought the 3070 Ti's rival, the RX 6800, are still happily gaming away at 1440p.
I felt lucky to buy a 3080 FE direct from NVIDIA at 650 quid MSRP at the height of the madness. The card is still a beast today, albeit only at 1440p. I'm guessing the same amount of money now isn't going to provide any sort of massive upgrade. I should add, I think it mined half its cost back in crypto.
So, I’ve been struggling with this for a while. When I started out, my schtick was that I tested old GPUs on cheap modern PC hardware. The original “reasonably priced gaming PC” got upgraded to the “moderately priced gaming PC” when I started reviewing better GPUs, and now I’m at the point where I’m kinda ready to test brand new GPUs, but I also don’t wanna let down the people who are used to me having a humble test rig. To answer your question, the 7500F holds up really well for the price. I’ll be reviewing the 7800X3D soon, and I’ll have the 7500F figures updated for comparison.
I bought a 6800 XT instead of a 3070 back then because I needed more VRAM. Today I'm running into games that use 16GB at 1440p on high settings (some CoD MWIII maps, for example, have so many textures that they fill it). Nvidia doesn't want cards that still have the horsepower to keep going. Lacking VRAM is a killer; no one wants to play with stutters at key moments
I paid $2300 Cdn for a new rtx 3080 10gb when crypto was hot. I only replaced it when it would run out of vram and the fps would drop like a rock on some player made arcade maps in Far Cry 5 at 4k. I then got a 3090 and the fps no longer tanked on those maps.
I sold my 4090 for $300 more than I paid for it and used the $300 to get a local 3080 10GB. I game at 4K, but I'm not playing any sort of demanding games, so the 3080 is more than adequate. Maybe I'll swing for the 5090 when that comes out.
I went from a 1080 Ti to a 3080 10GB; it was odd to go down in memory. I've only had the memory become an issue with Hogwarts Legacy at 1440p. It was too much and I had to scale down to 1080p. I'll be waiting for the 5000 or 6000 series. The 3080 is great, but not my favorite card.
I am still running an RTX 3060 Ti 8GB, playing at 1440p, and I haven't played any game that really maxes it out (haven't touched RE4 yet, who knows). Most of the games I play tend to be either slightly older (still catching up on some big games now that they are discounted) or indie games, so I don't see myself needing a newer GPU for a couple more years tbh. Only thing... I need a new shiny, so maybe there is a new GPU on the horizon just for that xD
Now I'm confused why the RTX 3070/3070 Ti was so crippled by its 8GB of VRAM if the RTX 3080 works well with 10GB. Or are both RTX 3080 models equally crippled?
no, both 3080 models are actually fine today, they're pretty much the card you need these days to 100% sprint games, versus the 3070 and 3070 Ti that are currently within the realms of 99% (at least for now)
Try to get an 8GB 3070 Ti as the "is 8GB any good" data point. That way we know at what point 10GB helps A LOT; running out of VRAM simply feels different than the plain perf difference between models would show. As for "too big for its own good": I like the Founders card sizes (even if you have to pad mod to get G6X temps under control, along with a 0.8V undervolt on the core).
Got my 3080 10GB at launch and upgraded my screen just after. Didn't go 4K because the 3080 was never a true 4K GPU. Still got it paired with a 5800X3D; doing well, but I will replace it next year...
If you can, please test the RTX 3070 vs RTX 4060 Ti in 4K, with and without DLSS, in games where they can play at 30+ FPS or even close to 60. I would like to know how the newer one maintains performance in older games like RDR2. Test with optimized settings, like Hardware Unboxed or Digital Foundry settings.
Would a 1080/Ti be powerful enough to emulate the PS4/XB1 generation of games? I'm looking to build an emulation machine as my first PC. I have a PS5 and honestly nothing that has come out/is coming out has interested me
I was playing No Man's Sky in VR yesterday and it used 14GB of VRAM on both my 7900 GRE system and 4090 system. That game is relatively lightweight comparatively. 12GB just ain't getting the job done in 2024, but 12 will always be better than 10🤷♂️
I've been saying VRAM isn't some massive concern, and more often than not the core will be the limiting factor. I'm not saying you shouldn't get more for your money, but I'd rather have a GPU that's cheaper, uses less power, and is easier to cool than one with more VRAM than the core can really do anything with. Eventually this will change over time, when the next generation of consoles comes out and likely has more VRAM available, so games will be made to utilize more. But for now, VRAM really isn't much of an issue.
20GB on my XFX 7900 XT Black Edition... I'm good for a long time, coming from the GOAT GTX 1080 Ti 11GB. FYI, the 5000 series will stiff you on everything; they have no competition. I will say the move from my 1080 Ti to a 7900 XT was easy. I am impressed with the drivers and features of AMD. You don't need FSR with Anti-Lag and Fluid Motion Frames. I turn that on with 250+ mods, ReShade, 4K texture packs, and ray tracing in Cyberpunk 2077 @ 4K high optimized settings. I get an easy 60+ fps with no lows below that, no ghosting, screen tears, or stuttering.
Why does the 10 GB in the thumbnail seem to be off? Is that on purpose? Besides that, I really like the reflection and the thumbnail in general, good job :)
My first thumbnail idea sucked (it was a closeup of a 3080 FE heatsink with numbers 10 and 12 carved onto it. Wasn’t very legible) so I kinda threw this together from an image I made for the video. I only had three hours sleep, so I’m blaming that 🥱
This was the second (then third) most powerful GPU of the past generation. Now it has less memory than, or the same as, the current doped-up midrange 60-series models. Nvidia being Nvidia.
Honestly, as an RTX 3080 owner, I would love a breakdown of how the 3080 compares to the RX 7900 XT and RX 7900 XTX. Those cards have great price points, but I'd love to see how they actually compare to the 3080. Is it worth the upgrade?
I'm currently gathering data for a "second-gen RT" video. I haven't decided exactly which cards will feature in the final product yet, but so far I have benchmarks from the 3080 10GB & 12GB, the 3080 Ti, and the 7900 XT.
6800 XT owner here (which is 3080-level performance). I had a 7900 XTX for two weeks and wasn't really satisfied: too much power consumption, and I don't care whether I get, say, 60 fps with the 6800 XT or 100 with the XTX.
Nope. Love your videos, Iceberg, but in this case I disagree. The 10GB card was a mistake from jump street. If today you can find instances when 10GB is not enough, then it never was enough on a card that listed for $700 just four years ago. (Many of us made the point at the time.) If 10GB was _ever_ really appropriate for such a card, and the gap between the 10GB 3080 and the 12GB 3080 Ti didn't need filling, why did Nvidia release a 12GB version? Answer: the 3080 should have had at least 12GB all along, and the 3080 Ti should have had 16GB. And why did the 3060, released a bit later, have 12GB, 2 more than the 3080 and 4 more than the 3070?

Value is often neither progressive nor linear. There is a floor that must not be breached. Sometimes 10GB is not enough but 12GB is, and in those cases the 10GB 3080 is valueless because it fails to meet the need: it's below the floor. If one has hard and fast criteria for a tool, one buys a tool that meets all those criteria. If he compromises his criteria to make the lesser tool suffice, then he has made a deliberate choice to lower his floor.

I don't care a lick about ray tracing because the performance hit is always too high and the images are overblown and distracting. But I do like most of the graphics settings turned way up. So when prices returned to some semblance of sanity, I upgraded my 2080 Ti to a 6800 XT. With 16GB available, I never have to fiddle with settings to avoid overflowing the buffer. VRAM is just not a thing I ever have to worry about.
I've read somewhere that the amount of VRAM is not just decided on a whim; there are certain limitations and restrictions when choosing the capacity. The 3060 having 12GB was, and I'm paraphrasing as best I can, due to its memory bus being 192-bit rather than 128-bit. That's also why the 3080 12GB had better specs overall: the additional 2GB of VRAM required a wider bus.
@@Fr33mx Yes, but all these GPUs are in development for years. There's plenty of time in that cycle to decide on how much VRAM a GPU is going to need for the performance level and to design the circuitry accordingly.
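The bus-width constraint described above can be sketched roughly like this, assuming each GDDR6/GDDR6X chip has a 32-bit interface and comes in the 1GB or 2GB densities common in that era (clamshell configurations, which double the chip count per channel, are ignored here):

```python
# Plausible VRAM capacities for a given memory bus width.
# Each memory chip occupies a 32-bit slice of the bus, so
# chip count = bus width / 32, and total capacity is
# chip count x per-chip density.
CHIP_BUS_BITS = 32
DENSITIES_GB = (1, 2)  # common GDDR6/GDDR6X chip densities

def vram_options(bus_width_bits: int) -> list[int]:
    chips = bus_width_bits // CHIP_BUS_BITS
    return [chips * d for d in DENSITIES_GB]

for card, bus in [("RTX 3060", 192),
                  ("RTX 3080 10GB", 320),
                  ("RTX 3080 12GB", 384)]:
    print(f"{card}: {bus}-bit -> {vram_options(bus)} GB")
```

This is why the jumps people describe are always a doubling (10 to 20, 8 to 16, 6 to 12): on a fixed bus, the only step up is swapping every chip for the next density.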
People like to say AMD fine wine; now it's AMD fine milk. In recent titles RDNA2 has been underperforming by a tier or even two, even without hardware RT: Space Marine 2, God of War Ragnarök, Until Dawn, Silent Hill 2, etc.
Looking back at generations prior to the 3080, it really ought to have had 16GB: the 680 had 2GB, the 780 had 3, the 980 had 4, and the 1080 had 8. You can also make the argument that the 2080 and the 4080 should have had 12GB and 20GB respectively. Edit: I'm just looking at the trend of VRAM increases, not performance relative to VRAM or other such factors.
The GTX 780 was really part of the same generation as the 680, though. Nvidia made such strides with Kepler that they didn't feel the need to release the big-boy GPU right away. It also doesn't help that their flagship Kepler GPU wasn't ready until the end of 2012.
I have the RTX 3080 Aorus Master 12GB and I'm glad I took this over a severely overpriced 10-gig 3080 to replace my aging GTX 1080 Aorus Rev. 2. Got it for 936 bucks, which is still quite a lot but pretty much the lowest price this model ever had. The 12 gigs are *barely* enough for Cyberpunk with RT Ultra at QHD, but only unmodded; with extra stuff and better textures, 12 gigs won't be enough for much longer. So my next card will be one with a minimum of 1, but I'd rather wait for the 24Gbit GDDR7 versions so I can get 24GB.
@@Gaphalor Good for you! :) Though that was probably a special offer or "used, as good as new", because right now they otherwise start at 470 and up. And the 7800XT is a current-generation card, without the aftereffects of crypto and co, while my 3080 12-gig is also a top-tier custom model, which pushes the price a good bit higher than necessary compared to a more basic model like the Gaming OC, which at times cost only 760 euros. So the comparison is rather flawed. Fact is, at the optimal resolution of QHD both cards are about equally strong, but the 3080 has features the 7800XT lacks, and in ray tracing the 3080 is still unbeaten and easily leaves the newer card behind. :)
Got my RTX 3080 12GB for €350 with 1.5 years of warranty still left. The guy had treated himself to an RTX 4070S shortly after buying it, and apparently an RTX 4070S Ti shortly after that 😂 he offered me the 4070S too (for considerably more money).
@@wolfwilkopter2231 It was new, sealed in the original packaging; it was a mis-purchase by someone who let it go cheap. Lucky me. On the subject of ray tracing: I don't use it, and the three games where you really see a difference I don't actually play that often. The only ray tracing worth anything is path tracing, and that probably needs another 1-2 GPU generations before it goes mainstream.
@@Gaphalor To each their own, and yes, you got lucky there. :) I use RT everywhere it's feasible without dying a slideshow death, and the difference can sometimes be drastic. It also depends on how much time, effort and tech people still put into rasterization and whether it all comes together... Thinking of Jedi Survivor, RT was practically mandatory there if it was supposed to look even halfway good, and Witcher 3 also looks considerably better and more natural with RT, DD2 is really designed more around RT too, etc. And with the DLSS and FSR 3.1 FG mod I can even play CP2077 in QHD with PT; not optimal, but it works. But that's what the 5000 series is coming for soon, then it'll run properly anyway.^^
You're about to feel inadequate again, since the 5090 is about to release in January. Also, one day soon, people are going to be talking about that 4090 as you are now with the 3080. 😅 The 3080 is a beast that deserves respect.
@@ConfederateBanshee I didn't mean any disrespect to the 3080, far from it. My Gigabyte Aorus is probably one of the best-specced cards around, with 3 HDMI and 3 DisplayPort outputs and a little LCD screen in the fan shroud to display system info or play a custom gif. In fact I feel bad selling the GPU, as it's a bit like selling a family member 😉 but I got an offer for the 4090 I couldn't refuse (30% off the going price for a four-month-old card in mint condition). The 5090 dropping in January won't affect me much, as I'm using a 65" 120 Hz 4K TV as a monitor and wouldn't notice the benefit of a faster card.
No, the RTX 3080 was pretty badly priced and hasn't aged as well. For the 3070 Ti's price in Japan I bought my RX 6900 XT, and with today's drivers I outperform my sister's RTX 3090 in most games. And even for me 16GB is barely enough for my games, so 10 and 12GB is nothing.
In terms of performance, VRAM really doesn't matter that much, as shown in the vid. Barring a few outliers, mainly super-intensive new AAA games, GPUs are almost always limited by their rasterization power before VRAM becomes an issue. Even with RT, which requires DLSS to be playable, VRAM makes little difference after DLSS is applied. The real problem imo isn't that X amount of VRAM is not enough for gaming; it's always a few outlier games hogging memory at 1440p+, Ultra quality with RT, that call for more than 10-12GB, and not many can afford to play at those settings to begin with. However, NVIDIA gimping VRAM on ever more powerful cards while jacking up the price of their entire product stack is truly unacceptable. Lowering settings is fine, but I shouldn't have to compromise on settings with new hardware that's ever increasing in price. That's why I'm sitting on a Titan Xp; it runs everything I play just fine at 1080p. New games mostly suck nowadays anyway. I would only consider upgrading at the tail end of a generation, when there's enough of a price drop.
This is why in 2020, when I saw the RX 6800 16GB, I said yep, that's the one. Today I own it and it's perfect; I even do Ultra RT in CP2077 with FSR 3 frame gen at 1440p high settings.
Framegen can also be a bit of a memory hog and probably would've allowed the 12GB model to stretch its legs more over the 10GB card with all the bells and whistles going.
Also, I feel like something is off about these benchmarks. It makes no sense that the 10GB card is doing better, when the 12GB should perform between a 10GB and a Ti.
The 3080 at launch was the only good deal, but these days games struggle with only 10GB even at 1440p, and honestly sometimes with 12. Midrange cards should really have 16GB minimum these days.
The 10GB 3080 is still fine today, DLSS will keep it going for a lot longer than NVIDIA probably intended. Also nice you can hack in FSR FG quite easily in many DLSS FG games and it works extremely well. So yes, despite the naysayers, it's held up just fine and was a great deal if you picked up one for retail.
I just recently upgraded from a 3060 ti to a 3080 10GB for like 350€. Made all the games I play playable in 1440p and High settings. Super happy with it.
I've had people with 3060 12gb tell me it's better than 3080 10gb despite getting half the framerate, simply because it has more vram. Some people are very delusional.
I wanted a 12GB 3080 a couple of years ago, upgrading from a Vega 64, but it was way too expensive, and I had to settle for a 2080 Ti instead and clock the balls out of it.