For repair, please contact me by following the link on the channel's ABOUT page. Buy me a candy at paypal.me/tonynameless. Tools, schematics, boardview files, etc. are available here: drive.google.com/drive/folder...
Card owner here. To give a little more context on what exactly happened to the card: I use it for a 4-monitor setup. It black-screened while playing a game called Afterimage from Steam, a 2D/3D platformer. I've had the card black-screen on me like this before, but it seemed more like the drivers had just crashed, since a simple restart of the PC would fix the issue. Just a bit funny to me that some indie game would be the one to take out my card. Not even Cyberpunk. Oh well, I got the card back. Runs well. Thanks for the fix. 👍
Some games are unoptimized for power consumption. There was a crappy free first-person Prince of Persia rip-off that had my CPU above 80°C when even Cinebench wouldn't do that.
@@FunkyTechy Makes sense. I assumed as much with what happened here. I'll be using RivaTuner and Afterburner to limit power from now on when messing around with these indie games. Not risking that again.
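Power limiting like the commenter describes can also be done without Afterburner via NVIDIA's `nvidia-smi -pl` flag. A minimal sketch of the idea, keeping the requested limit inside the board's supported range first — the wattage numbers here are hypothetical examples, not values for any specific card (real min/max limits come from `nvidia-smi -q -d POWER`):

```python
# Hedged sketch: clamp a power-limit target before applying it.
# The wattages below are made-up examples for illustration only.

def clamp_power_limit(requested_w: float, min_w: float, max_w: float) -> float:
    """Keep a requested power limit inside the board's supported range."""
    return max(min_w, min(requested_w, max_w))

# e.g. cap a hypothetical 450 W card at 70% of its default limit
default_limit = 450.0
target = clamp_power_limit(default_limit * 0.70, min_w=150.0, max_w=450.0)
print(f"nvidia-smi -pl {target:.0f}   # run with admin rights to apply")
```

Setting the limit itself requires administrator/root privileges, and the board vendor's minimum enforced limit varies per card.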
Next time run those shit games with vsync on; there's no point burning 400 W just to display menus. Overheating this thing and then game over... with no warranty? And the GPU is very new.
Very cool cinematic intro, and in general I can see a big video quality improvement. The scripted take on graphics card repair is really nice and super refreshing, but what I most enjoyed was how you described the processes you took, like how to replace that driver MOSFET. I learned a lot! Thanks NWR!
Love your videos, very informative plus entertaining :). The one big thing you are very good at is knowing where to check for voltage, plus you have very good soldering skills and the right temps and equipment. I'd need some training to be even half as good as you are :). Love to watch new videos, keep 'em coming!
I won this card from a Newegg raffle back at release. I've taken it apart three times now to get good temps; the stock pads were TRASH. Put it in an SFF build and power-limited it to 80%, and it does well at 4K.
@@alexturnbackthearmy1907 i mean i wouldn't put it past a company to send out a few that needed them but didn't have them. companies fuck up sometimes. you just hope that you didn't get one of those.
All hail the epic 1080 Ti... this monster still goes strong in my rig after 6 and a half years of running mostly 24/7, sometimes with several weeks between reboots. I don't know which obscure rituals were performed in the factory where this thing was made, but I REALLY hope they still have enough virgin blood and kittens to sacrifice to whichever galactic entity gave it its blessing to be such a Kaiju.
i mean most cards have no issues. some people are just unlucky and get the ones that have issues. i've been lucky and never had an issue with a graphics card. or any component that i didn't cause myself. i bricked an msi 990fx motherboard over 10 years ago during a bios update. replaced it with a 990fx fatality board which i never updated the bios. haven't had an issue since then.
Yeah, pretty much this. If I pay 2.5k random currency units for a rig and 1/3 of it is for the graphics card, I'm way more inclined to get it fixed for 100 or 200 bucks than if I had only paid 1.5k for the rig with the card making up 1/4 of it.
They don't build them like they used to. Prehistoric 1080 Ti in my PC; I just cleaned it out and changed the thermal paste and dropped 7-10 degrees in gaming, so the original paste had dried up some. Still runs everything I play with it.
Tony can't actually answer the rare cards vs budget cards question because he only sees broken cards and he only sees broken cards people feel are worth repairing ( basically people only feel it worth the effort to save an expensive card these days so his experience is highly skewed). The best person to answer that question would be the RMA department of each card maker.
Also, lower-end cards don't get bought by overclockers, who buy the expensive models, destroy them, and then send them in for repair if they think they had a golden chip.
I think high-end cards break more easily simply because they put out way more heat, they are heavier, and the new 12VHPWR cable is really bad (8-pin connectors are better in every single metric, including safety). You can fix the weight problem by mounting the card vertically (at the cost of all the other PCIe slots), or you can buy one of the newly released RTX 4090s that don't use the oversized cooler and are merely 1 kg now. Reminder that after Nvidia had finished ordering GPU coolers for this card, they decided to lower the power consumption (and thus the heat generated), but because the coolers were already produced, they slapped them on the new 4090 anyway, and that's why you end up with cards bending. I know Gigabyte made a newer 4090 with a properly sized cooler, but maybe other brands did as well.
My MSI 7850 Power Edition lasted 11 years. Not exactly expensive, but the components were all top notch, and it spent its entire life on an open bench with periodic dust cleaning.
I have "only" a Gigabyte 3080 Aorus Master. Someone already wrote about the crappy original thermal pads; mine were a crumbly mess when I replaced them last year. My thoughts on reliability: it all depends on the operating environment and how well the card is taken care of. IMHO you should re-paste and re-pad every two years, and if these cards are mounted horizontally they'll need a support bracket if you don't want to experience a cracked PCB. They weigh around 2 kg, after all. My PC is always online, so my card is in use 24/7/365 with workloads from just browsing to heavy gaming.
The 3000-series Gigabyte cards (and other brands too) have crappy thermal pads that start leaking oil around the PCB and even onto the fans when mounted horizontally. I have one (a 3080 Ti), and the first thing I did was change the thermal pads and add more on the back for extra cooling. If that lowered the temps when the card was new, think about what damaged pads and a PCB full of oil would do.
I'm a mechanical engineer, and I know for a fact that a GPU cooler can easily be made to a precision of around 0.1 mm or better with absolutely no extra cost. The difference between a flat piece of metal that needs a 1.6 mm thick thermal pad and a profiled piece that needs only thermal paste is just the mold: a flat mold gives a flat piece of metal, while a profiled mold gives a profiled piece that fits well on all the memory chips and every component that needs cooling, without the need for thick (and juicy) thermal pads. The memory chips then barely need pads at all; liquid thermal pads would do, and they're good to go. All you need is 4 more screws, which are added to almost all newer cards anyway. This won't help when the card is bent (thermal pads do help there), and we all know owners let their expensive cards bend horribly.
@northwestrepair Would it be possible to give an average price for fixing a card with a "normal" problem requiring the average amount of work you do? I have a couple of broken cards and am wondering if they're worth fixing. What country are you in (for postage cost)?
This! I have an ASRock Radeon RX 6800 XT Taichi Gaming and I would love to replace the pads, but I cannot find anything that will tell me what size to get.
All those cards come off the same production lines. And even if they didn't, they still all use pretty much the same standard components. On some cards the engineers will feel creative and try something funny that may or may not work better than the default, but that's not too common. However, running data traces right next to the edge where the board bends at the connector is not one of the brightest ideas of this era.
When the excess solder is squeezed out (like at 3:15), why does it not short underneath the DrMOS FET? Is it again about the flux and surface tension?
The 3 pads underneath the MOSFET are all that's really making a connection, I think. With solder and a good amount of flux, the solder should move only to traces and pads; it kind of avoids anything that isn't metal. I think squeezing it just pushes the excess out and leaves just enough on the pads underneath.
@@Laggyness Thank you. I only have soldered a few ICs with pads underneath. I have always been miserly with solder, worried about a short underneath. I will be a bit more relaxed going forward!
Hey, I have this same GPU. It failed after a few months of ownership back in 2021. After having the card for longer than I did they finally sent it back "repaired". Near as I could tell they only replaced a fuse as it was the only area that showed signs of rework. However, I don't have a microscope so I probably couldn't tell if they worked those chips. Also, it was a different fuse than the ones you replaced. It was the fuse above the first one you replaced, the one with a Z on it. I said "repaired" (with quotes) because near as I can tell "Z" should be a 20-amp fuse (and I see you replaced it with a 20 as well). Gigabyte replaced it with a 10. I hadn't used the card much since I got it back because I didn't trust it. Since I bought an Asus 3080 Ti while waiting to get it back I have been using that. But because the warranty is up this year I've recently (as of 2 days ago) switched to using it. Haven't really pushed it yet, I'm somewhat afraid that it blowing again might somehow damage other things in the computer, though I can't imagine how. If you for some reason want more details/pics of the card I can send you them. I'd link them but putting links in comments pretty much always gets them flagged as spam. Edit: Looking back at the pictures, one of the driver mosfets now has a different date code, so they did replace at least one of them. The fuse still appears to be incorrect though, so idk.
What you're seeing is a result of nVidia selling GPU dies to their AIBs at ridiculous prices, and the AIBs then doing a bunch of cost cutting on their component selection which results in flagship cards prone to failures as they dial in their component selection to only last as long as the warranty period. This is why eVGA told nVidia to kick rocks and exited the GPU market, because nVidia was forcing their hand to sell junk. I hope you remember this next time you consider purchasing an nVidia card. I will no longer do business with nVidia, I consider them the scummiest in the industry. I've made some sacrifices switching to Radeon, but now I at least know that I'm not going to need to constantly worry about an expensive paperweight the very moment the warranty is up. AMD is also pushing hard to reach feature parity with nVidia, so it's a good time to start supporting them if you can manage without a CUDA equivalent for another 1-2 years.
@@K31TH3R i thought about going with amd, but there are too many issues. way more than intel and nvidia. i'd rather my components and drivers to just work. also i never go with the higher end stuff. i stick to middle of the road. it does what i want for years and it doesn't cost me an arm and a leg. i don't need 4k, i don't do any editing or any workload like that just gaming and normal computer stuff.
@@Stackali When was the last time you ran a Radeon GPU in your machine? AMD is still suffering a bunch of (deserved) bad press for a terrible launch of RDNA in 2019, and another poor launch in 2022 with RDNA3. I bought a 5700 XT on launch, and the card was borderline unusable for the first 3 months and wasn't really fully reliable until about 6 months later. It was the worst product launch I have ever seen from AMD. However, when I retired it in 2023, it was just as reliable as any other nVidia GPU I've owned, if not more so, as my luck with nVidia meant I statistically would've had a failed GPU in 2022; they rarely last longer than their warranty period in my PCs. RDNA2, on the other hand, has been about as close to flawless as a GPU architecture gets for the entirety of its existence. I'm running a Liquid Devil 6800 XT, and it has absolutely been that "plug it in and go" experience we all want from PC components; while the productivity and RT side of things is lackluster, the card often has no issues performing way above its weight class and often matches the RTX 3090 in raw rasterization. I also have zero fears about the reliability of the card, because it is massively overbuilt, with a 14-phase VRM that maxes out at 42°C even if I push 400 W through it. RDNA3 is also now in a very good state despite the rocky start, with 7900 XTX and 7900 XT owners having already done the work of testing and bug reporting that has ultimately made the 7800 XT one of the best all-around GPUs AMD has ever launched, with little to no issues from the day it went on shelves. It's also expected that RDNA4 at launch will be a repeat of RDNA2's flawlessness, since it's more of a refresh and refinement of RDNA3 than a completely new architecture. RDNA4 will also be targeting the midrange performance segment that you fall into. If you're willing to give Radeon a try, RDNA4 will probably be the best time to do it. Worst case scenario, you can just return it.
IMO it's worth a shot, because we all need to be sending a message to nVidia that their behavior isn't acceptable.
I'm OCD about temperatures; I always end up replacing TIM and pads with top-drawer stuff, adding fans and ghetto-mod ducts/baffles. I think a problem with devices over the last couple of decades is designers letting various chips and SMDs run at temps that are technically within spec (taking the manufacturer at their word) but that kill them in the medium/long term. I noticed that the MSI Gaming X cards I've been running for a couple of years (a 1070 and 2070 Super) both have a scattering of pads on small, even tiny components which would perhaps have been left to cook on some GPUs, and there might well be other components that could use cooling. TL;DR: I think cooling on graphics cards is -often- usually compromised and just Not Very Good, even if the GPU, VRM, and memory _seem_ to be running at reasonable temps. I doubt expensive cards like this one are much different.
This is the reason most cards fail. Bottom barrel heatsink design with hilariously bad fans. On top of way too slow stock fan profiles. Yes the core temps are more than fine, but other components on the card are getting cooked. Running at a high fan speed ensures that this happens much much less, as the fans will continuously push air over the non-directly cooled components (even if it's hot). This is why cheaper cards are generally speaking more reliable. Because they don't draw that much power to begin with to cook anything. TLDR: Run fans at 70%+ whenever the card has any meaningful amount of power going through it.
@@LiveType Agree, on the same page. I have the fans on the 2070 Super kick in at 43°C and on a steep curve that keeps it under c. 56°C (nearer 50°C in most games). I used to use Afterburner but switched to Fan Control (which was running everything else in the PC) because you can set the rate of ramp up/down. Highly recommend it.
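The steep temperature-to-fan-speed curve described above can be sketched as simple linear interpolation between set points, which is essentially what tools like Fan Control or Afterburner do when you draw a curve. The specific points below are made-up examples, only loosely modeled on the "kick in at 43°C, keep it under ~56°C" description:

```python
# Hedged sketch of a temperature -> fan-percent curve via linear interpolation.
# Curve points are illustrative, not anyone's actual settings.

def fan_speed(temp_c: float, curve: list[tuple[float, float]]) -> float:
    """Linearly interpolate fan % from a sorted list of (temp, percent) points."""
    if temp_c <= curve[0][0]:
        return curve[0][1]       # below the curve: idle speed
    if temp_c >= curve[-1][0]:
        return curve[-1][1]      # above the curve: max speed
    for (t0, p0), (t1, p1) in zip(curve, curve[1:]):
        if t0 <= temp_c <= t1:
            return p0 + (p1 - p0) * (temp_c - t0) / (t1 - t0)

# A steep curve: fans start at 43°C, hit 100% by 56°C
curve = [(43.0, 30.0), (50.0, 60.0), (56.0, 100.0)]
print(fan_speed(53.0, curve))  # → 80.0
```

The steepness between set points is what keeps the card pinned near the target temperature; a shallower curve trades noise for higher steady-state temps.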
Quick question: if a lot of thermal pad grease were to ooze into the PCIe slot, could it cause the PC to restart? I have a 4090 with an Optimus waterblock; the backplate is a big chunk of aluminum with a full-coverage Fujipoly thermal pad that oozes grease into the PCIe slot, and because my card is mounted vertically, the grease goes straight into the slot. I've suddenly been having restarts while playing games, anywhere from 5 minutes to an hour in. Nothing has changed on my PC, so this is the only thing I can think of. It did this with my 3090 and both my 4090s; all had Optimus waterblocks with a full-coverage thermal pad on the back of the PCB.
Only if it somehow prevents a connection on the card (paste and pads are entirely non-conductive, so if they get between the card pins and slot pins, that contact is disconnected). Either way, throw that pad away; it doesn't cool anything anymore. And the PCIe slot needs some cleaning too.
I bought a 32 gb MI50, that's a datacenter version of the Radeon VII. How about those? Are they super reliable? They have nothing but carbon pads, no paste.
"Only one way" you could also use continuity test from gate to source/drain on suspected MOSFET. If you're going for CC and thermal cam visual inspection is just wasted time. Also, using BW or Rainbow HC color scales are better for spotting very small differences. Last thing, why not replace ALL THE transistors? I highly suspect the card was operated in a suffocated case at very high temperatures and next MOSFET failure probability must be quite high.
More of a certain GPU model on the market = a higher number of failures. Which means that despite all the GPUs that end up on your table, the overall failure rate remains low, unless the GPU has a design flaw, like the 4090s with their melting connectors.
Why don't you use a bench power supply to inject voltage after the fuse? It would be a lot easier than having to replace fuses just to find out which MOSFET is shorted. Of course, if you are going to fix the card, you need to replace the fuses anyway, but if the GPU core is shorted, it's game over.
@@northwestrepair Ouch... 😬 Yeah looks like they hit with that copyright nonsense for even not-so-well-known bands. But seriously, there has to be some free synthwave/chiptune tracks you could use somewhere. Can't risk anything else unless you make them yourself these days.
I have that card brand new, on sale for $875, with room for negotiation. And yet I'm having a hard time selling it; I'd let it go for $825 if somebody wanted it in a build offer from me. I can pair it with an i7-10700KF for 4K gaming or an R9 5950X for content creation and 3D modeling.
I think failure through no fault of the user is fairly rare, but some people do kill their components, either by not cleaning them or for other reasons, like high humidity without running a dehumidifier.
Could anyone help me please, I am looking at buying a new rtx 4070 ti super gpu, do PNY make better cards now, or is ASUS a good choice, if anyone has knowledge which brand is better for reliability/longevity. I don’t intend to Overclock the gpu. Any help would be appreciated 👍🏻
So, since EVGA is out of business, what's the most reliable brand recently according to you? Gigabyte is the only brand I never had a problem with, but I guess you have a lot of fixing to do on these cards too.
EVGA definitely never held that title, considering the blown caps of the 900/1000 series and the ripped PCBs/pads under the memory and GPU on the 30 series. He's had at least 10 EVGA cards just ripped to shreds and had to retrace them. EVGA got out of the game because they couldn't make their cards any cheaper without them falling apart.
I would avoid Gigabyte personally. Their RMA system was hacked a couple of years ago and wiped entirely during the 3000-series release, which fucked over a lot of people who had sent their stuff in for repair.
I know Asus got a bad rap recently, but I still think they make the best hardware. IMO Gigabyte is on my avoid list, MSI seems meh, and ASRock has been doing a good job recently, but I don't think they make Nvidia cards.
MOSFETS have their own "silicon lottery" just like CPUs and GPUs , sometimes you get one that's a bit weak and it works for a while or until it sees a load or thermal limit it doesn't like and it falls over. They are particularly problematic since they both tend to be used in high power situations and usually fail to short circuit, hence the blown fuses. Keep them cool, limit the power, and you'll reduce the chances of a weak one seeing a condition it doesn't like.
Don't know anything about fail rates but personally I always keep my distance from ASUS GPUs because 80% of factory recertified/refurbished in my stores is from that brand. Especially if it is Geforce.
I've actually had 3 Asus cards (1060, 3070, 4070) and haven't had an issue. The other 3 were an MSI 1070 Ti, an EVGA 760, and a Sapphire HD 7770. I haven't had an issue with any GPUs.
Tony, you didn't answer the question. _Do rare and expensive graphics cards fail less than budget cards?_ Also, I have a question: you said the card was heated to _at least_ 150°C. So, just to be clear, you have to have the entire card heated up hot enough to roast a turkey, _before_ work can even begin? Is that what needs to be done every time? The hot air itself isn't enough? (150°C = 302°F, turkeys are often roasted at around 325°F . . . close enough) The polymer caps can handle the 150°C?
Hot for a turkey or for a powered-on graphics card is not hot for a graphics card that isn't running. Preheating the area you are working on lowers the duration and intensity of heat needed to finally melt the solder and replace the part; this is standard practice if you are not butchering what you are working on. Lead-free solders melt in the neighborhood of 425°F, so a 320°F temperature is a STARTING point for melting lower-temperature leaded solder, which you wouldn't even find on these cards. These cards are made of many layers of copper, and without preheating, the heat you put into the card would quickly race away from the solder you want to melt, which means you'd need higher iron temperatures, in the 600°F range, to have a chance of removing even a simple component, and nothing on the board is rated to handle that much heat.
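The temperature figures tossed around in this thread check out with plain Celsius-to-Fahrenheit arithmetic. A quick sketch; the alloy melting points used are commonly published values (leaded Sn63/Pb37 at roughly 183°C, lead-free SAC305 at roughly 217°C), not numbers from the video:

```python
# Quick check of the temperature figures in this thread.
# Alloy melting points are common published values, used here for illustration:
# leaded Sn63/Pb37 ≈ 183°C, lead-free SAC305 ≈ 217°C.

def c_to_f(c: float) -> float:
    """Convert Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

print(c_to_f(150))  # preheat temp: 302.0°F, roughly turkey-roasting territory
print(c_to_f(183))  # leaded solder melts around 361.4°F
print(c_to_f(217))  # lead-free melts around 422.6°F, i.e. "neighborhood of 425°F"
```

So a 150°C board preheat sits well below either alloy's melting point, which is exactly why it shortens rework without reflowing neighboring components.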
I've noticed the more you pay, the more they fail, and today's GPUs can't even survive the warranty period, which in Europe at least is 3 years. Complete POS GPUs they are delivering, and remember most of these cards will be overclocked and mined on in the used market.
My RTX 3090 shows a checkerboard pattern in the middle of the screen for one second, but it won't crash in games or video rendering. Could you give me a hint? I've googled it and the opinions are all over the place.
I don't get it. Why are the newer generations of cards getting as big as bricks? Isn't the PCIe slot only supposed to support a certain amount of weight? What I can see is many cards being twisted or bent, making them prone to damage from their own weight; it's counterproductive. I remember my last GTX 1660 was long but not as heavy as these 3090s/4090s, and it appears to be much worse with the 5000-series Blackwell.