@@AmaroqStarwind It's not that I dislike RT. It just feels like we have been focusing on something that could have been simulated with pre-baked code, similar to how Quake handled lighting. What I disliked about RT is how it was marketed harder to devs than to consumers: as a shortcut so they don't have to prebake lighting in the game. Let the customer's GPU do it instead. I'm also not happy that many games will go on about RT when their devs haven't even implemented proper physics or environmental textures.
This is why my Mum was smart enough to buy me consoles from the Sega Master System up to the PS2. Guaranteed game support for the console's whole life. We had a Win 95 PC that had DOS games, Doom, Wolfenstein, etc. I rejoined PC gaming in 2019, as it makes sense now that PC setups are usable for many more years, like you said.
I looked into the "only supporting integer" stuff. The "shader processors" that 3Dlabs are talking about are not the full equivalent of modern fragment shaders. 3Dlabs actually split the fragment shaders across two separate cores, these shader processors and the "texture coordinate processors", and the texture coordinate processors do support full 32-bit floating point. These two cores are detached from each other, presumably with a longish FIFO between them. The idea is that the texture coordinate processor does all the texture coordinate processing, fetches the textures, does dependent textures, and then passes the final texture fetch colors on to the shader processors, which do the final calculations with 10-bit integer precision. The documentation explicitly points out that the texture processors can pass any raw value they calculate through to the shading units without needing to do a texture fetch. And if you check the ARB_fragment_shader extension (notice how the contact details were 3Dlabs themselves) you will notice that it only requires the full 2^32 range for positions, normals, or UV coords. Color values are only required to support the range of 2^10. This split between texture processors and final color processing was not unique to 3Dlabs. I believe all major graphics vendors implemented such a design for their register combiner and early pixel shader GPUs. It's why Pixel Shader Model 1.x and Pixel Shader 2.0 (not 2.0a) have the concept of "texture instructions" and "arithmetic instructions". I don't think the Pixel Shader Model documentation ever actually stated those types of instructions would be run on two completely independent shader cores... but that's what was happening under the hood. I believe it was Nvidia who unified both jobs into a single "pixel shader" core with their FX line of GPUs, and this unification was part of the reason why those GPUs had bad performance, because they had to use 32-bit floats for color.
On the plus side, it was the first GPU with proper flow control. Even the Radeon R300 still had this split; I'm seeing references in the register documentation to separate "texture microcode" and "ALU microcode".
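To make the precision gap above concrete, here's a tiny sketch (my own illustration in Python, not anything from the 3Dlabs docs): coordinates ride through in full float, but a 10-bit integer color path clamps to [0, 1] and can only represent 1024 distinct levels.

```python
def quantize_10bit(x: float) -> float:
    """Mimic a 10-bit integer color path: clamp to [0, 1] and snap
    to one of 1024 representable levels."""
    x = min(max(x, 0.0), 1.0)
    return round(x * 1023) / 1023

# A texture coordinate keeps full 32-bit float precision...
u = 0.123456789
# ...but a color passed on to the shading units is snapped to the nearest
# 1/1023 step, and anything outside [0, 1] is lost entirely.
c = quantize_10bit(u)        # about 0.1232, not 0.123456789
hdr = quantize_10bit(1.5)    # clamps to 1.0
```

This lines up with the ARB_fragment_shader minimums quoted above: 2^10 worth of range is plenty for a clamped color, while coordinates genuinely need the 2^32 range.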
Hmm, interesting, in particular the bit about the FX stuff. I should still have one of the big boys of the FX line in storage, and at the time I had a bunch of discussions with people who were "pooping" on it due to articles that came out dissing the architecture. Sure, it was imperfect, but it was far from the absolute dog it was made out to be; at least the higher-end models were quite OK. Expecting big things from the FX 5200, or whatever it was called, was a strange idea anyway.
The 3Dlabs split of higher precision texture coordinate processing and lower precision fragment processing seems most similar to the Geforce3 and Geforce4. NVIDIA had a very similar split there, and this can be seen in the NV_texture_shader and NV_register_combiners split. At least some modern architectures still have vestiges of this. On Intel GPUs, the texture instructions conceptually send a message to an asynchronous unit that performs the texture look up. Once the data has been fetched and processed, it is delivered back to the shader execution unit. Depending on numerous factors (e.g., caches) the texture access can take a wildly variable amount of time. The texture sampler hardware is on a separate piece of the die from the execution unit. The huge difference between current and vintage architectures is that current architectures are asynchronous (other things can happen in the shader while waiting for texture results) and vintage architectures were synchronous (the next phase of ALU operations would block until the texture unit was done). As I mentioned in the video, Geforce FX only had flow control in the vertex stages. Fragment processing had predication that could be used to emulate if-then-else structures (similar to conditional move instructions on CPUs), but loops were impossible. You are correct that a lot of hardware from this era has a concept of alternating "texture phases" and "ALU phases." These architectures have a limited number of texture indirections, so the compiler / driver has to try to group texture accesses into bundles to reduce the number of phases. Each Shader Model has a limit for the number of texture indirections that are guaranteed to be supported. On R200 it's really small... 2 or 3. I know ATI_fragment_shader only allowed 2 passes. I think I'm going to have to do a deep-dive video on the Radeon 8500 architecture. On Intel i915, it's also pretty small. 
I want to say 4, but it has been years since I did any serious work on that driver. The general idea is that a texture phase receives data from a previous stage. That is either the vertex processing / interpolation or a previous ALU phase. Texture lookups occur, and that data is fed to either the next ALU stage or the color / depth outputs. ALU stages have some number of values that can persist by passing through the texture stages (this is related to the "pass any raw value they calculate" bit that you mention) and some temporary values that are lost at each texture stage. Fragment shader color inputs (via the gl_Color and gl_SecondaryColor variables), color outputs (via gl_FragColor), and depth output (via gl_FragDepth) are implicitly clamped to [0, 1] and may have reduced precision. Intermediate calculations are expected to be "full" precision. The spec is a little vague about what full precision would be. Issue 30 of the GLSL 1.10 spec defers to the OpenGL 1.4 spec, "It is already implicit that floating point requirements must adhere to section 2.1.1 of version 1.4 of the OpenGL Specification." Section 2.1.1 is where I assume you got the 2^32 and 2^10 values. That section also says, "The maximum representable magnitude for all other floating-point values must be at least 2^32." "All other floating-point values" would include intermediate calculations in a shader. NV30 and R300 had different representations. NV30 could use either 32-bit single precision or 16-bit half precision. The latter was not directly exposed in GLSL. R300 used some sort of 24-bit precision internally. It met the requirements of all the specifications at the time, but I have some memory of it causing problems for developers that expected / needed full single precision. Given that the P10 shaders can supposedly be quite large, there are a few ways they could have gotten around this.
A clever compiler could analyze the shaders to determine that some calculations would be fine at lower precision. I implemented a similar pass in my compiler. For many values in simple shaders, it is very easy to prove that the range will be limited. That would cover a lot of things. The GLSL spec and OpenGL 1.4 are pretty loose in the definition of precision and the processing of exceptional values (infinity, NaN, etc.). Perhaps this enables the compiler to use a different representation for floating point values... like storing an integer numerator and integer denominator. This is just speculation. There is one thing that still bugs me. GLSL is a superset of ARB_fragment_program (the assembly language shader). If the driver and hardware can do the former, it can, by definition, do the latter. Why not support it? Why not support Shader Model 2.0?
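The range-analysis idea in that reply is easy to sketch. This is a toy interval-propagation pass of my own, not the actual compiler pass mentioned above: if inputs are known to sit in [0, 1] (texture fetches, clamped colors), the compiler can bound every intermediate result and prove that a low-precision representation is safe.

```python
# Toy interval analysis: propagate (lo, hi) ranges through shader-like ops.

def iadd(a, b):
    """Interval addition: endpoints add."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: the extremes are among the endpoint products."""
    products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(products), max(products))

# Texture fetch results and clamped vertex colors are known to be in [0, 1]...
tex = (0.0, 1.0)
col = (0.0, 1.0)

# ...so a classic modulate-plus-add provably stays within [0, 2]: fine for a
# narrow fixed-point unit with one bit of headroom, no 32-bit float needed.
result = iadd(imul(tex, col), tex)
```

For many simple shaders this kind of bound covers most of the arithmetic, which is exactly why "prove the range is limited, then drop precision" works as a compiler pass.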
@@TalesofWeirdStuff Looking for someone like you for help with a project in id Tech 4. I'm trying to force the game to use more modern JPG formats by changing the hex code and renaming it. Forcing 3Dc instead of RXGB DXT5 would be a nice little upgrade to squeeze more memory bandwidth out of the engine.
@@supabass4003 Why did it die? That's a mystery, because high-end cards, especially that high-end, aren't even likely to have bad caps. Except on motherboards; I usually saw bad caps more in 2004-2005.
I'll have to re-test Doom 3 on my freshly rebuilt Athlon XP 2500+ system: an NF7 2.0 with a 9700 Pro. If memory serves, I think I upgraded to a 6800GT to play Doom 3.
I bought a FireGL X1 for about $1,000 USD new. I still have it somewhere, actually. I was able to play games back in the early 2000s at insane resolutions on a super high-res CRT I used to have. It was awesome.
Just got to 12:18 and the claims about 512MB of RAM. Very serendipitous that you should mention Doom 3 too... So here's the thing: there was a big competition around the release of Doom 3 that I remember to this very day. The prize? The world's first and only special-edition 512MB graphics card, the point being it would allow you to play on "Ultra" graphics settings. To this day I can't find any mention of it online; no one seems to remember it, and there's no trace of it. Perhaps this is something you might be interested in looking into.
I have some vague memory of that too. I don't recall for sure if that was Doom 3 or some other game, but I do recall something about that. I can't even imagine what a slideshow Doom 3 on Ultra settings would be on this card.
Those later cards seem like they might be more interesting. Did you have any of the 3Dlabs cards back in the day? I am curious to hear about people's real world experiences with them.
@@TalesofWeirdStuff I was born far too late for that. I got my first 3Dlabs card (Wildcat Realizm 200) in 2011 at a flea market, without a clue about the significance it had. It was already outdated by then, but it ran TrackMania and Minecraft okay-ish on the terrible hand-me-down computer I had. I have collected a few other cards since, and I even managed to score some ZiiLabs ZMS stuff (the embedded ARM SoCs with StemCell GPUs, derived from their existing IP).
@@TalesofWeirdStuff I remember the Realizm cards, whilst impressive for some professional applications, had real issues if you attempted to game on them.
@@TalesofWeirdStuff Go to ModDB, Prey 2006 Remake, and look at the second photo's comments. I'm using DXT1, DXT3, DXT5, RXGB DXT5, TGA RLE, and JPG 4:2:0 at 87-96%, making up an over-60GB file that absolutely maxes out an RTX 4090's memory system, using custom resolutions for EVERY SURFACE in the game. The goal is to get Nvidia's attention and drop path tracing beyond the screenspace stuff I'm stuck using, scaling back to development hardware for the engine itself.
@@TalesofWeirdStuff As a former driver developer, though not in the display space (and formerly part of NTDev in Redmond), I have a ton of questions about how you approach core display miniport development for these beasts. Without disclosing anything sensitive, I’d love to see a video about the general process, from spec sheet onwards (especially once we entered the era of programmability). (Also, this video was fascinating, as are some of your detail followup comments.)
Young people today will never understand how hard it was to get things running right back then. Today it's basically plug and play, especially when it comes to Steam and how it automatically downloads all dependencies.
And yet things now are not working as well as they should. For example, I have a relatively high-end setup and I have more problems with games than I ever had. The amount of software within hardware now means the possibility of conflict is ridiculously high. You could argue we have too many options in too many areas.
I absolutely do not miss my first GPU, the GeForce FX 5600 XT. At that time Nvidia used XT to mean the "low-performance" version, something 15-year-old me didn't know when I bought my first pre-built. I upgraded to a 6800GT later that year, and that thing was a beast at the time. Thanks for this blast from the past.
The FX 5600 XT, if I recall correctly, was a system-integrator-only SKU; I think it came as standard in some branded PC package deals? Pre-built can mean a local-shop-built PC package, or something totally different like an Acer, HP, or similar. I recall at least Medion had FX 5x00 XT system-integrator-only cards that were strange ones; they seemed to be binning rejects or bus-width-gimped GPUs. I had either a 5500 XT or 5600 XT that was a "faulty/disabled" 5800 where the memory controller was limited to 64MB at half speed or something like that. Very strange card, but not very useful.
I can relate. My first PC, around 2000, had a PCI Nvidia graphics card. This was when AGP was standard. I was amazed how much better my friend's PC played Midtown Madness using integrated graphics than mine could play it with a GPU. Mine could barely play it at all. But hey, I knew enough to make sure my PC had a GPU. ☠️ A $500 lesson. That PC was a true potato. Icing on the cake was that it ran Windows ME.
@@Chbcfjkvddjkiuvcfr Heh, I still have a pair of 7950 GX2s. As useful as perfume on toast. But it did turn heads, heh, if not much else; quad SLI never worked right. I never really used those cards other than to test and verify how crap they were. Still have a 6800 Ultra and a 5950 Ultra (V9950 Ultra), which both actually worked really well and did what it said on the tin. Have an 8800 Ultra too, etc. The single cards were good, but it was only the 8xxx series that worked well in SLI for me; I even have a pair of 8600-somethings in SLI that work surprisingly well, roughly 8800 Ultra-equivalent performance but cheaper and less prone to the GPU desoldering itself from the PCB spontaneously. The 8800 Ultra has problems, if any of them survive still in use; I have 2 dead ones and 1 working, and the working one only works because I don't use it except to briefly benchmark things. The 8800GT was good, have a pair of them too, 512MB ones. The GTS is okay but a lot slower than it should be, and 2 slots wide without justification. Also have the 8800GT "Alpha Dog", which is great if it works, but it has the same problem as the Ultra: you'll have to dig through a stack of them to find a working one. If you do, cooler-swap it immediately and it might live on. Triple-SLI 8800 Ultras were a hoot though: a million FPS, but the graphics bugs made everything unplayable unless you only played one game and found a magic modded driver for it; otherwise, disable SLI and run just a single one.
I had 2 Voodoo 2 cards in SLI back in the day. Great stuff! I used it to play the original Tomb Raider with a patch that allowed it to support 3DFX glide.
@@mproviser Remember those school guys? "I have 2 Voodoos in SLI at home." Then when you asked to play it: "My parents don't allow any friends from school." Even if you broke into his house to play it, he'd say "my father keeps it in a vault."
In 2004 I played through Doom 3 on a Radeon 9200 SE, which back then cost about $50 new. The SE meant that the RAM bus was cut to 64-bit width, about 1/4 of the typical 256-bit data bus. It ran Doom 3 with almost identical performance to this thing.
I played Doom 3 when it came out on a GeForce4 MX 440, which was basically just a faster GeForce2, at 640x480, and it ran better than this. There are videos on YouTube of the GeForce4 MX 440 playing Doom 3 just fine. I think there's something wrong with this card or setup; it should run Doom 3 better than a GeForce4 MX 440.
From tomshardware, "Old Hand Meets Young Firebrand: ATi FireGL X1 and Nvidia Quadro4 980XGL" The FireGL X1 128 MB is shown at $749, and $949 for the 256 MB version.
@@TalesofWeirdStuff Correlates with what I found (mostly German hardware/IT sites that had reports on these cards from 2002-2003). :D A German hardware website had a report with benchmarks stating that the ATI FireGL X1 256MB was listed and sold for €995. The 3Dlabs Wildcat VP990 Pro was MSRP €999 before taxes, €1160 with VAT applied. The VP760 was listed for $449, the VP870 for $599, and the VP970 for $1199, as stated in a report from June 24th, 2002 on a German IT website.
@@TalesofWeirdStuff I found the price in an Australian magazine from the early 2000s (Atomic: Maximum Power Computing, issue 023, December 2002), and it said $2,595 (Australian dollars, presumably). The magazine is dumped on archive.org if you want to see it yourself.
@@TalesofWeirdStuff Not really. Due to the PS2 and original Xbox, games ran on DirectX 8.1 until around 2006, with some games even in 2007+ still running on that API.
@@TalesofWeirdStuff Ah, that Radeon was a sweet card... it died on me, and I switched to a GeForce2 Ti (or was it a Pro?). The speed bump was quite noticeable on an Athlon at 1200 MHz in Max Payne, but Nvidia's image quality was still lagging behind ATi's.
I remember back in the day owning a £600 Iiyama Vision Master Pro 510, a 22" NF Diamondtron monitor, and how cool that was. A great time to be in computing.
That's not just a GPU, that's a workstation card, and I'd recommend the brand and the model. 3DLabs cards were meant for workstations doing CAD and complex modeling, and they carried as much as quadruple the video RAM of a consumer-market card. I remember in 1998 these workstation cards had 64MB of RAM at a time when consumer graphics cards only had 16MB available. They're built for heavy workloads.
I had a GeForce 5800 Ultra and an AMD Athlon 2800+ back during the Doom 3 days. It was my first gaming PC. Not sure how that compares, spec-wise, to what he's using here. That game ran smooth on my setup, I remember that. That setup lasted me quite a few years.
In the follow-up video, I run a PCX 5900 on an Athlon X2 4800. The GPU is probably about the same as the 5800 Ultra, but the CPU is... a bit more. I got decently playable numbers on that setup.
While you were scrolling through the extensions, I could see both ARB_vertex_shader and ARB_fragment_shader (and the obligatory umbrella ARB_shading_language_100), so it absolutely looks like it has full GLSL support. Which makes sense, because 3DLabs was at the forefront of GLSL and GL2 development at the time. I had never heard of 3DLabs before reading their whitepapers and slides about it back then. The big question, then: does it have a GLSL noise() function that doesn't always return 0? Because I want one if it does.
That is a fabulous question! I will for sure check the noise function. For a while Mesa had a real noise function, but we took it out because lower-end GPUs couldn't do it; it generated way too many instructions.
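For anyone wondering why noise() costs so many instructions: even a bare-bones 1-D value noise (a hypothetical sketch of mine, not Mesa's actual lowering) needs an integer hash, two evaluations of it, and a smooth blend, and real 2-D/3-D Perlin-style noise multiplies that work per extra dimension.

```python
import math

def hash1(n: int) -> float:
    """Cheap deterministic integer hash mapped into [0, 1)."""
    n = (n << 13) ^ n
    n = (n * (n * n * 15731 + 789221) + 1376312589) & 0x7FFFFFFF
    return n / 0x80000000

def value_noise(x: float) -> float:
    """1-D value noise: hash the two surrounding lattice points and
    blend them with a smoothstep weight."""
    i = math.floor(x)
    f = x - i
    t = f * f * (3.0 - 2.0 * f)  # smoothstep
    return hash1(i) * (1.0 - t) + hash1(i + 1) * t
```

Each call already expands into shifts, multiplies, and a blend; on hardware with a handful of instruction slots, that's a whole shader budget by itself.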
If Intel's cancelled Larrabee cards had actually been released, you probably would've had a field day with them. Just out of curiosity, have you heard of a game called 4D Golf? I'd be curious to see what someone of your expertise thinks of its rendering code. The engine was recently made open source, and the developer also has a comprehensive build log on YouTube.
I played Doom3 on a GeForce3 back in the day. It didn't run very well, and I'd acquired the GF3 THREE YEARS PRIOR specifically to play Doom3, because that's the GPU that Carmack mentions at the Macworld Expo where Doom3 was originally unveiled. Then the game took 3 more years to release, and in that time GPUs got better and faster, so by the time my Doom3-intended GPU actually rendered the game, it was antiquated and bare-minimum hardware. I did have fun playing with NV_register_combiners in OpenGL though, creating a normal-mapping demo, and manually creating normal maps in Photoshop 5.0. It basically was a fixed-function toggleable dot-product that you could route different inputs into. It was the earliest most basic "programmable graphics hardware" you could imagine - no actual "shader" support, but you could do normalmapping.
I've recently seen Doom 3 running in a very similar manner, on my PIII 550 with an Nvidia FX 5500 PCI. It's running drivers that aren't entirely meant for it, the older ones that only support the 5200, as for whatever reason the newer ones for 98SE don't let System Shock 2 run in its original form, as well as a couple of other games. But even so, it comfortably runs Fahrenheit and Max Payne 1 & 2, though I primarily use it for older games. My Athlon Thunderbird 1.13 with a Voodoo 3000 can technically run modded Doom 3, but it looks like some kind of fever dream. For regular work, my 3800 X2 with an Nvidia 7900GTX 512MB, a PCI-e Ageia PhysX card, 2GB of RAM, and a transparent 140GB 10k Raptor does just fine. Oh, and one of those fancy Creative Z31 cards with the gold ports. Since I'm not versed in this anywhere near as well as you are, maybe you can answer this question. The Chronicles of Riddick: Escape From Butcher Bay looks absolutely marvellous on the 7900GTX. But on later unified-shader cards, the lovely soft shadows and lighting don't look right. This isn't my imagination; I've run the games side by side on my old brick and my actual PC, and the difference is clear. I even had the remaster running on my brother's PC and compared it; it still doesn't look anywhere near as good. Are there any features that a vertex-and-pixel-shader card could do that a unified-shader card cannot? Because I know it's not the first time I've noticed this with games from around 2004 ~ 2006, where everything looks cleaner and shadows and lighting look better on the older cards. 🤔 Or maybe the devs were lazy.
That is interesting. I don't know anything about that particular game, but I can't think of a particular reason why the shaders should have any effect. Is it just the shadows? There could be changes in how the texture samplers work that could have some impact on soft shadows.
@@TalesofWeirdStuff Well, I honestly didn't expect a response. Okay, in TCoR: EFBB it's the shadows, and the interaction of light sources with shaded surfaces. The hardware at the time didn't necessarily permit copious use of polygons, so many surfaces were just flat, with textures and shaders to make bricks, tiles, wood, etc. look like they were actually 3D. What I need to do is grab a capture card, record some of my favourites from back in the day, and stick the side-by-side comparisons in a YouTube video that no one will watch. I cannot say this with any degree of certainty, but either some level of precision was lost when moving from separate pixel and vertex shader pipelines to the unified architecture, or the devs who made those games / game engines didn't write the rendering code in a way that later drivers could interpret to produce the same exact image on the screen. One that definitely sprang to mind again was THQ's The Punisher, a game that by all accounts looks pretty bad even for the time. But on ATI X-series cards (X300 ~ X800), old junk by today's standards, certain details could be made to look really good by tweaking driver options. Effects that couldn't be replicated when I moved over to an Nvidia 7600 around that time. I'm firing up my old brick now to dig through my archive. Okay, it's a bit sad looking at screenshots I took 19 years ago, but still. Amongst the many comparison shots, I found what I was looking for. With ATI APC the textures look sharper and cleaner, and it emphasised the metallic parts on the trench coat, buttons, edge lining, etc. Without it, the game looked flat and the textures were "standard". Anyway, don't let me bore you to death. When I was a teenager, these things mattered to me and my friends: why does it look like this on this hardware, and like that with these driver settings, but that hardware looks slightly different and cannot achieve the same results? Don't worry, I'm just an idiot who still thumbs his nose at the past. 😂
Appreciate your format, and your outstandingly long experience with this stuff. Keep it coming. I honestly love the stuff about extension implementation and compliance, fuckin' gold.
In high school, we had shop class where we learned how to use power tools and different kinds of hardware. We had a small period where we learned about computer hardware chips, and I will never forget our teacher being hyper-serious about never touching the gold fins on them. He would constantly refer to it as "fingering the fins". Even on the test. Meanwhile, every single person on YouTube is fingering the heck out of these gold fins like they're the king of the world.
This is so interesting! I wonder if Doom 3 is defaulting to fixed function mode. That framerate though... Does it not have occlusion culling or am I misunderstanding that? I do remember my friends and I back in the day saw this card on NewEgg and such and just gawked at the ridiculously huge framebuffer. But we all knew it wasn't suitable for gaming.
I'm going to delve into the rendering modes in the next video. I finally looked it up... "gfxinfo" will show which modes are available, and "r_renderer" can be used to specify a mode. Based on the available functionality, it should be able to do the NV20 mode, but it might be falling back to ARB. The card doesn't have occlusion query, but I don't think Doom 3 uses that. I'm not 100% sure, though. It's possible that's a factor.
Doom 3 was released in 2004, and it ran okay on an Nvidia FX 5200 on a Pentium 4; it was designed to run on lower-end systems. I believe OpenAL and OpenGL 2.0 were what was necessary. By the way, OpenGL could load shaders as extensions, a mechanism present in OpenGL from the initial design stages, back when it was called IRIS GL. By the way, you explain things very well for people who programmed in OpenGL 1.x/2.
I remember running Doom 3 on my Pentium 4 Northwood 2.4GHz, Socket 478, with an ATI 9000 AGP. It barely ran at 640x480. But in 2004 I upgraded to an X850 XT AGP and I was able to max it out. Pretty awesome.
Man, the Radeon 9700 Pro was such a beast. I had one for like 3 days before I returned it; my CPU was too slow to actually use all of it. It was $700 CAD at Best Buy back in 2003-ish.
Glad you have a sane definition for GPU. Nothing worse than someone looking at the most basic framebuffer (like, say, a Sun CG3, where it's literally just a chunk of RAM you can write to and a RAMDAC that converts data from that RAM into video data) and saying "look at this GPU".
I rarely "correct people," but it does annoy me when people call even something like a Voodoo 2 a GPU. But... I always tell people the correct way to say GIF is the *opposite* of however they just said it. 😂
Definitely interested in a video on the Radeon 8500 / GeForce 3/4. GPU evolution in general is a very interesting and under-covered topic; the deeper the better (if time permits).
Very interesting! I've been interested in trying my hand at Windows graphics driver writing myself, though I haven't really had much of a clue of where to even start yet outside of grabbing the pre-made example drivers from the Windows DDK.
Radeons support vertex texturing starting with the DX10-era cards, R600 and higher. But I have a feeling that R500 should also have gotten support, but it was disabled for some reason. Maybe it was buggy. It would be interesting to read the R500 specification about this. I got that feeling from The Chronicles of Riddick: Assault on Dark Athena. The game specifically requires vertex texturing and marks R500 as unsupported, but a few Catalyst drivers actually pass the check and allow you to play the game for a while. The game still crashes on level 3 and there's no way to go further on R500, but it made me think that vertex texturing may be implemented but buggy, or maybe level 3 is the first level that really requires it. :) It's interesting that 3Dlabs implemented so many advanced features but forgot about basic ones like occlusion query. It's also not 100% necessary; I'm pretty sure Doom 3 does not use occlusion query, and instead does geometry culling on the CPU and fights overdraw with an early Z pass. Waiting for the next video with more games on the P10.
Thank you so much for your work on Linux! ❤ These are fascinating, never heard of em. My first card was a BFG FX 5200 (NV34), and while it was underpowered extremely quickly, I had to buy one recently to put on a shelf just for the nostalgia. It’s an incredible contrast with one tiny fan compared to the modern triple slot+ behemoths.
I had a beast of a 5900 at one time. Overclocked like a true champ. Paid way too much for it, but that was just a matter of bad timing. Never ran Doom 3 with it; I ran Doom on a pair of 7800s in SLI.
The right way. Without hardware and software that we know, your knowledge is abstract to me. What did I understand in the end? The Weird would have been a great character in the movie.
Thanks for the wonderful video! As a computer engineering student, I find more technical explanations of relatively old hardware to be a fascinating look into how we got to where we are today.
Nowadays, any graphics chip, even from the 8 and 16-bit era, is considered a GPU, which to me is an aberration. They were not GPUs, nor did we use that term back then! The term GPU started to be used with the GeForce 256 in 1999 and, as you mentioned, it should be programmable, which, as far as I know, happened a few years later. But as I said, back then we used the term graphics chip or VDP ("Video Display Processor"), or even PPU ("Pixel Processing Unit") for the SNES. Your videos are super interesting.
Yup, I remember the GeForce 256 SDR (which I bought at the time for my Athlon 500 rig) being addressed as a geometry processing unit (GPU). Before that, my Voodoo 2 12MB was defined as a 3D add-in card, just like my ATi Rage Pro 4MB, which can also be defined as a display adapter. Although the S3 ViRGE 2MB before that was my real first 3D card. I know it's bad, but seeing Dark Forces 2 run in 320x200 with texture filtering hooked me for life.
@@seebarry4068 Today the term "graphics card" is also used, I believe. I think "GPU" strictly refers to the chip itself, while "graphics card" refers to the video card as a whole.
The term GPU was introduced on PC by Nvidia with the GeForce 256, which introduced a vector processor for a fixed-function vertex pipeline and register combiners for the pixel pipeline. Configurable, not programmable. Outside the PC the term had popped up previously for very fixed-function devices: Sony called the video subsystem of the original 1994 PlayStation the GPU, and I believe there may have been earlier examples. But I guess if we agree to call only programmable units GPUs, I wouldn't mind :D The Dreamcast not having a DVD drive is... I don't know, this seems largely irrelevant? I mean, who cares that the GameCube has a drive with a DVD pickup; the discs have barely more capacity than the Dreamcast's, and you can't fit a DVD in there. The rotary speed of the Dreamcast drive is also not crazy high; it's not like any of the 52x PC drives. It's just noisy because there's a massive gap between the lid and the drive recess, so there is a lot of air noise from air running through that gap. I'm going to bet the ARB fragment shader support is very incomplete and that it will actually reject valid programs that don't conform to hardware constraints. That is not something they could sanely do with the assembly ARB fragment program.
So this is the part of YouTube where the PC engineers hang out. Cool. I was thinking in 5 years we'll have a video asking "can a $1,600 GPU from 2022 run a modern AAA game?"
The Radeon 9800 was a DX9 card and supported Shader Model 2. I had one and it was an incredible card for its time. I was running Far Cry maxed out (it needed 1GB of RAM or there were stutters; I tested it) with this beast when the game came out. Before the FX 5900, Nvidia had no card worth a damn against it or its predecessor, the legendary Radeon 9700 Pro. The FX 5800 and the lower cards were really bad. I built a Pentium 4C 2.4GHz with dual-channel RAM (the first gen with it), 1GB of Corsair TwinX, and the Radeon 9800. It was one hell of a PC and served me well for years, which was unusual back then; PCs went obsolete so fast in that era. Question: Are you the guy who was making the custom Omega drivers for ATI cards?
Are you sure it needed 1 GB? I played it with an 8800 GTX (and a Ti 4200 and an FX 5900 before that) with 768 MB, and I don't recall that. Maybe I'm wrong, it was a long time ago.
I think Far Cry needs 1 GB of system RAM, not video memory. I'm not the person making the Omega drivers, and... I'm not sure what they are. :) Another thing to research!
I've barely started this video but I love how you immediately dive into the programming and feature-set nitty-gritty of this? these? GPUs and other graphics technologies. It's a clear tier above almost any other historical GPU content on YouTube (no shade on PixelPipes, I love you, but this is a whole other level).
I think there's plenty of room for lots of different perspectives. I equally enjoy LGR videos and Modern Vintage Gamer videos, but they tend to be very different levels of technical depth. I've been doing videos for almost 4 years, and I'm still trying to find where "my place" is.
To me a turning point in graphics, in my head at least, was when I saw the game demo named something like ".kkrieger" that was compressed to less than a megabyte or something like that, but it looked like 4 GB of data at least...
Years ago when I taught graphics programming, I'd show the students Debris by Farbrausch... and then tell them the whole file was 177K. Minds successfully blown.
I feel sorry that I missed this card in 2003; seeing a label with 512 MB on it, 2 years ahead of its competitors, would have really opened my eyes. Then again, this card really isn't designed for gaming; it's designed for making games, or for 3D rendering for making films, etc. But then, even now I refuse to pay over £300-£400 for a graphics card. Although I am leaning towards the 4070 Ti, I'm still holding out as long as possible on spending over £700 just on a graphics card, especially since I'm still stuck on AM4 with DDR4, not AM5 with DDR5. That bothers me, since der8auer with his Thermal Grizzly line of products has made delidding and water cooling the AM5 CPUs as easy as pie, with their delidder and a frame to make it compatible with any water cooler block.
I remember reading PC Gamer, I think, when the first GeForce launched. I think that was the first time I had read the term "GPU", and I was kinda like "huh? what?" It's so weird looking back now, because I think we just called video cards that had 3D acceleration... 3D cards?
I played Doom 3 on a 9800 Pro. Also played it on a 6600 GT. The 6600 GT played it far better, but that's when Nvidia was still KILLING image quality through the drivers, too. (They were also doing the same on the previous FX 5000 series: BLURRY image quality for higher performance.) Then ATI came out with drivers that increased anti-aliasing performance by 40%, which made the 9800 Pro feel like a new card. But, sure, the 6600 GT was faster. I even got a 6800 GS AGP later, and then an HD 3850 AGP (the last AGP card ever made). Waste of money, because my 3400+ (Socket 754) wasn't powerful enough to really unleash it. But yeah, the 9800 Pro played Doom 3 "okay".
I didn't have that expensive card, but I did become a fan of 3Dlabs in the late 1990s when I bought the FireGL 1000 Pro, which had the Permedia 2 processor on it. Because it was more or less a budget CAD and general graphics card, it ran games of that time less than decently, but still well enough, especially since my PC was really only a Pentium 233 MMX or a K6-II or III at 400 or more. It got a little better when I upgraded to a Pentium III 1.0 GHz CPU. But the biggest change was when I went to GeForce video cards. I did read about the P10 cards back then, but at those prices I wasn't about to need one, heh. 07/16/24
It's kind of nuts that a GPU that was 2 years old at the time of Doom 3's release struggled to run it. In modern times you could run a brand-new title pretty well with a 5-year-old, top-end-at-the-time graphics card, even without upscaling.
I think that is because of consoles. Game devs who want to make the maximum profit ship for PC and various consoles. That means the practical minimum system requirement is a Switch, and that's a pretty low bar. As late as 2012 I remember seeing talks at GDC where game devs said, basically, that they couldn't use interesting PC graphics features because they had to ship on PS3 and Xbox 360.
Wow, this reminds me how I played Doom 3 back in the day on my mom's Windows XP machine. It had an OEM GPU that came with it; I finished the game with an average FPS of about 12 at the lowest settings.
I had a 128 MB 8500LE that I bought for $20 back when Doom 3 came out. It played way better on that, even though the 8500 was removed from Doom 3's recommended configs by launch time (it was targeted earlier in development).
Maybe it's just because I was there at the time, but to me a GPU is a graphics card with hardware T&L, since that's why Nvidia coined the term. So anything before that isn't a GPU, and anything after that is; but shader architectures should definitely be seen as equally revolutionary. I mean, a 4004 is technically a CPU, but not much of one.
A 6800 could run Doom 3 on Ultra at 1024x768 at 60 fps, and it was only a few months old. Something like a mid-2002 card would get single-digit fps in the menu at those settings. The GeForce 6800 was a graphics card by Nvidia launched on April 14th, 2004; Doom 3 was released in the United States on August 3, 2004.
In 2004 I was playing UT2004, Battlefield 1942, HL2, Battlefield Vietnam, Doom 3, and C&C Generals: Zero Hour on my Leadtek GeForce FX 5200 128 MB, with an E6600 Core 2 Duo Intel Socket 775 CPU and 512 MB of DDR RAM.
My friend bought a Radeon 8500LE. I was running a GeForce 2 Ultra back then, and I advised my friend to get a GeForce 3 to get similar performance to mine (he was struggling to play Morrowind on his aging GeForce 2 MX). His budget didn't quite stretch to the £180 GeForce 3, so we settled on the Radeon 8500LE, which was only £110. After some overclocking and tweaking, his card was surprisingly good! In 32-bit colour it could rival my GF2 Ultra in some titles, such as Giants: Citizen Kabuto. And Morrowind was vastly improved for him, along with those water shaders. Definitely not a full 8500, but still... best bang for buck at the time.
I remember when I played Doom 3 for the first time on my old ATI Radeon X850 Pro AGP, accompanied by a modest Sempron 3100+ on Socket 754. Greetings and blessings.
The main processor being very programmable reminds me, in some indirect way, of IBM's weird M-ACPA sound cards, which used a chunky TI DSP (the driver disks came with some example code for decoding JPEGs on the sound card...).
I had a 5600 XT when this game released, and I rushed out to buy a copy of D3, only to get a rude shock when I tried to play it. Thankfully I later upgraded to a 6600 GT, then a cheap used 6800 GT, both of which ran it perfectly fine.
Shoot, 2002 sounds so old. Please don't recognize the GPU. Please don't recognize the GPU... dang it, the 9800? That's fresh. My buddy who had a 9800 Pro was able to run anti-aliasing in BF1942, which was crazy for the time.
Battlefield 1942! I completely forgot about that game! As much as I played that game... I should have included it in the follow-up video. Maybe next time.
Wow, video cards are some kind of wild thing: parallelism without limits. Had I been a European politician looking to start an industry in Europe, I would hire you, if you would like to come.
That was very interesting. I love tech and hardware, especially GPUs, but just computers in general. I also come from the same time period as you do. To see these $1,200 video cards that can't run a game like Doom 3, when we now have $500 plastic boxes that can run it at 60 to 120 fps, is amazing. I just bought a new rig with a 4070 Ti Super, and the price of that GPU was hard to swallow, but it will last quite some time. Before the 4070, the last GPUs I bought were two EVGA Superclocked editions or something; they were 768 MB cards, I think, without looking up the specifics. But yes, EVGA is a company we need back. They made some of the most stable cards back then.
I had an Athlon 3200+ at 2.2 GHz, a GeForce4 Ti 4400 with 256 MB, and a whopping 512 MB of DDR back in the day, and Doom 3 ran super smooth at 1280x1024. I hated the game, but it ran well.
Today people complain that the 4090 is expensive, but 20 years ago salaries were 4 times worse in my country. And yes, my salary is only 800 euros, and I have a 4090.
800 per month or per year? Not trying to be condescending, but sadly there are many, many countries where 800 per year is more than double or triple (or even more than that) their yearly salary.
I mean, what they argue is that they've jacked the prices up while having worse performance per dollar each gen. All I always hear is that the 10 series spoiled us.
I had a cheap GeForce FX 5500 in my first computer (the family computer at the time was a Pentium II 350 MHz with an ATI Rage II Turbo 8 MB VRAM AGP card), which I bought specifically to run Doom 3. No problems, despite it being a bad card at its original asking price. Until Prey came out: same engine, id Tech 4, but more demanding. Then I switched to Radeon, but I can't recall the model, haha.