Or rather, if you're going to invent a wheel that requires special roads, make sure someone wants to build those roads. If not, make sure your wheel is so good that you can afford to build them yourself.
It'd be interesting to see LTT's take on the Transmeta Crusoe, a CPU that was very much NOT x86 but managed to run a custom x86 emulator that was, in almost all cases, at least as fast as an Intel or AMD CPU of the same cost, and using much less power in the process. (The "almost all" caveat is probably ultimately why they failed) It's actually the same architecture family as Itanium, VLIW (Very Long Instruction Word, with 32-bit instructions some of which could be combined to make 64- and 128-bit instruction words).
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3EPTfOTC4Jw.html You both made me Google this. This video? Interesting. Edit: it is a link to NCommander's video
The point of view of a retired Microsoft software engineer who created many of the tools of Windows. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ThxdvEajK8g.html&ab_channel=Dave%27sGarage
x86 is already dead; the writing is on the wall. ARM will be the future. Intel/AMD will have to adapt or die. Anyone who doesn't understand that is kidding themselves.
It's a low level foundation. Makes perfect sense that it survived this long. It's like the foundation/basement to a building. You can't yank it out and expect the building to not topple. There is software decades old running on x86. At a university lab I once saw an old piece of machinery that is controlled by a PC running Win 95 on x86. This machine is still used today to prototype CPU designs.
It shouldn't have survived, but we're kinda trapped by it. Apple proved that x86 SHOULD die. RISC is far more efficient. But we all use x86 because all the software is written for it. And all the software is written for it because we all use it. It's very hard to get out of this loop.
@@Myvoetisseer Well, it can't reach the computing power x86 (and mainly AMD64) can. Quite frankly, you can just dump more power into AMD64 and it will deliver. You can't do that with RISC, and you won't be able to match AMD64 for about a decade to come.
@@Myvoetisseer x86 uses CISC, and using RISC, which stands for Reduced Instruction Set Computing, would mean lower performance overall. Apple's M1 ARM processor is great on CPU performance, but its GPU is laughable at best compared to Nvidia or AMD and is comparable to integrated graphics like Iris Xe.
Missed at least one important point I think was worth bringing up: Intel had no choice but to license the AMD64 instruction set extension from AMD, who had licensed the x86 instruction set from them initially - creating a mutual dependence that remains to this day, and extremely important leverage for AMD in cross-licensing agreements, lawsuits and industry deals that have taken place since. This significantly leveled the balance of power between the two companies, and it's extremely likely that AMD's fate may have been very, very different if not for this series of events.
We've all benefited from the competition between these two companies for decades as a result of these cross licensing agreements. I'm curious why Intel has so far stated that they are not going to jump into the upcoming ARM(s) race that's going to attempt to displace x86 in both the PC and server realms. We already have AMD, Nvidia, MediaTek, Qualcomm, Apple, and Samsung in the mix, and so many proven performers from cell phones, and Apple's new laptops/desktops that Intel might not be able to ignore this round of ARM on the desktop.
@@Barkebain the problem with arm on desktop is largely the same as with itanium. most things run on x86, and most of us are not willing to take the performance hit of emulation of x86 on arm. the reason it is taking over the laptop and server space is because power efficiency matters more, and Arm is already well known, being what phones use, so not as hard to convince people to develop for. also apple switched forever ago, and so we already had apps coded for Arm
Actually, architectures exist where the scheduling is done by the software ahead of time. This is the Very Long Instruction Word architecture, still found in Digital Signal Processors today!
Don't compilers already do this to some extent? I once read LLVM IR of my program and it had rearranged quite a lot of instructions for some low-level optimisation (for packing reasons? idk)
@@VivekYadav-ds8oz Yes, VLIW compilers do this of course, but a compiler is still software. So it happens at compile time, as opposed to at runtime as would be the case with a hardware scheduler.
It works for DSP because memory access and processing all run regularly with little interruption. It's the same reason those applications can have a million pipeline stages. Random memory accesses and dealing with other interrupts from I/O makes VLIW+software scheduling really annoying for a central processor/controller.
As an Itanium native speaker, the CPU would probably say "Disculpe señor, ¿podría indicarme dónde está el baño?" ("Excuse me sir, could you tell me where the bathroom is?"). The same error was in Itanium emulators a while ago.
I've got an ancient HP zx2000 Itanium workstation in my basement that's been sitting there for a decade and a half. They were horrible to develop on, and insanely slow. I named the computer "insanium". I didn't realize that they existed this long, and thought the architecture died like a decade ago. We're all so lucky that AMD created AMD64 and brought it to market when they did, or we could be plagued by the Itanium. :)
most replacement parts rather than new installs. The 80386 was in production to about 2007 partly for the same reason, support for existing industrial equipment. (And the 80386 had been fully tested for bugs and corner cases in critical systems so remained popular in new designs for quite a while too. Having more than enough power to handle many embedded style tasks like systems monitoring, navigation calculation, and so forth.)
Nah, there were plenty of other options at the time. Sparc, ARM, MIPS, etc, which eventually all added 64-bit support and were based on clean-sheet designs not hampered by vestiges from the 1970s. But none of those would have let us keep relying on millions of lines of Windows-based code that no one would ever bother to recompile.
@@dycedargselderbrother5353 That is because it is alien. I've rummaged through RISC and tonnes of x86/64 assembly. The only thing IA64 had going for it is that if you had the right symbol files you could figure out quite a bit, but the actual execution logic was like swimming in spaghetti. I think assembly should be somewhat readable, as many errors are only detectable via assembly inspection. (kd rules)
@thatonespathi What I meant by development environment was a bunch of libraries that you use in YOUR OWN development environment. (I have used both Unity and Unreal Engine + making my own.) Game engine != car engine; a game engine is more of a library, it wraps low-level graphics API calls, physics calls, etc. into simpler-to-use functions, along with giving you an environment to code in (environment = MonoBehaviour in Unity, or the C++ environment UE gives you).
x86 is also a liability because of its age too, it needs to retire, and maybe now it might be able to be phased out. When they first tried it was just too early I think.
At some point RISC-V will have an extension that helps emulate x86 with good hardware support and that'll be the end of that. Though, in the present, we're way more likely to emulate RISC-V on x86 native hardware...
I had never heard of Itanium, but I came across IA64 in college, particularly because we were using Intel's manual when we studied x86 assembly language.
I'd want one just for fun, but the later models that run halfway fast are expensive even on eBay. It's old shit someone is throwing out, but still, there must be demand from some exotic shops depending on them; I have no other theory.
I ran an HP SuperDome, what they called at the time a "Supercomputer". It could have multiple CPUs, with both PA-RISC and Itanium CPUs at the same time. I had a lot of fun bringing up Linux and HP-UX on the machine that I maintained as part of a testing lab for an HP software cataloging product. Fond memories...
I had an instructor about five years ago who was part of the design/implementation of those chips. He brought in some samples to show us. They were really big chips compared to x86. They were on daughter cards that plugged into the main pcb.
Slotted CPU mounting was also popular with x86 at the time, purely a matter of manufacturing. It was a way of adding cache without harming production yield. Basically they couldn't test the silicon until it was all mounted together, so a bad bit of cache [it was separate silicon from the core] would trash a whole traditional pinned package, but by mounting on a mini-PCB they had the option to replace the bad piece of silicon.
VLIW architectures like EPIC are great (if your compilers are great), until you have to deal with register file pressure. TTA's (Transport Triggered Architectures) are better, but are bus heavy so occupy more space, but they eliminate the register file pressure problem. Hobby CPU designer here, and owner of more than a dozen FPGAs. I've built CPU architectures that were based on both of these designs. AMA
Do you have to deal with register file pressure? One of the nicer innovations on the itanium over other windowed designs was that you could allocate register frames to your heart's content and the Register Stack Engine would lazily write them back for you.
"if your compilers are great" - I think this was the real downfall for Itanium. It was just a bit too early. It's easy to forget just how bad compilers were then. SSA was still a largely ignored research paper from IBM at the time.
@@chainingsolid I'm a software engineer, and have been writing code since I was a smallish child. I had a fascination with understanding how computers execute code, and when I heard of FPGAs I saw an opportunity to explore some of that. It all started with a simple clock divider, an LED, and some Verilog code to implement said clock divider and send voltage to said LED, making it blink. From that point forward I learned how to build a debugging interface and consumed a ton of material on old CPU designs, since they're comparatively simple (modern CPUs are really complex). A decade later, I had written my first CPU implementation and a compiler toolchain, capable of running code I wrote for it.
@@capability-snob in TTAs there is no inherent register file pressure. Intel's solution to the problem was ingenious but not a panacea. Most VLIW designs have hundreds to thousands of registers to "solve" this issue, but Intel came up with the window. They have more than 64 registers, but only some of them are available for writing for any instruction
More a case of multiple options going on than trying to kill one off with a new one. The 8086 chips were initially never seen as the future, just a chip to pump out while working on the 432 series, but that never took off. The i860 ended up in printers mostly. It was a similar thing with Motorola: they wanted to move to a new chip design, but the 68k was so popular they could not get the follow-on design going, and then they hit the limits of the 68k design. Eventually they'd get to the PowerPC design, though not alone.
The 432 was a daft design anyway. Instructions didn't contain addresses. Instead they contained a reference number that had to be looked up in a table to get the type and address of the variable. The type was used to check that the operation was permitted on that type of variable. The address from the table was used to get the value from physical memory. This made every operand access take an extra memory access. The 286 outperformed the 432.
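The extra indirection described above can be sketched in C. This is only an illustration of the cost, not the 432's real mechanism; all names and types here are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>

// Toy sketch of iAPX 432-style object addressing: an operand is not an
// address but an index into a descriptor table, so every access pays an
// extra memory lookup plus a type check before touching the data.
typedef enum { T_INT, T_CHAR } ObjType;
typedef struct { ObjType type; size_t addr; } Descriptor;

static int32_t memory_[64];     // toy "physical memory"
static Descriptor table_[8];    // object/descriptor table

// Returns 0 on success, -1 if the type check fails.
int load_int(unsigned ref, int32_t *out) {
    Descriptor d = table_[ref];      // extra memory access: the table lookup
    if (d.type != T_INT) return -1;  // hardware-enforced type check
    *out = memory_[d.addr];          // only now fetch the actual operand
    return 0;
}
```

Where a 286 `mov` is a single access, every operand here costs two, which is the performance penalty the comment above points at.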
By the time Itanium was unveiled, Intel had made at least 3 previous attempts to replace the x86: iAPX 432, i860, and i960. Now it looks like they're jumping on the RISC-V bandwagon.
Anyone familiar with the difference between software and hardware knows: If you want something cheaper, do it in software; if you want something faster, do it in hardware.
Anyone remember the Transmeta CPUs? They were non-x86, but emulated x86 by running a cross-compiler in real time to convert x86 instructions to their native instruction set. Still have my Compaq TC1000 on the bookshelf...
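The translate-once-then-cache idea behind that real-time cross-compilation can be sketched with a toy example. This is a minimal sketch using a made-up guest instruction set, nothing like Transmeta's actual Code Morphing internals: each guest instruction is "translated" to a native handler on first execution and cached, so hot code pays the translation cost only once.

```c
#include <stdint.h>

enum { OP_ADD, OP_SUB, OP_HALT };          // toy guest opcodes
typedef struct { int op, dst, src; } GuestInsn;
typedef struct { int64_t reg[4]; } Cpu;
typedef void (*Native)(Cpu *, int, int);   // a piece of "translated" code

static void native_add(Cpu *c, int d, int s) { c->reg[d] += c->reg[s]; }
static void native_sub(Cpu *c, int d, int s) { c->reg[d] -= c->reg[s]; }

// Translation cache: one native entry per guest instruction address.
static Native cache[16];

static Native translate(const GuestInsn *i) {
    return i->op == OP_ADD ? native_add : native_sub;
}

int64_t run(Cpu *c, const GuestInsn *prog) {
    for (int pc = 0; prog[pc].op != OP_HALT; pc++) {
        if (!cache[pc]) cache[pc] = translate(&prog[pc]);  // translate once
        cache[pc](c, prog[pc].dst, prog[pc].src);          // then reuse
    }
    return c->reg[0];
}
```

A real translator works on whole blocks and emits actual machine code, but the cache-and-reuse structure is the same.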
Itanium lives on in many college computer architecture courses! It's a really well thought out architecture that is more modern than MIPS but less complicated than x86. A great architecture to study.
@@drooled2284 Yes but sadly the RISC-V is not as good of a CPU as an ARM. The lack of a status register makes simple checks on operations a lot harder. It takes several instructions to detect an overflow for example.
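For illustration, here is roughly what that multi-instruction overflow check looks like written out in C: a sketch of the sign-bit test a compiler has to synthesize on a flags-less ISA like RISC-V, where an x86 or ARM CPU would just set an overflow flag for free.

```c
#include <stdint.h>

// Signed 32-bit add with overflow detection, no flags register needed.
// The sum overflows iff both operands share a sign and the result's
// sign differs from theirs - several extra instructions instead of
// one flag check.
int add_overflows(int32_t a, int32_t b) {
    int32_t sum = (int32_t)((uint32_t)a + (uint32_t)b);  // wraparound add
    return ((a ^ sum) & (b ^ sum)) < 0;                  // sign-bit test
}
```

On RISC-V this compiles to an `add` plus a couple of `xor`/`and`/`slt`-style instructions, which is exactly the overhead the comment above is pointing at.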
Itanium wasn't the first attempt from Intel. There were also the iAPX 432 and the i860. Each of these was worse than the other. Working out which instruction goes first is fairly easy on a RISC machine and would be within the grasp of a compiler. It would require a custom compiler for each version of a chip, but that could be done if speed was your biggest goal.
Video idea: a full explanation of why it is possible to have a virtual/software CPU, or to control CPU instructions through software rather than hardware, given that for software to do anything it needs the appropriate hardware and depends on the hardware in the end.
You’re misunderstanding what controlling instruction processing through software means. The instructions are bigger and carry extra data telling the CPU what to do in more detail. Since this data is part of the instructions, it is software, and it controls the CPU; they are just instructions that control the CPU more precisely than usual. And since the instructions can tell the CPU how to do some of this stuff, it does not need to work it out itself, hence no need for scheduler circuits, etc. It also leaves room for extra optimizations through clever manipulation of instructions.
Because other manufacturers could produce x86 chips; if their new architecture had worked, Intel wouldn't have had to bother with AMD, Cyrix/VIA and co. anymore.
You have to remember the prehistoric Intel chips... The 4004, designed for Busicom's desktop calculator, and the 8008, commissioned by Computer Terminal Corporation for its Datapoint 2200 terminal. The 8080 was intended as an improvement but also became popular when MITS built a kit computer around it in 1975, the "Altair 8800", which they sold to hobbyists with great success. The 8086 was the 16-bit version, with the 8088 being a version in a smaller package with an 8-bit external data bus. The 8088 was in the first PC/PC XTs, while the 80286 was used in the AT models (IBM skipped the 8086 and 80186).
Intel never intended to replace x86 with Itanium (unless you run a server on every computer). Itanium was strictly targeted at servers, not even workstations. x86 at that time wasn't targeted at servers; Xeon was pretty much the same thing as the AMD Athlon MP. Itanium was designed as a server architecture.
No, Intel was very clear when Itanium was announced that it was meant to replace x86 completely in servers and desktop. The plan they laid out was they were going to start Itanium in the server market where there was bigger margins to allow them to better develop the technology. When the performance matched, or exceeded x86, they would bring it into the consumer space. Part of making that performance match was to work with developers to design their software to work better with Itanium. It was a long term strategy. I also remember when they announced that they were scrapping their plans to replace x86.
This wasn't the first time Intel tried to kill the 8086, or even the second. The first was the iAPX 432. A kinda-sorta attempt was the 376, a legacy-free 386 for embedded systems. Then came the i860/i960, which saw more commercial success than the Itanium, the iAPX 432, or the 80376.
@@naamadossantossilva4736 these processors are meant for corporate use (government, and companies working with state contracts), yet now there's money being stol... allocated for new designs based on RISC-V
@@MrMediator24 tbf, if it's like that, would executing anything potentially dangerous from the internet even be possible? Given it's for government use, of course.
Ok, just so that you know, even before Itanium there were attempts at killing the x86 family. Actually, not long after its creation Intel tried to drive the market away from the x86 architecture, because that architecture was pretty much a "hack" of a design implemented in a rush to ensure they'd win the IBM contract (which they did, thereby creating the legacy we have today). Those CPUs were the iAPX 432, the i960, and the i860 (yes, this is the correct order). Although not all of these were specifically designed with that goal, you can find references to that goal for the likes of the iAPX 432 and i960. Truth be said, they never expected backwards compatibility to be the market driver it proved to be, although in those days this incorrect perspective would be understandable if one didn't think things through properly. We nowadays have more than enough evidence that Intel, at semi-random times, does this kind of short-sighted thing. That said, back then backwards compatibility IN HARDWARE was a must to assist with software development; the development environment for any architecture was pretty much in its infancy (not to say still in the womb), and therefore any system that could use prior art to speed up its adoption in the real world had a major advantage. Nowadays we even have enough horsepower to emulate other instruction sets in software, and if someone were smart enough to grant our consumer CPUs some FPGA logic directly next to the cores, we could even add those instructions and run other architectures' code natively (some Xeons do have FPGAs on them, just not as deeply integrated as this; still faster than software emulation, though). As such, backwards compatibility is not that much of an issue.
A good example of this is Apple and how they first migrated from the Motorola 68x00 family to PowerPC (which was somewhat simpler than the next change), then to x86-64, and are nowadays migrating to ARM (which likely would have been easier coming from PowerPC, as those are two RISC architectures, unlike x86-64 - even though nowadays x86-64 could be called a mixed CISC-RISC design; just a mess if you ask me, even though I love it).
IBM was nowhere near deciding to test the waters in microcomputers when work started on the 8086 in 1976 to launch in 1978. At the time, Intel was lagging badly on 16- and 32-bit products, at least in demo form, and knew its 'clean' designs would not arrive in time to prevent competitors from staking out the market. Thus the 8086, building atop the existing Intel products, which saved a great deal of time in the design stages. When IBM decided to create the Entry Systems Division, there were already plenty of 8086 boards and systems being sold. CP/M-86 was already in existence and IBM first sought to license that in their rushed effort. Dorothy Kildall, who ran Digital Research's business negotiations and was the wife of the founder, Gary Kildall, did not like the terms IBM proposed and told them so. An IBM exec worked on a United Way board with Mary Gates, wife of a prominent WA lawyer. She mentioned that her son had a software company and perhaps they could fill IBM's need. Microsoft didn't have anything to offer except the understanding that this was an immeasurably huge opportunity, and went looking for a company that did have something that would serve. One of those companies was Seattle Computer Products, who produced 8086 S-100 bus products and had its own in-house CP/M clone, 86-DOS. Microsoft bought the rights to that, and some months later it was shipped with the 5150 as PC-DOS.
@@danielandrejczyk9265 That depends: when is the future? Within the next five years, X86-64 will continue to dominate. In ten years, it gets a lot foggier as we reach the limits of how small we can go for a silicon process node, both in terms of physics and the immense cost of creating a mass production facility. Any major shift away from silicon opens up the opportunity for new architectures to emerge, either from new players or from existing players looking to take advantage of the right moment to make something entirely new. (Or as close as one could come without ignoring the principles that will still apply.) One failing has been the downfall of many companies: dragging their heels or outright trying to kill a new technology that competes with their existing product. A competitor with a superior product is inevitable, so better to have that competitor be part of your own company rather than some outsider who'd be perfectly happy to see you die off rather than transition.
@@wskinnyodden The 8088 was not created at IBM's behest either. It was an obvious follow-on to the 8085 thanks to its compatibility with a lot of existing chips needed to make a functioning system back then. IBM was not very serious about making their initial product the best it could be. They wanted to keep it cheap and would then produce something better if the market was proven viable. There was much dissent within IBM that wanted to strangle the project in its crib, so there was no lack of pressure to keep expenditures down if this thing struggled to find customers. This didn't get better when the PC was a huge success. It instead got worse until culminating in the undermining of OS/2 and IBM dropping out of the PC business entirely. IBM didn't really get around to making their 'real' PC until the 80286, largely because Intel didn't give them the option. There were variants for making cost reduced systems but they came along well after the standard for 286 PCs was well established, even if that didn't live up to its original promise. (Insisting on running on 80286 was one of the big downfalls of OS/2.)
Ah, yes, the Itanic. I wonder if it might have done better if Intel had gotten it into a games console like IBM did with the Cell processor in Sony's Playstation 3. The explicitly-parallel architecture of the Itanium does sound like the kind of quirky thing you'd find in a console.
You guys should cover: the inherent delay in cycles while the processor waits for various levels of cache (and ask some people exactly WHY you have to wait 3-4 cycles for L1 cache); an extremely in-depth video (but in plain English) of how you get from bit-level circuitry to machine code, then assembly, then say C, when running a hello world; an in-depth video on GPU pipelines (and why old GPUs had like 800 MHz chips and 3GB of GDDR5, and why new GPUs run so much faster and have more RAM/etc.); a video on distributed computing (BOINC/Curecoin/the Ethereum network, theoretically); another video that explains why AWS is so damn expensive in comparison to, say, bare hardware + free Linux software/Storj/other distributed computing/storage systems; and a video on how cheap bandwidth actually is (putting down wires is a one-time cost; network servers basically only cost electricity, have no moving parts, and basically never die) for landline and cell (a 5G antenna basically costs like $5k, you can service 500+ people, and you don't need to replace it for 20 years).
Oh! Oh! I suggested this topic on a previous video. When Itanium came out it was on the cover of every tech magazine, and I was fully expecting to be building a gaming PC with a 64-bit Itanium processor and RD-RAM within a year or so, but then both of those technologies just sort of disappeared. I had wondered why I stopped seeing advertising for Itanium CPUs since they were supposed to be so much more efficient, and now I know they were simply too good to be true. Thanks James and the rest of the LMG crew for satisfying my curiosity on this! Maybe next you can do a video about those physics processing cards that were supposed to enable fully destructible environments in games and were eventually going to allow for realistic real-time, procedurally generated particle based physics on every object in the game so that every grain of sand and blade of grass would behave just like it would in the real world (or so advertisements and news articles of the time would have you believe). I almost bought one of those, but didn't quite have the money at the time as I was still in school. I think it was called a PhysX card.
The lesson of this is backwards compatibility. New development is great, but people like to use their existing software as they transition to make use of the new technology. Therefore backwards compatibility is very important, even if the final result is a bit less efficient than a brand new but incompatible design.
In fact, NetBurst was originally meant as a kind of stopgap between P6 and Itanium, one that would keep Intel competitive with AMD's Athlon, which was reaching higher clock speeds than Intel's Pentium III at the time, and be quickly phased out once Itanium matured and got a sizeable library of software. But of course, reality turned out very different, with Itanium failing.
As I understand it, Itanium wanted to execute groups of instructions at once by grouping them into a 128-bit bundle and doing them all at once. So it would demand a lot from the compilers. Feels a bit PS3 to me. Yes, x86 has plenty of flaws, but when ~90% of your CPU is cache and branch prediction and maybe 5% is the float and integer math units... 30 years ago we had 200K transistors on a chip and saving 50K was a huge deal; now we have 10 billion, so saving 50K transistors :)
Itanium instruction encoding is based on 128-bit "bundles" each consisting of three 41-bit instructions, with the other 5 bits used to encode what types of instructions (integer, floating point, memory access, or branch) these are.
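That layout is easy to check with a little bit-twiddling. Here is a sketch (struct and field names are mine) that splits a bundle's two 64-bit halves into the template and the three slots:

```c
#include <stdint.h>

// Split an IA-64 bundle (128 bits, passed as lo/hi 64-bit halves) into
// its 5-bit template and three 41-bit instruction slots.
// Bit layout: template = bits 0-4, slot0 = bits 5-45,
// slot1 = bits 46-86 (straddles the halves), slot2 = bits 87-127.
typedef struct {
    uint8_t tmpl;      // encodes the mix of I/F/M/B units and stop bits
    uint64_t slot[3];  // three 41-bit instructions
} Bundle;

Bundle unpack_bundle(uint64_t lo, uint64_t hi) {
    const uint64_t mask41 = (1ULL << 41) - 1;
    Bundle b;
    b.tmpl = lo & 0x1F;
    b.slot[0] = (lo >> 5) & mask41;
    b.slot[1] = ((lo >> 46) | (hi << 18)) & mask41;  // crosses the 64-bit seam
    b.slot[2] = (hi >> 23) & mask41;
    return b;
}
```

The 5-bit template is what makes the "explicitly parallel" part work: it tells the hardware which execution units the three slots go to, with no runtime scheduling needed.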
The story of how Intel went into Itanium at all seems a little misleading here. Itanium was never on Intel's plate as competition to x86. That was part of HP's long game, the entire reason for buying Compaq: all those DEC customers. Entirely different market. x86 was a stopgap from the beginning, on the way to a product intended for the mainframe market. All of this from a company that never intended to make CPUs. A lot of happy accidents.
Whilst x64 may well have saved AMD due to royalties, it probably has not done us any favours as far as hardware is concerned. x64 is not a true 64-bit architecture, and we are probably a lot worse off for it than if we had all moved to Itanium. It may be that the door has been left open for 64-bit ARM, as is being tested/proven by Apple.
Ooh, mostly accurate, well done! So: the software compatibility thing wasn't as big a deal for Itanium and amd64 as people imagine; indeed the Itanium had hardware x86 support for user applications, although that hardware support lacked the OoOE of the competing Pentium Pro and MMX. But the target market were migrating from SPARC, MIPS, and HP PA-RISC, so x86 support was not so important for them - nobody with money ran x86 servers in the 90s. The problem actually came down to price. Intel wanted to charge like they were SGI or DEC and make great margins, but you could buy Pentiums and later Opterons that would give you more bang for the buck thanks to competition and scale. It was the bean counters that killed it, which was the theme of the 90s (c.f. Apple under Gassée, Symbolics, all the RISC manufacturers).
I think saying that software scheduling is a "bad idea" essentially ignores the idea of virtualization... there are amazing software schedulers out there that can outperform bare metal these days (granted, with instruction-set integrations like VT-d). The evolution in that space has been fascinating to watch.
It's the same problem everybody trying to build a competing architecture faces: The vast majority of software is for x86 and making software compatible with a new instruction set is time consuming. So unless you're a company like Apple that has enough industry clout to essentially force the industry to port everything to their new chip as quickly as they can, it's going to be really difficult to sell many chips if most software won't run on them.
I have an idea for a topic; the origin of "Alt+F4". It's quite an interesting rabbit hole that goes a long *long* way back; the stimulus for IBM's CUA, its adoption in Windows, why it was less influential in Unix, and its eventual obsolescence.
Just to make things clearer for everyone: x86_64 was created by AMD for starters, and the Opteron was the first CPU to support both x86 and x86_64, so you could run both legacy 32-bit code and newer 64-bit code. What we are all using today - and everything Intel has created since then - is based on AMD's x86_64. So, once again in the history of CPUs, AMD saved the day for all of us. And some people are still vouching for Intel in 2021... Edit: in 2003-2004, I remember successfully steering our architecture team away from the Itanium systems HP wanted us so badly to buy for our next ERP system. Even as early as 2002, the Itanium was already dead in the water before reaching systems. Ty AMD!
Fun fact: Nvidia has in-order VLIW (like Itanium) CPU designs with very high IPC (Denver, Carmel). The cores use an internal design and perform dynamic instruction translation to run 64-bit ARMv8.2 code. Theoretically they could also run x86 code by using a different firmware, but that isn't available atm due to Nvidia not having access to x86 patents. The technology was invented in the 90s by Transmeta and even made it to market back then, in the form of consumer devices (laptops), which even received support for new CPU instructions via software updates. Radeon graphics cards up to and including the 6000 series (TeraScale architecture) were VLIW designs as well, and the architecture was very efficient. Unfortunately, the driver's shader compiler had to be tweaked for individual games to optimize things like instruction order, scheduling and utilization in order to achieve good performance. GCN, the successor, which doesn't use a VLIW design, was supposed to reach full utilization much more easily, as well as having a simpler shader compiler and being easier to program for. To an extent this worked very well (i.e. HPC compute), but for games not so much. One of the problems was that you need at least 256 threads to fully utilize one CU; on top of that, the execution of an instruction takes 4 clock cycles.
Fascinating that AMD did not learn anything from Intel's mistake. In my opinion it was clear from the start that Itanium would not go anywhere with a "you have to adapt your program" approach, since software development is expensive. So it kinda baffles me that AMD thought it was a good idea to do almost the same thing when they launched their Bulldozer APU lineup and said that programmers would have to optimise their code for their APUs. _Switch to my platform! You may have to write your entire software from scratch, but... aahmmm, did I mention the CPU is brand new?_ Yeah... Like that was ever going to happen...
Um, did you forget about the Pentium 4? It needed optimized code that didn't exist when it launched. It took probably 2 years for software to be optimized for it. I don't have concrete examples because that was over 20 years ago, so my numbers may be off some, but initially a Pentium 4 at 1.5 GHz could not go up against an Athlon at 1 GHz.
Actually, they tried twice. In 1981 they marketed the iAPX 432, a really weird and complex 32-bit CPU. It had a lot of advanced features - far too many, and far too restrictive a set of them. It was also very slow. They didn't learn from that total flop.
The IA-64 architecture was conceptually based on HP-PA for which HP had written very successful branch prediction compilers for HP-UX. The big problems came when Intel wanted to add support for out of order execution that they had in their x86 compilers. The concepts of out of order execution and software branch prediction conflict with each other in ways that had never been studied. With the secretive natures of Intel and HP, they didn't let the details of the problem out to the open source community which might have had a chance at solving it. That along with supporting the x86 hardware emulation shell killed the project. If some logicians and mathematicians ever come up with the rules for compiling with both out of order execution and software branch prediction enabled the IA-64/HP-PA architectures will far surpass x86 and probably compete with Apple's new RISC architecture.
That video could be a year-long series or a 15-minute vid, depending entirely on how deep into the basics they go (instruction by instruction, or just the general steps: load the exe from disk -> load libraries -> get OpenGL working / make a window...).
Funny story: I was configuring the Linux kernel and thought IA32 referred to the 32-bit version of Intel Itanium, and couldn't install Nvidia drivers... turns out that doesn't exist and IA32 is just 32-bit x86 :/
x86 won't go away anytime soon, but I have a feeling it's toward the end of its lifespan. Not sure which direction Intel is going to go, but I think RISC-V might have an impact on their newer design decisions...
Sure, a hardware scheduler is faster, and OoO execution/register renaming are powerful, until you meet Meltdown/Spectre, which hit pretty much every speculative RISC/CISC design since 1983 (for x86, everything from the P6 in 1995 onward). Sometimes you just can't have security and performance at the same time.
They should have killed it off. After 50 years, x86 just isn't the best way to crunch numbers anymore. It was great when all it did was basically run a souped-up calculator and the CPUs didn't even need heatsinks and came in DIP packages. Today, not so much. Maybe when a CME from the sun takes out every electronic device on the planet, whatever rises from the ashes will be better, because it seems it will take no less than that to make the switch.
If Intel had released the Itanium today, it would have been a success. Considering that today's smartphones only support the arm64 architecture, it's safe to say this CPU would not have failed as badly today.
I'll see your Itanium and raise you an Intel iAPX 432. This from someone who was subjected to writing multiprocessor real-time code in assembler on the Intel i860, a RISC chip that came out at about the same time as the i486 and had user-visible pipelines and delayed execution slots. The nightmares have mostly stopped...
Some early DSP and bit-slice machines were even more "interesting" to code on. AT&T made one where a store into a memory location followed immediately by a load from that location would get you the value from before the store, because the store would not complete until after the load instruction. There was always the branch latency to consider too: a branch would execute the next instruction first, then the one on the new path.
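To make that store/load hazard concrete, here's a toy Python model of a pipeline where stores retire one instruction late. This is a hypothetical sketch of the general delayed-write behavior described above, not the actual AT&T chip's ISA or timing:

```python
class DelayedStorePipeline:
    """Toy pipeline model: a store only takes effect when the NEXT
    instruction executes, so a load issued right after a store to the
    same address still reads the old value (a visible pipeline hazard)."""

    def __init__(self):
        self.mem = {}        # architectural memory state
        self.pending = None  # (addr, value) of a store not yet retired

    def _retire(self):
        if self.pending:
            addr, val = self.pending
            self.mem[addr] = val
            self.pending = None

    def store(self, addr, val):
        self._retire()              # any earlier store lands first
        self.pending = (addr, val)  # this one lands on the next op

    def load(self, addr):
        old = self.mem.get(addr, 0) # read BEFORE the pending store lands
        self._retire()
        return old

p = DelayedStorePipeline()
p.store(0x10, 7)
print(p.load(0x10))  # 0 -- the value from before the store
print(p.load(0x10))  # 7 -- by now the store has completed
```

On such machines the "fix" was the programmer's job: insert a NOP (or some useful unrelated instruction) between the store and the load, exactly like filling a branch delay slot.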
For at least 35 years I've heard people say things like: "x86 is old, slow, and clumsy. There is no way to improve performance the way it is going to need to improve. RISC is the future in processor design and it will stomp all over the CISC architectures." Well, 35 years down the road we are still using x86, but it has evolved from 16-bit to 32-bit and now 64-bit. Each time, doom and gloom was predicted. Adding the 64-bit extensions was described as putting lipstick on a pig, and yet this pig is pretty damned spry.

Meanwhile, the RISC projects have come and gone. PowerPC was pretty impressive in its heyday, certainly better supported than Itanium, but MIPS and SPARC were where things seemed to really happen. Then ARM came about and suddenly it was THE RISC architecture everyone seemed to love, but only in low-power applications, at least up until last year. Perhaps it's finally time for RISC to shine, but I wouldn't hold my breath. Somehow, every time the death of x86 looks like it just may happen, Intel, and lately AMD, dig in and produce new processors with even better performance. As long as they continue to do that, the customers win.
The Motorola 68000 architecture was also CISC but inherently superior back in the day, the Amiga and Macintosh being the main systems that used it. While in many aspects they were also superior platforms, Commodore lacked vision with its terrible management, and Apple jumped ship to PowerPC since Motorola had no interest in developing the 68k architecture further. Personally I think RISC chips have their place, but I just don't see them taking over. I mean heck, Nvidia brute-forced their way to the top in the early 2000s... x86-64 certainly has pulled off scalability.
@@shadowwolfmandan The superiority of the 68000 was largely due to the fact that it was designed as a 32-bit processor and scaled back to a 16-bit bus to make it cheaper and more viable in the market at the time. They even had an answer to the original x86 line with the 68008 (if I remember correctly), which only had an 8-bit data bus; that was used, for instance, in the Sinclair QL. Scaling the 68000 up to full 32-bit was mostly a non-issue, as it was originally designed for that, with a flat address space and wide registers available from day one. From what I was told it also used a much cleaner and more logical instruction set, but that's really beyond my experience. The biggest thing, I think, was memory management: the 68000 family was much easier to work with, without all the segmented addressing that x86 processors were forced into when running 16-bit code. This enabled the simple implementation of bitmapped graphics, as in the early Macintosh machines, which were technically much simpler than any IBM PC clone at the time.
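For anyone who never had to deal with it, the 16-bit x86 addressing the comment above refers to worked like this: a 20-bit physical address is formed from a 16-bit segment shifted left by four, plus a 16-bit offset. A quick sketch of the 8086 real-mode rule (the segment values below are the well-known VGA/CGA text-buffer location, used just as an illustration):

```python
def real_mode_linear(segment, offset):
    """8086 real-mode address: (segment << 4) + offset,
    wrapping at 1 MiB (ignoring later A20-line quirks)."""
    return ((segment << 4) + offset) & 0xFFFFF

# Different segment:offset pairs can name the same physical byte,
# which is part of what made 16-bit x86 code such a headache:
print(hex(real_mode_linear(0xB800, 0x0000)))  # 0xb8000 (text-mode buffer)
print(hex(real_mode_linear(0xB000, 0x8000)))  # 0xb8000 (same byte!)
```

On the 68000 there's nothing like this: an address register simply holds the address, which is why a bitmapped framebuffer was so much easier to reach from day one.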
They do the same thing with very different implementations: Transmeta was runtime translation, while Rosetta 2 is mostly install-time translation.
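The distinction is when the translation cost is paid. A minimal Python sketch of the two strategies, using a made-up three-opcode "guest ISA" (none of this resembles real Transmeta or Rosetta internals; it only illustrates lazy-with-cache vs. up-front translation):

```python
GUEST_PROGRAM = ["INC", "INC", "DBL"]  # hypothetical guest instructions

def translate(op):
    """Stand-in for binary-translating one guest instruction to host code."""
    return {"INC": lambda x: x + 1, "DBL": lambda x: x * 2}[op]

def run_jit(program, x):
    """Transmeta-style: translate each opcode the first time it's hit,
    cache the result, so translation cost is spread across the run."""
    cache = {}
    for op in program:
        if op not in cache:
            cache[op] = translate(op)  # pay translation cost at runtime
        x = cache[op](x)
    return x

def run_aot(program, x):
    """Rosetta-2-style: translate the whole binary up front,
    then execution is pure native code."""
    native = [translate(op) for op in program]  # pay the cost once, early
    for fn in native:
        x = fn(x)
    return x

print(run_jit(GUEST_PROGRAM, 1))  # 6
print(run_aot(GUEST_PROGRAM, 1))  # 6
```

Same answer either way; the trade-off is startup latency (AOT pays it all at install) versus steady-state overhead and the ability to handle self-modifying or runtime-generated code (where a JIT is unavoidable).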
That HP idea was not HP's idea, but based on the much older idea of RISC, the Reduced Instruction Set Computer: simplify, use that to speed things up (by being able to parallelize better), and let the software do the heavy lifting. See Sun's SPARC, DEC's Alpha, MIPS' MIPS chips (yes, indeed :D). Oh, the idea is *old*, much older than Sun's and DEC's (and certainly HP's) developments. Stanford and Berkeley were the origins of that idea (of course, it's older even than that, but at those universities it was kind of "put in writing" and fully developed). And one of the core parts was the compiler that knew how to create machine code optimized for the strengths of the design. Another bit: there was no x86-64 when AMD developed the Opteron. It was (and still is) known as AMD64. Only when Intel *copied* it (legally, since cross-licensing shenanigans were in effect) did it morph into x86-64. Credit where credit is due, you know.
At the time HP 'had' this idea, they owned DEC through their ownership of Compaq, hence they also had DEC Alpha expertise available.
@@davidbonner4556 I looked that up because I suspected that… but they bought Compaq (and thus Digital) in 2002, and the Itanium roll-out was in 2001. So, while it's an enticing idea, the timing doesn't fit.
Though on the matter of Compaq/DEC: I was pretty shocked back in the late 90s that this young whippersnapper of a company would *dare* buy such an old, legendary company (Compaq is younger than I am, while DEC is almost 10 years older :D)
I tuned out halfway through this video and started thinking about how much I adore Donna Murphy's performance in Tangled, which I didn't really get until I bought the CD soundtrack and listened to it without the visuals, because during the movie I spent all my time hating the character.