
iAPX: The first time Intel tried to kill x86

RetroBytes
79K subscribers
151K views

Intel seems to like trying to kill off its most successful line of x86 CPUs. Itanium might be the attempt we all remember, but the first was iAPX.
Let's look at the architecture that was once Intel's future, until it crashed and burned.
This video is sponsored by PCBWay (www.pcbway.com).
0:00 - Introduction
0:16 - A word from our sponsor
0:43 - The background
1:20 - What iAPX stands for (Intel can't spell)
2:33 - iAPX and high level languages
4:19 - Ada
5:59 - iMAX
6:55 - iAPX 432 (The lemon)
9:00 - The compiler is rubbish too
11:00 - IBM PC to the rescue
13:30 - 80286 the end of the iAPX 432
16:00 - Thanks

Science

Published: 31 May 2024

Comments: 711
@lawrencedoliveiro9104 2 years ago
5:23 Ada was designed for writing highly reliable code that could be used in safety-critical applications. Fun fact: the life-support system on the International Space Station is written in Ada, and runs on ancient Intel 80386 processors. So human lives depend on the reliability of all that.
@RetroBytesUK 2 years ago
Indeed, that was a design goal for it. It's still in use in a lot of military hardware, particularly missile guidance.
@JMiskovsky 2 years ago
@@RetroBytesUK Ada has roots in a US DoD project. Ada is used in the F-22 too.
@lexacutable 2 years ago
the safety critical design and military applications of Ada always seemed like the whole point of it, to me - odd that it wasn't mentioned in the video; I'd have thought this would be a major factor in Intel's decision to design chips around it.
@JMiskovsky 2 years ago
@@lexacutable Yeah, the flight computer of the F-22 is based on something obscure also. There is the military i960. And then there is the x87 arch. (Yes, x87.)
@RetroBytesUK 2 years ago
@@lexacutable Apparently the critical factor for Intel was the growing popularity of Ada in university comp sci departments. They thought that trend would continue out in the wider world and it would become a major language for application development. They were quite wide of the mark on that one; it turned into more of a niche language for military applications.
@sundhaug92 2 years ago
4:42 Ada didn't program the difference engine, which was built after her death, because it wasn't programmable. The difference engine works by repeatedly calculating using finite differences. Ada wrote a program for the analytical engine, a more advanced Babbage design that would've been the first programmable computing machine.
@RetroBytesUK 2 years ago
I know, but I could not find an image of the analytical engine with an appropriate copyright license that meant I could use it.
@sundhaug92 2 years ago
@@RetroBytesUK I understand, but if I didn't hear wrong you also called it the difference engine
@frankwalder3608 2 years ago
Your comment is as informative and interesting as his video, inspiring me to query my favorite search engine. Has the Analytical Engine been built, and have Ada's programs been run on it?
@sundhaug92 2 years ago
@@frankwalder3608 AFAIK not yet, but there have been talks of doing it (building it would take years, it's huge, Babbage was constantly refining his designs, and IIRC even the difference engine that was built is just the calculator part, not the printer)
@wishusknight3009 2 years ago
@@sundhaug92 The printer of the difference engine was built, if I recall, though in somewhat simplified form from the original design. And the analytical engine has been constructed in emulation as a proof of concept. That is about it so far. Several attempts at starting a mechanical build have met with money issues. This is not a device a machinist could make in his spare time over a number of years; I imagine it would take a team of at least a dozen people.
@davidfrischknecht8261 2 years ago
Actually, Microsoft didn't write DOS. They bought QDOS from Seattle Computer Products after they had told IBM they had an OS for the 8088.
@RetroBytesUK 2 years ago
It also turns out Seattle Computer Products did not write all of it either. Thanks to a lawsuit many years later, we know the similarities to CP/M were not accidental.
@petenikolic5244 2 years ago
@@RetroBytesUK At last, someone else who knows the truth of the matter: CP/M and DOS are brothers.
@grey5626 2 years ago
@@petenikolic5244 Nah, not brothers - unless your idea of brotherhood is Cain and Abel? Gary Kildall's CP/M was the star student, while Tim Patterson was the one who copied Kildall's answers from a leaked beta of CP/M-86 and then sold them to the college dropout robber baron Bill Gates, who in turn sold his stolen goods to the unscrupulous IBM. Gary Kildall didn't just run Digital Research and host The Computer Chronicles; he also worked at the Naval Postgraduate School in Monterey, California, collaborating with military postdoc researchers on advanced research that was often behind doors requiring clearance with retinal scanners. It's even been speculated, with some credibility, that Gary Kildall's untimely death was a murder made to look like an accident.
@akkudakkupl 2 years ago
Ahh, you mean CP/M-86
@tsclly2377 1 year ago
and .. most likely, a large purchase order and integration with a very big US Agency (of clowns)
@antonnym214 2 years ago
The 68000 was a nice chip! It has hardware multiply and divide. An assembly programmer's dream. I know, because I coded in Z-80 and 8080 Assembly. Good reporting!
@christopheroliver148 2 years ago
Are you me?
@peterbrowne3268 11 months ago
The 68000 CPU powered the very first Apple Macintosh released in 1984.
@4lpha0ne 7 months ago
Yeah, it had so many registers compared to x86, nice addressing modes and useful instructions (as you mentioned, just that mul and div were microcoded until later 68k family versions appeared), and 32-bit register width.
@vikiai4241 7 months ago
I recall reading somewhere that the IBM engineers designing the PC were quite interested in the 68000, but it wasn't going to be released (beyond engineering samples) in time for their deadline.
@earx23 2 months ago
It was. The Intel 8086 had it too, but on the 68000 you had it on all 8 data registers; on Intel, just on a single register. Everything had to be moved around a lot more. Intel was well behind Motorola. I consider the 486 the first CPU where they had actually caught up a little, but the 68060 still bested the Pentium. Why? General-purpose registers, baby.
@tschak909 2 years ago
It wasn't just that the processor returned by value. Values were literally wrapped in system objects, each with all sorts of context wrapped around it. Put all this together and you'd have a subroutine exit taking hundreds of nanoseconds. This is when all the system designers who WERE interested in this told the Intel rep, "This is garbage," and left the room.
@samiraperi467 2 years ago
Holy fuck that's stupid.
@wildbikerbill6530 2 years ago
Correction: "...subroutine exit take hundreds of microseconds." Clock rates and memory speeds were many orders of magnitude slower than today.
@leeselectronicwidgets 2 years ago
Ha, great video. I love the hilarious background footage... that tape drive at 2 mins in looks like it's creasing the hell out of the tape. And lots of Pet and TRS-80 footage too!
@RetroBytesUK 2 years ago
That tape drive at 2 mins is a prop from a film. While it's spinning, the traffic light control system in an Italian city is going nuts. Never let Benny Hill near your mainframe.
@leeselectronicwidgets 2 years ago
@@RetroBytesUK Haha!
@criggie 2 years ago
Yeah, looks like we can see through the ferrous material and see clear patches in the carrier backing. I think it's just on show, but eating that tape slowly.
@tschak909 2 years ago
Intel would use the term iAPX286 to refer to the 80286, along with the 432 in sales literature. Intel had intended the 286 for small to medium time sharing systems (think the ALTOS systems), and did not have personal computer use anywhere on the radar. It was IBM's use in the AT that changed this strategy.
@herrbonk3635 2 years ago
Yes, for a short while they did. I have 8086 and 80286 datasheets both with and without "iAPX".
@jecelassumpcaojr890 2 years ago
One more thing that really hurt the 432's performance was the 16 bit packet bus. This was a 32 bit processor, but used only 16 lines for everything. Reading a word from memory meant sending the command and part of the address, then the rest of the address, then getting half of the data and finally getting the rest of the data. There were no user visible registers so each instruction meant 3 or 4 memory accesses, all over these highly multiplexed lines.
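The cost described in this comment can be put into a rough back-of-envelope model. This is an illustrative sketch only: the transfer counts and the 24-bit address width are assumptions for the sake of the arithmetic, not measured 432 figures.

```python
# Toy model of a 32-bit processor forced through a 16-bit multiplexed
# packet bus, as described above. All numbers are illustrative assumptions.

BUS_WIDTH_BITS = 16
WORD_BITS = 32

def transfers_per_word_read(addr_bits=24):
    # command + partial address, then the rest of the address,
    # then the data arriving in 16-bit halves
    addr_transfers = -(-addr_bits // BUS_WIDTH_BITS)  # ceiling division
    data_transfers = WORD_BITS // BUS_WIDTH_BITS
    return addr_transfers + data_transfers

def transfers_per_instruction(mem_accesses=4):
    # with no user-visible registers, every operand crosses the bus
    return mem_accesses * transfers_per_word_read()

print(transfers_per_word_read())    # 4 bus transactions per 32-bit read
print(transfers_per_instruction())  # 16 transactions for a 4-access instruction
```

Even with generous assumptions, every instruction turns into a long sequence of bus transactions, which is consistent with the comment's point about the highly multiplexed lines throttling the design.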
@wishusknight3009 2 years ago
It was also how poorly the IP blocks were arranged on the two packages, requiring up to multiple jumps across both chips to complete an instruction. Someone I knew surmised a single-chip solution with some tweaks could have had IPC higher than a 486! He saw quite a lot of potential in it and remarked it was too bad Intel pulled the plug so soon.
@HappyBeezerStudios 4 months ago
To be fair, that kind of limitation didn't hurt the IBM PC. The chosen 8088 was a 16-bit CPU on an 8-bit bus.
@wishusknight3009 1 month ago
@@HappyBeezerStudios The 8088 had a register stack though, which will make some difference. Note the 8086 only had a few % performance advantage over the 8088. It really couldn't use the bandwidth up.
@MrPir84free 2 years ago
That programming reminds me very much of a "professional" I used to know; he worked on a Navy base and wrote his own code. Professional, in this case, meant he received a paycheck. The dataset came from COBOL - very structured, each column with a specific set of codes/data meaning something. He wrote some interpreted BASIC code to break down the dataset.

It went like this: he would first convert the ASCII character into a number. There are 26 lowercase characters, 26 uppercase characters, and 10 digits (zero through nine). So first he converted the single ASCII character into a number. Then he used 62 IF...THEN statements to make his comparison. Mind you, it had to go through ALL 62 statements. This was done for several columns of data PER line, so the instruction count really stacked up. Then, based upon the output, he'd reconvert the code back to a character or digit using another 62 IF...THEN statements. And there were about 160 columns of data for each line of the report he was processing, with a single report running from a few thousand lines up to a few hundred thousand rows.

Needless to say, with all of the IF...THEN statements (not IF...THEN...ELSE) it was painstakingly slow. A small report might take 2 to 3 hours to process; larger reports were an all-day affair, with ZERO output until completion, so most who ran it thought the code had crashed. In comparison, I wrote a separate routine in QuickBASIC from scratch, not even knowing his code existed, using ON...GOSUB and ON...GOTO statements, with the complete report taking about 1.5 minutes and every tenth line output to screen so one knew it was actually working. Only later did I find out his code already existed, but no one used it due to how long it took to run.

Prior to this, the office in question had 8 to 10 people fully engaged, 5 to 6 days a week, 8 to 10 hours a day, manually processing these reports: reading the computer printouts and transferring the needed rows to another report, by hand, on electric typewriters. Yes, this is what they did... as stupid as that seems...
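The contrast in this anecdote - a linear chain of 62 comparisons per character versus a single table lookup - can be sketched in a few lines. This is a minimal Python illustration of the idea, not the original BASIC; the function names and the 62-character alphabet are assumptions for the example.

```python
import string

# the 62 characters the report code had to classify
ALPHABET = string.ascii_letters + string.digits

def classify_linear(ch):
    # mimics 62 sequential IF...THEN tests (no ELSE), so every
    # candidate is examined even after a match is found
    code = -1
    for i, candidate in enumerate(ALPHABET):
        if ch == candidate:
            code = i
    return code

# the ON...GOSUB / table-driven equivalent: build the mapping once,
# then each character costs a single lookup
LOOKUP = {ch: i for i, ch in enumerate(ALPHABET)}

def classify_lookup(ch):
    return LOOKUP.get(ch, -1)

assert classify_linear('9') == classify_lookup('9')
```

Per character the difference is 62 comparisons versus one hash lookup; multiplied by 160 columns and a few hundred thousand rows, that is roughly the gap between an all-day run and a couple of minutes.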
@lucasrem 7 months ago
NAVY + COBOL - are you a Korean War guy, or older?
@HappyBeezerStudios 4 months ago
@@lucasrem Could be COBOL-2023, the most recent release :) Yes, the old stuff like COBOL, Lisp, Fortran, ALGOL etc. is still updated and used.
@dlewis9760 3 months ago
@@HappyBeezerStudios I just retired in Oct 2023, after 40+ years programming, almost all in the bank biz. You get stuff from vendors and clients that's obscure and obsolete. It was probably in 2021 that I got a record layout to go with the data. The record layout was written in COBOL. A buddy at work said "I've got that record layout. You won't believe what it's written in." He was correct; I actually do believe it. The amount of stuff still written around 80 characters is ridiculous. NACHA or ACH records are built around 94-character records. There were 96-column cards that never took off; IBM pushed them for the System/3. I always wondered if the 94-character record layouts were based on that. Stick a carriage return/line feed on the end and you are in business.
@steveunderwood3683 2 years ago
The 8086/8088 was a stopgap stretch of the 8 bit 8080, started as a student project in the middle of iAPX development. iAPX was taking too long, and interesting 16 bit devices were being developed by their competitors. They badly needed that stopgap. A lot of the most successful CPU designs were only intended as stopgap measures, while the grand plan took shape. The video doesn't mention the other, and more interesting, Intel design that went nowhere - the i860. Conceptually, that had good potential for HPC type applications, but the cost structure never really worked.
@alanmoore2197 2 years ago
It also doesn't mention that the i960 embedded processor family was created by defeaturing 432 based designs - and this went on to be a volume product for Intel in computer peripherals (especially printers). At the time the 432 was in development the x86 didn't exist as a product family and intel was best known as a memory company. Hindsight is easy - but at the time Intel was exploring many possible future product lines. The i960 family pioneered superscalar implementation which later appeared in x86 products. The i750 family implemented SIMD processing for media that later evolved into MMX. You have to consider all these products as contributing microarchitecture ideas and proofs of concept that could be applied later across other product lines. Even the dead ends yielded fruit.
@80s_Gamr 1 year ago
He said at the beginning of the video that he was only going to talk about Intel's first attempt at something different than x86.
@fajajara 1 year ago
@@alanmoore2197 …and in F-22 Raptor.
@charleshines2142 9 months ago
iAPX must have taken too long from trying to fix its inherent flaws. It seems they dodged a bullet when they developed the 8086 and 8088. Then they had the terrible Itanium. I guess the best thing it had was a somewhat nice name; it almost sounds like something on the periodic table.
@HappyBeezerStudios 4 months ago
Interestingly NetBurst and the Pentium 4 also were sort of a stopgap because their IA-64 design wasn't ready for consumer markets. Yes, we only got the Pentium 4 because Itanium was too expensive to bring it to home users and because with later models they removed the IA-32 hardware.
@brandonm750 2 years ago
Have you ever heard the tragedy of iAPX the Slow? I thought not, it's not a story Intel would tell. Very great video, loved it.
@serifini2469 2 years ago
I'm guessing one of the subsequent attempts at Intel shooting itself in the foot would be the 1989 release of the i860 processor. For this one they decided that the 432 was such a disaster that what they needed to do was the exact opposite: implement a RISC that would make any other RISC chip look like a VAX. So we ended up with a processor where the pipelines and delays were all visible to the programmer. If you issued an instruction, the result would only appear in the result register a couple of instructions later, and for a jump you had to remember that the following sequential instructions were still in the pipeline and would get executed anyway. Plus context switch state saves were again the responsibility of the programmer. This state also included the pipelines (yes, there were several), and saving and restoring them was your job. All this meant that the code to do a context switch to handle an interrupt ran into several hundred instructions. Not great when one of the use cases touted was real-time systems. Again Intel expected compiler writers to save the day, with the same results as before. On paper, and for some carefully chosen and crafted assembly code examples, the processor performance was blistering. For everyday use, less so, and debugging was a nightmare.
@treelineresearch3387 2 years ago
That kinda explains why I only ever saw i860 end up in things like graphics processors and printer/typesetter engines, things that were really more like DSP and stream processing tasks. If you're transforming geometry in a RealityEngine you can afford to hand-optimize everything, and you have a bounded set of operations you need to do fast rather than general computation.
@roadrash1021 2 years ago
I worked at two of the companies that made extensive use of the i860 as a math engine. Yeah, you could make it scream for the time, especially if you're into scraping your knuckles in the hardware, but the general impression was what a PITA.
@X_Baron 2 years ago
Intel sure has a long history of thinking they can handle compilers and failing miserably at it.
@pizzablender 2 years ago
I remember, from the perspective of exception handling, that the handler needed to correct and drain all those instructions. FWIW that idea was tried later in more RISC processors in a simpler way, for example "the instruction behind a JUMP always gets executed, just swap them". Simple and good, but newer CPU implementations wouldn't have that slot, so again this was not a future-proof idea.
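The "instruction behind a JUMP always gets executed" idea mentioned here is the classic branch delay slot. A toy interpreter can show the semantics; this is a sketch of the concept on a made-up two-instruction machine, not any real ISA.

```python
# Toy machine with a one-instruction branch delay slot: the instruction
# immediately after a jump always executes before the jump takes effect.
# Ops: ('add', n) adds n to the accumulator; ('jmp', target) jumps.

def run(program):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]
        if op == 'add':
            acc += arg
            pc += 1
        else:  # 'jmp'
            # delay slot: execute the following instruction first
            slot_op, slot_arg = program[pc + 1]
            if slot_op == 'add':
                acc += slot_arg
            pc = arg
    return acc

# The ('add', 1) sits "behind" the jump, yet still executes (delay slot),
# then control lands on ('add', 10). ('add', 100) is skipped entirely.
print(run([('jmp', 3), ('add', 1), ('add', 100), ('add', 10)]))  # prints 11
```

The compiler's job on such machines was to move a useful instruction into the slot (the "just swap them" trick); once implementations gained deeper pipelines the fixed one-slot assumption stopped matching the hardware, which is the future-proofing problem the comment describes.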
@heinzk023 1 year ago
Let me guess: Being a compiler developer at Intel must have been a nightmare. CPU design guy: "Ah, well, our pipelines are a little, ehm, "complex", but I'm sure you compiler guys will take care." Compiler guy: "Oh no, not again"
@IanSlothieRolfe 2 years ago
Back in 1980 I was off to university to do a Computer Science degree, and was also building a computer at the same time - I'd been mucking about with digital electronics for a number of years and, with the confidence of a teenager who didn't know whether he had the ability to do so, I had designed several processor cards based on the TMS9900, 6809 and others (all on paper, untested!). I had read about the iAPX 432 in several trade magazines and was quite excited by the information I was seeing, although that was mostly marketing. I tried getting more technical data through the university (because Intel wouldn't talk to hobbyists!) but found information thin on the ground, and then six months or so later all the news was basically saying what a lemon the architecture was, so I lost interest, as it appears the rest of the world did. About a decade later my homebrew computer booted up with its 6809 processor; I played with it for a few weeks, then it sat in storage for 20 years, because "real" computers didn't require me to write operating systems, compilers etc. :D
@johnallen1901 2 years ago
Around 1989 I was building my own home-brew computer around a NEC V30 (8086 clone), an Intel 82786 graphics coprocessor, and three SID chips for sound, because why not? Everything was hand-wired and soldered, and even with tons of discrete logic chips, I managed to get it operating at 10 MHz. Initially equipped with 64K of static RAM, it was my senior project in college. Instead of the iAPX, I was interested in the National Semiconductor 32000 CPUs, and my college roommate and I had big plans to design and build a computer around that platform. A 32032 CPU/MMU/FPU setup was about as VAX-like as you could get, with its full 32-bit architecture and highly orthogonal CISC instruction set. Unfortunately computer engineering was not yet a thing at our college, so we were on our own and simply didn't have the resources to get that project off the ground.
@Mnnvint 2 years ago
@@johnallen1901 That sounds impressive! It's actually not the first time I see people talking about their late 80s homebrew computers in youtube comments... if you (or any of the others) still had it and could show it off in a video, that would be beyond cool. Three SIDs at 10mhz, yes please :)
@javabeanz8549 2 years ago
​@@johnallen1901 I thought that the V30 was the 80186 or 80188 instruction and package compatible CPU?
@johnallen1901 2 years ago
@@javabeanz8549 Yes, the V20 and V30 were pin compatible with the 8088 and 8086, and both CPUs included the new 80186 instructions and faster execution.
@javabeanz8549 2 years ago
@@johnallen1901 Gotcha! 8086 replacement. I never did see many of the 80186/80188, except in some specs for industrial equipment. I did buy a V20 chip for my first XT clone, from the Fry's in Sunnyvale, back in the day. The performance boost was noticeable.
@Vanders456 2 years ago
Intel building the iAPX: Oh no this massively complicated large-die CPU with lots of high-level language support that relies on the compiler to be good *sucks* Intel 20 years later: What if we built a massively complicated large-die CPU with lots of high-level language support that relies on the compiler to be good?
@scottlarson1548 2 years ago
Didn't Intel also build a RISC processor that... relied on the compiler to be good?
@acorredorv 2 years ago
You could almost just change the audio on the video and replace iAPX with IA64 and it would be the same.
@wishusknight3009 2 years ago
The IA-64 was not quite as high-level as iAPX was. And the difference is it could actually perform exceptionally well in a few things, if not best in market. But outside of a few niche workloads where it shined, it was rather pedestrian in some rather common and ubiquitous server workloads; this and its slow x86 compatibility mode were its biggest problems. In the areas where it was lousy, it couldn't even outperform the best of x86 at the time. And part way through its development Intel seemed to lose interest, relegated it to older process nodes, and didn't give it a large development team to iterate on it over generations. It got a few of the AMD64 features ported to it and whatnot, but it was left languishing after Itanium 2.
@DorperSystems 1 year ago
@@scottlarson1548 all RISC processors rely on the compiler to be good.
@scottlarson1548 1 year ago
@@DorperSystems And CISC processors don't need good compilers?
@ChannelSho 2 years ago
Allegedly the "X" stood for arCHItecture, as in the Greek letter chi. Feels like grasping at straws though.
@RetroBytesUK 2 years ago
Oh, that really is grasping, isn't it.
@Autotrope 2 years ago
Ha I was thinking maybe the "ect" in architecture sounded a little like an "x"
@AttilaAsztalos 1 year ago
Well, technically speaking, no more tenuous than the "X" in CHristmas...
@teddy4782 1 year ago
I always assumed the "x" was basically just a wildcard. 386, 486, 586, etc .....just became x86 because the underlying architecture was the same (or at least the lineage).
@AaronOfMpls 1 year ago
@@AttilaAsztalos Or in the academic preprint site _arXiv._
@tschak909 2 years ago
IBM ultimately chose the 8088 because it meant that the I/O work that had been done on the Datamaster/23 could be lifted and brought over. (No joke, this was the tie-breaker)
@colejohnson66 10 months ago
I thought the deal breaker was sourcing the M68k? There were multiple reasons.
@tschak909 10 months ago
@@colejohnson66 The 68K was discarded early on in planning, because not only was there no possibility of a second source (Motorola explicitly never did second source arrangements), there was no production silicon for the 68000 yet (even though the 68000 was announced in 1979, the first engineering samples didn't appear until Feb 1980, and production took another half a year to ramp up). (a lot of ideas were discarded very early on in planning, including the idea to buy Atari and use their 800 system as a basis for the IBM PC, although a soft tooling mock-up was made and photographed) :)
@tschak909 10 months ago
@@colejohnson66 Production samples of the M68K weren't available until Feb 1980, with the production channels only reaching capacity in September 1980. This would have been a serious problem with an August 1981 launch. Motorola's second sourcing also took a bit of time, because of the process changes to go high speed NMOS. Hitachi wouldn't start providing a second source to the 68000 until a year after the PC's launch.
@HappyBeezerStudios 4 months ago
And they moved the S-100 bus into the upper segment. Imagine if the S-100 (and some successors) would be used in modern x86 PCs. And there was an addon card for 8086 on S-100
@reaperinsaltbrine5211 3 months ago
@@tschak909 Actually the opposite: Intel was never big on the idea of second sourcing, and it was IBM and the US govt. (who wanted a more stable supply) that made them agree with AMD. Motorola at the time sold 68k manufacturing rights to everyone willing to manufacture it: Hitachi, Signetics, Philips, etc. Even IBM licensed it and used its microarchitecture to build embedded controllers (like for high-end HDDs) and the lower-end 390s (I believe they used 2 CPU dies with separate microcode in them). I think the main reason IBM chose the 8086/8088 is all the preexisting 8080/Z80 code that was easy to binary-translate to it. The second is cost and business policy: IBM actually partially OWNED Intel at the time.
@wishusknight3009 2 years ago
Someone I knew in my area had a type of iAPX development machine, and he remarked its performance was very underrated: it had IPC higher than a 386 when coded properly. He estimated a single-chip derivative with tweaks could outperform a 486 or 68040 in IPC. Sadly he developed dementia and passed away about 10 years ago, and I have never been able to track the hardware down or find out what happened to it. He had one of the most exotic collections I had ever seen, and he was a very brilliant man before mental illness overtook him. I only got to meet him at the tail end of his lucidity, which was such a shame.
@RonCromberge 2 years ago
You forgot the 80186 processor! Not used in a PC. But it was a huge failure: it couldn't jump back out of protected mode without a reboot!
@orinokonx01 2 years ago
The 80286 couldn't return to real mode from protected mode without a reboot, either. Bill Gates hated that...
@ScottHenion 2 months ago
The 80186 did not have a protected mode. It was an 8086 with the interrupt and bus controller chips brought on-die. It was intended for embedded systems, although some PCs used it (Tandy) but had compatibility issues due to the different interrupt controller. I worked on systems that used up to 16 80186s and '88s and wrote a lot of code for them in C and ASM. Variants of the 80186 were still made after the 286 and 386 were basically retired. They were common in automotive ECUs, printers, automation and control systems.
@lcarliner 1 year ago
Burroughs was the first to feature a compiler-friendly hardware architecture, in the B5000 model. It was also the first to feature a virtual memory architecture. As a result, huge teams of software developers were not needed to implement Algol-60 and other major compilers with great reliability and efficiency. Eventually, Burroughs was able to implement the architecture in IC chips.
@MrWoohoo 2 years ago
You should do an episode on the Intel 860 and 960. I remember them being interesting processors but never saw anything use them.
@cathyfarcks1242 2 years ago
Yes that would be interesting. I know SGI used them. They got quite a bit of use as coprocessors I think
@asm2750 2 years ago
i960 was used in the slot machines of the mid 90s. I believe it was also used in the F-16 avionics.
@stanfordlightfoot7079 1 year ago
The i860 was used in the Intel Paragon at Sandia NL; its successor there, the Pentium Pro-based ASCI Red, was the first machine to break the TFLOPS barrier.
@grantstevens5 1 year ago
"The Second Time Intel Tried to Kill the x86". Slots in nicely between this video and the Itanic one.
@hye181 1 year ago
The i860 was the original target platform for Windows NT; Microsoft was going to build an i860 PC.
@Gurdia 2 years ago
I've seen a bunch of videos on Itanium, but this is the first time I've ever even heard of iAPX. There are not a lot of videos on these lesser-known topics from the early days of home computing, so I appreciate the history videos you've done.
@a500 2 years ago
Yet again a super interesting video. Thank you. I had no idea that this processor existed.
@paulblundell8033 2 years ago
Intel did do another OS called RMX (it stood for Real-time Multi-tasking Executive), used extensively with their range of Multibus boards. It was powerful and popular in control systems (like factory automation). There was also ISIS, which was used on the Intel development systems. I used to service these; they were a major tool for BT, Marconi, IBM, and anyone wanting to develop a system around a microprocessor, as they had In-Circuit Emulator (ICE) units that enabled developers to test their code before they had proper hardware, or to test their hardware, as you could remove the processor and run and debug your code instruction by instruction. During the early 80s there was a massive rush to replace legacy analogue systems with digital equipment controlled by a microprocessor, and Intel had development systems, their ICE units and various compilers (machine code level, PL/M, Pascal, C, to name a few), so a hardware engineer could develop his code and test it quicker than with most of the competitors. Fond memories of going to factories where they were putting their first microprocessor in vending machines, petrol pumps, cars, etc. One guy developing his coke dispensing machine wanted a way of networking all the machines (no internet then) and having them act as a super-computer calculating Pi, as they would be spending most of their life doing nothing.
@orinokonx01 2 years ago
I've got an Intel System 310 running iRMX 86! It's actually been rebadged by High Integrity Systems, and has three custom multibus boards which are the core iAPX 432 implementation by them. The iRMX 86 side works well enough, but the Interface Processor on the iAPX 432 processor board refuses to initialise, and I suspect it is due to faulty RAM. Very interesting architecture, and very interesting company. They appear to have banked their future on the architecture being successful!
@paulblundell8033 2 years ago
@@orinokonx01 I would have had all the service manuals for this back in the day. They all got thrown away on a house move. I think (and this is a big guess) that HIS were involved with the railways, but I used to fix 310s in care homes, airports, railways, and motorway control rooms, to name a few. iRMX was also available as 286, 386, etc. There was a strong rumour that IBM considered it as the OS for their PS/2 range instead of OS/2, which sort of makes some sense as it would break the reliance on Microsoft (who were beginning to get a little too powerful in the industry!) and IBM had historical ties with Intel. When I joined Intel in 1984, IBM bought a percentage of the company to keep it afloat under growing pressure from Japanese chip producers. Using iRMX would also have made sense as it was not limited in memory size like standard DOS and used protected mode as standard, so was pretty robust (it had to be, for mission-critical systems!). I specialised in Xenix running on the 310 and it got me all round Europe supporting the systems back in the 80s. 😊
@ModusPonenz
@ModusPonenz 2 года назад
I remember RMX and ISIS. They ran on Multibus-based computers, Intel's version of the S100 bus. ISIS had a whole set of editors, assemblers and emulators. While working at Intel, I used to write 8085 code, assemble it, and write the machine code into EPROMs. The EPROMs were installed into the main board of 8085-based systems that tested Intel memory products. As bugs were fixed and features added to the code, I'd remove the EPROM from the system, erase it under an ultraviolet light, edit the code, re-assemble, and re-program the EPROM with the new code.

Although I never wrote code myself on the RMX systems, I worked alongside some folks who did. It was all in PL/M. I recall that RMX ran on the System 310, System 360 or System 380 boxes; again they were Multibus-based and had various options for CPU cards, memory, I/O, and later, networking.

Later I put together a lab that used Xenix on the various Multibus boxes. I was trying to acquire additional hardware to build up the lab and noticed in our internal warehouse inventory some unused hardware that might be useful. It was pallet after pallet of scrapped iAPX 432 boxes. They also used Multibus. I was able to repurpose that hardware (without the iAPX 432 CPU boards) and turn them into 80286-based boxes running Xenix.
@paulblundell8033
@paulblundell8033 2 года назад
@@ModusPonenz I worked on systems up to Multibus II. It was needed for the 32 bit processors. I left Intel in 1991 when they put my group up for sale. The PC architecture killed the need for their own service engineers. Great times though, could book myself on any of their courses, did 8085 assembler through to 386 architecture.
@boblangill6209
@boblangill6209 2 года назад
I attended a presentation in Portland Oregon where Intel engineers were extolling features of iAPX 432. I remember they discussed how its hardware supported garbage collection addressed a key aspect of running programs written in high level languages. During the Q&A afterwards, someone asked about its reported disappointing performance numbers. Their response didn't refute or address that issue.
@goodgoodstuff
@goodgoodstuff 2 года назад
Having done ASM for a 286 and a 68000 at university, I have to say, at the time I preferred the 68000.
@RetroBytesUK
@RetroBytesUK 2 года назад
Me too if I'm being honest.
@lawrencemanning
@lawrencemanning 2 года назад
@@RetroBytesUK Ah, if only the 68K had been released a few months earlier, IBM would probably have picked it for the PC. Such a shame, as the x86 is pretty horrible from an ISA POV. FWIW the 68K has... 68,000 transistors, as referenced by Wikipedia. Cool video.
@DosGamerMan
@DosGamerMan 2 года назад
@@RetroBytesUK All those general purpose 32 bit registers. So good.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 года назад
As somebody who had done assembly language on the old PDP-11, it was pretty clear where Motorola’s inspiration came from. ;)
@Membrane556
@Membrane556 2 года назад
@@lawrencemanning The 68K was more equivalent to the 80286 than the 8086.
@fsfs555
@fsfs555 2 года назад
An expensive arch with an inordinate dependency on a magic compiler that never worked? It's the Itanium's granddaddy. x86 was never intended to do what it ended up doing, which is why Intel kept trying to replace it. Too bad the bandaids were so effective as to render most attempts futile. The i860 was Intel's next attempt after the iAPX, a pure RISC design this time, but it also failed for a variety of reasons. The iAPX wasn't unique in using multi-chip CPUs (multi-chip and discrete-logic CPU modules were common in mainframes such as the System/360), but in minis and micros they weren't ideal, and Intel certainly would have needed to consolidate to a single-chip implementation ASAP to keep up (in both performance and price) with chips such as the 68k, and especially the RISC designs that were less than a decade off. Also, this wasn't the only arch that tried to run a high-level language directly: AT&T's CRISP-based Hobbit chips ran C (or they did until development was cancelled).
@RetroBytesUK
@RetroBytesUK 2 года назад
I did not think many people would have heard of Hobbit, so I went for the Lisp machine as an example, as people at least got to see them in comp-sci labs.
@Membrane556
@Membrane556 2 года назад
The i860 did find use in a lot of raid controllers and other applications. One thing that killed Itanium was AMD making the X86-64 extensions and the Opteron line.
@fsfs555
@fsfs555 2 года назад
@@RetroBytesUK I'm a little bit of a Mac nerd. The Hobbit was pursued for potential use in the Newton, but Hobbit was overpriced and also sucked so it was canned in favor of DEC's StrongARM. Then Be was going to use it in the BeBox but when the Hobbit project was cancelled (because Apple wasn't interested in forking over the millions of dollars demanded by AT&T to continue development), they switched to the PPC 603.
@fsfs555
@fsfs555 2 года назад
@@Membrane556 The i960 was fairly widely used in high-end RAID controllers because apparently it was really good at XORing. RetroBytes has a video on the channel about the Itanium that's worth a watch if you haven't already. It was a huge factor to be sure but there was more to it than just the introduction of x86-64 (including that ia64 was too expensive, too slow, and had dreadful x86 emulation).
@douggrove4686
@douggrove4686 2 года назад
@@Membrane556 Ah, that was the i960. The i960 had an XOR opcode, and RAID does a lot of XOR...
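The XOR trick the comment alludes to is how RAID-5 style parity works: the parity block is the XOR of all data blocks, so any single lost block can be rebuilt by XORing everything that survives. A minimal sketch (function and data names are invented for illustration):

```python
def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks on three hypothetical drives, plus a parity drive.
data = [b"\x0f\x0f", b"\xf0\x01", b"\x33\x44"]
parity = xor_blocks(data)

# Lose data[1]: rebuild it by XORing the survivors with the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because XOR is its own inverse, the same routine serves both to compute the parity and to reconstruct a missing block, which is why a fast XOR path mattered so much for RAID controllers.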
@DM01710
@DM01710 2 года назад
I really enjoy these videos. I have a bit of knowledge from problem-solving, repairs and builds, but these vids make a nice introduction to some more complex topics. Thank you.
@framebuffers
@framebuffers 2 года назад
Seems like they didn’t learn their lessons of “letting the compiler do the thing” when they did Itanium.
@MoultrieGeek
@MoultrieGeek 2 года назад
I was thinking along similar lines, "been there, done that, failed hard".
@adul00
@adul00 Год назад
I have a slight feeling, that it may be kind of the opposite approach. iAPX intended to make writing compiler(s) for it easy. Itanium made writing efficient compilers for it impossible.
@reaperinsaltbrine5211
@reaperinsaltbrine5211 3 месяца назад
To be fair, it is at least as much HP's fault: they got enamored with VLIW when they bought Apollo and got their PRISM architecture with it. HP was already working on a 3-way VLIW replacement for PA-RISC. Which is a shame, because PA-RISC was a very nice and powerful architecture :/
@JonMasters
@JonMasters 2 года назад
The iAPX was actually one of the first commercial capability architectures. See contemporary efforts such as CHERI. You might consider a future video on that aspect of the architecture. While it failed, it was ahead of its time in other respects.
@wcg66
@wcg66 2 года назад
My first job was programming Ada for Ready Systems' VRTX operating system, targeting Motorola 68000 VME boards. That is some retro tech soup. Killing the x86 would have done us a favour really; the fact we are stuck on it in 2022 is a bit of a tech failure IMO. I'm glad there is real competition now from ARM-based CPUs.
@cal2127
@cal2127 2 года назад
Love the channel. These deep dives on obscure CPUs are great. Any chance you could do a deep dive on the i860 and the nCUBE hypercube?
@flynnfaust6004
@flynnfaust6004 2 года назад
The X in "Architecture" is both silent and invisible.
@RetroBytesUK
@RetroBytesUK 2 года назад
🤣
@quantass
@quantass 2 года назад
Love your content and presentation style.
@minutemanqvs
@minutemanqvs Год назад
This channel is awesome, so much history I even didn’t know about.
@bradsmith7219
@bradsmith7219 2 года назад
Woo hoo! Thanks for these videos, they're great! You're my favorite retro-computing youtuber.
@RetroBytesUK
@RetroBytesUK 2 года назад
Thanks Brad, thats nice of you to say.
@SignumCruxis0
@SignumCruxis0 2 года назад
You deserve more subs! Great informative video.
@stephenjacks8196
@stephenjacks8196 2 года назад
Correction: the IBM Datamaster already existed and used an Intel 8085 chip, which uses the same style of multiplexed bus as the 8088. It was not a random choice to pick the 8088.
@RachaelSA
@RachaelSA 2 года назад
Thank you for making these :)
@jenniferprime
@jenniferprime 2 года назад
I love your architecture videos
@Waccoon
@Waccoon 2 года назад
It's hard to get across how bonkers iAPX is unless you read the programming documentation, which I did a few years ago. Not only are instructions variable length in true CISC fashion, they don't have to align to byte boundaries and can be arbitrary bit lengths. That's just nuts. Even the high-level theory of operation is hard to understand. All of that so they could keep the addressing paths limited to 16 bits (because all that 64K segmentation stuff in x86 worked out so well, didn't it?).

I'm not a fan of RISC, as I feel more compromise between the RISC and CISC worlds might be pretty effective, but iAPX really takes the cake in terms of complexity, and it's no wonder RISC got so much attention in the mid 80s. It's a shame the video didn't mention the i860 and i960, which were Intel's attempts at making RISC designs, a bit quirky and complicated in their own right. Of course, those processors saw some success in military and embedded applications, and I don't believe they were meant to compete with x86, let alone replace it.

One thing worth mentioning: the 68000 processor used about 40K transistors for the logic, but also about 35K more for the microcode. The transistor density of microcode is much higher than logic, but it's misleading to just chop the microcode out of the transistor count. That said, the 68000 was also a very complex processor, and in the long term probably would not have been able to outperform x86. That's a shame, as I liked programming 68K.
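To see why bit-aligned instructions are such a decoder burden, consider what fetching a field even looks like: every field read needs shift-and-mask work across byte boundaries. A hypothetical sketch (the 6-bit/5-bit field layout here is invented for illustration, not the real 432 encoding, though the 432 did read fields LSB-first):

```python
def read_bits(data: bytes, bit_pos: int, width: int) -> int:
    """Extract a width-bit field starting at an arbitrary bit offset."""
    value = 0
    for i in range(width):
        byte, bit = divmod(bit_pos + i, 8)
        value |= ((data[byte] >> bit) & 1) << i  # LSB-first extraction
    return value

stream = bytes([0b10110100, 0b01100011])
op = read_bits(stream, 0, 6)   # a 6-bit field
arg = read_bits(stream, 6, 5)  # a 5-bit field straddling the byte boundary
```

A byte-aligned ISA can index straight into memory; here the decoder must track a bit cursor and reassemble every operand, which is part of why the 432's instruction fetch was so slow.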
@RetroBytesUK
@RetroBytesUK 2 года назад
Have you seen the timing of some of those instructions? That's the part that really surprised me. We are all used to variable instruction times, but this is on a very different scale to other CISC processors. The 68060 shows how they could have continued the line, with complex instructions being broken down in a decode phase into simpler internal instructions for execution, just like Intel did with the Pentium and later chips. I think the fact they could not get the same economies of scale as Intel meant they had to narrow down to one CPU architecture, and they chose PowerPC. That was probably the right move back then, and PowerPC had a good life before it ran out of road and space in the market.
@Waccoon
@Waccoon 2 года назад
@@RetroBytesUK There are two main issues with 68K that make it complicated to extend the architecture. First, 68K can access memory more than once per instruction, and it also has a fixed 16-bit opcode space limited to 32-bit operations. The only way to properly extend 68K to 64-bit is to either make an alternate ISA or resort to opcode pages. It's doable, but I doubt a modern 68K machine would really have a simpler core than x86. If the 68000 had been designed with 64-bit support from the start, then things might have worked out differently. One of these days I should look into Motorola's attempt at RISC, the 88000. I've read that it was pretty good, but for some reason nobody used it.
@RetroBytesUK
@RetroBytesUK 2 года назад
@@Waccoon I believe the reason it struggled was that it kept being delayed time and time again, then was available in very low quantities. So those designing machines around it moved on to another cpu.
@5urg3x
@5urg3x 2 года назад
Great video once again, I enjoyed it. I never knew about this, though I had heard of Ada. Never really knew what it was, just heard of it in passing.
@lawrencemanning
@lawrencemanning 2 года назад
It’s a nice language. Ada was the DoD's attempt to force their suppliers into producing software in one well-designed language, vs the dozens their suppliers were using. VHDL was their attempt at the same for describing hardware. Both have very similar syntax. I learned Ada at university (1995) and VHDL much later, and they are both very verbose, rigid languages, since these are actually good attributes. It's a shame that they aren't more popular.
@dustinm2717
@dustinm2717 2 года назад
@@lawrencemanning Yeah, I wish something more strict had become popular in the past, but instead we got C and C++ as the base of basically everything, with their questionable design aspects still haunting us to this day. I wonder how many major security scares would never have happened if C/C++ had been less popular and something more rigid had been used to implement a lot of important libraries instead (probably a lot, considering how many can be traced back to things like buffer overflows in C).
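As a small illustration of the buffer overflow point: a bounds-checked language (Ada being the classic example; Python behaves the same way) turns an out-of-range write into a clean runtime error, where C would silently corrupt whatever sits past the buffer. A sketch:

```python
# An 8-byte buffer; index 8 is one past the end.
buf = bytearray(8)

caught = False
try:
    buf[8] = 0xFF  # in C, this would silently scribble past the buffer
except IndexError:
    caught = True  # a bounds-checked runtime refuses and raises instead

assert caught
```

The cost is a per-access check; the benefit is that an entire class of exploits becomes a crash or a handled exception rather than arbitrary memory corruption.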
@RetroBytesUK
@RetroBytesUK 2 года назад
I think one of the reasons the MoD liked it was exactly what you outlined: a lot less room for certain classes of errors. I remember being interviewed for one job after uni; they were very keen that I'd done development work in Modula-2 and some Ada, as a lot of the code they were working on was written in Ada for missile guidance. I was less keen once I found out about the missile guidance part of things. C was very much a systems language that became very popular. It's very well suited to its original task, and far less well suited to more or less everything else. However, given how well C code performed when compiled, since it did things in a way that was closer to how the hardware did them, it seemed to just take over. Once we no longer needed to worry about that sort of thing so much, it was too late: it was the de facto standard for most things.
@MonochromeWench
@MonochromeWench 2 года назад
iAPX was the sort of CISC that RISC was created to combat and that makes modern cisc vs risc arguments look kind of silly
@user-yg4kj2mf1p
@user-yg4kj2mf1p 2 года назад
This. People say things like "x86 is CISC garbage!", and I am like "boy, you haven't seen true CISC".
@wishusknight3009
@wishusknight3009 2 года назад
All modern cpus are RISC now pretty much. X86 is mostly micro-coded with a simple internal execution core and an unwieldy behemoth of a decode stage.
@Carewolf
@Carewolf Год назад
@@wishusknight3009 Even RISC instruction sets are micro-coded to something smaller these days.
@wishusknight3009
@wishusknight3009 Год назад
@@Carewolf I think you're right. There might be some purpose-built applications where the ISA isn't mostly microcoded into the CPU, but from what I am seeing, most SoCs used in cell phones, for example, microcode most of the ISA if not all of it. Ones like Apple that have more of a controlled ecosystem and a good team of designers may be more native, but that's just a guess from me.
@marksterling8286
@marksterling8286 2 года назад
Loved the video. I never got my hands on one, but it brought back some fond memories of the NEC V20. I had a Hyundai XT clone and a friend who had an original IBM 5150, and he was convinced that because it originally cost more and was IBM, it was faster than my Hyundai, until the day we got a copy of Norton SI (may have been something similar). The friendship dynamic changed that day.
@CommandLineCowboy
@CommandLineCowboy 2 года назад
1:23 That tape unit looks wrong. There don't seem to be any devices to soften the pull of the take-up reel. On full-height units there are suction channels that descend either side of the head and capstans. The capstans can rapidly shuttle the tape across the head, and you'd see the loop in the channels go up and down. When the tape in either channel shortened to the top of the channel, the reels would spin and equalize the length of tape in the channels, so the only tension on the tape was from the suction of air and not a mechanical pull from the reels, which at high speed would stretch the tape. Slower units might have a simple idler, which was on a spring suspension; you'd see the idler bouncing back and forward and the reels intermittently spinning. Looking at the unit shown, either it's really old and slow or it's a movie/TV prop.
@RetroBytesUK
@RetroBytesUK 2 года назад
It is indeed from a movie, one of the first crime capers to feature computer hacking. It took the term cliffhanger very literally for its ending too.
@steevf
@steevf 2 года назад
@@RetroBytesUK Oh thank god. The way that tape looked like it was getting twisted and jerked was stressing me out. HAHAHA. Still I hate it when movie props don't look right either.
@Membrane556
@Membrane556 2 года назад
One fact that drove the decision on using the 8088 in the 5150 PC was that they had experience with using Intel's 8085 CPU in the Datamaster and the 8086 in the Displaywriter.
@herrbonk3635
@herrbonk3635 2 года назад
Yes, and also that the 68000 was too new, expensive, and lacked a second source at the time. (But they also contemplated the Z80, as well as their own processor, among others.)
@complexacious
@complexacious 2 года назад
The main takeaway I got from this video was that for all the faults of the CPU, the compiler itself was much, much worse: using hugely inefficient CPU instructions to do simple things when the CPU had far better ways of doing them built in. Much like the Itanium: how much more kindly would we be looking back at it had the compiler writers actually done a really good job? Hell, the "compiler has to organise the instructions" idea is in use today and has been since the Pentium. How different really is that to the Itanium's model?

I just think that Intel has a history of hiring the "right" people but putting them with the "wrong" teams. They have engineers who come up with really interesting concepts, but when it comes to implementation it seems like they just fall over and pair it with some really stupid management. Meanwhile on the x86 side they add something that sounds suspiciously like a nerfed version of what the other team was doing, then they kill the interesting project or allow it to fail, and then put billions of dollars into assuring that the x86 line doesn't lose even when faced with arguably superior competition from other CPUs.

I just remember that when RISC came about it was considered slow and dreadfully inefficient, and it was some time before the advantages of simpler CPUs started to make sense. If the money and backing had been behind language-specific CISC instead of assembly-focused CISC, would we be looking back at x86/68k and even ARM and thinking "boy, what a weird idea that compilers would always have had to turn complex statements into many CPU instructions instead of the CPU just doing it itself. CPUs would have to run at 4GHz just to have enough instruction throughput to resolve all these virtual function calls! LOL!"
@IsaacClancy
@IsaacClancy 2 года назад
C++ virtual function calls are usually only two instructions on x86 (plus whatever is needed to pass arguments and save volatile registers that are needed after the call, if any). This is almost certainly bound by memory latency and mispredicted indirect jumps rather than by instruction decoding.
@borisgalos6967
@borisgalos6967 Год назад
The point of the iAPX 432 was to design a clean, safe, reliable architecture. Had it happened a decade later without the need for backward compatibility, it would have been a superb architecture. Its problem was that it was too advanced for the time and its day never came. The same was true for their Ada compiler. It was, again, designed to be clean and safe at the cost of speed.
@Clavichordist
@Clavichordist 2 года назад
Very interesting stuff. I'm glad I skipped this chapter in Intel's life! When I used to repair Ontel terminals, I came across one CPU board that had 8008s on it! That was their oldest product I came across. The others were based around 8080 and 8085 CPUs, not counting their Amiga CP/M 2.0 computer, which had the requisite Z80.
@DryPaperHammerBro
@DryPaperHammerBro 2 года назад
Wait, 8085 or 8086?
@Clavichordist
@Clavichordist 2 года назад
@@DryPaperHammerBro Yes, 8085s, and yes, 8086s as well, which I forgot to mention. Really old, ca. 1980-81 and earlier equipment. They had static memory cards with up to 64K of RAM on some models. There were platter drives and Shugart floppy drives available, as well as word-mover-controllers, PIO and other I/O cards. Their customers included Walgreens Drugstores, Lockheed Martin, Standard Register, Control Data, and many others. I learned more working on this stuff than I did in my tech and engineering classes. Amazing and fun to work on, actually.
@DryPaperHammerBro
@DryPaperHammerBro 2 года назад
@@Clavichordist I only knew of the 86
@samiraperi467
@samiraperi467 2 года назад
Amiga CP/M 2.0 computer? Does that have anything to do with CBM? I mean, there was the C128 that had a Z80 for running CP/M, but that wasn't an Amiga.
@grey5626
@grey5626 2 года назад
@@samiraperi467 yeah, as far as I know the Amiga (previously known as Hi-Toro) was ALWAYS an MC68000 design, Jay Miner purportedly had flight simulators in mind when he created it, and using something to take advantage of the recently invented low cost floppy drives (which is why the Amigas have the brilliant Fast vs Chip RAM and RAM disk paradigms, so as to load as much code into memory to speed up application performance while mitigating the slower speeds of floppy drives without having to rely on taping out large ROMs which are much more expensive).
@flippert0
@flippert0 5 месяцев назад
Btw, the 'X' from APX stems from the "chi" in "Ar*chi*tecture" interpreted as Greek letter "X" ("chi").
@byteme6346
@byteme6346 2 года назад
Someone at IBM should have tried harder to contact Gary Kildall. The Z80 or the Motorola 68000 would have made a better PC.
@christopheroliver148
@christopheroliver148 2 года назад
I'm hoping you meant to write Z8000. Having written Z80 stuff under CP/M, I'll state that the Z80 had many of the issues the 8080 had: a paucity of registers (even given the alternate set and IX/IY), with dedicated functions for a few, i.e. instructions which favored register A or the HL pair. Both the 68k and the Z8000 were clearly thought out as minicomputer processors, and had a far better ISA, addressing, and an architecture-spec'd MMU (actually two varieties of MMU in the case of the Zilog).
@Chriski1994
@Chriski1994 2 года назад
Loving the tape drives from the Italian Job
@RetroBytesUK
@RetroBytesUK 2 года назад
I love putting in little bits like that for people to spot.
@DinHamburg
@DinHamburg Год назад
@RetroBytes - when you run out of topics, here is one: MMUs. Early microprocessors had some registers and some physical address width. Then they found out that they might do something like "virtual memory" and/or memory protection. The Z8000 and 68000 processor families had separate chips which acted as Memory Management Units. They did translation of a logical address to a physical address; when the requested data was not in physical memory, they aborted the instruction and called a trap handler, which loaded the memory page from disk and restarted/continued the trapped instruction. Some architectures could just restart/redo the instruction, but some required flushing and reloading a lot of internal state. Maybe you can make a video about the various variants: which were the easy ones, which were the tricky ones, and how it's done today (nobody has a separate MMU).
@markg735
@markg735 Год назад
Intel actually did offer up another operating system during that era. It was called iRMX and was a real-time OS that used lots of those x86 hardware multitasking features.
@JashankJeremy
@JashankJeremy 2 года назад
I've been toying with the idea of building an iAPX 432 emulator for a few years - if only to get my head around how weird the architecture was - but I hadn't known that it was broadly untouched in the emulator space! Perhaps it's time to revisit that project with some more motivation.
@kensmith5694
@kensmith5694 2 года назад
If you do it in verilog using icarus, you can later port it onto some giant FPGA.
@computer_toucher
@computer_toucher 2 года назад
So completely different from Itanium, which basically went for "well the compilers will sort it out eventually"?
@wishusknight3009
@wishusknight3009 2 года назад
The i860 shares more issues with Itanium than 432 does i think.
@Conenion
@Conenion 2 года назад
> went for "well the compilers will sort it out eventually"? Right. The sweet spot seems to be the middle ground: either superscalar RISC, or CISC-to-RISC-like translation, which is what Intel has done since the Pentium Pro (AMD shortly after).
@codewizard58
@codewizard58 2 года назад
At Imperial College we bought an iAPX432 and SBC86-12 with a Matrox 512K multibus system. Managed to run OPL432 the Smalltalk style language. When I left Imperial, they sold me the system and I eventually put the iAPX432 on ebay.
@user-fp3ql4le6m
@user-fp3ql4le6m 2 года назад
Enjoyed that. Thank you
@digitalarchaeologist5102
@digitalarchaeologist5102 2 года назад
Interactive UNIX:
file /bin/ls
/bin/ls: iAPX 386 executable
Thanks for the explanation...
@kevinbarry71
@kevinbarry71 2 года назад
Thanks for this video. Another thing people often forget these days: back then Intel was heavily involved in the manufacture of commodity DRAM chips.
@RetroBytesUK
@RetroBytesUK 2 года назад
You're right, they did RAM chips; they also did a lot of controller chips too. I think most 8-bit micros that had a disk system in the 80s used an Intel disk controller. Intel was also still a general chip fab back then, and you could contract with them to fabricate your own custom ASIC.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 года назад
A key point is that the x86 processor family was not their biggest product then.
@herrbonk3635
@herrbonk3635 2 года назад
@@lawrencedoliveiro9104 The iAPX project started in 1975, three years before the 8086 (the first x86).
@douglasdobson8110
@douglasdobson8110 2 года назад
do a video on the evolution of Cyrix . . . I'm diggin' this techy stuff . . .
@kennethng8346
@kennethng8346 2 года назад
The Ada language was a safe bet back then because it was being pushed by the US Department of Defense, which was the origin of a lot of software back then. IBM's decision to go with the Intel processor architecture was because Intel let AMD second-source the processor; IBM's standard was that no part could have a single source. One of the rumors around the 432 was that Intel was pushing it to freeze AMD out of the processor market.
@BuckTravis
@BuckTravis Год назад
Correct. Ada and the i432 were the chosen language and platform for the F-15. Programmers could write a ton of code in Ada, and once it worked they would be able to clean up the code in assembly language.
@tconiam
@tconiam Год назад
Unfortunately for Ada, most of the early compilers were buggy and slow. Ada was pushing the compiler (and PC) technology limits of the day. Today, AdaCore's Gnat Ada compiler is just another language on top of the whole GCC compilation system. The only real drawback to modern Ada is the lack of massive standard libraries of code like Java has.
@thefenlanddefencesystem5080
@thefenlanddefencesystem5080 2 года назад
Rekursiv might be worth a look at, too. Though I don't advise dredging the Forth and Clyde canal to try and find it, that's a retro-acquisition too far.
@johnelectric933
@johnelectric933 7 месяцев назад
Remember that almost all those early x86s came up in 8086-compatible real mode and had to be set up and switched into the intended processor configuration.
@hansvetter8653
@hansvetter8653 Год назад
I read some Intel manuals about the iAPX 432 back in 1985, during my time as a hardware development engineer. I couldn't find any argument of value to persuade management to invest in that Intel product line.
@connclark2154
@connclark2154 2 года назад
You should do a video on the Intel i860 RISC processor. It was a worthy effort for a high-end 3D workstation, but it kind of fell on its face because the compilers sucked.
@klocugh12
@klocugh12 2 года назад
I had Ada class in college circa 2008, it was mainly about teaching parallel computing concepts.
@hosgoth
@hosgoth Год назад
I remember the i286, I believe from an old CoCo-type magazine I got at a yard sale in the 90's (US, CA).
@peterpayne2219
@peterpayne2219 2 года назад
Love looking back at those crazy times.
@lawrencedoliveiro9104
@lawrencedoliveiro9104 2 года назад
7:58 I don’t think the splitting into multiple chips was really a performance or reliability problem back then. Remember that the “big” computers used quite large circuit boards with lots of ECL chips, each with quite a low transistor count. And they still had the performance edge on the little micros back then. Of course, chip counts and pin counts would have added to the cost.
@RetroBytesUK
@RetroBytesUK 2 года назад
Cost should never be underestimated in the success of a product. Placing that aside, the big-board CPUs at this point were starting to come to an end: propagation and settling time had become the significant factor limiting their performance. DEC had been very clever in how they grouped logic together in single ICs to avoid the worst consequences. Intel had apparently been less clever in how functionality was split over the packages, so more or less every instruction involved crossing packages. You should see the instruction latency on some of the instructions; some of them are massive, and most of that delay is apparently crossing the packages, sometimes many, many times. It also stopped Intel just increasing the clock rate to get round performance issues, due to the increased noise sensitivity of the chip interconnect. Motorola could knock out higher clock rate 68k chips; not an option for Intel.
@wishusknight3009
@wishusknight3009 2 года назад
@@RetroBytesUK Motorola would try this same concept with the 88000. It was several chips to have full functionality and was a bit of a bear to program. Though its downfall was mostly cost and lack of backwards compatibility.
@billymania11
@billymania11 Год назад
The big issue was latency going off chip. The government knew this and sponsored the VLSI project. The breakthroughs and insights gained allowed 32 bit (and later 64 bit) microprocessors to really flourish. The next big step will be 3d chips (many logic layers on top of each other.) The big challenge is heat and the unbelievable complexity in circuitry. Assuming these chips get built, the performance will be truly mind-blowing.
@wishusknight3009
@wishusknight3009 Год назад
@@billymania11 3D design has been in mainstream use to some extent for about 10-12 years already, starting at 22nm, albeit early designs were pretty primitive compared to what we may think of as a 3D lattice of circuits. And wafer stacking has been a thing for several years now, mostly in the flash and memory space.
@DigitalViscosity
@DigitalViscosity Год назад
In the US military as a systems developer, we still use Ada; new libraries are written in Ada, and some of the best security libraries are in Ada.
@sammoore2242
@sammoore2242 2 года назад
As a collector of weird old cpus, a 432 would be the jewel of my collection - of course I'm never going to see one. For 'failed intel x86 killers' I have to settle for the i860 (and maybe an i960) which is still plenty weird, but which sold enough to actually turn up on ebay every once in a while.
@RetroBytesUK
@RetroBytesUK 2 года назад
You stand a chance of finding one of those. I have a raid controller or two with an i960 used for handling the raid calculations.
2 года назад
The i960, that weird or odd? My 486 133 MHz system has a DAC960 RAID controller card (from a PPro system, I was told) that runs the i960 and has 16 MB of RAM (max 32, I think). There are also DEC Alpha NT 4.0 drivers for it, so I could put it in my PWS 500au if I wanted to. It's way too fast for my 486's PCI bus, BUT I do load DOS games the fastest on 486 LANs: first in on an MP season of Doom 2 or Duke3D, so there is that XD
@DennisPejcha
@DennisPejcha 2 года назад
The original i960 design was essentially a RISC version of the i432. It was originally intended to be used to make computer systems with parallel processing and/or fault tolerance. The project was eventually abandoned, but the core of the i960, stripped of almost all of the advanced features became popular as an I/O device controller for a while.
@O.Shawabkeh
@O.Shawabkeh 2 years ago
Don't miss the channel 'CPU Galaxy'; in one video he showed a gigantic collection of CPUs.
@petermuller608
@petermuller608 2 years ago
Isn't returning values from procedures the norm? Like, how would you return a pointer to your stack, since the stack will be cleaned up during the return from the procedure?
@davidvomlehn4495
@davidvomlehn4495 2 years ago
Actually, for small-sized values, the most efficient thing is to return values in registers. If it's too big you have to return some or all of it in the stack. (Even if it's large you can return the first few elements in registers)
@drigans2065
@drigans2065 2 years ago
@RetroBytes amusing video. I'd almost forgotten about the i432. The culture of the times was very much high-level languages of the Pascal/Algol kind driving design, Ada being the most prominent/newest kid on the block, and as I recall there were *no* efficient implementations of Ada and it took forever to compile. Because the Ada language specification was very complex and the validation of it highly rigorous, it just took too long to qualify the compilers, and my guess is Intel ran out of *time* to do a decent compiler. Also, Ada was designed to support stringent operating scenarios in the military space. Ultimately the server market was driven by business scenarios which didn't need such stringency, paving the way for C. So: wrong hardware for the wrong language for what the rest of the world wanted...
@LimitedWard
@LimitedWard A year ago
Okay you joke but if PCBWay could get into producing custom cable knit sweaters I would 100% buy.
@thcoura
@thcoura 2 years ago
Memory (price and speed) and storage (the same) always fu**ed semiconductor evolution
@rfvtgbzhn
@rfvtgbzhn 7 months ago
8:01 Propagation delay doesn't become relevant before you reach the order of magnitude of 1 GHz (at 1 GHz a signal at lightspeed travels 30 cm during one clock cycle; in a copper wire still at least 20 cm). The iAPX 432 ran at a maximum clock speed of 8 MHz.
@SnakeBush
@SnakeBush 2 days ago
if you told me ARM would be what ate x86 finally back in 2000 i would have said i need to go potty
@JayMoog
@JayMoog 11 months ago
Ada is still in regular use, or at least in the form of VHDL, used to describe electronic hardware. Ada is a really cool language that lends some great features to later languages.
@mglmouser
@mglmouser A year ago
A bit of precision is required concerning the meaning of "RISC". It stands for "Reduced Instruction Set Computer", where the "reduced" really refers to the complexity of the instructions: each operation is simpler to decode and execute, more often than not in a single CPU cycle, whereas CISC might require multiple CPU cycles to treat a single "complex" instruction. This simplicity generally meant that RISC programs needed more instructions than CISC programs to compensate.
@LabyrinthMike
@LabyrinthMike 2 years ago
You commented on (mocked) Intel for using iAPX in the 286 chip name. Most CISC machines are implemented using microcode. What if they used the iAPX technology to implement the 286 by simply replacing the microcode with one that implemented the x86 instruction set plus some enhanced instructions? I don't know that this happened, but it wouldn't have surprised me.
@RetroBytesUK
@RetroBytesUK 2 years ago
The 432 its such an odd design with its system objects etc that I doubt it would have been possible to implement that via micro code on a 286. They are vastly different architectures.
@jecelassumpcaojr890
@jecelassumpcaojr890 2 years ago
@@RetroBytesUK back then nobody called it "A P X", just "four, three, two", because Intel used the iAPX brand for everything between the 186 and the 386, only dropping it when the 486 came along. The 286 was indeed a 432-lite and does include some very complex objects that x86 has to support to this day. In fact, it was the desire to play around with one of these features (call task) on the 386 that made Linus start his OS project.
@tschak909
@tschak909 A year ago
also dude. Intel wrote several operating systems:
* ISIS - for 8008's and paper tape
* ISIS-II - for 8080/8085's and floppy disk (hard disk added later)
* ISIS-III - for 8086's with floppy and/or hard disk
* ISIS-IV - a networked operating system for iMDX-430 Networked Development Systems
AND
* iRMX - a preemptively multitasking operating system used in all sorts of embedded applications (and was also the kernel for the GRiD Compass)
@JosephKeenanisme
@JosephKeenanisme 2 months ago
And there were a few CPU chip manufacturers back then. I was there when the dinosaurs ruled and the Processor Wars were happening. My grandkids will think I'm crazy talking about languages having line numbers :)
@charlesjmouse
@charlesjmouse 10 months ago
PS: I find it amusing that the only line of CPUs Intel has had any real success with was based on a contract for a terminal chipset they botched so badly the client was forced to implement what they wanted in TTL... which Intel then used as the basis of the CPU they were supposed to design!
@harlzAU
@harlzAU 2 years ago
Man, I had one of those space bunny soft toys from the original Pentium II release promotion. I chucked it when I moved house in 2021. Now I'm regretting it.
@jeffsadowski
@jeffsadowski 2 years ago
Seems like a lot of similarities between the iAPX and Itanium, from what I heard of the Itanium. Itanium brought about my first look at the EFI boot process, and that turned out useful for me. I had Windows 2000, HP-UX and Linux running on the Itanium. The Itanium was faster at some operations. I had an SGI server with Itaniums that I stuck Linux on and had it as a camera server running motion around 2006 or so.
@RetroBytesUK
@RetroBytesUK 2 years ago
There are a lot of parallels between iAPX and Itanium, and also i860; it seems Intel was doomed to keep creating similar problems for itself.
@thatonekevin3919
@thatonekevin3919 2 years ago
IA64 failed because of the drawbacks of VLIW. The performance you squeeze out of that is dependent on very complex compile-time decisions.
@stefanl5183
@stefanl5183 2 years ago
@@thatonekevin3919 Yeah, but that's the opposite of what's being described here. From the description here, iAPX was to make writing compilers easier; IA-64 needed a more complex, smarter compiler to get the best performance. So these two were complete opposites. Also, I think AMD's 64-bit extension of x86 and the resistance to abandoning backward compatibility with legacy x86 were part of the reason IA-64 never took off.
@alanmoore2197
@alanmoore2197 2 years ago
These are not similar at all - quite the opposite...
@tiberiusbrain
@tiberiusbrain 2 years ago
I would love that ada lovelace video at some point too 😁
@RichTheEngineer
@RichTheEngineer 8 months ago
Ada was mandated by U.S. Dept. of Defense for mission-critical applications, as well as safety-critical.
@theantipope4354
@theantipope4354 A year ago
The really big reason why Ada was invented was as a standard language for defence applications, as specified by the American DoD. The theory was that it was supposed to be provable (in a CompSci sense), & thus less buggy / more reliable. Hence, there was a *lot* of money behind it.
@HappyBeezerStudios
@HappyBeezerStudios 4 months ago
Yup, even Intel saw the 8086 and its descendants as something temporary: from iAPX 432 to i960, i860 and IA-64.
@WTFShelley
@WTFShelley 2 years ago
Are you using cool-retro-term for the terminal overlay??? Well played, sir.
@RetroBytesUK
@RetroBytesUK 2 years ago
I am indeed, well spotted.
@aegisofhonor
@aegisofhonor 8 months ago
The Fiesta reference, comparing its alternate naming to Intel's iAPX 286, would be better made by renaming the Ford Fiesta to the Ford Fiesta Edsel, as the Edsel was essentially Ford's version of the iAPX back in the late 1950s.
@80s_Gamr
@80s_Gamr A year ago
Just to note, Intel didn't jump straight to the 80286. For a brief period there was an 80186. I believe it only went into 2 or 3 offerings of PCs, but it was a thing at one point.
@jaybrown6350
@jaybrown6350 2 years ago
Kinda. The 8088 was a specially modified 8086: Intel had the 8086 16-bit CPU but didn't yet have a 16-bit support chipset for it, so Intel modified the 8086 into the 8088 to work with the 8-bit support chips for the 8080.
@jaybrown6350
@jaybrown6350 2 years ago
That's exactly what I said.
@DataWaveTaGo
@DataWaveTaGo 2 years ago
At 1:11 and on - that tape unit clip is from a scene in the 1969 movie "The Italian Job". Is it PD now, or what?
@Da40kOrks
@Da40kOrks 2 years ago
Having just watched your Itanium video, seems like a similar mindset between the two...pushing the performance onto the compiler instead of silicon
@davidvomlehn4495
@davidvomlehn4495 2 years ago
No sin there. Compilers are a key part of the RISC architecture's success. The simpler architecture of RISC makes it easier to optimize instruction choices, and the simplicity of the underlying hardware makes it possible to run fast and to add things like caching, instruction pipelining, and speculative instruction execution. Of course, when the CPU runs screaming fast you have to wait for memory to catch up, but *something* has to be the bottleneck.
@delscoville
@delscoville A year ago
I've only heard of iAPX. But you mentioned the Ford Fiesta; the original model is rare to see here in the States. I used to own one, but sold it in 1992 when I inherited a newer, bigger car. I still see that same Fiesta I once owned zipping around the small town I'm in.
@johnmckown1267
@johnmckown1267 2 years ago
Look at the current IBM "z series" mainframe. It has a lot of "conditional" instructions. Equivalent to "load if" instruction.
@davidvomlehn4495
@davidvomlehn4495 2 years ago
ARMv7 and earlier have lots of conditional instructions, but there are fewer on ARMv8. Interestingly, RISC-V slices this in a different way by packing a conditional jump into 16 bits of a 32-bit word. The other 16 bits can hold instructions like move to register, perform arithmetic, etc. The branch doesn't create a bubble in the pipeline when the target is the next word, so the conditional runs just as fast as an ordinary instruction. Very slick.
@JoeMcGuire
@JoeMcGuire 2 years ago
Looking forward to hearing about BiiN and i960!
@byteme6346
@byteme6346 2 years ago
Ada was adopted by the Department of Defense (U.S.).
@techkev140
@techkev140 2 years ago
I recall the failure that was the IA-64 architecture, the Itanium processors, but had no idea iAPX existed.