Intel Tried To Kill x86! - Itanium Explained 

Techquickie
4.3M subscribers
415K views

Published: 26 Sep 2024

Comments: 789
@kwerboom
@kwerboom 3 года назад
I prefer, "If you're going to reinvent the wheel at least make sure it works on the roads that already exist."
@Fenrisboulder
@Fenrisboulder 3 года назад
like how they had such poor insight, maybe shallow connections w/ big clients
@ch4.hayabusa
@ch4.hayabusa 2 года назад
Or rather if you're going to invent a wheel that requires special roads, make sure someone wants to build those roads. If not make sure that your wheel is so good that you can afford to do it yourself.
@dabbasw31
@dabbasw31 10 месяцев назад
This is the main reason why maglev trains did not replace traditional wheel-rail trains. (:
@pyronical
@pyronical 3 года назад
You should start covering more obscure/failed hardware devices that people have probably never heard of.
@JosifovGjorgi
@JosifovGjorgi 3 года назад
and dressed as hipster :)
@StarkRG
@StarkRG 3 года назад
It'd be interesting to see LTT's take on the Transmeta Crusoe, a CPU that was very much NOT x86 but managed to run a custom x86 emulator that was, in almost all cases, at least as fast as an Intel or AMD CPU of the same cost, and using much less power in the process. (The "almost all" caveat is probably ultimately why they failed) It's actually the same architecture family as Itanium, VLIW (Very Long Instruction Word, with 32-bit instructions some of which could be combined to make 64- and 128-bit instruction words).
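For readers curious what "running a custom x86 emulator" means in practice, the core trick behind Transmeta's Code Morphing Software (and later systems like QEMU or Rosetta 2) is a translation cache: translate each guest block once, then reuse the translated host code every time execution comes back to it. Below is a minimal, self-contained sketch of that loop in C; the two-opcode "guest ISA" and all names are invented for illustration and are not Transmeta's actual implementation.

```c
/* A minimal, self-contained sketch of the translate-and-cache loop that
 * dynamic binary translators are built around. The "guest ISA" here is a
 * toy two-opcode bytecode, purely hypothetical -- the point is only the
 * cache: translate a block once, then reuse the result every time the
 * guest loops back to it. */
#include <stdio.h>
#include <stdint.h>

enum { OP_INC, OP_PRINT, OP_HALT };              /* toy "guest" instruction set   */
static const uint8_t guest_program[] = { OP_INC, OP_INC, OP_PRINT, OP_HALT };

static int64_t guest_reg;                        /* toy guest state               */

typedef void (*host_fn)(void);                   /* "translated" host code        */
static void host_inc(void)   { guest_reg++; }
static void host_print(void) { printf("guest_reg = %lld\n", (long long)guest_reg); }
static void host_halt(void)  { }

static host_fn tb_cache[sizeof guest_program];   /* translation cache, one slot per guest PC */
static int translations_done;

static host_fn lookup_or_translate(size_t pc)
{
    if (tb_cache[pc] == NULL) {                  /* miss: "translate" this instruction once */
        translations_done++;
        switch (guest_program[pc]) {
        case OP_INC:   tb_cache[pc] = host_inc;   break;
        case OP_PRINT: tb_cache[pc] = host_print; break;
        default:       tb_cache[pc] = host_halt;  break;
        }
    }
    return tb_cache[pc];                         /* hit: reuse previously translated code   */
}

int main(void)
{
    for (int run = 0; run < 3; run++)            /* re-running the guest hits the cache     */
        for (size_t pc = 0; guest_program[pc] != OP_HALT; pc++)
            lookup_or_translate(pc)();
    printf("translations performed: %d (executions: %d)\n", translations_done, 9);
    return 0;
}
```

Because hot loops hit the cache almost every time, the translation cost is paid once up front, which is roughly how a software layer could stay competitive with native execution on common workloads.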
@SonicBoone56
@SonicBoone56 3 года назад
Canadian LGR
@fanseychaeng
@fanseychaeng 3 года назад
That's a good one tho.🤣🤣🤯
@WarriorsPhoto
@WarriorsPhoto 3 года назад
Yes agreed. Any ideas come to mind?
@I0NE007
@I0NE007 3 года назад
I just heard about Itanium two days ago when learning about "why Space Pinball didn't make it to Vista."
@peteasmr2952
@peteasmr2952 3 года назад
Same
@rishi-m
@rishi-m 3 года назад
ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-3EPTfOTC4Jw.html You both made me Google this, this video? Interesting.. Edit: It is a link to NCommander's video
@I0NE007
@I0NE007 3 года назад
@@rishi-m Yeah, that's the one.
@ikannunaplays
@ikannunaplays 3 года назад
It's almost as if TechQuickie did also and decided to expand on it
@angeldendariarena2287
@angeldendariarena2287 3 года назад
The point of view of a retired Microsoft software engineer who created many of the tools in Windows. ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-ThxdvEajK8g.html&ab_channel=Dave%27sGarage
@xplodingmojo2087
@xplodingmojo2087 3 года назад
“Tried to take out the bricks from my house yesterday” - Intel
@shyferret9924
@shyferret9924 3 года назад
Speaks in 10nm
@notaname1750
@notaname1750 3 года назад
@@shyferret9924 Finally! Intel using 10nm
@shyferret9924
@shyferret9924 3 года назад
@@notaname1750 when amd already on 7
@Halz0holic
@Halz0holic 3 года назад
More like foundation
@hasupe6520
@hasupe6520 3 года назад
x86 is already dead. The writing is already on the wall. ARM will be the future. Intel/AMD will have to adapt or die. Anyone who doesn't understand that is kidding themselves.
@basix250
@basix250 3 года назад
Considering the new tech coming out every day, the age of x86 and its survival is really an outlier.
@kusayfarhan9943
@kusayfarhan9943 3 года назад
It's a low level foundation. Makes perfect sense that it survived this long. It's like the foundation/basement to a building. You can't yank it out and expect the building to not topple. There is software decades old running on x86. At a university lab I once saw an old piece of machinery that is controlled by a PC running Win 95 on x86. This machine is still used today to prototype CPU designs.
@dycedargselderbrother5353
@dycedargselderbrother5353 3 года назад
Adaptation is part of it. Under the hood modern parts operate nothing like the original chips.
@Myvoetisseer
@Myvoetisseer 3 года назад
It shouldn't have survived, but we're kinda trapped by it. Apple proved that x86 SHOULD die. RISC is far more efficient. But we all use x86 because all the software is written for it. And all the software is written for it because we all use it. It's very hard to get out of this loop.
@Paerigos
@Paerigos 3 года назад
@@Myvoetisseer Well it can't reach the computing power x86 (and mainly AMD64) can. Quite frankly you can just dump more power into AMD64 and it will deliver. You can't do that with RISC, and you won't be able to match AMD64 for about a decade to come.
@manasmitjena5593
@manasmitjena5593 3 года назад
@@Myvoetisseer x86 uses CISC, and using RISC, which stands for Reduced Instruction Set, would mean lower performance overall. Apple's M1 ARM processor is great with CPU performance but its GPU is laughable at best compared to Nvidia or AMD and is comparable to integrated graphics like the Iris Xe graphics.
@JemaKnight
@JemaKnight 3 года назад
Missed at least one important point I think was worth bringing up: Intel had no choice but to license the AMD64 instruction set extension from AMD, who had licensed the x86 instruction set from them initially - creating a mutual dependence that remains to this day, and extremely important leverage for AMD in cross-licensing agreements, lawsuits and industry deals that have taken place since. This significantly leveled the balance of power between the two companies, and it's extremely likely that AMD's fate may have been very, very different if not for this series of events.
@bartkoens5246
@bartkoens5246 3 года назад
slightly off-topic. I still deplore the demise of Zilog's Z80 followers like Z8000 with a superior register layout.
@Barkebain
@Barkebain 6 месяцев назад
We've all benefited from the competition between these two companies for decades as a result of these cross licensing agreements. I'm curious why Intel has so far stated that they are not going to jump into the upcoming ARM(s) race that's going to attempt to displace x86 in both the PC and server realms. We already have AMD, Nvidia, MediaTek, Qualcomm, Apple, and Samsung in the mix, and so many proven performers from cell phones, and Apple's new laptops/desktops that Intel might not be able to ignore this round of ARM on the desktop.
@craigmccune6066
@craigmccune6066 2 месяца назад
@@Barkebain The problem with ARM on desktop is largely the same as with Itanium: most things run on x86, and most of us are not willing to take the performance hit of emulating x86 on ARM. The reason it is taking over the laptop and server space is that power efficiency matters more there, and ARM is already well known, being what phones use, so it's not as hard to convince people to develop for it. Also, Apple switched forever ago, so we already had apps coded for ARM.
@DantalionNl
@DantalionNl 3 года назад
Actually, architectures exist where the scheduling is done by the software ahead of time. This is the Very Long Instruction Word architecture still found in Digital Signal Processors today!
@tazogochitashvili6514
@tazogochitashvili6514 3 года назад
VLIW isn't even dead technically. Russian Elbrus is VLIW
@VivekYadav-ds8oz
@VivekYadav-ds8oz 3 года назад
Don't compilers already do this to some extent? I once read LLVM IR of my program and it had rearranged quite a lot of instructions for some low-level optimisation (for packing reasons? idk)
@DantalionNl
@DantalionNl 3 года назад
@@VivekYadav-ds8oz Yes, VLIW compilers do this of course, but a compiler is still software, so it happens at compile time as opposed to at runtime, as would be the case with a hardware scheduler.
@sudhanshugupta100
@sudhanshugupta100 3 года назад
EPIC, the design idea Itanium is based on, builds upon VLIW :)
@afelias
@afelias 3 года назад
It works for DSP because memory access and processing all run regularly with little interruption. It's the same reason those applications can have a million pipeline stages. Random memory accesses and dealing with other interrupts from I/O makes VLIW+software scheduling really annoying for a central processor/controller.
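To make the "scheduling done by software ahead of time" idea in this thread concrete, here is a toy list scheduler in C that packs mutually independent operations into three-slot bundles (Itanium bundles hold three instructions). The operation list, dependency table and greedy policy are invented for illustration; real VLIW/EPIC compilers use far more sophisticated heuristics, but the shape of the problem (fill every slot or waste issue width on nops) is the same.

```c
/* Toy sketch of compile-time scheduling: a greedy list scheduler that
 * packs independent operations into 3-slot bundles. Ops and dependencies
 * are made up for illustration. */
#include <stdio.h>

#define NOPS 6
#define SLOTS_PER_BUNDLE 3

static const char *name[NOPS] = { "load r1", "load r2", "add r3=r1+r2",
                                  "load r4", "mul r5=r3*r4", "store r5" };
/* dep[i][j] = 1 means op i must wait for op j */
static const int dep[NOPS][NOPS] = {
    [2] = { [0] = 1, [1] = 1 },          /* add needs both loads           */
    [4] = { [2] = 1, [3] = 1 },          /* mul needs the add and load r4  */
    [5] = { [4] = 1 },                   /* store needs the mul            */
};

int main(void)
{
    int done[NOPS] = {0}, remaining = NOPS, bundle = 0;
    while (remaining > 0) {
        int issued[SLOTS_PER_BUNDLE], n = 0;
        for (int i = 0; i < NOPS && n < SLOTS_PER_BUNDLE; i++) {
            if (done[i]) continue;
            int ready = 1;
            for (int j = 0; j < NOPS; j++)       /* ready = all deps already done  */
                if (dep[i][j] && !done[j]) ready = 0;
            if (ready) issued[n++] = i;          /* claim a slot in this bundle    */
        }
        printf("bundle %d:", bundle++);
        for (int k = 0; k < n; k++) { printf("  [%s]", name[issued[k]]); done[issued[k]] = 1; }
        for (int k = n; k < SLOTS_PER_BUNDLE; k++) printf("  [nop]");  /* unfilled slots waste issue width */
        printf("\n");
        remaining -= n;
    }
    return 0;
}
```

Running it prints one fully packed bundle followed by three mostly empty ones, which is exactly the utilization problem branchy, irregular code caused for static scheduling.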
@JuanPablo-ho7fg
@JuanPablo-ho7fg 3 года назад
As an Itanium native speaker, the CPU would probably say "Disculpe señor, ¿podría indicarme dónde está el baño?" ("Excuse me sir, could you tell me where the bathroom is?"). The same error was in Itanium emulators a while ago.
@Manganization
@Manganization 3 года назад
Thank you Itanian for gracing us with the proper language execution.
@Quique-sz4uj
@Quique-sz4uj 3 года назад
@@Manganization is this a woosh or not
@mineland8220
@mineland8220 3 года назад
The CPU wanted to offload the extra processes
@CaptainSunFlare
@CaptainSunFlare 3 года назад
@@Quique-sz4uj No, friend, it's not a whoosh. Although, sadly, my computer uses the Itanium processor, so I don't know if you're going to understand this...
@soyiago
@soyiago 3 года назад
Based 😎👌
@skeletonbow
@skeletonbow 3 года назад
I've got an ancient HP zx2000 Itanium workstation in my basement that's been sitting there for a decade and a half. They were horrible to develop on, and insanely slow. I named the computer "insanium". I didn't realize that they existed this long, and thought the architecture died like a decade ago. We're all so lucky that AMD created AMD64 and brought it to market when they did, or we could be plagued by the Itanium. :)
@maybeanonymous6846
@maybeanonymous6846 2 года назад
dID yOU tRy LinUx ?
@mytech6779
@mytech6779 2 года назад
Mostly replacement parts rather than new installs. The 80386 was in production until about 2007, partly for the same reason: support for existing industrial equipment. (And the 80386 had been fully tested for bugs and corner cases in critical systems, so it remained popular in new designs for quite a while too, having more than enough power to handle many embedded-style tasks like systems monitoring, navigation calculation, and so forth.)
@Thorovain
@Thorovain 2 года назад
You should send it to LTT. I bet Anthony could make an interesting video with it.
@ailivac
@ailivac Год назад
Nah, there were plenty of other options at the time. Sparc, ARM, MIPS, etc, which eventually all added 64-bit support and were based on clean-sheet designs not hampered by vestiges from the 1970s. But none of those would have let us keep relying on millions of lines of Windows-based code that no one would ever bother to recompile.
@jacko314
@jacko314 3 года назад
i had to debug ia64 assembly crash dumps. nightmare. debugging root kits would have been impossible.
@dycedargselderbrother5353
@dycedargselderbrother5353 3 года назад
NCommander debugged ia64 recently in his video about what happened with Space Cadet Pinball. He had trouble with it and it looked alien to me.
@jacko314
@jacko314 3 года назад
@@dycedargselderbrother5353 that is because it is alien. i've rummaged through risc and tonnes of x86/64 assembly. the only thing ia64 had going for it is that if you had the right symbols files you could figure out quite a bit. but the actual execution logic was like swimming in spaghetti. i think assembly should be somewhat readable as many errors are only detectable via assembly inspection. (kd rules)
@UltimatePerfection
@UltimatePerfection 3 года назад
Feature, not a bug.
@niduroki
@niduroki 3 года назад
Didn't Intel also try to create ARM processors back in the 80's, but IBM was like: uuuh, naah, you better not?
@chuuni6924
@chuuni6924 3 года назад
No, they did buy DEC's StrongARM in the 90s and tried to use it for a series of low-power chips, but IBM had nothing to do with its demise.
@Rainmotorsports
@Rainmotorsports 3 года назад
Intel did make ARM processors but that was way later. They sold their license to Marvell, who still uses it.
@raulsaavedra709
@raulsaavedra709 3 года назад
A possible topic of interest: game engines, how they work at a very high, easy to grasp level
@etaashmathamsetty7399
@etaashmathamsetty7399 3 года назад
A game engine is just a development environment like an ide
@etaashmathamsetty7399
@etaashmathamsetty7399 3 года назад
@thatonespathi what I meant by development environment was a bunch of libraries that you use in YOUR OWN development environment. (I have used both Unity and Unreal Engine + making my own.) game engine != car engine. A game engine is more of a library: it wraps low-level graphics API calls, physics calls, etc. into simpler-to-use functions, along with giving you an environment to code in. (environment = MonoBehaviour in Unity, or the C++ environment UE gives you)
@Roxor128
@Roxor128 3 года назад
High-level might be easy to grasp, but low-level is much more interesting.
@reillywalker195
@reillywalker195 2 года назад
Game Makers' Toolkit did basically that.
@raulsaavedra709
@raulsaavedra709 2 года назад
@@reillywalker195 What title did that video have?
@ElliotGindiVO
@ElliotGindiVO 3 года назад
The backwards compatibility of x86 is awesome. I hope it always remains supported.
@powerfulaura5166
@powerfulaura5166 3 года назад
It will. ..via emulation on ARM lol
@danzjz3923
@danzjz3923 3 года назад
@@powerfulaura5166 not ARM, RISC-V
@brandonn.1275
@brandonn.1275 3 года назад
@@danzjz3923 why not both?
@A.Martin
@A.Martin 3 года назад
x86 is also a liability because of its age; it needs to retire, and maybe now it might be able to be phased out. When they first tried it was just too early, I think.
@afelias
@afelias 3 года назад
At some point RISC-V will have an extension that helps emulate x86 with good hardware support and that'll be the end of that. Though, in the present, we're way more likely to emulate RISC-V on x86 native hardware...
@aah_einstein
@aah_einstein 3 года назад
I never heard of ITANIUM, but I came across IA64 during college, particularly as we were using Intel's manual when we were studying x86 architecture assembly language.
@XantheFIN
@XantheFIN 3 года назад
AMD was actually team green back then.. right?
@wince333
@wince333 3 года назад
yes, before they bought ATI
@Wahinies
@Wahinies 3 года назад
@@wince333 I still remember the green moulded biodegradable stuffing for my Opteron 165.. Made me go and buy a pack of spearmint gum
@TheExileFox
@TheExileFox 3 года назад
Amd used to have a green logo
@RandomnessCreates
@RandomnessCreates 3 года назад
Yep, wanted to get Nvidia too but Jensen said nope.
@Hadar1991
@Hadar1991 2 года назад
Actually black was and still is the official AMD brand colour and they use black to brand their CPU's. Only Radeon brand is officially red.
@Friedbrain11
@Friedbrain11 3 года назад
I remember those things. I was never interested in one. Apparently neither was anyone else LOL
@chunye215
@chunye215 3 года назад
I'd want one just for fun, but the later models that run halfway fast are expensive even on eBay. It's old shit someone is throwing out but still, there has to be demand by some exotic shops depending on them, I have no other theory.
@FarrellMcGovern
@FarrellMcGovern 3 года назад
I ran an HP SuperDome, what they called at the time a "Supercomputer". It could have multiple CPUs, with both PA-RISC and Itanium CPUs at the same time. I had a lot of fun bringing up Linux and HP-UX on the machine that I maintained as part of a testing lab for an HP software cataloging product. Fond memories...
@MatrixRoland
@MatrixRoland 3 года назад
I had an instructor about five years ago who was part of the design/implementation of those chips. He brought in some samples to show us. They were really big chips compared to x86. They were on daughter cards that plugged into the main pcb.
@mytech6779
@mytech6779 2 года назад
Slotted CPU mounting was also popular with x86 at the time, purely a matter of manufacturing. It was a way of adding cache without harming production yield. Basically they couldn't test the silicon until it was all mounted together, so a bad bit of cache [it was separate silicon from the core] would trash a whole traditional pinned package, but by mounting on a mini-PCB they had the option to replace the bad piece of silicon.
@randallcuevas5562
@randallcuevas5562 3 года назад
"if you are trying to reinvent the wheel, make sure it is compatible with your car" 4:25
@ChristianStout
@ChristianStout 3 года назад
Another fun fact: Oracle still makes SPARC CPUs for their HPC customers.
@jmtrad1906
@jmtrad1906 3 года назад
The reason we use AMD64 today.
@billazen4865
@billazen4865 3 года назад
Itanium, more like ***INSERT MATCHING WORD***
@TheLaser450
@TheLaser450 3 года назад
If i recall it was also very expensive
@canoozie
@canoozie 3 года назад
VLIW architectures like EPIC are great (if your compilers are great), until you have to deal with register file pressure. TTA's (Transport Triggered Architectures) are better, but are bus heavy so occupy more space, but they eliminate the register file pressure problem. Hobby CPU designer here, and owner of more than a dozen FPGAs. I've built CPU architectures that were based on both of these designs. AMA
@capability-snob
@capability-snob 3 года назад
Do you have to deal with register file pressure? One of the nicer innovations on the itanium over other windowed designs was that you could allocate register frames to your heart's content and the Register Stack Engine would lazily write them back for you.
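For anyone unfamiliar with the Register Stack Engine mentioned here: the idea is that each call allocates a fresh frame of stacked registers, and hardware spills the oldest frames to memory only when the physical register file actually runs out. The toy C model below is just a sketch of that bookkeeping with made-up sizes and an eager spill policy; the real RSE manages the 96 stacked registers r32-r127 and spills lazily in hardware.

```c
/* Rough toy model of register-stack allocation: a small circular file of
 * "stacked" registers, frames allocated on call, oldest frames spilled to
 * a backing store when the file overflows. Sizes and policy are invented. */
#include <stdio.h>

#define PHYS_REGS   16          /* tiny physical stacked-register file   */
#define BACKING_MAX 256

static long phys[PHYS_REGS];    /* circular buffer of live registers     */
static long backing_store[BACKING_MAX];
static int  bottom, top, spilled;   /* frames live in [bottom, top)      */

static void rse_alloc(int frame_size)       /* called on function entry  */
{
    while ((top - bottom) + frame_size > PHYS_REGS) {
        backing_store[spilled++] = phys[bottom % PHYS_REGS];  /* spill oldest */
        bottom++;
    }
    top += frame_size;          /* new frame is just a pointer bump      */
}

static void rse_release(int frame_size)     /* called on function return */
{
    top -= frame_size;
    while (bottom > top) {      /* caller's regs were spilled: fill back */
        bottom--;
        phys[bottom % PHYS_REGS] = backing_store[--spilled];
    }
}

int main(void)
{
    rse_alloc(8);               /* main's frame                          */
    rse_alloc(8);               /* callee: still fits, no memory traffic */
    rse_alloc(8);               /* deeper callee: oldest frame spills    */
    printf("live regs: %d, spilled to memory: %d\n", top - bottom, spilled);
    rse_release(8); rse_release(8); rse_release(8);
    printf("after returns, spilled remaining: %d\n", spilled);
    return 0;
}
```

The point of the sketch is that nested calls cost nothing until the physical file overflows, which is what made frame allocation cheap on IA-64.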
@jmickeyd53
@jmickeyd53 3 года назад
"if your compilers are great" - I think this was the real downfall for Itanium. It was just a bit too early. Its easy to forget just how bad compilers were then. SSA was still a largely ignored research paper from IBM at the time.
@chainingsolid
@chainingsolid 3 года назад
This sounds really cool. How'd you get into this? Is it primarily buying FPGAs and investing time?
@canoozie
@canoozie 3 года назад
@@chainingsolid I'm a software engineer, have been writing code since I was a smallish child. I had a fascination with understanding how computers executed code, and when I heard of FPGAs I found an opportunity to explore some of that. It all started with a simple clock divider, an LED, and some verilog code to implement said clock divider and send voltage to said LED making it blink. From that point forward, learned how to build a debugging interface, and consumed a ton of material on old cpu designs, since they're comparatively simple (modern cpus are really complex). A decade later, I had written my first CPU implementation, compiler toolchain capable of running code I wrote for it.
@canoozie
@canoozie 3 года назад
@@capability-snob in TTAs there is no inherent register file pressure. Intel's solution to the problem was ingenious but not a panacea. Most VLIW designs have hundreds to thousands of registers to "solve" this issue, but Intel came up with the window. They have more than 64 registers, but only some of them are available for writing for any instruction
@hardrivethrutown
@hardrivethrutown 3 года назад
didn't they also try during the 486 era with the i860 and even earlier with iAPX 432?
@bionicgeekgrrl
@bionicgeekgrrl 3 года назад
More of a case of multiple options going on rather than trying to kill one off with a new one, with those. The 8086 chips were initially never seen as the future, just a chip to pump out while working on the 432 series, but that never took off. The i860 ended up in printers mostly. It was a similar thing with Motorola: they wanted to move to a new chip design, but the 68k was so popular they could not get the follow-on design going, and then they hit the limit of the 68k design. Eventually they'd get to the PowerPC design, though not alone.
@kensmith5694
@kensmith5694 3 года назад
The 432 was a daft design anyway. Instructions didn't contain addresses. Instead they contained a reference number that had to be looked up in a table to get the type and address of the variable. The type was used to check that the operation was permitted on this type of variable. The address from the table was used to get the value from physical memory. This made all of it take an extra memory access. The 286 outperformed the 432.
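A small sketch of the indirection being described: an operand is a reference into a descriptor table, and the descriptor supplies the type check and the real address, costing an extra memory access per operand. The field names and two-level layout below are simplified illustration, not the actual iAPX 432 descriptor format.

```c
/* Descriptor-table indirection: every operand reference is looked up in
 * an object table, type-checked, and only then dereferenced. */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

enum obj_type { OBJ_INT, OBJ_CHAR_ARRAY };

struct descriptor {
    enum obj_type type;
    void         *base;        /* physical address of the object        */
    size_t        length;
};

static int32_t some_int = 41;
static struct descriptor object_table[] = {     /* memory access #1 goes here   */
    { OBJ_INT, &some_int, sizeof some_int },
};

/* "Add 1 to object #ref" -- every operand costs a table lookup first. */
static int add_one(size_t ref)
{
    struct descriptor *d = &object_table[ref];  /* extra memory access          */
    if (d->type != OBJ_INT)                     /* hardware-enforced type check */
        return -1;
    *(int32_t *)d->base += 1;                   /* memory access #2: the data   */
    return 0;
}

int main(void)
{
    if (add_one(0) == 0)
        printf("object 0 is now %d\n", some_int);   /* prints 42 */
    return 0;
}
```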
@zenbum2654
@zenbum2654 3 года назад
By the time Itanium was unveiled, Intel had made at least 3 previous attempts to replace the x86: iAPX 432, i860, and i960. Now it looks like they're jumping on the RISC-V bandwagon.
@alexander1989x
@alexander1989x Год назад
Itanium goes to the pile of "We tried to be disruptive but were too ambitious" together with PowerPC.
@MegaManNeo
@MegaManNeo 3 года назад
der8auer did some really nice die shots of Itanium CPUs a few months ago. Definitely worth checking out, maybe even worth using as desktop wallpapers.
@DigitalJedi
@DigitalJedi 3 года назад
I have a die shot of a 1660ti as a wallpaper. I like to think of it as my gpu drawing a self portrait every time I turn on the computer.
@pronounjow
@pronounjow 3 года назад
I just watched a Pulseway ad featuring no-beard Linus.
@StephenKennedyCanada
@StephenKennedyCanada 2 года назад
It’s about time you guys did a video on the Itanium!
@Lionel212001
@Lionel212001 2 года назад
It would be interesting to see how Intel leverages RISC-V.
@marcello4258
@marcello4258 3 года назад
3:09 yes, of course.. it's to the left
@krystina662
@krystina662 3 года назад
0:55 knowing how many things are programmed, just that is enough to understand why it would never take off haha
@MasticinaAkicta
@MasticinaAkicta 3 года назад
I heard about the Itanic; it went just as smoothly as the boat did. No x86 support and hardly any software for it made it sink really quickly.
@laurendoe168
@laurendoe168 3 года назад
Anyone familiar with the difference between software and hardware knows: If you want something cheaper, do it in software; if you want something faster, do it in hardware.
@erroltheterrible
@erroltheterrible 3 года назад
Anyone remember the Transmeta CPUs? They were non-x86, but emulated x86 by running a cross compiler in real time to convert x86 instructions to its native instruction set. Still have my Compaq TC1000 on the bookshelf...
@cbeemaac20
@cbeemaac20 3 года назад
Itanium lives on in many college computer architecture courses! It's a really well thought out architecture that is more modern than MIPS but less complicated than x86. A great architecture to study.
@drooled2284
@drooled2284 3 года назад
Wasn't Risc-V invented for that exact purpose though?
@kensmith5694
@kensmith5694 3 года назад
@@drooled2284 Yes but sadly the RISC-V is not as good of a CPU as an ARM. The lack of a status register makes simple checks on operations a lot harder. It takes several instructions to detect an overflow for example.
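To illustrate the point about missing status flags: on a flag-less ISA such as RISC-V, the overflow condition has to be reconstructed from the operands and the result, typically a few extra xor/and/branch instructions per checked add. Here is the usual signed-add pattern sketched in C; the bit trick is standard, and the exact instruction count naturally depends on the compiler and target.

```c
/* Detecting signed-add overflow without a flags register: the check is
 * rebuilt from operands and result. */
#include <stdio.h>
#include <stdint.h>

/* Signed overflow occurred iff the operands have the same sign and the
 * result's sign differs -- i.e. (a^r) and (b^r) are both negative. */
static int add_overflows(int32_t a, int32_t b, int32_t *out)
{
    int32_t r = (int32_t)((uint32_t)a + (uint32_t)b);  /* wraparound add          */
    int overflow = ((a ^ r) & (b ^ r)) < 0;            /* sign-bit test, no flags */
    *out = r;
    return overflow;
}

int main(void)
{
    int32_t r;
    printf("INT32_MAX + 1 overflows: %d\n", add_overflows(INT32_MAX, 1, &r)); /* 1 */
    printf("2 + 3 overflows:         %d\n", add_overflows(2, 3, &r));         /* 0 */
    return 0;
}
```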
@kensmith5694
@kensmith5694 3 года назад
Itanium wasn't the first attempt from Intel. There was also the iAPX 432 and the i860. Each of these was worse than the other. Working out which instruction goes first is fairly easy on a RISC machine and would be within the grasp of a compiler to do. It would require a custom compiler for each version of a chip, but that could be done if speed was your biggest goal.
@kodykj2112
@kodykj2112 3 года назад
Video idea: Full explanation as to why it is possible to have a virtual/software cpu or control the CPU instructions through software and not hardware, because in order for the software to do anything it needs to have the appropriate hardware and depends on the hardware in the end.
@MrFram
@MrFram 3 года назад
You’re misunderstanding what controlling instruction process through software means. The instructions are bigger and have extra data to tell the cpu what to do in more details. Since this data is part of the instructions, it is software, and it controls the cpu. They are just instructions control the cpu more precisely than usual. Now, since the instructions can tell the cpu how to do some of this stuff, it does not need to calculate it itself, hence no need for scheduler circuits, etc. It can also add room for extra optimizations through clever manipulation of instructions
@chuuni6924
@chuuni6924 3 года назад
Itanium was more like Intel's fourth attempt at killing x86, after the iAPX432, the i860 and the i960. They must really hate their own child.
@Berobad
@Berobad 3 года назад
Because other manufacturers could produce x86 chips; if their new architecture had worked, Intel wouldn't have had to bother with AMD, Cyrix/VIA and co anymore.
@davidbonner4556
@davidbonner4556 3 года назад
You have to remember the pre-historic Intel chips... The 4004, designed to be a traffic light controller, and the 8008 which was commissioned by ARPA (now DARPA) to implement "Fly-by-wire" control systems in military aircraft. The 8080 was intended as an improvement but also became popular when MITS built a prototyping system in 1975 called the "Altair 8800", which they decided to sell to hobbyists because students were having a blast with it. The 8086 was the 16-bit version, with the 8088 being a version in a smaller package that multiplexed data and address lines. The 8088 was in the first PC/PC XTs, while a newer 8086 derivative, the 80286, was used in the AT models for 16 bit (IBM skipped the 8086 and 80186).
@WizardNumberNext
@WizardNumberNext 3 года назад
Intel never intended to replace x86 with Itanium (unless you run a server on every computer). Itanium was strictly targeted at servers, not even workstations. x86 at that time wasn't targeted at servers; Xeon was pretty much the same thing as the AMD Athlon MP. Itanium was designed as a server architecture.
@Hanneth
@Hanneth 3 года назад
No, Intel was very clear when Itanium was announced that it was meant to replace x86 completely in servers and desktop. The plan they laid out was they were going to start Itanium in the server market where there was bigger margins to allow them to better develop the technology. When the performance matched, or exceeded x86, they would bring it into the consumer space. Part of making that performance match was to work with developers to design their software to work better with Itanium. It was a long term strategy. I also remember when they announced that they were scrapping their plans to replace x86.
@mycosys
@mycosys 3 года назад
That time AMD saved x86
@bootmii98
@bootmii98 Год назад
This wasn't the first time Intel tried to kill the 8086, or even the second. The first was the iAPX 432. A kinda-sorta was the 376 which was a legacy-free 386 for embedded systems. The second was the 860/960, which saw more commercial success than Itanium, the iAPX 432, or the 80376.
@MrMediator24
@MrMediator24 3 года назад
Elbrus (the Russian ISA) is VLIW, and they thought it was a good idea to use it despite the failure of Itanium
@IvanSoregashi
@IvanSoregashi 3 года назад
well our guys didn't have a market share to lose :/
@naamadossantossilva4736
@naamadossantossilva4736 3 года назад
Russia is trying to close off its internet; incompatibility for them is not a bug, it's a feature.
@MrMediator24
@MrMediator24 3 года назад
@@naamadossantossilva4736 These processors are meant for corporate use (government and companies working with state contracts), yet now there's money being stol... allocated for new designs based on RISC-V
@DimkaTsv
@DimkaTsv 3 года назад
@@MrMediator24 tbf, if it is like that, doesn't it seem that executing anything potentially dangerous from the internet won't even be possible... given it's for government use, of course
@wskinnyodden
@wskinnyodden 3 года назад
Ok, just so that you know, even before Itanium there were attempts at killing the x86 family. Actually, not long after its creation Intel tried to drive the market away from the x86 architecture, because that architecture was pretty much a "hack" of a design implemented in a rush to ensure they'd win the IBM contract (which they did, therefore creating the legacy we have today). Those CPUs were the iAPX 432, the i960 and the i860 (yes, this is the correct order). Although not all of these were specifically designed with that goal, you can find references to that goal for the likes of the iAPX 432 and i960. Truth be said, they never expected backwards compatibility to be the market driver it proved to be, although in those days this incorrect perspective would be understandable if one didn't think things through properly. This is a mistake we nowadays have more than enough evidence for: Intel at some semi-random times does this kind of short-sighted crap.

That said, back then backwards compatibility IN HARDWARE was a must to assist with software development; the development environment for any architecture was pretty much in its infancy (not to say still in the womb), and therefore any system that could use prior art to speed up its use in the real world would have a major advantage. Nowadays we even have enough horsepower to emulate other instruction sets in software, and if someone becomes smart enough to grant our consumer CPUs some FPGA grunt logic directly next to the cores, we could even add those instructions so we could run other architectures' code natively (some Xeons do have FPGAs on them, just not as deeply integrated as this; still faster than software emulation though). As such, backwards compatibility is not that much of an issue. A good example of this is Apple and how they first migrated from the Motorola 68x00 family to PowerPC (which was somewhat simpler than the next change), then to x86-64, and nowadays are migrating to ARM (which would likely have been easier coming from PowerPC, as those are two RISC architectures, unlike x86-64, even though nowadays x86-64 could be called a mix of CISC-RISC design, just a mess if you ask me, even though I love it).
@epobirs
@epobirs 3 года назад
IBM was nowhere near deciding to test the waters in microcomputers when work started on the 8086 in 1976 to launch in 1978. At the time, Intel was lagging badly on 16 and 32-bit products, at least in demo form, and knew its 'clean' designs would not arrive in time to prevent competitors from staking out the market. Thus the 8086, building atop the existing Intel products, which saved a great deal of time in the design stages. When IBM decided to create the Entry Systems Division, there were already plenty of 8086 boards and systems being sold. CP/M-86 was already in existence and IBM first sought to license that in their rushed effort. Dorothy Kildall, Digital Research's lead attorney and wife of the founder, Gary Kildall, did not like the terms IBM proposed and told them so. An IBM exec worked on a United Way board with Mary Gates, wife of a prominent WA lawyer. She mentioned that her son had a software company and perhaps they could fill IBM's need. Microsoft didn't have anything to offer except the understanding that this was an immeasurably huge opportunity and went looking for a company that did have something that would serve. One of those companies was Seattle Computer Products, who produced 8086 S-100 bus products and had its own in-house CP/M clone, 86-DOS. Microsoft bought the rights to that and some months later it was shipped with the 5150 as PC-DOS.
@danielandrejczyk9265
@danielandrejczyk9265 3 года назад
Do you think ARM and RISC are the future for next generation PC chips?
@wskinnyodden
@wskinnyodden 3 года назад
@@epobirs 8088 specifically was the core I had in mind truth be said
@epobirs
@epobirs 3 года назад
@@danielandrejczyk9265 That depends: when is the future? Within the next five years, X86-64 will continue to dominate. In ten years, it gets a lot foggier as we reach the limits of how small we can go for a silicon process node, both in terms of physics and the immense cost of creating a mass production facility. Any major shift away from silicon opens up the opportunity for new architectures to emerge, either from new players or from existing players looking to take advantage of the right moment to make something entirely new. (Or as close as one could come without ignoring the principles that will still apply.) One failing has been the downfall of many companies: dragging their heels or outright trying to kill a new technology that competes with their existing product. A competitor with a superior product is inevitable, so better to have that competitor be part of your own company rather than some outsider who'd be perfectly happy to see you die off rather than transition.
@epobirs
@epobirs 3 года назад
@@wskinnyodden The 8088 was not created at IBM's behest either. It was an obvious follow-on to the 8085 thanks to its compatibility with a lot of existing chips needed to make a functioning system back then. IBM was not very serious about making their initial product the best it could be. They wanted to keep it cheap and would then produce something better if the market was proven viable. There was much dissent within IBM that wanted to strangle the project in its crib, so there was no lack of pressure to keep expenditures down if this thing struggled to find customers. This didn't get better when the PC was a huge success. It instead got worse until culminating in the undermining of OS/2 and IBM dropping out of the PC business entirely. IBM didn't really get around to making their 'real' PC until the 80286, largely because Intel didn't give them the option. There were variants for making cost reduced systems but they came along well after the standard for 286 PCs was well established, even if that didn't live up to its original promise. (Insisting on running on 80286 was one of the big downfalls of OS/2.)
@Roxor128
@Roxor128 3 года назад
Ah, yes, the Itanic. I wonder if it might have done better if Intel had gotten it into a games console like IBM did with the Cell processor in Sony's Playstation 3. The explicitly-parallel architecture of the Itanium does sound like the kind of quirky thing you'd find in a console.
@Zer0Blizzard
@Zer0Blizzard 3 года назад
You guys should cover the inherent delay in cycles while the processor waits for various levels of cache (and ask some people about exactly WHY you have to wait 3-4 cycles for L1 cache), an extremely in depth video (but in plain english) of how you get from bit level circuitry to machine, then assembly, then say, C when running a hello world, an in depth video on GPU pipelines (and why old GPUs had like 800 MHz chips, 3GB of GDDR5, and why the new GPUs run so much faster and have more ram/etc.), a video on distributed computing (Boinc/curecoin/the ethereum network, theoretically), another video that explains why AWS is so damn expensive in comparison to say bare hardware+free linux software/storj/other distributed computing/storage systems, a video on how cheap bandwidth actually is (putting down wires is a one time cost, network servers basically only cost electricity, have no moving parts, and they basically never die) for landline and cell (a 5G antennae basically costs like $5k and you can service 500+ people, and you don't need to replace it for 20 years).
@r.j.bedore9884
@r.j.bedore9884 3 года назад
Oh! Oh! I suggested this topic on a previous video. When Itanium came out it was on the cover of every tech magazine, and I was fully expecting to be building a gaming PC with a 64-bit Itanium processor and RD-RAM within a year or so, but then both of those technologies just sort of disappeared. I had wondered why I stopped seeing advertising for Itanium CPUs since they were supposed to be so much more efficient, and now I know they were simply too good to be true. Thanks James and the rest of the LMG crew for satisfying my curiosity on this! Maybe next you can do a video about those physics processing cards that were supposed to enable fully destructible environments in games and were eventually going to allow for realistic real-time, procedurally generated particle based physics on every object in the game so that every grain of sand and blade of grass would behave just like it would in the real world (or so advertisements and news articles of the time would have you believe). I almost bought one of those, but didn't quite have the money at the time as I was still in school. I think it was called a PhysX card.
@fluffycritter
@fluffycritter 2 года назад
I always appreciated how The Register referred to this whole initiative as “Itanic.”
@Aranimda
@Aranimda Год назад
The lesson of this is backwards compatibility. New development is great, but people like to use their existing software as they transition to make use of the new technology. Therefore backwards compatibility is very important, even if the final result is a bit less efficient than a brand new but incompatible design.
@ASOTFAN16
@ASOTFAN16 3 года назад
I wish we could live in a world where everything is 64-bit. Imagine how glorious that would be.
@abdulazizalserhani7625
@abdulazizalserhani7625 4 месяца назад
In fact, NetBurst was originally meant as a kind of stopgap between P6 and Itanium that would keep them competitive with AMD's Athlon, which was reaching higher clock speeds than Intel's Pentium III at the time, and be quickly phased out once Itanium matured and got a sizeable library of software. But of course, reality turned out very different, with Itanium failing.
@magnemoe1
@magnemoe1 3 года назад
As I understand it, Itanium wanted to execute groups of instructions at once by grouping them into a 256-bit call and then doing them all at once, so it would require a lot of the compilers. Feels a bit PS3 to me. Yes, x86 has plenty of flaws, but 90% of your CPU is cache and branch prediction and ~5% is float and integer math units. 30 years ago we had 200K transistors on a chip and saving 50K was a huge deal; now we have 10 billion, so saving 50K transistors :)
@danielbishop1863
@danielbishop1863 2 года назад
Itanium instruction encoding is based on 128-bit "bundles" each consisting of three 41-bit instructions, with the other 5 bits used to encode what types of instructions (integer, floating point, memory access, or branch) these are.
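Following the layout described in this comment (a 5-bit template in the low-order bits followed by three 41-bit instruction slots, 5 + 3x41 = 128), here is a small sketch of packing and unpacking a bundle. It uses the unsigned __int128 extension available in GCC and Clang purely to keep the bit arithmetic readable; the slot contents are made-up values, not real IA-64 encodings.

```c
/* Pull an IA-64-style 128-bit bundle apart: 5-bit template, then three
 * 41-bit instruction slots. */
#include <stdio.h>
#include <stdint.h>

typedef unsigned __int128 bundle_t;     /* one 128-bit bundle (GCC/Clang extension) */

#define SLOT_BITS 41
#define SLOT_MASK ((((bundle_t)1) << SLOT_BITS) - 1)

static void decode_bundle(bundle_t b)
{
    unsigned template = (unsigned)(b & 0x1f);               /* bits   0..4   */
    uint64_t slot0 = (uint64_t)((b >> 5)  & SLOT_MASK);     /* bits   5..45  */
    uint64_t slot1 = (uint64_t)((b >> 46) & SLOT_MASK);     /* bits  46..86  */
    uint64_t slot2 = (uint64_t)((b >> 87) & SLOT_MASK);     /* bits  87..127 */

    printf("template %2u  slot0 %011llx  slot1 %011llx  slot2 %011llx\n",
           template, (unsigned long long)slot0,
           (unsigned long long)slot1, (unsigned long long)slot2);
}

int main(void)
{
    /* Build a made-up bundle: template 0x10, slot contents 0x1, 0x2, 0x3. */
    bundle_t b = 0x10;
    b |= ((bundle_t)0x1) << 5;
    b |= ((bundle_t)0x2) << 46;
    b |= ((bundle_t)0x3) << 87;
    decode_bundle(b);           /* prints: template 16  slot0 00000000001 ... */
    return 0;
}
```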
@amigang
@amigang 3 года назад
Should do one about PPC history
@Rainmotorsports
@Rainmotorsports 3 года назад
The idea that Intel went into Itanium to kill x86 at all seems a little misleading. Itanium was never on Intel's plate as competition to x86. It was part of HP's long game, the entire reason for buying Compaq: all those DEC customers. Entirely different market. x86 was a stopgap from the beginning to a product intended for the mainframe market. All of this coming from a company that never intended to make CPUs. A lot of happy accidents.
@jeremyroberts2782
@jeremyroberts2782 3 года назад
Whilst x64 may well have saved AMD due to royalties , it probably has not done us any favours as far as hardware is concerned. X64 is not a true 64 bit architecture and we are probably a lot worse off for it than if we had all moved to Itanium. It may be that the door has been left open for 64 bit ARM as is being tested/proved by Apple.
@capability-snob
@capability-snob 3 года назад
Ooh, mostly accurate, well done! So: the software compatibility thing wasn't as big a deal for Itanium and amd64 as people imagine; indeed the Itanium had hardware x86 support for user applications, although that hardware support lacked the OoOE of the competing Pentium Pro and MMX. But the target market were migrating from SPARC, MIPS, and HP PA-RISC, so x86 support was not so important for these - nobody with money ran x86 servers in the 90s. The problem actually came down to price. Intel wanted to charge like they were SGI or DEC and make great margins, but you could buy Pentiums and later Opterons that would give you more bang for the buck thanks to competition and scale. It was the bean counters that killed it, which was the theme of the 90s (c.f. Apple under Gassée, Symbolics, all the RISC manufacturers).
@Luredreier
@Luredreier 3 года назад
I'm not surprised. Thank you for sharing.
@whosonedphone
@whosonedphone 3 года назад
That was a dangerously good textbook segue to a sponsor. Don't ever do that again.
@bradleymott7389
@bradleymott7389 3 года назад
I think saying that software scheduling is a "bad idea" essentially ignores the idea of virtualization... there are amazing software schedulers out there that can outperform bare metal these days (granted, with instruction-set integrations like VT-d). The evolution in that space has been fascinating to watch
@vyor8837
@vyor8837 3 года назад
No, no, god no.
@JeremieBPCreation
@JeremieBPCreation 3 года назад
An sfx in that video's music sounds like an HDD or graphics card rattling.
@vivago727
@vivago727 3 года назад
3:30 ltt doing their own stock footage
@NealMiskinMusic
@NealMiskinMusic 3 года назад
It's the same problem everybody trying to build a competing architecture faces: The vast majority of software is for x86 and making software compatible with a new instruction set is time consuming. So unless you're a company like Apple that has enough industry clout to essentially force the industry to port everything to their new chip as quickly as they can, it's going to be really difficult to sell many chips if most software won't run on them.
@snope1779
@snope1779 3 года назад
literally just went over all this in my computer engineering class... WILD
@StigDesign
@StigDesign 3 года назад
More in-depth on how x86 works and maybe the kernel too? :D
@TehJumpingJawa
@TehJumpingJawa 3 года назад
I have an idea for a topic; the origin of "Alt+F4". It's quite an interesting rabbit hole that goes a long *long* way back; the stimulus for IBM's CUA, its adoption in Windows, why it was less influential in Unix, and its eventual obsolescence.
@slashtiger1
@slashtiger1 2 года назад
02:42 LOL @ Professor Trelawny reference...!
@ryzenforce
@ryzenforce 3 года назад
Just to make things clearer for everyone: x86_64 was created by AMD for starters, and the Opteron was the first CPU to support both x86 and x86_64, so you could run both legacy 32-bit code and newer 64-bit code. What we are all using today - and everything Intel has created since then - is based on AMD's x86_64. So, once again in the history of CPUs, AMD saved the day for all of us. And some people are still vouching for Intel in 2021... Edit: 2003-2004, I remember successfully steering our architecture team away from the Itanium systems that HP wanted us so badly to buy for our next ERP system. Even as early as 2002, Itanium was already dead in the water before reaching systems. Ty AMD!
@Lancia444
@Lancia444 3 года назад
I remember those days well :) Windows 2000 x64 was buggy and didn't have great driver support at the time...but the whole AMD64 thing was awesome!
@Psychx_
@Psychx_ 3 года назад
Fun fact: Nvidia has in-order VLIW (like Itanium) CPU designs with very high IPC (Denver, Carmel). The cores use an internal design and perform dynamic instruction translation to run 64-bit ARMv8.2 code. Theoretically they could also run x86 code by using a different firmware, but that isn't available atm due to Nvidia not having access to x86 patents. The technology was invented in the 90s by Transmeta and even made it to market back then, in the form of consumer devices (laptops), which even received support for new CPU instructions via software updates. Radeon graphics cards up to and including the 6000 series (TeraScale architecture) were VLIW designs as well, and the architecture was very efficient. Unfortunately, the driver's shader compiler had to be tweaked for individual games to optimize things like instruction order, scheduling and utilization in order to achieve good performance. GCN, the successor, which doesn't use a VLIW design, was supposed to reach full utilization much more easily, as well as having a simpler shader compiler and being easier to program for. To an extent, this worked very well (i.e. HPC compute), but for games not so much. One of the problems was that you need at least 256 threads to utilize one CU fully; on top of that, the execution of an instruction takes 4 clock cycles.
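For anyone wondering where the 256-thread and 4-cycle figures in that last sentence come from, the usual GCN arithmetic (wavefronts of 64 work-items, four SIMD-16 vector units per compute unit) works out as below; the numbers are the commonly cited GCN parameters, spelled out purely for illustration.

```c
/* Where "256 threads per CU" and "4 cycles per instruction" come from,
 * using the commonly cited GCN parameters. */
#include <stdio.h>

int main(void)
{
    const int wavefront_size = 64;   /* work-items issued together            */
    const int simd_width     = 16;   /* lanes in one GCN vector unit          */
    const int simds_per_cu   = 4;    /* vector units per compute unit         */

    int cycles_per_instr = wavefront_size / simd_width;        /* 64/16 = 4   */
    int threads_to_fill  = wavefront_size * simds_per_cu;      /* 64*4  = 256 */

    printf("one vector instruction occupies a SIMD for %d cycles\n", cycles_per_instr);
    printf("at least %d work-items are needed to keep all %d SIMDs busy\n",
           threads_to_fill, simds_per_cu);
    return 0;
}
```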
@PurpleKnightmare
@PurpleKnightmare 3 года назад
Yeah, I was a tech working for them at Microsoft... OMG Those things were horrid.
@KuruGDI
@KuruGDI 3 года назад
Fascinating that AMD did not learn anything from Intel's mistake. In my opinion it was clear from the start that Itanium would not go anywhere with a "you have to adapt your program" approach, since software development is expensive. So it kinda baffles me that AMD thought it was a good idea to do almost the same thing when they launched their Bulldozer APU lineup and said that programmers would have to optimise their code for their APUs. _Switch to my platform! You may have to write your entire software from scratch, but... aahmmm did I mention the CPU is brand new?_ Yeah... Like this was ever going to happen...
@2frelledminds
@2frelledminds 3 года назад
Um, did you forget about the Pentium IV? It needed optimized code that didn't exist when it launched. It took probably 2 years for the software to be optimized for it. I don't have concrete examples because that was over 20 years ago so my numbers may be off some, but initially a Pentium IV 1.5ghz could not go up against an Athlon 1ghz.
@xWatexx
@xWatexx 2 месяца назад
That sponsor is cool and all, but I already got that kit from work. It’s great btw.
@motoryzen
@motoryzen 3 года назад
1:49. "we'll tell you.. right after a message from our sponsor iFixit" Jayztwocents: "Hold my keg... Let me show you how it's done"
@GreyDeathVaccine
@GreyDeathVaccine Месяц назад
Tricycle with cpus as wheels was funny!
@teknophyle1
@teknophyle1 3 года назад
2:40 the memory uncertainty principle
@georgegonzalez2476
@georgegonzalez2476 6 месяцев назад
Actually, they tried twice. In 1981 they marketed the iAPX 432, a really weird and complex 32-bit CPU. It had a lot of advanced features. Far too many, and far too restrictive a set of features. It was also very slow. They didn't learn from that total flop.
@LenstersH
@LenstersH 2 года назад
The IA-64 architecture was conceptually based on HP-PA for which HP had written very successful branch prediction compilers for HP-UX. The big problems came when Intel wanted to add support for out of order execution that they had in their x86 compilers. The concepts of out of order execution and software branch prediction conflict with each other in ways that had never been studied. With the secretive natures of Intel and HP, they didn't let the details of the problem out to the open source community which might have had a chance at solving it. That along with supporting the x86 hardware emulation shell killed the project. If some logicians and mathematicians ever come up with the rules for compiling with both out of order execution and software branch prediction enabled the IA-64/HP-PA architectures will far surpass x86 and probably compete with Apple's new RISC architecture.
@Knoah321
@Knoah321 3 года назад
RISC-V is the way to go! 👐🏻
@supervisedchaos
@supervisedchaos 3 года назад
Back to basics... describe everything that happens chronologically when you double click an app or game up to when it displays on screen
@chainingsolid
@chainingsolid 3 года назад
That video could be a year long series, or a 15 minute vid depending entirely on how deep/ to the basics they would go with it. (instruction by instruction or the general steps taken ex: load exe from disc -> load libraries -> get opengl working/make a window....)
@nonetrix3066
@nonetrix3066 3 года назад
Funny story I was configuring the Linux kernel and I thought IA32 referred to the 32bit version of Intel Itanium, and couldn't install Nvidia drivers.. turns out that doesn't exist and is just 32bit x86 :/
@md.abdullaalwailykhanchowd3974
@md.abdullaalwailykhanchowd3974 3 года назад
Intel : IA64 is the future. Also Intel : IA64 never existed 😶‍🌫️
@JuanGarcia-lh1gv
@JuanGarcia-lh1gv 2 года назад
Now that most, if not all new computers are 64-bit, can't Itanium make a comeback?
@charleshines2506
@charleshines2506 2 года назад
The tricycle at the end seems like it would ride very poorly
@borisvokladski5844
@borisvokladski5844 3 года назад
I have heard that the Itanium CPUs were called "Itanic". It says everything about this CPU.
@mknights5618
@mknights5618 3 года назад
Suggestion: not sure if you have done this but how about the evolution of RAM from delay lines to DDR5 and beyond.
@GorujoCY2
@GorujoCY2 3 года назад
We shall make ARM the standard kek.
@ImtheIC
@ImtheIC 3 года назад
No way..
@piekay7285
@piekay7285 3 года назад
RISC-V
@johnsontzu41
@johnsontzu41 3 года назад
That's a RISC-y statement
@Kubush1
@Kubush1 3 года назад
@@piekay7285 Arm is far superior to Risc-V
@DacLMK
@DacLMK 3 года назад
@@Kubush1 It's the same pleb
@The_Murdoch
@The_Murdoch 3 года назад
Oooooo I got a Linus Pulseway ad before the video started. He was in ski boots too!
@skilz8098
@skilz8098 3 года назад
x86 won't go away anytime soon, but I have a feeling it's towards the end of its lifespan. Not sure which direction Intel is going to go, but I think RISC-V might have an impact on their newer design decisions...
@hanrinch
@hanrinch 2 года назад
Sure, a hardware scheduler is faster and OoOE/register renaming are powerful, until you meet Meltdown/Spectre on every RISC/CISC design from 1983 onward (for x86, from the P6 in 1995 onward). Sometimes you just can't have security and performance at the same time.
@fvckyoutubescensorshipandt2718
@fvckyoutubescensorshipandt2718 3 года назад
They should have killed it off. After 50 years x86 just isn't the best way to crunch numbers anymore. It was great when all it did was basically run a suped up calculator and the CPU's didn't even need heatsinks and came in DIP packages. Today not so much. Maybe when a CME from the sun takes out every electronic device on the planet what rises from the ashes will be better, because it seems it will take no less than that to make the switch.
@kiwimonster3647
@kiwimonster3647 3 года назад
Only the future can tell what we'll get next
@frozenturbo8623
@frozenturbo8623 3 года назад
We would get RDNA2 APUs
@random_person618
@random_person618 Год назад
If Intel had released the Itanium today, it would have been a success. Considering that today's smartphones only support the arm64 architecture, it's safe to say that this CPU would not have failed that bad today.
@polybius223
@polybius223 2 года назад
1:44 Win XP has some beta versions too
@samvega827
@samvega827 3 года назад
Just got that tool kit today! its sooo badass
@serifini2469
@serifini2469 3 года назад
I'll see your Itanium and raise you an Intel iAPX 432. This from someone who was subjected to having to write multiprocessor real time code in assembler on the Intel i860, a RISC chip that came out at about the same time as the i486 and which had user visible pipelines and delayed execution slots. The nightmares have mostly stopped...
@kensmith5694
@kensmith5694 3 года назад
Some early DSP and bitslice machines were even more "interesting" to code on. ATT made one where a store into a memory location followed by a load from that location would get you the value from before the store because that would not complete until after the load instruction. There was always the branch latency to consider too. A branch would do the next instruction then the one on the new path.
@falxie_
@falxie_ 3 года назад
Hopefully ARM or RISC-V can overcome these problems
@dindongdindong8565
@dindongdindong8565 3 года назад
X128 new era
@blahorgaslisk7763
@blahorgaslisk7763 3 года назад
For at least 35 years I've heard people say things like: "X86 is old, slow and clumsy. There is no way to improve performance the way it is going to need to improve. RISC is the future in processor design and it will stomp all over the CISC architectures." Well 35 years down the road we are still using X86, but it has evolved from being 16-bit to first 32-bit and now 64-bit. Each time doom and gloom was predicted. Adding the 64-bit extensions was described as putting lipstick on a pig, and yet this pig is pretty damned spry. Meanwhile the RISC projects has evolved back and forth. PowerPC was pretty impressive in it's heyday, certainly better supported than Itanium, but MIPS and SPARC were where things seemed to really happen. Then ARM came about and suddenly it was THE RISC architecture everyone seemed to love, but only in low power applications, at least up until last year. Perhaps it's finally time for RISC to shine but I wouldn't hold my breath. Somehow it seems that every time the death of x86 looks like it just may happen Intel, and lately AMD, dig in and produce new processors with even better performance. As long as they continue to do that the customers win.
@shadowwolfmandan
@shadowwolfmandan 3 года назад
The Motorola 68000 architecture was also CISC but inherently superior back in the day. The Amiga and Macintosh computers were the main systems that used it, and while in many aspects they were also superior platforms, Commodore lacked vision with its terrible management and Apple jumped ship to PPC since Motorola had no interest in developing the 68k architecture further. Personally I think RISC chips have their place but I just don't see them taking over. I mean heck, Nvidia brute-forced their way to the top in the early 2000's... x86-64 certainly has pulled off scalability.
@blahorgaslisk7763
@blahorgaslisk7763 3 года назад
@@shadowwolfmandan The superiority of the 68000 was largely due to the fact that it was designed as a 32-bit processor that was scaled back to 16 bit to make it cheaper and more viable in the market at the time. They even had an answer to the original x86 with the 68008 (if I remember correctly) that only had an 8-bit bus. That was used in, for instance, the Sinclair QL. Scaling the 68000 up to 32 bit was mostly a non-issue as it was originally designed for that, with a flat memory topology and wide registers available from day one. From what I was told it was also using a much cleaner and more logical instruction set, but that's really beyond my experience. The biggest thing I think was memory management, where the 68000 family was much easier to work with, without all that bank switching that the x86 processors were forced into when running 16-bit code. This enabled the simple implementation of bitmapped graphics, like in the case of the early Macintosh machines. These were technically much simpler than any IBM PC clone at the time.
@Gunstick
@Gunstick 3 года назад
Remember transmeta, which wanted to do faster x86 code via emulation. And nowadays the M1 actually does that using ARM.
@catchnkill
@catchnkill 2 года назад
Do the same thing with very different implementation. Transmeta is runtime translation. Rosetta 2 is install time translation.
3 года назад
That HP idea was not HP's idea, but based on the much older idea of the RISC, Reduced Instruction Set Computer. Simplify, and use that to speed things up (by being able to parallelize better). And let the software do the heavy lifting. See SUN's SPARC, DEC's Alpha, MIPS' MIPS chips (yes, indeed :D). Oh, the idea is *old*, much older than SUN's and DEC's (and certainly HP's) developments. Stanford and Berkeley were the origins of that idea (of course, it's older than that even, but at those universities it was kind "put in writing", fully developed). And one of the core parts was the compiler that knew how to create machine code optimized for the strengths of the design. Another bit: there was no x86-64 when AMD developed the Opteron. It was (and is still) known as AMD64. Only when Intel *copied* it (legally since cross-licensing shenanigans in effect) did it morph into x86-64. Credit where credit is due, you know.
@davidbonner4556
@davidbonner4556 3 года назад
At the time HP 'had' this idea they owned DEC, through their ownership of Compaq, hence they also had expertise with DEC Alpha available.
3 года назад
@@davidbonner4556 I looked that up because I suspected that… but: they bought Compaq (thus Digital) in 2002, and Itanium roll-out was in 2001. So, while it was an enticing idea, the timing doesn't fit.
3 года назад
Though on the matter of Compaq/DEC: I was pretty shocked back then in the late 90s that this young whippersnapper of a company would *dare* buy old, legendary old company (Compaq is younger than I am, while DEC is almost 10 years older :D)
@MrDrewseph
@MrDrewseph 3 года назад
I tuned out half way through this video and started thinking about how much I adore Donna Murphy's performance in Tangled, which I didn't really get until I bought the cd soundtrack and listened to it without the visual, because during the movie I spent all my time hating the character
@RomanoPRODUCTION
@RomanoPRODUCTION 3 года назад
Yvonne is the opteron. Linus is the itanium.