I believe nobody should doubt LT's authority when it comes to giving his opinion on these issues. Beyond his public renown, he has been programming at a low level for decades and dealing with hardware to squeeze the best performance out of the kernel. He has said that he often writes code in C, inspects how it is translated into assembly, and rewrites it if necessary so the result is as efficient as possible (I doubt that those who criticize him have ever tried anything like that...). I think he is referring mainly to the security aspects of speculative execution that gave rise to the well-known Spectre and Meltdown attacks. Put simply, these techniques make it possible to bypass memory protection between user processes and, consequently, to read, for example, a variable containing an unencrypted password. All processors that use speculative execution and cache prefetching are exposed to these vulnerabilities (ARM, Intel, AMD, etc.), but it is also true that without these techniques the high performance of current CPUs would be impossible. I also think some hardware could be designed to assist the mitigation algorithms and reduce their impact on performance, but it is impossible to avoid these attacks completely, and I believe that's why it hasn't been pursued (pay attention to those processes that constantly consume 100% of your CPU). The landscape of high-performance CPUs is genuinely complex. These are not design flaws so much as timing windows created by the very techniques used to reach these speeds. It is possible to mitigate and even avoid this type of attack, but at the cost of speed. Perfection only exists in our imagination!
Any kind of fast CPU needs shared structures of limited size: caches, branch-prediction tables, trace caches, pipeline slots, TLBs (translation lookaside buffers), etc. It will always be possible to exploit timing differences in those structures to mount timing attacks. Building a system with constant access time would give you speeds in the megahertz range (DRAM has too much latency and SRAM is too small). Spectre- and Meltdown-style attacks aren't the variable; they're the constant.
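For anyone curious, the primitive underneath all of these attacks is simply that a load is measurably faster when its cache line is already resident. A minimal sketch (assuming an x86-64 CPU and GCC/Clang; all names here are illustrative, not taken from any real exploit):

```c
/* Sketch of the cache-timing primitive behind Spectre/Meltdown-style
 * attacks: the same load is measurably faster when the line is cached.
 * Assumes x86-64 and GCC/Clang (uses _mm_clflush and __rdtscp). */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

static uint64_t time_load(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);   /* serializing timestamp read */
    (void)*p;                          /* the load being timed */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void) {
    static uint8_t buf[4096];
    volatile uint8_t *p = buf;

    (void)*p;                          /* warm the cache line */
    uint64_t hit = time_load(p);

    _mm_clflush(buf);                  /* evict the line from all caches */
    uint64_t miss = time_load(p);

    printf("cached: %llu cycles, uncached: %llu cycles\n",
           (unsigned long long)hit, (unsigned long long)miss);
    return 0;
}
```

An attacker who can steer a victim's secret-dependent accesses into such measurable cache-state differences can leak data without ever reading it directly; that is why the shared structures listed above are the constant, not the variable.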
@@boptillyouflop I understand. In other words, you are saying there is no hardware solution for these cases. While I am not an expert in high-performance processor design, I do have some experience with FPGAs, and I would think that if there is a software solution/mitigation, something could be done in hardware, at least to help those algorithms along and recover some performance. What do you think?
@@Escu4Lo So far, the response seems to have been an acknowledgement that timing attacks are always going to happen, so particularly sensitive code such as encryption has been made constant-time, and sensitive OS data has been moved to a separate page table... There's an infinity of hardware mitigations that can reduce the problem, but there's also an infinity of side channels, such as the data-aware prefetcher in a recent Apple chip. There's always a Belgium to let the German army around your Maginot Line, so to speak.
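For what it's worth, "constant-time" in practice mostly means code whose running time doesn't depend on secret data. A minimal sketch of the classic idiom (ct_equal is an illustrative name, not a real library function):

```c
/* Sketch of the constant-time comparison idiom used in crypto code:
 * unlike memcmp, which can return early at the first mismatch, this
 * touches every byte regardless of where inputs differ, so the time
 * taken leaks nothing about the secret. */
#include <stddef.h>
#include <stdint.h>

int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences, never branch */
    return diff == 0;
}
```

Comparing a password or MAC with plain memcmp leaks, via timing, how many leading bytes were correct; the bitwise-OR accumulation above removes that channel.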
I think he may be referring to some obvious pitfalls like:
- How you port applications written for x86 and ARM to RISC-V
- Security (how the cache memory is managed)
- Having compiler support for different programming languages
- Having a 64-bit integer to represent time (see the sketch below the list)
- Taking care of thermals when the chip is at peak performance
- Getting deals with hardware and software companies (especially OS vendors) to use your chips
- How you deprecate instructions you no longer use (like Intel with x86)
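On the 64-bit time point: a signed 32-bit time_t overflows in January 2038, which is why new ABIs, including the RISC-V Linux ports, use a 64-bit time_t from the start. A trivial, illustrative check:

```c
/* Illustrating the Y2038 issue behind the 64-bit time item above:
 * a signed 32-bit seconds counter runs out at 2^31 - 1 = 2147483647,
 * i.e. 2038-01-19 03:14:07 UTC. */
#include <stdio.h>
#include <time.h>

int main(void) {
    printf("time_t on this platform is %zu bits\n", sizeof(time_t) * 8);
    return 0;
}
```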
I would tend to agree. Unfortunately there are too many issues in too many different places, which means you can't easily enumerate them until you've looked into them specifically for an extended period of time.
Linus is just ranting and trying to look smart. All hardware and software evolve over many years, and that is fairly normal. You don't make perfect hardware or software on the first release. Linux wouldn't be what it is today without the billions of dollars spent and hundreds of engineers contributed by big tech companies.
The problem is that new people aren't taught the history of the fields (hardware and software). They come in with the confident arrogance of youth and faith in their ideas, oblivious to the past failures, which they faithfully reproduce.
@@blvckbytes7329 Well, when the old farts don't publish their failures, how could we learn? I can vividly remember all those times my wife came back from a conference having learned, as a side note in a poster session, that the method they were trying to develop had already been tested years ago and had failed, with ZERO publications on that. That makes teaching how things DON'T work much harder (and sometimes they didn't work in the past but ended up working later, once the _actual_ problem was identified).
David Patterson was involved in it; he has been around since the 1980s, literally co-wrote the book on processor design with Hennessy, and won a Turing Award. If you read the original papers, they give the rationale for why they didn't choose an existing architecture. How much more experience and pedigree would you like?
I mostly disagree with this. Most of the issues that come up aren't really classifiable as 'mistakes'; they're shortcomings that were very hard to imagine or predict ahead of time. Fundamental mistakes are largely avoided, since their effects are mostly obvious and they are well documented as shortcomings of previous iterations. It's the insidious ones, which only rear their head once someone finds a way to make things go wrong, that bite us. I think those are what Linus is referring to: the hardware choices that end up making it impossible to implement reasonable code safety without compromising performance or flexibility of systems. Any new ISA is bound to present new opportunities for people to break it in this way.
I think it would be awesome if Linus Torvalds joined some steering/advisory group of RISC-V and helped them avoid as many mistakes as possible. But it's still better to have an open, royalty-free ISA than a closed one, even if they end up with the same mistakes.
RISC-V is literally just an ISA. What Linus is probably referring to is the hardware that will implement the RISC-V ISA, because these "mistakes" and "issues" occur at a level below the ISA itself, i.e. at the microarchitecture level.
@@kllrnohj Device trees and such were understandable back in the day because 1. hardware was static, 2. the concept works on pretty much all embedded devices, 3. ACPI is owned by Intel, and 4. PnP+ACPI was buggy as hell for about a decade.
I'm tired of reading people blaming Linus for not providing examples. That's just what you were expecting because of the title of this video. In fact, there wasn't even a question about RISC-V mistakes from the interviewer; Linus just mentioned it in passing.
Yesterday I saw a video claiming that the Linux Foundation now devotes only 2% of its revenue to kernel development. Everything else goes to projects of the multinational board members, which certainly use the kernel but no longer have much to do with free software. So I don't want to spoil the mood, but before talking about other architectures' mistakes, I'm currently seeing kernel regressions. On Mint the other day it was the loss of the HDMI port after an official kernel upgrade on an ASUS laptop. On a home-built desktop upgraded from Ubuntu 20.04 to 22.04, suspend-to-RAM now crashes on an additional SATA controller with a 5.15 kernel, with a sata_pmp_eh_recover.isra error, where kernel 5.4 had no problems. In short, even if the kernel is robust and you can thankfully roll back to older versions, an ordinary, non-technical user will still have questions. In my view, the main mistake distributions make is forcing kernel updates when the machine works perfectly well, and not offering a graphical mechanism to return to the old kernel when things go wrong... Finally, and this is more of a general criticism of the interview: I accept that Mr. Torvalds is an excellent engineer. But without free software and Richard Stallman's GPL license, it is not certain his kernel would have had the success it has. Yet I never see reports on R.M.S., who is now fighting cancer, and who admittedly has a different vision, but who deep down remains the true father of free software, the one we must thank for having a real alternative today to freedom-depriving operating systems. Once again, the origins of GNU/Linux are hidden by glorifying an engineer who is anything but a free-software advocate, and by forgetting that the free software world is also a philosophy, a worldview, and a defense of democracy and digital sovereignty.
Not sure, but it is not an open-source architecture; only the ISA specification is open source, and the implementation is up to each implementer. That makes a big difference.
Correct, and I think the main advantage will be in compatibility between different RISC-V implementations. Ultimately the benefits will be in the software ecosystem, not in whose CPU is a tad faster.
I feel robbed after purchasing many servers, only to find out a couple of months later that an unauthenticated attacker can compromise my systems remotely and patch them at ring 0 or below, in such a way that I would never be able to find out about it. And the fault is in the CPU architecture, and there is nothing anyone can do about it. All CPUs are vulnerable.
RISC-V mainly excites me because of its use in pedagogy. People who are new to the hardware-design community will likely understand it better because they can practice on an open, highly developed architecture like RISC-V. That said, I don't actually believe RISC-V is a better-designed architecture than its predecessors, nor is it *supposed* to be. It's just supposed to be *an* architecture that does all the same things as its predecessors, because they work, but is open source. In many ways, RISC-V's rising stardom can be attributed to geopolitical reasons; there's a huge push in China to pivot away from the American-owned x86_64 and British-owned ARM, toward the Chinese-owned LoongArch, the region-agnostic RISC-V, etc. That said, I think RISC-V will still be a net benefit worldwide, rather than just for China; and unless you're an investor or some kind of nationalist, you really should be cheering for RISC-V too.
The main problem is that RISC-V is a true RISC architecture in the sense of the early RISC CPUs. It has a minimized instruction set and lacks a lot of instructions that are standard in nearly every current contender. E.g., a conditional move is still just an extension proposal, and there's no DWCAS (double-width compare-and-swap) operation, which is necessary for fast userspace memory management.
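For context, DWCAS lets you atomically swap a pointer together with a counter, which defeats the ABA problem in lock-free structures. A sketch of what that looks like on x86-64 via GCC/Clang builtins (names are illustrative; may require -mcx16 and/or linking libatomic):

```c
/* Sketch of a double-width CAS (DWCAS): atomically compare-and-swap a
 * 16-byte {pointer, tag} pair. On x86-64 this can map to cmpxchg16b,
 * the kind of operation base RISC-V currently lacks. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    void     *ptr;  /* e.g. head of a lock-free free list        */
    uintptr_t tag;  /* ABA counter, bumped on every successful swap */
} __attribute__((aligned(16))) tagged_ptr;

bool dwcas(tagged_ptr *loc, tagged_ptr *expected, tagged_ptr desired) {
    /* The generic __atomic builtin on a 16-byte aligned struct gives a
     * double-width CAS where the hardware provides one. */
    return __atomic_compare_exchange(loc, expected, &desired,
                                     false, __ATOMIC_SEQ_CST,
                                     __ATOMIC_SEQ_CST);
}
```

Without a hardware DWCAS, an allocator has to fall back to locks or to trickier schemes (hazard pointers, LL/SC loops), which is the performance concern being raised here.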
It kinda sneaks up on you. I was 20 in the morning mirror and went about doing things, and I just noticed the other day that I'm 70... Just walk around a lot, eat only what you need, and work on interesting things. It's not so bad. Oh, and never look back, only forward...
Thank God for Linus... I couldn't agree more... My future is RISC-V and RISC-V only (or any new variants). I run Kali (Debian-based) exclusively. Open source and only open source.
ARM has still made big mistakes. They're still dependent on device trees because there's no way for OSes to detect everything. Every OS needs to be customized for each ARM hardware platform, and it's ridiculous. I hope RISC-V doesn't have this problem, but I doubt it.
I can't see that as a problem, since most devices powered by ARM are SoCs anyway. ARM is just the ISA part; everything else is going to be different anyway.
My home server is running on RISC-V hardware, and yes, there are gaps, but mostly with embedded chipsets having no drivers (e.g. iGPUs) and "closed source" drivers not being rebased onto current kernels. But the platform itself is rock solid. I can't complain.
He is just giving an opinion based on his decades of experience. Maybe his opinion is more credible than that of a fanboy with zero experience who bases his opinions only on hearsay and trends.
@@estranhokonsta Decades of experience of what, exactly? How is his experience related to ISA design? His guess is entirely uneducated and completely unexplained. Also funny that you talk about fanboys when that's literally what you're being in your comment.
@@HolarMusic Lol. Me, a fan of him? I don't even use Linux or wish to use it. The rare occasions I used it were because I needed to, and certainly not because I wanted to. But that doesn't mean I am blind to who the expert is on this question. Him or you? Who do you think I'd bet my money on?
ISA design and processor manufacturing are relatively independent. This was even one motivation for AMD to spin off its foundries as a separate company years ago. The techniques and technologies for wafer production, which actually translate a chip design into reality, are mostly trade secrets kept behind closed doors.
For those wanting Torvalds to simply give a list of the mistakes: realize that in order to list them, one has to spend CONSIDERABLE time diving deep into the RISC-V architecture, while having a LOT of experience optimizing systems for other architectures and watching them improve over the years. Torvalds is probably not the right person for that, and his answer shows me that he knows it. He is just warning the RISC-V team: watch out, the path you are treading is not new, others have been there before, and you should learn from them. Probably the best people to talk to are compiler writers, people who implement e.g. the Java virtual machine on specific platforms, people who design and study performance benchmarks, and those who optimize system performance for various computing platforms.
It's always the same. The people and the times change, but in the end humans fall into the same "little things" of human essence: power, money, dominance. And then technology is the victim, again and again...
Torvalds complains about this but doesn't push for professionals in these spaces to share and exchange knowledge to bridge these gaps. It's a hard gap to bridge, but it could be done if there were spaces where these ideas and documents were aggregated to establish mutual understanding.
In my humble opinion, recent architectures such as x86, ARM, and RISC-V are all the same bloatware on silicon. I don't want to read many thousands of pages of documentation for these chips (it scatters your attention). I want simple architectures, safe architectures, minimal requirements, manuals of a few hundred pages, etc.
The base RISC-V integer ISA spec may be a few hundred pages, but by the time you add the specs for the many extensions required in common use cases, you're heading toward a lot more than that...
That sounds great until you have enough silicon for vector units and need to add a new SIMD ISA to take advantage of it. Most of the crufty old instructions take up a minuscule amount of space in microcode ROM; they don't "bloat" the CPU.
The whole point is to make money, not a very efficient chip; within a month someone will declare it obsolete at the drop of a hat. x86 lives on in desktops and servers, ARM in phones and tablets.