0:14 AFAIK OpenSSH does not directly depend on lzma. Some distros (e.g. Debian) patch libsystemd into OpenSSH for readiness notification, which in turn depends on liblzma. Upstream ssh should be safe. More on this in the HN thread.
@@gingeral253 he definitely is an accomplished developer who stumbled upon the Valgrind issue and was motivated enough to investigate and then find it. This guy probably wanted to help fix a memory issue and had the perseverance to get to the bottom of it.
Kudos to this "just a guy" who probably just wanted to help out an open source project by fixing a memory leak and performance issue, and then probably spent weeks of his time getting to the bottom of this. This is what makes all of us safer. He deserves a medal or something.
@@cigmorfil4101 well, Microsoft has a bug bounty program, as most big companies do nowadays. The "just a guy" is Andres Freund; some blogs call him a "Microsoft security researcher".
X.x What interesting events. A tinfoil hat might start to think supply chain issues, and notice the same kind of ticks in other things like the 'heartbleed bug', and just how 'slightly off' it was, but not enough to be spotted by all but the most OCD or diligent person checking their own thing. Just sayin'... Maybe open source is more in danger of being infected by foreign bodies than closed source, yet taken for safer... when most don't have a single clue about wtf is going on, or just why an issue or even a performance hiccup exists. Who would ever be absurd enough to question a 1-2 second hiccup in a sign-in process for something meant to exist on the internet, where that delay would be unreasonable to impossible to spot in the live environment? Speaking of cool things, isn't it great that there are so many easy-to-access cloud-based hosting solutions meant for personal & small business users? That convenience is great; you just have to deal with a tiny bit of internet lag because tech stuff. I mean, it would suck if they all used a common supplier that managed access in some way, or data security... *cough cough*, because some big, big target would be painted on the back of such a supplier, and it would be closely watched. Man, the amount of qq from governmental bodies about data encryption and traffic routing has sure gone down; I bet they just realized how foolish it was for them to keep on that. I guess all is well, and the days of governmental bodies pulling overly complicated & deep state affairs on the people they care about are over for our own people, right?
Using a single makefile line that looks weird but not totally out of the ordinary is so creative. On the other hand, how was this mf able to commit compressed binaries without getting nuked by a maintainer? I do that by accident occasionally on my branches and it draws the aggro of every devops guy in a 10 mile radius when I submit the PR.
@@twentylush they passed it off as a test case. It makes sense at a quick glance: having a premade file that _should_ decompress properly, then using that in testing to see if everything works as expected. It's just that the file wasn't only used for testing.
Important clarification (since I feel this isn't clarified): upstream OpenSSH doesn't use liblzma, however many distros like Debian patch OpenSSH to use systemd notifications through libsystemd, which in turn uses liblzma. Distros like Arch (which don't patch OpenSSH) or distros without systemd like Void should be fine with regards to SSH (however, most distros are already downgrading xz anyway, for obvious security reasons). Source: the latest Arch news post regarding this backdoor. EDIT: to quote directly from the Arch news post: "Arch does not directly link openssh to liblzma, and thus this attack vector is not possible"
@@mx338 but if your systems are sufficiently firewalled... What is the exploit going to do? This is not a good situation, but it won't break the world either. Vulnerabilities are found almost every day, it seems, in so much software.
@bzuidgeest you're right, ideally you do not expose SSH to the internet without restrictions; I run everything on-prem for work and don't allow SSH from the internet. However, in cloud computing that's often different: if you run an AWS EC2 instance or any other cloud VPS, SSH is going to be exposed to the internet by default.
I can already read people online saying that this is proof that open source code is less secure than proprietary. On the contrary: the fact that it was caught, and caught relatively quickly, shows that open source is more secure against these kinds of backdoor attacks.
"this" was caught but there could be many more backdoors which will never be discovered. Imagine if a war breaks out between countries. Of course no government agency would trust proprietary software during an active war. In that scenario, the country that managed to put backdoors in open source software will have immense advantage.
Indeed. If someone can do this in code that's openly visible, imagine the shit that goes on behind the curtain in non-opensource crap. (SolarWinds comes to mind.)
Also, we have a clear method to identify and track the malicious actor's influence. We know what account they're using, and can thus see every open source project they have touched.
as a sidenote from what I gathered: The person who "contributed" this backdoor was not just some person who randomly came out of nowhere with a Merge Request. It was someone who contributed to the project for an extended period of time to a point where they themselves became a maintainer (not the main one, but projects like this often have multiple).
@@JordanPlayz158 they were obviously being paid already for this. Plenty of maintainers are on the payroll of Google, Microsoft, Red Hat etc. This is just the first time a malicious group decided to go that route as well.
@@Pharoah2 just the first time? Lol, do you think that among the thousands of FAANG+ employees none of them is on the payroll of some three-letter agency?
After this, he better be. This is really well-designed malware: the ways they cleverly hid it, all the involved packages, the linker, removing the symbols from the malware in xz, using the testing branch, those mentioned in the video, or the fact that it's not even in libssh, and that it only works if you run through systemd, with systemd compiled to use xz. That's really well orchestrated; that's gov malware, I think.
@@brians7100 Some of his suspicious pull requests date back over two years, so it's possible this isn't the first time he has tried to do something like this, just the first time he got caught.
@@brians7100 Even if he isn't active anymore, the damage is done and needs to be detected and reverted in each repository this guy has committed to. I bet it'll last some time as an active issue until things are resolved for most packages affected, and many machines will be compromised until then
Kudos to the person who found this. His modesty does him credit, but the realization that something was amiss, the desire to delve into it, the analytical process, depth of research, and the willingness to share what he discovered with the wider community - totally aligned with the OSS ethos - shows he definitely has the right mindset to be a security researcher and has hit the ground running. Hats off, sir; bravo!
Idk if I would call Andres Freund “some guy” haha, he’s a Postgres contributor/developer and a Principal SWE at Microsoft. I get your point though. Not technically a security researcher.
Yeah. Just got the news about it 9h ago. My Manjaro installation has the aforementioned versions installed and now I don't know what to do. Roll back, or directly wipe the system and return to some more newbie-friendly OS like Mint.
Loving these security vulnerability breakdowns man! I can only imagine the effort and background knowledge it takes to put content like this together, thank you.
Bullet dodged. For real. This could have been the worst money grab backdoor by far. It's literally in every system. It's especially scary that the project owners approved the compiled binaries. Hopefully it's not a maintainer behind this.
It's literally the project owners or a representative of the project owners org. Both of the main devs that are the contact points for package maintainers have had their GitHub accounts suspended reportedly
It sounds like the original, single maintainer was overloaded and started passing off duties to another contributor who "helpfully" stepped in recently to reduce the load. Fast forward a few months and it looks like this helpful contributor was (likely) a state agent...
They aren't compiled binaries. They're compressed files, which is totally normal for testing a compression tool. What was missed was looking into the contents of the compressed files, which I know very few people would bother to do, or looking at the quality of the new tests. If you're just making existing code paths faster, why do you need new compressed files? Edit: auto-incorrect
You realize that this backdoor was caught, but so many others could be successfully installed in open source code without our knowledge and without extremely deep code reviews. Though I would expect other projects to start scrutinizing things more deeply in the next couple of weeks, and we'll be hearing more surprises.
I agree. Even the build systems I maintain for PHP code run tests and then (if the tests passed) checks out the Git repo again in order to do final deployment actions on a clean slate.
It doesn't, really... the test fixture just exists in the repo doing nothing, but with the modified build script, that test data is used to reconstruct the malicious code. In fact, the test fixture in there isn't even used in any test at all (which should have been sus).
the actual git tree of the project doesn't have the M4 line which triggers this. The M4 line was added to the public source tarball post release by someone who had keys to do that.
After this report there were some extra findings that people might find valuable.
1. Some time ago the only maintainer got a lot of pressure from different users to accept another maintainer to the project.
2. The binary files in question were compressed files of the kind usually made to test that the decompression tool is working; in this case there were two malicious test files, one for a large file and another for a corrupted file. As you can see, both were chosen to make it difficult to find that there was hidden code there.
3. The malicious maintainer gained enough trust to be able to sign the distributed tars with malicious code and to contact Linux distro maintainers to pressure them to update to the backdoored version; the attacker even sent a patch to the Google repo that was harmless by itself but a requirement for the exploit.
4. After the backdoor was created, random accounts submitted PRs to different projects to update to the vulnerable version.
This was a large, well-orchestrated attack that was most likely planned by more than one person and only discovered due to it having performance problems and certain bugs; otherwise we might never have noticed.
@@Dratchev241 Eh, if you're going to do an intelligence operation involving names but no face-to-face contact, you'd probably use a foreign name. I'd definitely say a state actor, but frankly, it could be anyone. I was leaning towards someone Western but smaller, but I really have nothing to base that on.
Yeah, this was definitely orchestrated and planned over a long time. They just got unlucky that someone noticed the performance decrease, investigated, and found it by pure accident.
@@Dratchev241 Hah, because obviously Chinese intelligence agencies would use a fucking Chinese name to do this, right? If anything, this was probably done by American intelligence agencies using an Asian name to generate propaganda headlines if it didn't work.
@@Dratchev241 You don't need a state actor to create and coordinate multiple accounts. There is no proof that those multiple accounts requesting the update in client programs belonged to different people; it only requires coordination, and one person can easily coordinate with themselves.
Your point 1 (pressure on the single maintainer of a core library to accept other contributors) is actually what you should expect: you don't want a single unsupervised individual to be the alpha and omega of such a central library with zero backup. That's what actually scares me about OSS and Linux. There are so many core technical libraries that are maintained by a very small group of people, and any rogue in one of those core groups can cause wide damage. There are way too few people interested in working in those dark places where you get no recognition when everything works and all the blame when something goes bad in another piece of software, just because someone else used your library in a way it was not supposed to be used.
Point 2 is the actual issue: a porous build process. Test files stored in the test directory should never find a way into the compiled binary. There should be a stronghold protecting the source path and compilation tools so that nothing outside of that scope can get into the binary files that are signed and published.
That is definitely one of the most crazy, complex, and sophisticated backdoor injection attempts I've ever seen in my life. The engineering behind it is very impressive. The guy who discovered it deserves a reward; he just literally saved the world.
It is slightly more subtle, from what I understand. It is not that openssh uses liblzma, but liblzma is used in systemd. On systems where openssh is patched to use systemd as well, you end up with a security issue. This appears to be limited to the combination of x86_64 and linux and systemd. That is still a significant fraction of all linux systems.
@@ok-tr1nw this is a very specific backdoor that only affects software that uses lzma and does RSA with this specific library. How many programs fit that description...
so if I'm understanding this, the intent is to be able to use a specific key that doesn't have to be installed [legitimately] on a system (no direct attack necessary) to effectively gain "authorized" access to a system. to simplify it, this behaves like a master key or a master lock (iykwim).
Not exactly technically speaking, but the exact same effect in practice. The difference, fortunately, being that this way the attack can only happen under specific conditions. For example, some environment variables can make this not run, since the attack depends on specific code being injected. Of course, even with this stroke of "luck" it still is one of the most serious attacks/vulnerabilities found in the last couple of years.
Apologies if I missed it, but it should be clarified that this did not make it into any production releases. The fact that it was caught before release is a demonstration of the strength of the open source model.
I'd say we got very, very lucky that someone noticed some odd behavior and went ten extra miles tracking it down. In fact, I'd say this was a pretty big failure in terms of the model, as it looks like there was no verification that the source being used was even the same as the source in the repository (you know, the stuff people usually check for things like this). If it had been compiled from the repo, it wouldn't have been an issue at all (vestigial, non-executing code aside). Kind of clever, really: leave the easiest-to-see bits out of the source, slap them in the release tarball, and as long as the behavior isn't too different from the original, nobody notices (and if it is, you can probably just patch and blame it on something else, like they did in 5.6.1). So all the normal checks and balances that would normally catch this in OSS (as far as I know) just don't work here. Only weeks after release, and by blind luck, did this get caught, and that's not something to celebrate. Say what you will about closed source, but at least the back doors in their software are intended by the company ;)
openSUSE was vulnerable. MicroOS was vulnerable as a result. It's an OS built for containers, targeted at scalable services and bigger entities. Curious how many kube clusters got taken over.
@@haifutter4166 As someone stated here, Arch doesn't use patched OpenSSH, and since Manjaro pulls directly from Arch, I would say it didn't affect it. Fedora, on the other hand, was affected in Fedora 40 beta in a few flavours, rawhide and Silverblue (iirc?) being two of those. Older versions were not affected.
The malicious commit was made Feb 23 and entered xz-utils stable on Feb 24. Detected in Debian-unstable. 5 points for early detection, before it should have been distributed to production anywhere?
Besides it being obviously malicious code, let's take a moment to appreciate the art of creating such convoluted injections and doing enough social engineering to get this artwork successfully pulled into a FLOSS repo, before the eyes of all the watchers. That's pure hackercraft and deserves low bows, regardless of the colour of hat you're wearing.
Wow - this is absolutely crazy. I never would have thought that this could pass a review. Obfuscated code… I have thought several times - e.g. when people talk about hardware wallets’ software being open source - that you don’t know what ends up on the device, and even if the source is fine, you don’t know what happens during the build process. A case like this shows me this suspicion isn’t that unrealistic. Thank you for the video - I subscribed.
These videos are great. But please add more context or explanation about how these exploits could be used. I forwarded your previous video on the kernel exploit to management and they came back saying "we don't understand the implications of this".
It’s pretty simple: they could ssh into your server. What you have running on that server, and whether or not they could perform privilege escalation, would be required knowledge to determine what the bad actor “could do”, and from that you would determine the implications.
Both of these are only doable if the attacker already has access to the network, and most companies have an encrypted, secure VPN, with ssh reachable only inside the VPN, right? So both are relatively harmless from external attacks; they do have to worry about sabotage from inside. Even then, records would be kept and the culprits found.
How does management feel about everyone in the company who uses those servers having access to all of their files, including customer data? From an insider threat perspective, that's what we're talking about. A larger concern would be if one site gets hacked, then everything on that server would be hacked.
@@italianbasegard that doesn't provide enough actionable context. It should include, in bold, the identifiers of the compromised releases. It wouldn't take too much effort to highlight that this is only about "RSA's implementation" + "liblzma [...] tarballs for 5.6.0 and 5.6.1" and NOT just "SSH libraries".
@@italianbasegard I don't think they would require privilege escalation. From my understanding of the video, they could hypothetically log in as root, using their key, and get root access.
@@Cobalt985 That's not the whole truth. Arch was affected, but OpenSSH on Arch was not affected. Quote from Arch news: "Arch does not directly link openssh to liblzma, and thus this attack vector is not possible". But in the xz package history you can clearly see that the affected upstream versions 5.6.0 and 5.6.1 were released by Arch. Still and forever a user of Arch 😍
No. The problem is patching security-sensitive programs to use third-party libraries and, of course, letting the most important daemon on the system (systemd), the one that handles everything, use third-party libraries. There's a reason people monitor openssh a lot more than some compression library, and this problem wouldn't have been found for who knows how long if a person's OCD hadn't kicked in over a program taking too long to open (half a second). It shouldn't have happened, and on systems that didn't screw around just to make a config file look prettier, it didn't.
@@MrFujinko Actually... I wonder if someone could Chainfuck EFI. You just need a lib which is most likely run as root. So the same crap, but write code which drops an EFI binary into /boot and hooks it into NVRAM and the loader, which in turn reloads or mimics GRUB. Part of this EFI binary goes straight to Ring -3 on AMD or Ring -1 on Intel, acting as a hypervisor. The PC boots normally, but the Chainfucker is in control of the system, opening ports outside Linux's control. It's just a theory though, I am no expert.
The backdoor payload was hidden among dozens of innocent compressed test files, and the hook to execute the script that embedded the payload was slipped into a makefile in the release tarball but not committed to Git. We absolutely should be in the habit of diffing our release tarballs vs. the Git tags from now on, but we're going to need to come up with a smart way of analyzing any high-entropy files checked into Git for potential hidden payloads.
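The tarball-vs-tag comparison suggested above can be sketched as a toy demo. This only compares file *lists* of two archives (a real check would also diff contents), and the file names here are invented for illustration, echoing how the backdoor's hook shipped in a build-to-host.m4 present only in the release tarball:

```python
import io
import tarfile

def make_tar(names):
    """Build an in-memory .tar.gz containing the given file names."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tf:
        for name in names:
            data = b"contents\n"
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))
    buf.seek(0)
    return buf

def extra_files(git_tar, release_tar):
    """Names present in the release tarball but absent from the Git export."""
    with tarfile.open(fileobj=git_tar) as a, tarfile.open(fileobj=release_tar) as b:
        return sorted(set(b.getnames()) - set(a.getnames()))

# Stand-ins: what `git archive <tag>` produced vs. the published tarball.
from_git = make_tar(["pkg/Makefile.am", "pkg/configure.ac"])
release = make_tar(["pkg/Makefile.am", "pkg/configure.ac", "pkg/m4/build-to-host.m4"])
print(extra_files(from_git, release))  # → ['pkg/m4/build-to-host.m4']
```

With real projects you'd feed this the `git archive` output and the downloaded release tarball instead of synthetic archives.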
@@CFSworks I feel like it's a failure of keeping up with the times though! This exploit is a complete dead end if you policy-out artifacts in code repos and instead only pull them into containerized environments when the "tests" are actually run. Not to say it's all the maintainer's fault; the OG seemed pretty burnt out and overloaded, and there aren't a lot of up-to-date devops guys with a golden heart and a clear schedule to bring these OSS projects into the modern CI/CD era.
@@twentylush I suspect that an attacker trying to pull off an attack like this in the future would just switch to hiding the payload in a non-artifact binary file instead. Perhaps a base64 chunk hidden in the metadata of the project's logo, or a loose object under .git. There are too many opportunities for outright steganography here. :/
@@absurdengineering Definitely. But here we had a malicious maintainer, so even with auto-upload of the git-archive output, they would have just secretly replaced the tarball 3 seconds afterward with the backdoored version.
Microsoft employees have names and addresses; they'd be found in an audit, fired, blacklisted and possibly jailed for something like this. Open source maintainers? Good luck going after a dude you're not even sure exists on the other side of the world.
That is actually a huge problem for OSS. Many "popular OSS libraries" (aka core building blocks used by hundreds of projects) are actually not popular at all. They are maintained by one or two people at best, and everyone else just blindly integrates those libraries into their project, trusting that some magic being has independently reviewed and validated what those guys did. For that exact reason those libraries are very easy to infiltrate and alter for rapid damage to a lot of projects at once. There is no actual incentive outside of pure benevolence for someone to actively participate in securing some of the most technical and sensitive core blocks of the OSS architecture. Being open or closed software is irrelevant here; it's an organisational issue of the dev teams. Everybody wants a reviewed binary that they can trust, but nobody wants to be the one who spends time doing the actual review.
I found a keylogger in some custom keyboards back around 1990. Some of the extended function keys were producing the wrong scan-code combinations. While working out what was going on, I found it. It was a bit of a rabbit-hole.
That sounds implausible. When keyboards weren't even USB yet, and the Internet didn't have a WWW yet, I don't see how a keylogger would do much good on a peripheral unless it was combined with a sneakernet.
@@AllAmericanGuyExpert The methods of data retrieval for keyloggers in pre-USB and pre-WWW era would differ from those today, but the fundamental utility of capturing keystrokes covertly does not. If such keyboards existed, they would be used much differently, focusing more on targeted data collection from specific individuals or systems, while relying on physical access for data retrieval. I understand the skepticism but the absence of the WWW doesn't necessarily render the concept of hardware keylogging implausible!
Unpopular opinion: workflow 1 pulls the repository, builds everything and then tests everything; workflow 2 pulls the repo without using workflow 1's output, builds, and publishes. That way malicious code must be in the build commands or toolchain, where it will be noticed; tests just test and cannot affect the release.
The malicious code is in the build commands, added to a random .m4 file in the release tarball (not Git) only. The payload was just hidden among the test files since that's a non-suspicious location to put a large binary blob.
@@CFSworks You can still implement segregation of testing and building, which to me, as a simple physicist, seems like the natural thing to do if you care about security. Same reason the Manhattan Project was broken up, as well as any other modern project with high security.
@@1495978707 For sure, but I think that's done already. The tests are primarily run by the XZ project's developers and the CI system when each change is made. The backdoor is only targeting RPM+DEB package builders, and usually those buildsystems don't run the tests.
I'm gonna be honest... I am subscribed to like 200 YouTube channels, and I am starting to get the most excited when you have an upload. Super interesting stuff, some of it above my head, but I have a feeling little things like this are going to be of outsized importance in the years to come.
Thanks for putting this all into language that normal IT guys can understand without needing to have much experience with Linux library management and deep security knowledge.
Some more additions: as far as the investigation of this problem has gone to this point, this backdoor was introduced around the 5.6.0 release. You can run "xz --version" to check what version of liblzma you run. If it is pre-5.6.0 you _may_ be safe (pending final analysis), but if you are on >= 5.6.0, you've got the backdoor. At this point, it is recommended to downgrade to a version prior to 5.6.0. Regard this post as an immediate stop-gap measure for your system, not a proper security fix of the issue.
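As a rough triage sketch of the version check described above: feed it the first line of `xz --version` output and it flags the two known-backdoored releases, 5.6.0 and 5.6.1. The "xz (XZ Utils) X.Y.Z" format is the usual one, but treat that as an assumption, not a guarantee:

```python
import re

# The two releases known to ship the backdoored liblzma.
BACKDOORED = {(5, 6, 0), (5, 6, 1)}

def parse_version(version_line: str) -> tuple:
    """Pull the first X.Y.Z version number out of the line."""
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", version_line)
    if match is None:
        raise ValueError("no version number found")
    return tuple(int(part) for part in match.groups())

def is_backdoored(version_line: str) -> bool:
    return parse_version(version_line) in BACKDOORED

print(is_backdoored("xz (XZ Utils) 5.6.1"))  # → True
print(is_backdoored("xz (XZ Utils) 5.4.6"))  # → False
```

In practice you'd pipe `xz --version` into this; a "False" here only means the version is outside the known range, not that the system is clean.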
Manjaro received a fix. Also, do you use it on a server? Regular desktops aren't affected, as there's little reason to run an ssh server there, and with an RSA key.
Andres Freund isn’t a security researcher, but he’s not “some guy” either 😅 He’s a principal engineer at Microsoft and a PostgreSQL developer. (Not a developer who uses Postgres, but a developer who builds and maintains Postgres itself.)
Thanks for breaking down this issue so fast! I was made aware of it but wanted to understand how it was done and had a hard time doing that just reading the sources.
@@Rozenmorte so it lasted barely a month. Is it tasteless to make an infant mortality joke about backdoor malware? Eh, either way, just remember that next month, when someone tries to say open source is less secure, that even pretty bloody advanced backdoors like this don't last long enough to be picked up by a single release cycle.
Yes, it is that damn systemd crap breaking subsystem isolation and hoping for the best again. But the core problem of patch reviews in open software projects and maintainer responsibility remains valid regardless.
Found a similar issue (in our libraries) back in 2017, in the city, which is owned by the state. It doesn't help when the admins are using out-of-date software to *secure* those computers.
Question : when was this backdoor added originally? It'd be quite interesting to compare how long advanced backdoors like this exist in Open Source compared to known backdoors added to closed source software Edit : couldn't find a date skimming through the video, but XZ 5.6 was released in February from a quick search, so at least about a month
Don't worry, the backdoors you should worry about won't even be found; only honeypots like this, to keep the open source community smug, overconfident and blind to the real threats.
Saw the article and wasn't sure if I should click on a video about an article I read, and security issues that I've already taken care of, but this is a really great informative video. Thanks for making it!
@@fwfy_ Sigh... Stuff like this really needs a /s on the internet, but thanks for making me look/hyperventilate. There are 15 CVEs mentioning WireGuard, three from 2024, all of which are tagged as vulns in Cilium specifically.
After this, you can be sure people will verify all test files now, to see if other libs were "infected" the same way. Expect some funny updates next week.
There is going to be a very long conversation about a lot of issues raised by this incident: retirement and transition of FOSS project leadership, trust of upstream build systems, and lots more. I don't have answers to any of that, but I do have a simple request for distro maintainers. Can we not take a well-engineered, mission-critical bit of code from the security-focused OpenBSD folks and then add a bunch of extra library dependencies that most people don't need, want or expect? sshd is the last bit of software I expect maintainers to be adding random patches and additional library dependencies to. ldd on Arch lists 12 fewer dynamic libraries than on my Debian stable systems, but for my purposes the Arch package lacks nothing. I think distro users need to push back on this stuff and not demand their pet feature be added to core libs.
It appears that this backdoor fails to run properly on NixOS, due to the non-standard FHS and the dynamic linker by default not working as expected compared to vanilla Linux. But steps are being taken in nixpkgs to downgrade xz anyway. Malware targeting vanilla Linux has a hard time working on NixOS without careful forethought. I know that I am talking about security through obscurity, but sometimes obscurity just works. Also, on NixOS liblzma is not used as a dependency of sshd or openssh.
I saw a few videos about something called xz pop up on my feed and I thought, I gotta get a proper rundown from Low Level Learning. Thanks for being a go-to
If I understood it right, it was a random compressed file which was used in a unit test to "test" some stuff. But instead it executes some code, which changes the build process (unit tests are usually run during the build process), which then changes the linker, which hooks into a call to a third-party library, which can then grant access to the malicious user. It wasn't even clear that it was a non-readable binary file; at first glance it just looked like a compressed file for a specific unit test that tests decompression.
So this dude is officially listed as a developer for the project: "The current project members are Lasse Collin and Jia Tan. Jia became a co-maintainer for the XZ projects in 2022." Unless he's got some info on how his account got compromised, everything in the history of mankind he's ever touched needs to be reverted and his name plastered on security blogs at the top of Google results. In either case, an obvious lesson to OSS maintainers in general.
It is hard to believe that his account got compromised: the oldest commit related to this incident is the one that introduced the binaries, and it dates from January 23rd. It is a bit hard to believe that he, a co-maintainer, did not notice strange commits coming from his account during the last two months.
Doesn't affect most distros (Debian, etc.) unless you're on testing branches or a rolling release. Arch Linux has already pushed an updated package to the repos. Sad to see this happen, but glad it was found and fixed before damage was done.
The title is misleading. The back door was not in anything relating to OpenSSH upstream. Distros patch the OpenSSH server code to include support for notifying systemd of logins. This is not built into OpenSSH, as the OpenSSH devs would never include such a silly feature. This is Linux distros shoehorning insecure systemd nonsense into everything, and yet again, it has burned users.
@@nou712 openssh should be built without PAM support if you intend to use it on a remote accessible machine. Linux and FreeBSD PAM are both hot messes.
Yeah, it's a great video, but that title makes me sad. It's misleading to get more clicks. I mean, I know he wants to grow the channel and all, but throwing SSH under the bus to get them?... The OpenSSH project did nothing wrong here.
Valgrind isn't just about leak detection. The main tool, memcheck, also validates that memory is initialized when it is read and that reads and writes are to valid memory. Both of these are usually far more serious than leaks. In this case it detected an invalid write below the stack pointer.
I’ve been writing some SSH code in C using their header files and such. I have always said that OpenSSH is way too big: it is over 130,000 lines of C code. For reference, Amazon was able to make an SSH library that’s as good as OpenSSH in only about 9,000 lines. Let this be a lesson that bloated software is hard to manage and that complexity gives attackers room to take advantage of it.
It wasn't an SSH library though, so your point isn't really relevant here. Complexity does create security challenges, but this wasn't complexity that contributed to this issue.
Awesome report! The best way I was able to explain this to a non-technical person (and I'm barely able to follow it myself) was as follows: imagine the index at the back of a book. The entry you want is actually on page 5, but the index has been changed to say page 11. That would be obvious, because the altered index page isn't in the same font, etc., so you'd be instantly suspicious. Now imagine someone instead hacked the printer to print "11" instead of "5": you'd never know, and you'd go to the wrong page instead of the "real" destination. I realise this is a massive simplification and makes it sound more trivial (while also not being exactly correct), but it's the best analogy I could draw.
No matter what, you have to admire the ingenuity behind this. If mistakes weren't made, it would probably have not been found by anyone. Hidden in plain sight. You really have to know your stuff to find an attack vector like this.
If I read it right, this 'just a guy' actually was a maintainer of the Fedora Linux distribution, and the malicious actor managed to become a maintainer of the XZ library by posing as an active developer for years. The 'just a guy' ran Valgrind memory tests to verify the library was safe to include, and it failed. The malicious actor tried to convince everyone the Valgrind errors were GCC bugs to hide this, and 'just a guy' even helped fix the memory errors before finding out the true nature of the patch. Edit: I can no longer find the mailing list post that confirms the version above. The newer posts by Andres Freund (the 'just a guy') tell the story a little differently. See my response below for sources.
This is really bad... He's one of the main maintainers, and he has also forked a bunch of other compression-related libraries. There will be more to come in the coming days, I'm sure of it. Someone found he's probably Chinese too; could be a state actor.
I was listening to Darknet Diaries and there was an episode about a pen tester who was trying to gain physical access to some US company. They had a Russian guy on their team who decided to get lunch at a Chinese restaurant located in the small town near the target location; he was just doing some general recon. The Russian guy happened to speak and read Mandarin and noticed the staff were Chinese, and the Chinese-language menu at the restaurant had a bunch of very traditional Chinese dishes (in addition to the normal American Chinese menu items), which he thought was odd because there wasn't a big Chinese population in this small town... Turns out this restaurant was a Chinese intelligence operation, and the Chinese nationals on staff who were working at the building they were pen testing were going to the restaurant to eat and give up company secrets to the CCP. So long story short, all Chinese nationals are basically spies, not willingly, but the CCP will literally disappear entire families if they call on one of their citizens to help the CCP in their espionage efforts and they refuse. So it's very likely this guy was blackmailed or worse, and he'll likely face hell now that he's been caught trying to insert this backdoor on behalf of the CCP. It's terrible, but until the CCP falls, all Chinese nationals are potential spies that can be 'activated', for lack of a better word, if the CCP decides to threaten them. If you don't believe this stuff is real, just watch The China Show; it's scary good at exposing the realities of the CCP and life in modern China.
We've gotten WAY too lazy when checking whether commits are actually good. There is a reason Linus sometimes comes across as REALLY MEAN because he actually wants to make sure all code going into the Kernel is as good as possible. We need to be more like Linus.
Did you notice that this was removed in that same commit? Like they didn't want people to report vulnerabilities. WTF: "While both options are available, we prefer email. In any case, please provide a clear description of the vulnerability including: - Affected versions of XZ Utils - Estimated severity (low, moderate, high, critical) - Steps to recreate the vulnerability - All relevant files (core dumps, build logs, input files, etc.)"
We could hope this was merely an arsehole who wanted to highlight a security vulnerability -- perhaps for reasons of contemporary politics -- which they have now succeeded in. It is great that we still have geeks who dig into anything that seems new just for the sake of digging.
And that, friends, was why in our shop build files had to be as transparent as possible. They got checked over to make sure they did ONLY what was needed and NO tricks, etc. Now the fun part was FOSS libraries.
Where I work, all OSS dependencies are built with CMake. If they only use autotools, we port the build to CMake. It’s crazy how much cruft is in there for irrelevant platforms of the past. Truth is, most GNU code that’s “autotools only” does just fine with rather simple CMake scripts that anyone can understand.
The weird thing is that the malicious stuff is only found in the hosted release tarballs of the source code; it is not found in the git repository itself. Very odd.
Indeed. This is almost certainly spook stuff -- either having someone under a false identity become the evil contributor, or blackmailing a real contributor to become your pawn (intelligence agencies worldwide could credibly threaten any number of fates-worse-than-death to people in the countries they control). And chances are reasonable it may even be a western one (or whoever claims to be 'the good guys' or 'on your side' while attacking the security of humanity as a whole wherever you live), though there already seem to be efforts to use it to stoke jingoism about collaborating with people of the 'wrong' country or ethnicity. I really hate when these international power games hurt us all.
It isn't in ssh itself; it is in liblzma, which gets pulled into sshd on some distros due to distro-specific modifications. (Once loaded, liblzma replaces some of OpenSSH's functions with its own.)
Part of the exploit was rolled into released source tarballs on the Github website (but not the Git repo IIRC). If upstream package maintainers used the provided tarballs to build the binary packages and then made the signatures available, they would be 'trusted' regardless. In which case, you're screwed. There are multiple issues the perpetrator exploited to plant this backdoor; there's going to be a ton of lessons learned.
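One concrete lesson from the tarball angle: a release tarball can be checked against the git tag it claims to come from. A toy sketch using a local throwaway repo (all file names invented, though in the real case the extra file was an autotools m4 script present only in the tarball):

```shell
#!/bin/sh
# Toy demo: detect a file that exists in a release tarball but not in
# the git tag it was supposedly generated from.
set -e
git init -q repo
git -C repo config user.email you@example.com
git -C repo config user.name you
echo 'real source' > repo/main.c
git -C repo add main.c
git -C repo commit -qm import
git -C repo tag v1.0

# The "official" release tarball, with one extra file git never saw.
mkdir -p release/repo-1.0
cp repo/main.c release/repo-1.0/
printf 'evil\n' > release/repo-1.0/build-to-host.m4
tar -C release -cf repo-1.0.tar repo-1.0

# Verification: compare the tarball's file list to the tag's file list.
tar -tf repo-1.0.tar | sed 's|^repo-1.0/||' | grep -v '^$' | sort > tarball.txt
git -C repo ls-tree -r --name-only v1.0 | sort > tag.txt
diff tag.txt tarball.txt || echo "tarball contains files not in git"
```

In practice generated files (configure scripts, etc.) legitimately differ between tag and tarball, which is exactly the noise the attacker hid in; but the comparison still narrows review to the delta.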
Holy crap I just noticed this was only 5hrs ago, and it *has* been a CRAZY week, especially for me wanting to potentially shift to pursuing a career in cybersec *and* only finding your channel at the start of this same week 😅