The interviewer basically asking "why is your software a world-wide single point of failure?" was one of the best questions I've heard a reporter ask in a while. Shame it stayed unanswered.
@@strayedaway19 That makes absolutely no sense. They can't explain why their software bug messed things up because they don't make the devices?? But the devices weren't at fault; their software was, as he just stated. He absolutely could have answered that question with a straight answer.
Because capitalism. Big fish buys small fish, big fish gets so big that most "competent and professional" companies always choose the big fish to reassure clients, and everyone ends up depending on the big fish. This is the case in many sectors.
I am surprised companies adopted two services so widely. The Windows and CrowdStrike combination, I mean. This is a cautionary tale of "putting all your eggs in one basket."
You forgot about the important part: no responsibility. So while your points are valid, they do not apply. Why would management care? They will receive their money. If the company gets dissolved, they will move somewhere else. (This CEO had similar problems at McAfee.) Bottom-floor workers will pay the price, but who cares?
Yeah, I also do software development and can't understand how this wasn't thoroughly tested first in an isolated staging environment that mimics the vast ecosystems they deploy to in production, considering how many systems it could impact. And if it was tested: how could this be missed, when it crashed basically everything? Either way, if it's a bug, I feel like whoever was responsible for it is going to get the blame, when really it's whoever was managing and establishing the testing and deployment practices that should get the heat.
This is a legendary mistake. It'll go down in history as one of the most incompetent mistakes in all of cybersecurity. They did not test the patch; they threw it right into production on a Friday. Now we all gotta deal with this mess.
That's true, and CrowdStrike will shoulder the blame in the public eye, but Microsoft is more culpable imo. They're the ones regular people and businesses rely on. CrowdStrike is just a third-party agent handling the niche responsibility of monitoring processes for hacks, etc. Microsoft is the one responsible for redundancy whenever a process with that low level of access has a bug.
I've been working in IT for over 20 years, and the cardinal rule is that you never make a change on a Friday. This is what happens when you hire too many contractors who are underpaid, lazy, and take no accountability. Don't believe anything this guy is saying; they didn't properly test this patch. I am blown away that they did not have a development environment where this was thoroughly tested before being rolled out so haphazardly.
Are you employing any Chinese nationals? Dig deep, this may have been a test to see how badly a problem with your software could damage US communications.
I assume it's a little different in cybersecurity. They must have 24/7 shifts like police/hospitals because a cyberattack could happen at any time. But yeah, how the heck did this not get picked up before release?
Mmm... as a non-tech guy I wonder when it is supposed to be good to implement changes. On Monday everyone is frantically returning to normal work, on Friday half the world goes on weekend. I guess it should be Wednesday?
I am a cybersecurity professional, and this will go down in legend. You never roll out updates without pre-deployment testing; that is like skydiving without testing your parachute. It is literally federal regulation in the government sector. And you also never roll out updates all at once; you do it in phases to avoid the planet coming to a halt like this. If there's anything positive, it's a reminder to all of us cybersecurity people about the criticality of our profession, and of all IT folks in general.
It doesn't even matter if they had tested it for every possible scenario they could think of; this will happen, and will happen more often as it becomes more popular. Everyone is so dependent on CDNs to make the cloud work. Been saying this for years.
There need to be more providers of cloud computing and security. I remember a couple of months back when AWS went down and it shut down the MTA; most people in NYC were stuck wherever they were until it came back online. Someone must have changed the gateway address.
@NicolasPare You are perceiving this solely from your own POV and your personal situation, not the CEO's. Cash ain't no shi* when you've failed at such a big endeavor. The mental and emotional strain is more than many can handle, and some end their lives.
@@kuyab9122 This is what most people don't understand; they don't know the feeling because they're not in the position. We work day and night, CISOs are having sleepless nights and going to therapy because of ransomware attacks, people pulling their hair out. Then you have a bunch of people on the outside talking about how you can retire with a bunch of money... It's super annoying!
I got a 7-hour paid break over this last night while working from home. Where I work, when we have system issues I'm supposed to tell the customer they are doing updates, so when I hear the "update" excuse I never believe it anymore. I believe this was a hack or a data breach, or was used to get people's attention focused on this and off something else. The story is NEVER how the news claims it is, always BS.
Not really, when the issue is third-party software having deep-level access to Windows, which it shouldn't have but constantly fights for as if it's a right. Microsoft should only really be held accountable for letting their "partners" cloud the fact that no third-party software should be capable of bricking entire systems due to a bad update.
To give them the benefit of the doubt, they are very specifically and only a cybersecurity firm. That's the one case where you really NEED forced updates, ASAP. That's also why it's so important those updates are properly tested and everyone is sure they won't cause messes like this. Honestly, if they aren't constantly updating their service, they pretty much defeat the reason they exist and allow vulnerabilities. Of course, all that said, look at the reality of the situation now. A cybersecurity firm's update shouldn't be able to brick entire enterprise systems. And if it does have to be that deeply ingrained and connected to do its job, it needs to be handled with extreme care and caution every time they push stuff out. That a situation like this arises renders their entire existence kinda moot.
@@Name-gi8dr Let's assume it could have been caused by some packet corruption during a network call and it was an act-of-god-level incident (I'm not saying it was). No matter how dumb the issue is, deploying to a small group first before deploying to everyone at least limits the blast radius.
@@AKumar-co7oe Yeah. As the CEO said, there are many configurations. They could have identified cohorts of similar configurations and pushed updates in batches. But this is just wild. Also, the null file was confirmed by multiple people. And update hashes are signed by the companies, so there's no chance of corruption in transit. They definitely pushed a faulty update to everyone at once.
There is more than just one company; CrowdStrike is just one of the most popular. Think of Google: yeah, there are a bunch of different search engines (like dozens), but the most popular gets the benefit of the most data and the best reputation, leading to Google having over a 90% market share for search and nothing else coming close. At that point, what do you even do to protect against monopolies? Because yeah, 90% market share is pretty bad and means Google has a lot of control over how most people access the internet, and because it's the most popular, even Apple products choose it as the default search engine. It's similar with CrowdStrike. We need to get a lot more creative with regulation for digital landscapes, because just being popular isn't clear grounds to fault a company. Another interesting tidbit: the deep kernel-level access that Windows allows, which partially led to this, is fought for by companies like CrowdStrike on the basis that Windows closing off its OS would be monopoly behavior, and they scream antitrust. It's all a complete mess right now. We really need to reevaluate how we even approach digital markets.
Neither Microsoft nor CrowdStrike has a monopoly. It's not like others can't do what they do; there are other companies as well that provide the same services, but people prefer Microsoft. In the case of CrowdStrike, I didn't even know a company with that name existed; I never installed the software and never got the issue.
The problem is that even if CrowdStrike fixed the issue on their end, the endpoint machines will not receive the update, since those machines are not booting due to the driver issue. Unfortunately for the companies, they will have to rely on their local IT personnel to fix each and every computer that was affected. So imagine companies with thousands of machines affected; those have to be fixed manually by the IT guys...
@@xiaolixi807 the problem is that the OS doesn't fully boot in many cases, at least from what I've seen. If Windows doesn't boot, the broken Crowdstrike driver/file can't be updated.
@@xiaolixi807 Dude, I did a fresh start on Windows because of this issue, and now I have to reinstall all my Steam games. This affected my personal gaming PC on a Windows 11 dev build; I own an RTX 4090 and can't even use it now... I need help with my error, it still gives me the BSOD.
@@scottwilliams5642 You think it's what it was? I had a few of those and I never recovered. I had to excuse myself. It's hard to recover if you get one.
There is no patch. You have to slave the drive, delete the .sys file, reboot into safe mode, and restore to Thursday. How you did this to 2000 systems in 24 hours makes me think you are FOS.
@@AlexClo-x7k- What? From the recovery environment you can run the command prompt, cd into the directory Windows\System32\drivers\CrowdStrike\ and run "del C-00000291", and it auto-completes the file name. Getting past BitLocker takes more time than running the command, and that takes ~2 minutes max. I can see how this could be almost automated if you are running mostly virtual machines. Not being able to copy/paste the recovery key was the biggest challenge. Auto-mount a CD with a .bat file on it, and you could restore a device in seconds if you have the BitLocker information already pulled up. The problem file has a static name and location.
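For what it's worth, a minimal sketch of how that near-automation could look once a machine is in Safe Mode or a recovery prompt with the volume unlocked. This is a hypothetical helper built from the path and filename quoted in this thread, not an official CrowdStrike or Microsoft tool:

```python
# Hypothetical cleanup script: automates the manual workaround described
# above. Assumes the machine already booted to Safe Mode / recovery and
# the volume is unlocked (BitLocker still needs its recovery key first).
import glob
import os

# Path and file prefix are the ones reported in this thread.
CS_DIR = r"C:\Windows\System32\drivers\CrowdStrike"
BAD_PREFIX = "C-00000291"

def remove_bad_channel_files() -> list[str]:
    """Delete the problematic channel file(s) and return what was removed."""
    removed = []
    for path in glob.glob(os.path.join(CS_DIR, BAD_PREFIX + "*")):
        os.remove(path)
        removed.append(path)
    return removed

if __name__ == "__main__":
    deleted = remove_bad_channel_files()
    print(f"Removed {len(deleted)} file(s): {deleted}")
```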
I can certainly confirm this. Friday was a very long day!! I work in healthcare IT. It wasn't a difficult fix, but it was "manual" and time-consuming in nature, having to touch every machine affected.
Basically every CEO ever. Though I don't think I've seen anything Musk makes really go under; if he stopped making EVs and made real cars he'd be even richer.
A mistake doesn't mean being fired. This could have happened to any software company. Updates have bugs. But yes, an oversight indeed, and the update needed to be tested in a controlled environment before release.
@@wildeturkey2006 There will likely be large lawsuits. It will be VERY interesting to see what happens here. To your point, this could literally bankrupt CrowdStrike.
Imagine paying for a cure worse than the disease. Now I get why some small businesses never bother with it at all; just Windows Defender and you're gucci.
It is insane that so many companies rely on this one corporation. Is there literally no one else doing what they do? This needs to be legally investigated.
All our data has been stolen from so many companies in the past 3 years, yet this company's stock has gone up to almost 400 bucks with a 600 PE? Are we to believe that none of the companies that had a breach use CrowdStrike, yet the entire world was affected within 10 minutes of their rollout? I noticed the outage at one of my banks last night, but there was no news about anything at that time, so I just blamed it on normal maintenance at the bank and didn't think anything of it... until I woke up.
The same thing happened in Canada with Rogers internet. They released an update that took down the entire country's banking system. Nobody could make any purchases for like 1-2 days.
@christaran Not insane when you understand CrowdStrike was the company that followed the establishment narrative and concluded that Russia hacked the DNC servers, when it was clear that the transfer speeds of the data out of the DNC servers could only have been achieved on site. This is how they are rewarded: they get cybersecurity contracts from all the establishment companies.
@christaran YouTube continues to delete my comment about CrowdStrike's connection to the DNC hack and the "Russia did it" narrative. I must be over the target.
"how could one bug cause this much problems worldwide?" wrong question asked to a guy whose money comes from the excessive worldwide dependence on his company
@@longkesh1971 Totally agree... I switched off auto-update on my system... Must always have a backup, so that we can always go back... for salvage...
Almost everything runs on Windows. If something as fatal as a bad driver load crashes the OS, then you get worldwide boot loops on the servers that make everything run.
@@stevenm732 Don't most things run on Linux servers? At least that's what everyone says. If the same bug happened in their Linux client, the impact could be a lot bigger.
CrowdStrike's clients are too cheap to hire competent IT teams to manage affairs in-house, because it is easier and more cost-effective to give CrowdStrike the responsibility.
CrowdStrike is negligent. Obviously they don't test their patches before deployment. Shameful. CrowdStrike and Microsoft should enforce rigorous testing before deploying patches from any company. They should also have a significant rollback process to be enforced at the first sign of a problem.
The CrowdStrike CEO is giving us a master class in paltering; the misuse of facts to tell a lie. His line that they "remediated the issue" means they stopped pushing the toxic patch long after everyone was staring at a BSOD. The fix requires going from machine to machine and manually removing the patch, which is four simple steps, but these machines are also running BitLocker which complicates the fix. A rolling release of patches would have greatly limited the damage. Push a patch to one area, wait an hour or so, then push it region by region. If you're a remote worker and you've got a BSOD on your laptop caused by this issue here's the fix: 1. Get a box 2. Print a UPS label 3. Enjoy a couple weeks on vacation
Yes, thank you. I don't envy my company's IT; they have to work the weekend to remove the toxic files from almost 10k servers, and it's a small team. Not only that, many remote workers like me, whose Windows machines are managed by CrowdStrike, can't even turn the machine on in safe mode and have to travel back to the office to get this fixed. Getting screamed at by my non-technical clients about their servers was the cherry on top for me.
I work remote and it's a simple fix, albeit annoying, especially if your hard drive is encrypted with BitLocker. But it's just a simple delete of a few files. Might be a lot for an end user, but with guidance, it's fine. The outage isn't fine. Shouldn't have happened to begin with, for sure.
@@KMFDM_Kid2000 Yes, I tried that, but the startup options have all disappeared. I called the IT department; they told me it was an issue a specific Windows version is facing, that they can't do anything remotely, and that I have to get to the office so they can remove my SSD and delete those files manually. I tried most things, even tried to enter safe mode through cmd; nothing is working.
You can't roll out a fix if the system has blue screened. A reboot won't fix it, you manually have to go into safe mode and delete a file before it can reboot. Not good guys, test your software better!
Sometimes you can; if it's kernel-level, then it could prompt an update on boot. Most of this cybersecurity and encryption software is kernel-level. It's then down to the networks that the affected systems are running on as to whether they allow the network connections for the update.
So let me get this straight: this man has the power to grind this planet to a halt on a whim? I sit sometimes and giggle at how terrible we are as a society about having plan Bs ready to go. He better get ready to get sued into oblivion.
Wait until you hear about how many organizations rely on Microsoft / Google / Apple / Amazon / Adobe / "insert big tech company here"... Any reliance on IT systems comes with great risks, it's called supply chain risks. Do you want to benefit from the advantages of digitalization and accept the risks or throw away all IT systems and live in the Stone Age?
I say we shouldn't trust computers with real wealth. They are OK for games, research, and movies, but to completely entrust all America's wealth to computers, like digital coin, is putting all your eggs in one basket.
@@sibu7 That's like saying we can't hold reckless drivers accountable because "vehicles come with inherent risks. We can't hold drivers accountable or we'll go back to the stone age!"
CrowdStrike should not be surprised like this. I mean that literally. They need to slow down the initial rollout of updates and monitor the health status of the updated systems. If they had monitored the initial systems to be updated and detected that they were not coming back online, they could have automatically halted the bad rollout with relatively few systems negatively affected.
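A minimal sketch of the staged-rollout-with-health-checks idea this comment describes. All names, stages, and thresholds are made up for illustration; real fleet tooling would be far more involved:

```python
# Staged rollout sketch: push to a small wave, wait for hosts to reboot and
# report back, and halt automatically if too many stay down.
import time

STAGES = [0.01, 0.05, 0.25, 1.00]   # cumulative fraction of the fleet per wave
FAILURE_THRESHOLD = 0.02            # halt if more than 2% of a wave stays down

def push_update(hosts):             # stub: send the update to these hosts
    print(f"pushing update to {len(hosts)} hosts")

def healthy_fraction(hosts):        # stub: fraction that rebooted and phoned home
    return 1.0

def staged_rollout(fleet):
    done = 0
    for stage in STAGES:
        cutoff = int(len(fleet) * stage)
        wave = fleet[done:cutoff]
        push_update(wave)
        time.sleep(1)               # bake time (minutes/hours in real life)
        if wave and 1.0 - healthy_fraction(wave) > FAILURE_THRESHOLD:
            print(f"halting at {stage:.0%}: hosts not coming back online")
            return False            # alert the deployment team, contain the blast radius
        done = cutoff
    return True

staged_rollout([f"host-{i}" for i in range(1000)])
```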
My husband has been in IT forever and his major company has never had a problem like this because he always tests on a small percentage of customers before something new is released.
Knowing how the industry works those devs were probably under extremely tight release schedules. Considering the global impact of these implementations though, they should definitely test everything rigorously.
Yes, but especially with security products, there is such high demand to keep pace with all the new security risks that the pressure to ship fixes is higher than with any other product. So I don't think you should compare ordinary patches, updates, or fixes with this.
@@Nicooouw You can roll out quickly but still in stages. Imagine your updater is staggered by tens of minutes and also reports back on the success of the update. You start your push to the first 1% of your customers; the updater immediately reports a high level of failure; your overall update management system immediately halts and alerts your deployment team. This is what I would expect from a company like this; it is grossly negligent if they don't do it.
@@C0D97 But I have read other comments saying that, with this guy being the founder and CEO of the company, he believes in putting new software out into the world without testing it first... My husband works in IT and has been programming FOREVER, and you have to test a new program on a small portion of customers to see that it works, not just send it out to everyone and hope for the best.
I studied cybersecurity. I applied to Crowdstrike. They turned me down because "other applicants were more qualified". Doesn't Crowdstrike test their own software before releasing it???
Apparently not! That's probably why they didn't hire you: they knew you would push them to test the software. What a crooked company. Maybe you used up your life's luck to stay out of their clutches.
It is so ironic how they teach you all about security and compliance, how to mitigate an incident, be transparent, and inform your public, just to see all those steps applied wrong smh
Yeah... I was trying to get 2 production lines running for 6 hours last night. A reboot definitely wasn't fixing it... This guy is gaslighting to save his life.
People in Congress are older than the dinosaurs, and the electorate are idiots for putting them there in the first place. Let's not forget lobbying. Their job is to ensure their policies protect consumers with proper regulations...
If he told the truth and said "we have no idea what really caused this" or "we did it as an excuse to install a virus on everyone," nobody would accept it; people only like to hear what makes them feel safe.
30 years in IT, and I've had to deal with many software deployments by developers that were procedurally 'effed up. Rolling back is a b!tch. And that the fact that it's Windows makes it worse...
As an IT pro, I can't understand why, in 2024, MS does not include the ability to roll back with ease across all OSes. It's as if MS is always in denial about BSODs.
I was on an IT call this morning discussing something totally different. His solution to an issue with a Windows machine, whether it's a workstation or a piece of vital equipment, whatever the problem is, is just to reimage it. This is what we have to live with.
@@ryshellso526 Where I work that means just wiping the computer and building it fresh from a standard image. It's rolling things back all the way to when the computer was initially turned on. It's not snapshots from a previous day.
1. Not properly tested
2. Deployed on a Friday
3. Workaround was posted on their website but required you to sign up and log in to see it
4. Saying it only affects Windows as if that were a mitigating factor. Even if your server is Linux, if the DC has enough Windows machines going down it can still affect you (i.e., connected services, databases, power surges, etc.)
5. Deployed on a Friday
6. Deployed on a Friday
7. Deployed on a Friday
...
99. Deployed on a Friday
Is he kidding??! This is major. MANY Businesses have had to close today. This is a disaster and a proper explanation and compensation will be required.
About what? This is obvious by now. They sent updates of their software to everyone worldwide, the patch contained a bug on Windows, and they missed it during testing. This killed the internet. What else do you want to hear?
@@vadym8713 You are just describing what happened, not explaining why and how it was allowed to happen. Why would they have an update system that rolls out worldwide without getting feedback on its success? The updater should push to the first 1% and monitor for field failures. Why didn't they catch this in their QA? It doesn't seem to be a weird computer configuration, since so many of their customers were affected; why didn't their QA farm have systems that represent their major customers? Why did a company that is supposed to understand threats not understand the magnitude of the vulnerability they themselves created? This company failed very basic IT principles, yet has convinced important companies that they are experts...
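To make that QA-farm point concrete, here is a toy sketch of a configuration-matrix smoke test. The build list and the boot_smoke_test stub are hypothetical, not CrowdStrike's actual QA setup:

```python
# Smoke-test an update across a matrix of representative customer
# configurations before any external push.
import itertools
import pytest

WINDOWS_BUILDS = ["Win10 22H2", "Win11 22H2", "Win11 23H2", "Server 2019", "Server 2022"]
PATCH_LEVELS = ["baseline", "latest-cumulative"]

def boot_smoke_test(build: str, patch: str, update: str) -> bool:
    """Stub: boot a VM with this configuration, apply the update, verify it comes back up."""
    return True  # placeholder for a real VM harness

@pytest.mark.parametrize("build,patch", itertools.product(WINDOWS_BUILDS, PATCH_LEVELS))
def test_update_survives_reboot(build, patch):
    assert boot_smoke_test(build, patch, update="channel-file-291")
```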
Being a software engineer myself, I think the reporters are asking the wrong questions. News networks should have IT correspondents and experts who know IT well enough to ask more relevant questions, given how huge the impact of IT is nowadays. They missed the obvious question: how did an obviously flawed update like this pass their QA??? There must be something more to this...
And you are also asking the wrong question. The right question is, "How is it possible that user code can crash the operating system?" CrowdStrike software is user code from the OS perspective and should never, ever, be able to crash the OS. Never. It is negligence of biblical proportions by Microsoft to allow that to be even possible. A good OS does not allow user code to run in kernel mode. Microsoft has conditioned the IT community to the point that they do not see that Godzilla in the room. It is epically unrealistic to expect that no program will ever have a bug in its update. Seriously. We have the wrong guy and the wrong company in the crosshairs.
@@draganbabovic3306 Any software can crash an OS, especially software like CrowdStrike's that has to hook deep into the kernel to work efficiently. It's just a matter of doing a series of thorough tests to make sure it doesn't hurt the system. Even an ordinary user can crash the system if he doesn't know what he's doing; you don't need to be a developer to crash a system :) Your comment sounds like you're trying to make yourself look all-knowing when in fact it makes you look silly :) Btw, even the way you understand the cause of the issue is wrong. No user ran it in kernel mode, and that wasn't the issue; the issue was in the auto-update. Try to read things carefully and level up your comprehension, please.
Best ad for the IBM mainframe too. There are only 8 people around the world who still know how to log in to them, yet they are still running many of the biggest companies out there.
The file that was updated was not just a virus definition update, as he is implying. They updated an actual executable driver file (with a .sys extension) that requires extensive testing and normally staggered or limited rollouts before being deployed to everyone. He is hiding something and not telling the truth. Were their systems internally compromised?
Two options come to mind... 1) They were recently bragging about their rollout of AI... 2) Employees were laid off (because of point #1)... This is payback?
Buddy, if this had happened to Linux machines/the Linux kernel, the whole world would have completely shut down, not just been inconvenienced. I agree though. This could send some businesses on the fence toward *nix OSes and away from Windows.
@@Ghost2Most, unfortunately our company runs CrowdStrike Falcon on tens of thousands of Linux computers as well. I am worried that the same thing could happen to our Linux servers. We aren't the only big company running Falcon on Linux either.
I just read that a man was scheduled for open-heart surgery because he had 8 blockages and an aneurysm, but it was canceled due to this issue. I don't know about everyone else, but that family can definitely sue if the man dies because an emergency surgery could not be done on account of this.
I think the most customers will be able to get is a refund for the faulty Falcon product. I bet CrowdStrike does a good job of limiting their liability in the agreements with their customers and placing restrictions on what type of systems the customers should use the updates for.
The lawsuits are going to be against the countless companies utilizing their DEI-run services, not CrowdStrike themselves. They have protections written into their contracts.
I dealt with a similar issue from another vendor; it would crash on Windows update. That vendor denied the issue for months. My systems team wouldn't disable the Windows update, and when I disabled it on a computer, they would revert it. So for months I had to fix the issue on the same 60 computers, at various hours, going from home to work at random times. No overtime pay.
I like how this guy says "we are working with customers to resolve it," but in reality I had to manually fix about 500 endpoints even before the first tech alert was published.
@@swdw973 The only time I ever have computer issues is after a Microsoft update. Luckily this time I had my personal laptop on paused updates. My Microsoft Surface for work went down while I was working at midnight lol. I told my coworker that I thought it was the update. Bingo.
My aunt is a dialysis patient in Hamilton ON and she didn’t get her treatment today because of your mistake. Let’s hope she can get it soon, or her suffering is your fault
@charleswhite758 Dialysis machines probably don't but in a modern clinic, almost everything is connected to a patient data management system (PDMS). If IT is down, you can't register the patient, check what therapy she needs, register what therapy she got, who treated her, the batches of the medicines used etc.
I bet you someone skipped a couple of steps in their CI/CD pipeline. What concerns me more is that we have our sensor update policies set to Alpha/Beta at N-1 and N-2, respectively. If you are pushing updates to previous versions of your sensors, that is also a HUGE issue.
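For readers unfamiliar with the N-1/N-2 jargon: those policies pin hosts to one or two releases behind the newest sensor build. A toy illustration with made-up version numbers; the worry in the comment above is that the bad content update reached even these pinned hosts:

```python
# N-1 / N-2 sensor update policy: hosts on N-1 should only ever get the
# second-newest build, hosts on N-2 the third-newest.
RELEASES = ["7.13", "7.14", "7.15", "7.16"]  # oldest -> newest, made-up versions

def resolve_target(policy: str) -> str:
    offsets = {"N": 0, "N-1": 1, "N-2": 2}
    return RELEASES[-(offsets[policy] + 1)]

assert resolve_target("N") == "7.16"    # latest
assert resolve_target("N-1") == "7.15"  # the Alpha ring described above
assert resolve_target("N-2") == "7.14"  # the Beta ring described above
```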
Yup Charlotte airport is a disaster. Cancelled flights. No idea when we can get out. No rental cars anywhere. Hotels booked for miles. We were lucky enough to get an Uber for $680 from Charlotte to Richmond. What a mess. Should I even think about my bags?
Actually, it sounds completely normal. If you work for a tech company, you'd know things like this happen quite often. The last WhatsApp outage was due to an intern deleting a few lines of legacy code that didn't look useful.
@@kindlyignore It's not normal to roll out code in a hurry. Heard of bake time? Or throttled rollouts? Think about it: if it had been rolled out to 1% of random users, it could have been totally contained. The exact details will emerge, but their July 16th release tweet and today's breakdown paint a grim picture.
@@kindlyignore This is going to have significantly bigger consequences than WhatsApp being down for a bit. I wouldn't be surprised if the damage in man-hours to fix this, plus lost productivity, etc., reaches trillions of dollars.
While the CEO will have to bear the blame, honestly he will not know everything that is going on at the ground level, so simply blaming him for this is a bit too simplistic. Some heads will roll; they might include his, but I don't think it will just be him. All they can do is learn from that silly mistake and do better. Nobody is perfect. Mistakes will be made again.
I'm 100% with you.
1. Internal testing within an enclosed VM environment
2. Beta testing
3. Further rollout
4. Zero issues found, then complete public rollout
I suspect the largest cyber company was either hacked or hit by deliberate internal sabotage, and admitted to the lesser of the two evils:
1. "We were hacked"
2. "We fkd up a basic update release"
No international cybersecurity company would ever admit number 1.
Context matters here. When he was talking about the "bad guys", he was no longer talking about the root cause of this issue, but rather he was responding to the question "How can one bug have such a big impact on the world". He explained (in PR/marketing speech) that cyber security is complex and that Crowdstrike software is updated regularly and installed on many Windows computers. This incident showed that too many companies rely on their Windows computers and Crowdstrike, and they don't have good resilience / contingency plans, which isn't directly Crowdstrike's fault, but of course he couldn't say it like that.
He was explaining why their update didn't get thoroughly tested, they rushed it out because they're trying to stay ahead of the curve for potential security threats. It's an excuse, but not a very good one. Though the statement doesn't preclude any active cyber attack.
I accidentally shut down a factory (for just two hours) as a junior tech. It happens. Our prayers are with you, sir; try to stay focused. It's so complex it's very hard for anybody to keep up with it now. Thanks for all you do and for stepping up.
He's lying by omission: he said they deployed the fix and that computers are rebooting, coming up, and working. This is almost entirely a lie. They are working AFTER someone goes in and deletes the bad driver they dumped into c:\windows\system32\drivers\crowdstrike... This will be pursued legally, and this video will be used as evidence.
During repeated reboots, Windows detects it isn't coming up correctly and will start trying to disable drivers. Some machines guess right, disable the CS update, and come back up after repeated reboots. Our IT helpdesk's instruction to people who are down with a BSOD is to reboot up to 15 times until a tech can get to them. Some percentage of them are fixing it themselves this way. So now you're guilty of libel.
Honestly, CrowdStrike, and specifically Falcon, is a beast of a product! I feel so bad for this man. I'm telling you, we should for real be sending love to Mr. Kurtz! The literal weight of the world is on his shoulders! This will go down in the history books for sure!
I have 30 years of experience in the IT field in the USA. One should NOT push an update to back-end servers without properly testing it with every OS. CrowdStrike hurt people across the world very badly, which is NOT acceptable in 2024.
The level of incompetence is shocking: throwing out a patch without testing it ahead of time and halting the world. Flights cancelled, hospitals at a standstill.
They got compromised and sent out a mass bug or something. He's covering. These kinds of things don't get sent out the day before the weekend. Pretty embarrassing to be a cybersecurity company and get hacked.
So you are saying that if someone hacked into CrowdStrike and pushed an update, the whole world would go down??? WTH are you talking about!!! There should be several layers of security... this sucks!!!
The issue with such security apps is that they work at high privilege inside the OS. Anything that goes wrong at that level will result in the OS just throwing its hands up and failing, so as not to damage anything else. Recovery is very app-specific, so there's very little the OS can do about it.
@@strayedaway19 You are right; most AVs and security tools run like a god over the OS. Well, Microsoft should do something about it. I think they should build a protected layer where they can terminate a process that messes with the OS kernel or important system processes.
@@jackscalibur There are no kernel protections against code running in ring zero. It can do anything it wants, and any exceptions will result in a panic on Linux just as it will on Windows.
They keep saying it's not a cyber attack, but he kept saying we have to find out what the "negative interaction" was. "Negative interaction"??? Soooo, how do you know that the "negative interaction" isn't a cyber attack?? 🤔🤔
Because we now know the exact technical reason that triggered the boot loop and it was definitely caused by improper input validation by their own developers. Not everything needs to be a Q-Anon conspiracy 🙄
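As a toy illustration of that failure class (not CrowdStrike's actual code): parsing a structured definition file by trusting its header, versus validating the input and rejecting a malformed file safely. The file layout here is invented for the example:

```python
# Hypothetical definition-file format: a 4-byte little-endian count,
# followed by that many 4-byte records.
import struct

def load_definitions_unsafe(blob: bytes):
    count = struct.unpack_from("<I", blob, 0)[0]
    # If 'count' lies about the payload size, this walks off the end of the
    # buffer: in user space that's an exception, in a kernel driver a crash.
    return [struct.unpack_from("<I", blob, 4 + i * 4)[0] for i in range(count)]

def load_definitions_safe(blob: bytes):
    if len(blob) < 4:
        raise ValueError("truncated header")
    count = struct.unpack_from("<I", blob, 0)[0]
    if len(blob) < 4 + count * 4:
        raise ValueError("declared count exceeds payload; refusing to load")
    return [struct.unpack_from("<I", blob, 4 + i * 4)[0] for i in range(count)]
```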
Never deploy a patch or update across the globe in one go; you have to do it in batches! Same with production deployment: you don't deploy releases to all production servers at once. Do it one by one and test after deploying to each server. This is definitely a process issue, or someone didn't follow the SOP.
Even if they did internal testing: he stated in the video that not every computer out there was affected, because some computers are running a different version or flavor of Windows with a different patch level, and some of them are working fine. So even if they did test internally, there are too many variables for them to cover. They would legitimately have to test every possible version of Windows, in different states and at different patch levels, and figure out which ones break and which ones don't. Realistically, there's only so much that can be done. They've probably done this a million times before without an issue, but they have one now.
@@notuptome He is lying. The problem was a corrupted file shipped with their own update; just look up the solution steps. Their agent crashed when trying to read that definition file. Nice error handling for a system you cannot remotely reboot on crash.
It’s somewhat common in the security sector. There needs to be a channel to deploy an urgent update in the face of an emerging threat. But it would seem to be improperly used in this situation.
Excellent point. They don't do rolling patch releases because they're idiots, "high on their own supply," thinking they'd never release a buggy patch. I'm sure they test their patches in a lab, but you're right: a rolling release of patches would've greatly limited the damage.
The laziness and carelessness across all businesses from underpaid, disgruntled workers is at an all-time high. This was a patch released by what was probably a very small team that didn't give a crap and just wanted to enjoy their weekend.
He could not have answered this question, as he only controls the software's behaviour, not the whole system. Also, the update might be available, but the receiving system needs to be up for it to be received.
@@FEARTHEEER1 He is what is called "leadership": the face of the company, so he faces the media. You cannot have developers answering this; they need all hands on deck to resolve it.
Well, a backup wouldn't help. The real question is: WHY HADN'T THEY TESTED IT ON WINDOWS? I like how he casually mentioned it only affected Microsoft OSes, like it's some niche Linux build.