CrowdStrike CEO George Kurtz joins 'Squawk on the Street' to discuss the latest developments in the global outage, how long the outage is expected to last, the liabilities facing the company going forward, and more.
I work IT for a large software dev operation in Australia... PCs started to blue screen at around 3pm... by 4pm every single PC was down in every office in Australia... within an hour our team determined it was CrowdStrike, although there was still silence from CrowdStrike... our 5-man team war-roomed, found and tested the fix, and rolled it out to every one of our 500+ Windows PCs, Windows servers, and virtual Windows machines... the worst part was the fix could not be applied remotely, since all workstations were bricked... we had to go one PC at a time, physically in person, to apply our fix... while our phones were going nuts with people going nuts because they couldn't work... I walked out of the office at 12am... no food, no breaks, just a broken IT support guy... thanks to CrowdStrike...
CrowdStrike has so many issues. First, they were all asleep when it happened; it was night on the West Coast. The CEO blamed it on a bad data file, but obviously it was their driver(s) that caused the BSOD. Their drivers are buggy if they crash on bad data and have no recovery mechanism. The other problem is that they deployed the new software on all computers at once. That's irresponsible. When I worked on deployment, we deployed new versions at 8am with everyone responsible for a fix required to be in the office; we deployed to alpha and beta systems during the week, and only on Friday did it go to all other computers.
The craziest part is that this is a driver with a 100% failure rate on Windows systems. Literally all they had to do was copy it onto a single Windows test system, and no matter what else was going on, it would have crashed 100% of the time, every time. Wild.
I work in IT. I can tell you one thing... In this video CrowdStrike's CEO is telling you all that his team is trying to find a way to automate the fix. I already know that it is technically not possible for everyone... many people don't have a computer that connects automatically to the internet at startup, or just don't have enough bandwidth. The blue screen happens so fast at boot that there isn't time to download the fix in the background and run it within that short window. So yes, he surely doesn't want to tell you, but many IT employees will actually have to manually go through most PCs and run the CMD command (there's a way without going into safe mode, btw). I bet most IT people already had enough to take care of without dealing with this...
@@Litheas Just got done with 12 hours of going to each and every PC in my facility, with BitLocker encryption and auto-logon for plant floor systems. What a nightmare. 12 hours to hit roughly 200 PCs: boot into recovery, enter the BitLocker recovery key if need be, then use the recovery command prompt to delete the c-00000291*.sys file. Not hard, per se, just intensive getting everything right. I had 2 systems without a BitLocker code; 98% rectified in 12 hours, and then I had to leave or fall over.
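For anyone else still working through a fleet: the fix commenters here are describing amounts to booting into Safe Mode or the recovery command prompt and deleting the bad channel file, per CrowdStrike's public guidance (del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys). Below is the same logic as a minimal Python sketch, assuming the system volume is mounted at C: and a scripting environment is available, which in recovery you usually don't have, hence all the hand-typing. On BitLocker machines you still need the recovery key first, which is what made this such a slog.

```python
# Minimal sketch of the manual remediation: remove the faulty channel
# file(s) so the CrowdStrike sensor stops crashing Windows at boot.
# Equivalent to the one-liner typed at the recovery command prompt:
#   del C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys
from pathlib import Path

DRIVER_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def remove_bad_channel_files() -> None:
    # glob() matches nothing if the directory is absent, so this is
    # safe to run on machines that never had the bad update.
    for f in DRIVER_DIR.glob("C-00000291*.sys"):
        print(f"deleting {f}")
        f.unlink()

if __name__ == "__main__":
    remove_bad_channel_files()
```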
This company is a NATIONAL SECURITY THREAT. I will be writing my senator and representative over this matter. My ENTIRE FAMILY was out of work today on unpaid time off because of this outage.
Well, in that case it shouldn't be your family that bears the cost of this; they should still be paid by the employer. But that's the corporate American dystopia for you, I guess.
He is talking nonsense. They fixed nothing. A system that went down can only be recovered by a manual process at the machine itself, not even over the network.
It's also what happens when you test an update with live data instead of on a non-live prototype. In Canada, one of the leading telecom providers, Rogers, did the same thing two years ago, resulting in a massive Canada-wide internet outage that lasted more than a day. It affected me too, as I used to be a Rogers subscriber for internet, television, and home phone service. On July 18, 2024, I ditched Rogers (a Toronto-based telecom provider) for my internet and television service and switched to Koodo Internet (a subsidiary of Telus, a Vancouver-based telecom provider).
They acquired the Israeli company Flow Security in March, Kurtz sold $17 million in stock in May, and then a global attack in July? Could it be industrial/political sabotage? There should be a proper investigation, but I think it will be as thorough as the investigation into T r. Mp's secret se rv s security.
😂 This probably happened because he decided to save some money, fired some guy he doesn't know, and offshored the coding somewhere unqualified. Just like Boeing; there is a trend here, folks. This is not some manufacturing job that any joe with arms and legs can do. When AI fully comes online this will be much worse, because they think they'll be able to replace humans, and there will be no one left to fix things like this.
😂 Dude, really? 😂😂 He's gonna get a multimillion-dollar golden parachute to leave. Give me a break; you're part of the problem. These guys are never held responsible.
The liability they take on depends on the contract/license the company signs for the service. I know you don't care about them, but you have to respect that if customers wanted more guarantees than they provide, they should have negotiated for them. I don't have a clue what's in their contracts, of course. Perhaps you're completely right.
@@williamhughmurraycissp8405 It looks like it's mostly on CrowdStrike. They are responsible for making sure these update rollouts go somewhat smoothly; clearly they didn't test it well enough, or something wasn't done properly while preparing for this rollout.
I just got home from 14 hrs of manual fixing, and half our servers are still down. Disabling auto-updates is also not a feature built into this product, which is a total disaster.
It speaks volumes about how inadequate their quality assurance testing is. An update that causes a boot loop should be relatively simple to catch during any sort of testing, as all unit tests should theoretically fail to execute successfully.
CrowdStrike Holdings' CEO is George Kurtz. Appointed in Nov 2011, he has a tenure of 12.67 years. His total yearly compensation is $46.98M, comprised of 2% salary and 98% bonuses, including company stock and options. He directly owns 3.25% of the company's shares, worth $3.06B.
This speaks directly to doing system backups on a regular schedule, enterprise and non-enterprise. It also raises the question: why do enterprises and other systems insist on using MS Windows Enterprise instead of deploying OPEN SOURCE enterprise systems? I have used open source platforms for many years now (since the early 1980s through to the present) and never once had any IT issues! Backups and regular maintenance, hardware and software, a MUST!!!!😊
Jim Cramer is either getting way too old or should NOT be covering tech issues. Watch @3:38: he asks a question about it affecting Microsoft Azure (Cramer probably has no idea how Azure is deployed). Kurtz (the CEO) responds, and then Cramer tries to create controversy by saying "You don't know if it affects Azure?" while looking like a complete idiot. Seriously, CNBC, this is the best host you can offer?
@@djcacs15 It wasn't; Cramer had no idea what he was saying. Azure DevOps is not the same as Azure cloud, which is what he was referring to. Azure cloud is infrastructure; this problem was with the OS, an entirely different issue.
That this wasn't detected by internal QA is beyond me, tbh. A problem so obvious that basically every Windows machine on this planet would be affected cannot be pushed out unseen... A cybersecurity company that doesn't have extensive testing farms and pipelines is a joke...
The question is: was this not covered in QA at all, or was it covered, with procedures set in place that were somehow bypassed in QC? I think your hypothesis that it's a QA gap sounds more logical, given that such updates happen frequently and nothing of this scale has happened before... but who knows.
Or it was deliberate. Like you said, it is almost inconceivable for such an organisation to be careless about pre-deployment testing. People should be really worried about the next update.
@@HermannTheGreat Nah, I really don't think it was deliberate in any shape or form. It was either really, really lazy, where the update got a premature green light, or indeed a problem within the whole QA pipeline, which is what I think, having worked as a developer at multiple gigs.
@@delamar6199 I'd have to disagree, as I've been around this field for decades; nobody that absent-minded would have held that much control over the update without any type of testing or oversight at a multi-billion-dollar cybersecurity company.
Absolute nightmare. I woke up at 6:40 to texts from everyone and their mother at my company complaining about blue screens. Crazy that they didn't have an automated fix, and CRAZIER still that they let this update out. No breaks, no lunch, manually having to go into Windows recovery and delete one specific file from each individual user's computer. What a day; time to get drunk if you work in IT.
I work at UPS. The sign-in form was broken, and so were all of the computers in the building. The conveyor belts barely worked all day (they shut off every 40 seconds); this backed the whole building up, and we had to manually sort 4 days' worth of weight in 5 hours.
For that kind of software deployment, full QA, region-based deployment, and rollback procedures need to be in place. They must have had somebody inexperienced in charge of O&M.
He explains that the outages started once the package went live. But even with extensive testing in a sandbox environment, you supposedly just don't know the real-world behavior. So the package gets deployed and derails everyone on a global scale. If that's the case, then with all the extensive testing done, this should've been caught before it got rolled out. I'd love to see the RCA on this incident.
Seems like a logic bomb to me, because how the heck could a null file pass the testing environment and get pushed to production? Some say it went into an infinite boot loop. And for him to say it has nothing to do with code... huh? 🤔
I was at a Courtyard hotel in the morning trying to get breakfast when I was told that the computers were down, so they couldn't accept payment. I got a free breakfast as a result. Kudos to Courtyard!
This interview confirms to me that "news reporters" are glorified script readers who are wannabe actors. Without going into too much technical detail (me being a tech person), Cramer's questions and statements demonstrate to the trained individual that he really does not have a full grasp of the problem. But that's OK; not everyone is a tech person, nor should be. What is wrong is to prop badly acted emotions (e.g., outrage) onto some of the questions he challenges the CEO with. It's as if Cramer were reading a script that instructed him to "furrow brow now", "act outraged", "sternly ask the question ...". To a trained tech person, this is completely transparent as propped-up outrage about a topic he does not fully grasp. Oh, and I loved the Blue CIRCLE of Death question (1:54)!🤣 I guess if you can have a Circle of Life song (from The Lion King), you can have a Circle of Death??? The correct term is the Blue SCREEN of Death.
Yeah, clearly reading the questions. Frankly, the details of this will go over the heads of 99% of the viewers, so I guess they need to do some stupid acting to get some emotion out of it and get views. Media should be the purest form of communication, and yet it's the most disgusting one.
Moral: when you get an update, whatever the piece of software, always wait a couple of days. If something went wrong, you would know by then. This is not best practice, since you should perform updates as soon as they appear, but remember that an update can break other things.
In the case of CrowdStrike, the updates are pushed down from their cloud servers to all the endpoints AUTOMATICALLY. Users don't get to click "yes" or "no", and aren't even aware that a software update is happening in the background.
First: thank you for owning the responsibility; he looks the way a lot of people feel. So thank you for having the backbone to step up. That being said, there was obviously a breakdown in testing before such a large push.
Not such good news for the IT guys who now have to work through the weekend non-stop, manually applying the fix to each individual PC, to ensure that flights continue to take off in the middle of summer. Mostly manageable for large organisations, but for a smaller company with one person who's even remotely tech literate? A nightmare.
CrowdStrike updates that were sent out without proper testing? Was that the issue? Will they be held accountable for the millions, possibly even billions, lost by affected businesses globally? I wonder if there will be a congressional hearing, with the CEO explaining to Congress exactly what happened. Companies will rethink their business relationship with this company, now knowing how vulnerable they are with their exposure to one company handling their Windows security.
@@joet.2078 I am a 3rd-shift worker in a warehouse that ships globally. No production for 10-plus hours. We got paid to sweep, clean, and make boxes. There goes our next quarterly production bonus!
The majority of affected machines are not booting up, so the local IT folks will have to fix them manually. Imagine an affected company with thousands of affected machines: every one of them has to be fixed by hand.
My work buddy and I pulled off the biggest escape in IT history & kept our sanity alive. We had a code for leaving early if something big but mundane hits the network. Yesterday I found him near the lift at 3:30pm and he gave me a head tilt. I rushed to my desk, snatched my laptop & ran straight home. He messaged me: one driver file from CrowdStrike had started bricking machines & causing blue screens. They'd call up the war room within the next hour, & it was nothing interns and other IT professionals couldn't handle 😂 A mundane issue but a hectic process 🤙🏻 Man, I knew I owed him one when I saw the news. After a good nap & morning coffee, how's everything? After continuous project releases & enough layoff scares, we're mostly tired & worn out ✌️
I don't think they understand the gravity of the situation. I'm not sure how Ace has plenty of time to sit here and talk; I understand there needs to be a bit of media spin, but let's be real. This is an unprecedented event: hundreds of millions of computers impacted, and tens of thousands of businesses globally affected, by a lack of testing of binaries. This is going to have a significant impact on cloud updates, security manageability, and SaaS-style discussions when it comes to privileged access. And don't assume for a moment that Microsoft or any other company is in better shape to handle this. They've all had their black-eye moments.
He has been getting $30 million per year for 13 years; he will shell out a few thousand to a shrink, who will tell him "why are you worried when you've got billions?", and then start the motivational speaker circuit.
The “0000….291.sys” file CrowdStrike sent out, which is causing this issue, looks very odd. Its contents look like they have been tampered with, the code being full of null objects, which anyone in the field would spot in a second. The question is therefore how such a file could originate and how it was sent out without QC. Are we certain there was no malicious intent within the company, and that this is a “simple” procedural failure?
My thoughts exactly! Especially when he said it had nothing to do with code. If there are null files, how did it pass the testing environment to even get pushed to production?
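If the all-null reports are accurate, even the cheapest possible pre-release gate would have caught it. Here's a minimal sketch of such a check; the file name is hypothetical and this is obviously not CrowdStrike's actual pipeline, just the kind of sanity test commenters are describing:

```python
# Illustrative pre-release sanity check: refuse to ship a content file
# that is empty or consists entirely of null bytes.
from pathlib import Path

def is_all_null(path: Path) -> bool:
    data = path.read_bytes()
    return len(data) == 0 or all(b == 0 for b in data)

def validate_channel_file(path: Path) -> None:
    if is_all_null(path):
        raise ValueError(f"{path.name}: empty or all-null content file, refusing to release")
    print(f"{path.name}: passed null-content check")

# Demo with a hypothetical file name: a null-filled file is rejected.
bad = Path("C-00000291-demo.sys")
bad.write_bytes(b"\x00" * 1024)
validate_channel_file(bad)  # raises ValueError
```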
Who would authorize a blitz rollout like this? Insane. A canary deployment and a slow, monitored rollout are vital when updating critical systems. Total incompetence.
Imagine an antivirus program such as Bitdefender had a bad update. What would you do? Restart from an external flash drive, for example, and do a manual cleanup. There is no possibility of automation, and it would take hours.
Keep in mind the CrowdStrike agent only failed on Microsoft devices; Mac and Linux devices were not affected. So that tells me to look not just at over-reliance on one security solution, but also at over-reliance on just one operating system.
Do you really think Linux and Mac OSes are always bulletproof? Also, at enterprise scale, the major OS in use is Linux. And remember, Linux flaws have caused major cybersecurity incidents as well as outages.
@@bobsingh11 No, that's fair. I am not saying that one OS is better than another, but consider not putting all of your reliance on just one operating system. For example, does an airline check-in system have to run entirely on Windows, or can it just be a thin client running on Linux that talks to back-end servers running both Windows and Linux, especially for a kiosk type of use case?
@@QuantumKurator I hear you. In an ideal world, that is how it would be, but alas, when non-technical people are put in charge of negotiating and signing contracts, they tend to prefer one option 100%, in the name of cost reduction.
I've been up since 2 AM on bridge calls and my entire team along w/ a whole bunch of other people will be working all weekend. Took down pretty much everything.
These guys don’t have a multi-layer system of approvals to screen out bugs and ensure proper deployment BEFORE rolling out a content file update??? Jeeez?!? What a screw-up!?! I am posting this while stuck at the airport in Puerto Plata, DR. The scope and scale of the damage here is huge, to say the least. And talk about a corporate black eye! They screwed us up better than any hackers could! And their stock is cratering! I immediately sold off my CRWD shares.
Working in software development, it's actually crazy that anything could get pushed out like this. The amount of testing for a single-line change is crazy; I don't know how this happens.
Before an important update like this is released, there are 4 major tests that are done first:
- Internal development tests.
- Closed testing (beta testing): released to a closed group who use the software in real-world scenarios.
- Pilot/staged rollout: the update is rolled out to a small subset of users; with no issues detected, it's rolled out to more users.
- Final rollout: with no issues found, the update is released to the public.
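A rough sketch of what that kind of gated rollout can look like; the ring names, fleet fractions, crash threshold, and telemetry call are all illustrative assumptions, not anyone's real pipeline:

```python
# Illustrative ring-based rollout: ship to a small cohort first, watch
# crash telemetry, and only widen the rollout while the cohort stays healthy.
import random

RINGS = [
    ("internal", 0.001),  # dogfood machines
    ("beta", 0.01),       # opted-in customers
    ("pilot", 0.10),      # small production slice
    ("general", 1.00),    # everyone else
]
MAX_CRASH_RATE = 0.001    # halt if more than 0.1% of a ring blue-screens

def deploy(update_id: str, ring: str, fraction: float) -> None:
    print(f"deploying {update_id} to ring '{ring}' ({fraction:.1%} of fleet)")

def crash_rate(ring: str) -> float:
    # Stand-in for real telemetry (agent heartbeats, BSOD reports).
    return random.uniform(0.0, 0.0005)

def staged_rollout(update_id: str) -> None:
    for ring, fraction in RINGS:
        deploy(update_id, ring, fraction)
        if crash_rate(ring) > MAX_CRASH_RATE:
            print(f"halting rollout: ring '{ring}' exceeded crash threshold")
            return  # the bad update never reaches the wider fleet
    print(f"{update_id} fully rolled out")

staged_rollout("content-update-291")
```

The point of the gate is exactly what commenters above keep saying: a 100%-reproducible boot loop would have stopped at the first ring instead of reaching every machine at once.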
Not every company does this; that's the issue! Just test it in UAT and damn well release it to production, because of cost cutting, layoffs, and resource issues, while management makes huge profits.
@@harshmehta7125 I’m not talking about “every company”, I’m talking about CROWDSTRIKE, the company that says it didn’t phase in this latest update even though it is responsible for half the world's Windows cybersecurity updates.
It does not matter how that "CEO" explains it: your product caused the outage. Also, logic is code, and a bug is a code failure. LOL. "We've been doing this a long time", and yet they still failed on a global stage. This "content file update" bug is a very easy find, yet it was missed. I am curious about the damage caused and the compensation owed to the global crowd struck by the outage.
The code upgrade wasn't tested to its fullest extent. This is exactly what happens when you give a "cloud company" control over your systems. Every computer running Windows that was turned on during CrowdStrike's deployment of the faulty code went to a blue screen, which is impossible to recover from, especially for an end user. Help desks and administrators across the world had to pivot and train specifically for this manual fix. CrowdStrike needs to be sued into an oblivion where it ceases to exist. This is what I warn every IT leader about: putting your data in, or allowing your systems to be manipulated by, cloud companies who do not have your best interests at heart.
This stuff happens, has happened, and will continue to happen. If lazy IT admins don't have a process and risk framework in place for bad patches, updates, and signatures, then THAT is also on them. Just because it's nowhere near as common as it used to be doesn't mean it can't happen. It's unfortunate, and ideally it shouldn't happen, but it's a known risk and not solely the fault of CrowdStrike.
I'm sure you would've had suuuuuuch good questions little man lol. Now stay on your couch and don't forget to clock out of your sad 9-5 before your weekend
They should’ve tried this out on a very small system first to see how it was going to go. Clearly the rollout was not done properly, and given that the general public was affected, there should probably be some sort of congressional hearing. And this is coming from someone who is pro-business and doesn't particularly love it when the government gets involved and politicians do the whole grandstanding hearing routine.
I worked IT for almost 20 years. I feel for ALL of the poor agents dealing with flipped-out customers. Those updates are HARD to roll back! All the best, folks. I hope this is not some kind of "test run"? 🤔
Artificial intelligence was being implemented into this new system; whenever they booted the new AI into the system, everything crashed. The new technology of artificial data mining in the global system is the culprit.
CrowdStrike is negligent. Obviously they don't test their patches before deployment. Shameful. CrowdStrike and Microsoft should enforce rigorous testing before deploying patches from any company. They should also have a solid rollback process, enforced at the first sign of a problem.
It didn't manifest itself because all of their testing was done on the code and not on a single dummy client machine. They were lazy and pushed an update that they didn't test; that's what happened.
They did not detect this outcome beforehand this time, so how confident can you be that it will not happen again? Other than taking their word for it and keeping your fingers crossed...
We've been working since 2am to resolve this mess; this is not just a rollback, as he's making it sound. This just showed bad actors how to take down the world! As the Cybersecurity Manager for my company, I say we're due compensation!