One time fee… what a guy. The courses are timeless but mate how are you going to continue to monetise future courses if all existing people get them for free? Genuinely curious - not trying to be a d bag
Great video, and I'd like to reinforce your comments on the merits of unsafe Rust. When we audit Rust code, we specifically look for unsafe Rust first and give it the most attention, because it is the most likely place for serious issues in many open source Rust projects.
honestly, I am far more concerned about supply chain attacks, which I feel are more likely to be exploited, have a bigger impact, and need less cognitive effort than memory vulnerabilities. The sheer number of libraries pulled in by Rust's user-friendly Cargo - compared to C/C++ - is somewhat scary to me
Same thing with every other package ecosystem, like GitHub Actions. I maintain a fairly large repo that handles user account tokens, and I recently wanted to add a job to automatically run my unit tests. I looked on the marketplace and found 9 different packages, and all they do is wrap the test command from the SDK. The scariest part is they had fairly substantial download counts, making supply chain attacks fairly easy against any repo using those actions.
@@bdfb-th5ek the problem is that nobody has the time to check all (recursive) dependencies when updating them, while not updating them leaves known vulnerabilities open. So someone just has to add something malicious to their package. Edit: Sorry, I think I misunderstood you. Yes, for some projects maybe; others can get pretty deep into the system this way already.
Miri is *extremely* useful! When I was porting an existing C program, Miri ended up catching multiple vulnerabilities that otherwise produced no noticeable side effects when running the existing test suite. Genuinely a gamechanger when writing unsafe code.
Another great tool is loom, which can be used to exhaustively check all possible thread orderings in a multithreaded test suite - this can verify the correctness of concurrent data structures so you can rely on their synchronization while using unsafe.
one thing I think you could have mentioned: even when writing a good chunk of unsafe code in Rust, the LSP rust-analyzer does a pretty good job of warning you when you're trying to do the more "questionable" things you can do inside an unsafe block. It really is a powerful tool for avoiding undefined behavior.
Since you mentioned you have never played with Miri, I'll take this opportunity to say that Miri immediately detects the use-after-free that cve-rs generates despite the fact that it's exploiting a compiler bug. :D
If anyone wants to do more reading on this, "Rust for Rustaceans" by Jon Gjengset has a fantastic section on unsafe Rust. The rest of the book is also great!
Is it possible to have a useful systems-level language where you can't do anything unsafe? So you couldn't deref raw pointers, allocate memory on the heap, etc.? I thought the purpose of Rust's unsafe keyword was to create safe wrappers around inherently unsafe code - like a vector's get method, which is safe because, even though it reads through a pointer at an arbitrary index, it returns None if the index is out of bounds, creating a safe interface around an unsafe thing (reading a raw pointer at any index). I just don't think it's possible to have a truly 100% memory safe systems language
No, it's not possible. A lot of system architectures have memory-mapped IO registers at particular addresses; just being able to create and use a raw pointer to one of those registers is a requirement for a systems language.
That's the thing people forget. The point of unsafe blocks is that you create a safe wrapper around unsafe code that cannot cause any memory unsafety when used from safe code. If your program segfaults, the only possible culprits are unsafe blocks, which are very clearly marked
Unsafe just means the compiler can't guarantee its correctness; you need to manually and carefully examine the code to make sure it's safe. Everything not marked unsafe in std is safe to use, even if it uses unsafe internally
The only reason you like Rust is the safety? Hmm... I honestly don't care all that much about the whole safety thing. It's everything else I really like. Sum types, blocks as expressions, traits, etc. It's like the language was designed for me, I like (almost) everything about it. It also has all the features I've been wanting in other languages (most notably sum types and powerful pattern matching/destructuring)
Yes, coming from dynamic typing and haskell background, not having proper sum types (*tagged* unions enabling pattern matching and overloading) in other languages often annoys me (especially when inheritance is overused).
What I love is how powerful the type system is. You can encode every possible SI unit and do dimensional analysis at compile time, all with zero runtime cost!
From what I know, the windows crate is gigantic in terms of code size, mostly because it's all generated from Windows interface definitions for every Windows API you could think of (of which there are far more than in, e.g., the Linux kernel, since Windows APIs cover more than just the kernel, and Windows retains backward compatibility for far longer). Combine that with it being fundamentally FFI calls with a C(-like) ABI, and it's not really remarkable that it has the most uses of unsafe of all crates.
You know what would be cool? If the OS running the executable could tell you which unsafe block you were in when something crashed, because the compiler left them all labeled and tracked every time an unsafe block was entered.
Sadly, this is not always possible. An unsafe block might not cause a crash outright, but might instead put your application into a UB state that leads to a crash at a future point. As an example, it may create a NonZero which actually holds a zero value, and the crash will only happen when some other code expects that value to be non-zero.
15:42 why do you describe zig as a memory safe language (or a language that tastes like it's memory safe)? The first zig program I wrote segfaulted because I was careless with pointers. Is zig even in any way safer than C++? Both have tools to help guide you towards safer patterns, but neither compiler actually verifies that you use them correctly. Don't get me wrong, zig is an excellent language and a huge upgrade safety-wise over C, but lumping it with memory-safe languages is a stretch
It's not safe in the Rust sense, but it has defer, and you must opt in to nullable pointers, so it's a good step up from C. Still, using raw pointers you can always shoot your foot off 😅
@@AK-vx4dy yeah, I guess not having null pointers by default is an upgrade over C++. C++ has RAII just like Rust, which fills the same role as defer most of the time. Overall, it seems to me like idiomatic C++ would be about as safe as idiomatic Zig, but I'm probably missing something important?
@@asdfghyter I wrote C, not C++, and I don't know the full possibilities of Zig; I wrote what I remember, and my comment was about Zig. Generally, Zig and idiomatic C++ should be comparable. But I, and many people, see Zig more as a current-era C replacement.
@@AK-vx4dy yes, absolutely, I was the one who started talking about C++ in my top level comment. There's no doubt that Zig is safer than C, but the question I asked in my first comment is if it's any safer than C++ and if C++ level safety is the standards that LLL means when he says "tastes like it's memory safe"
@@asdfghyter maybe the difference is in the defaults. For a start, you can literally (or just stylistically) write C inside C++ and maybe get warnings. Maybe Zig directs people to safer paths just by language construction, so the less safe paths take more work and more explicitness (no behind-the-scenes magic). But you'd have to ask LLL himself ;)
Starts off implying that the 'unsafe' keyword stops the borrow checker. Hmm... OK, the rest of the video turned out much better than that opening suggested. Zig has been my chosen language for the past few months, but I wouldn't classify it as a memory safe language in the same category as Rust, VM languages, and scripting languages. It's more memory safe out of the box than C, sure, but that's not the same thing.
"34.35% of crates make a direct function call into another crate that uses the unsafe keyword" - remember tokio has to use unsafe to make use of the context api which is necessary for async runtimes. Additionally, if you're doing anything with Pin, you pretty much have to use pin_project or something similar, which has unsafe under the hood as well. BTW there is an interesting question of methodology: let's consider crate like pin_project: the crate itself just exports a macro and doesn't contain any unsafe code by itself, but the code the macro expands to does contain unsafe, how is it counted? Does the crate using pin_project contain unsafe according to this methodology?
A lot of unsafe exists in mutable iterators, simply because the borrow checker is too restrictive. A container that is otherwise 100% safe would still require unsafe for a mutable iterator.
My bias leans towards zig. While still generally memory safe, it feels much more ergonomic than rust. But I can acknowledge that I should do some bigger projects in rust to get a better idea. I think zig is the perfect bridge from c to the modern world
except that zig sucks. zig users are people who don't know C and don't know rust, so they learn a language that is barely used, and their code can't be audited because nobody uses it.
Totally agree. I definitely find Zig to be more ergonomic than Rust. Though that may be because Rust is supposed to replace C++, while Zig is supposed to replace C.
I'm not surprised at that number. Heck, I'm surprised it's not higher. Even syscalls are an FFI, so must be considered unsafe. I have a feeling that the Rust devs have "cheated" some with some of the basic syscalls, so that the number isn't closer to 100%.
How does rust work for microcontrollers? Writing to memory has inherent side effects that the compiler can't know about. It feels hard to have memory safe code if the memory goes and changes values when you aren't looking
If Rust "how to" content ever didn't sound like a cult recruitment drive, I'd be more likely to adopt it. The problem with "safe" C is that it is not taught. 5:00 willing to bet that 90% of the 70% is not validating an external input.
Or dealing with people that make Stack Overflow feel loving. I'm fuzzy on the numbers, but input sanitization and process chain validation for fault tolerance would be top-10 culprits.
There's another Undefined Behavior detection tool called Rudra, which the team used to detect UB and submit CVEs for numerous crates. It's based on a specific version of nightly Rust though, and needs some updating. It still works on crates that can be compiled with its Rust version
I don't think Rust is more difficult than C/C++; I think they are on the same level of difficulty. I do think, though, that C/C++ is a more stable foundation to begin learning with, because of Rust's more "modern" features.
Most of the top crates use unsafe as that's the only way to get all the performance. As such most likely a lot of the code you use will have unsafe in the libraries used. And that's fine.
I feel like the mental reminder of requiring an explicit unsafe block is similar to the one that comes from being forced to handle errors. By forcing developers to actively do something, it reminds us that something can go wrong.
Run cargo-geiger on your favorite crate that has substantial dependencies. Rust still builds superior software, and the abstractions possible in the syntax (traits, blanket impls, macros, etc.) are extremely underrated as a consideration for the language's value. There's a lot to be done in language research, and Rust's ambitions have definitely left some syntactic loose ends, but having gone back and forth between Rust and C, Rust is objectively better for what it sets out to accomplish.
what it does is give you a "standard" to follow when it comes to memory, which is better than C, which has no standards, principles, or guidelines for memory management
Started with Python. Studying C now using the Zig compiler to compile C code. Rust may have the spotlight but Zig is pretty awesome too and easy to work with.
I still think the best motivation for learning a non-C language is when you wrote your umpteenth vector-like library and still find valgrind issues in it. The university I went to IMHO taught C the correct way. Any and all exercises had 10 tries, all of them had to compile without warnings and had to have zero valgrind issues. If you didn't pass in 10 tries, too bad, try again next year. I still remember some people literally bursting into tears while doing the exercises in a computer room.
I love rust. The whole memory safety thing makes the compiler intimately familiar with your code, so you get *correctness* for free. Correctness being how accurately the contracts you've defined operate by the rules you intend them to follow.
Could you please cover cve-rs (a repo which contains some examples of how to corrupt memory in 100% safe Rust)? Would like to know how it works and how the Rust team will fix it.
You should really put a timestamp for people who already know the concept of Rust! The beginning of the video was boring to me: I use Rust a lot, so I just wanted your insight on the report. good vid tho
I am a new Rust programmer. How would I do this safely in Rust? I have one master thread. I have 1023 threads that churn out lots and lots of numbers. I have a global structure called status. Each thread can read the structure. When a value in that structure is set to finished, each thread returns all of its work. Only the master thread is intended to change it; the other threads just watch it to see the system status.
If it's just a bool and missing the value once is acceptable, you could easily use unsafe Rust to set the value. But if you want to avoid unsafe, atomics like AtomicBool are thread safe and cost, I believe, one extra CPU instruction per read, which, on the scale of a GHz CPU, means even 1024 threads aren't going to see a major overall performance impact from a single atomic
@@DissyFanart Depends on the architecture. On x86, most aligned loads and stores are already atomic; the ordering is mostly there so that the compiler doesn't mess it up.
"not calling destructors is consider safe - because memory leakage is considered safe" I am developing a python library and it's main dependency is another library that's basically python bindings for a rust backend via ffi. Bug I run into tons of rust panics or hangs. And it's not trivially understood or even debugged. So I might need to really learn rust to fix some bugs up-up-up-upstream. Some of my code is really awful because I am constantly cresting new descriptors and stuff because nothing seems to be reused, mutable or even just pointing correctly. But its graphics programming so the rules change quite a bit.
SDR, Downgrade Attack (Changing LTE to GSM). Attacker collects your device information? For what? With Device ID can other attacks be performed? Push? Install? MITM apps? Keyloggers? What is the worst that can happen?
I think the amount of Rust unsafe calls might decrease in the future if developers put an effort to rewrite those crates that use unsafe to make calls to foreign functions. For example, I think most crates that deal with database connections, Vulkan API binding, OpenGL binding, device drivers etc are written in C/C++, not in Rust, so if these API bindings get re-written in Rust then this will reduce the amount of unsafe calls. 🤔
@@tiranito2834 The term "memory safety" has an actual meaning in this context. I don't even particularly like Rust (I went the Zig path myself), but this argument doesn't even make sense as a "gotcha" against Rust. I personally am not aware of a language that is immune to memory leaks, and AFAIK, no one has ever claimed that Rust is. I think too many people simply don't understand what "memory safety" means, which is evident from some of the replies here.
@user-gi3mb3eu1m This refers to the "leakpocalypse" -- Rust was originally going to prevent memory leaks, but it turned out that it wasn't really possible to isolate them, and Rc (a reference counting pointer type) can always cause memory leaks when used incorrectly. So, safe code is allowed to leak memory.
How much of this memory safety could be put into the C compiler? Like if there were a flag that would pause compilation and ask for confirmation when something detectable occurs, such as an allocation call without a matching free. Obviously not the same as Rust, but if some safety could be imported as a harder version of a warning or a soft error (since it is still valid code, just bad code), maybe we could get some benefits in C or C++ as well.
Love your videos and the way you cover topics. Have you looked into dynamic linking (shared libraries) in Rust? I would like to hear more opinions about this. Personally, it would interest me to have such functionality.
I will never accept the CISA/DISA statement on memory safe languages - not because I don't think it's an important point to make, but because it feels like such a buck-pass for the overall security issues in public and private infosec applications, not just within US infrastructure but outside of it as well.
hey! would it be possible to ask you for a longer VOD where you write the mentioned HTTP server, say in rust, and then try to break it, basically just like you mentioned? I think it would have really great learning value... for me at least :)
Would the compiled unsafe code be distinct from safe code? Would the compiled protective mechanisms, or their absence, give away a section of a program that is unsafe? You talked about how, when auditing sources, the unsafe keyword raises your attention; I wonder if detecting the absence of the safety mechanisms in compiled code would also be a red flag to a hacker: "here is where to start looking".
@@tdsdave unsafe doesn't "disable safety mechanisms"; it just allows the programmer to do the 5 things that are fundamentally not statically verifiable to be safe, and that were covered in the video.
@@skeetskeet9403 Ah ok, I've never actually written a word of Rust, let alone a program. As you say, it was mentioned in the video - my brain fart. So it's all a compiler safety net: without the unsafe keyword, various expressions will generate errors and prevent compilation. Will look into it more, though direct dereferencing has me wondering still. Thanks.
15:30 let's quote my professor regarding that: "Why do we use C? Because you need to learn how computers work. Once you are done here you can get yourself a job coding in Python, or C#, or any fancy language you want, but if you don't learn how computer memory and CPU cycles work, your code will be terrible."
the rust std library is also full of unsafe code. There's no escaping unsafe, even if you took away the C bindings
It's like people still using raw pointers in C++: the second you use smart pointers, everything breaks, because they are safer and all of the horrible practices that had been used don't work... and people moan that smart pointers are not good enough and continue using raw pointers. At least Rust forces you to specify that you are about to break things
Unsafe isn't actually unsafe. My disappointment is immeasurable. and my day is ruined. I will now learn rust and use unsafe everywhere. _I hope you're happy._
Do real Rust programmers have to rely on non-rust code much (C++ libs and other external code)? And, if so, how much and I presume this negates a decent amount of safety. Edit: Just got to 12:50 or so and see this is particularly addressed. Sorry all.
@@techpriest4787 Investing in Rust doesn't mean Windows is gonna be re-written in that language. Now more than ever, Microsoft is focusing Windows on backwards compatibility.
true, but to be fair they haven't done anything bad in like a year. the trademark thing never even went through which a lot of people aren't even aware about
This is a really important point to hammer home: about 20% of Rust crates use unsafe at all, and of those, only a fraction of their code is inside unsafe blocks, so well under 20% of all published Rust code is unsafe. By contrast, 100% of published C, C++, and Zig code is unsafe. Yes, modern C++ and Zig can be much safer than C, but it's still inherently an unsafe context.
@zactron1997 Technically all binaries are unsafe; it just depends on the context. From a direct memory corruption bug, sure, but indirect attacks, where you coax some esoteric operating system feature into messing with the binary indirectly, are a different story. I believe indirect attacks will be the future for attackers in cyber security. This whole safe rust/unsafe rust argument is an illusion and a scapegoat-style argument, because people act like memory corruption is the only type of exploit. While it's the most common class of bugs, there are other classes, such as indirect and side channel attacks.
Well keep in mind that a lot of the safety in Rust is by programmer contract. It's common practice to wrap unsafe code in safe functions with the assumption that the original code is using the unsafe parts safely, and thus the end user of the function doesn't have to worry anymore. Of course, it's important to note that this is an assumption. If that assumption doesn't hold true, even using "safe" rust can cause bad things to happen. Really the advantage of Rust is that when there is a problem with unsafe things happening, it's a lot easier to debug, because you know specifically what code blocks do unsafe operations, so it's a lot easier to find the error at least once you know it exists.
@taragnor Nothing bad can happen in safe rust, at least directly; people tend to repeat that it can, but can never demonstrate a proof-of-concept exploit. It's all nonsense. The entire point of my argument is that it doesn't matter how many fancy compile-time features you have to prevent direct memory corruption; all that will do is cause attackers to pivot to side channel remote memory extraction and indirect memory corruptions that corrupt adjacent process memory. There's no magical safe system that can prevent all of that. To prevent the indirect cases, you would have to patch all the DoS holes that lead to the kinds of resource exhaustion that can actually corrupt random process memory. Maybe one day DoS bugs in bug bounties will be taken seriously, instead of being considered out of scope, once indirect memory corruption gains popularity.
@@coolperson5479 And seatbelts don't prevent 100% of injuries during car crashes, but they're still required by law. All security is a game of cat and mouse, malicious actors are always improving their tools and trying to gain access to bigger and better exploits. Unfortunately, C and C++ have effectively stagnated on this front for the past 30+ years. Not because they can't make the languages safer (Zig and Rust prove you can make safer systems languages), but because they're afraid of tackling the change management required to get people on board with a better version of the language. C++ has some pretty good safety features, but they're harder/more verbose to use than the unsafe ones, so of course people don't use them where it matters. At the end of the day, a malicious actor can always break into the data centre and bash at the servers with a hammer until they get what they need. What matters is removing as much of the low hanging fruit as possible, rather than just letting these people have unfettered access to increasingly important critical infrastructure. Facebook being hacked 20 years ago would suck, but not really matter. Today, I'd rather have my bank card stolen than lose access to one of those SSO accounts...
@@coolperson5479 My point was that you can very much create a safe function that calls unsafe code, and thus anyone calling that function could potentially do unsafe things (potentially unknowingly). You're relying on the writer of the original function telling you it's safe. So if you're using a crate that has what you think is a safe function, it may not necessarily be, because that safe function can contain an unsafe block.
I've been thinking lately that my dream language would be something as simple as possible, like C, with something like Python's built-in standard library to back it up, and perhaps some of its keywords (with, in, and exceptions).
It's actually not. In fact mere printing to stdout is a fairly complicated thing under the hood. And inherently not thread safe btw. Especially when you realize that stdin/stdout of your process can be manipulated from outside. It opens a way to a whole class of nasty hacks.
It's not hard to learn, though. The argument of "it's still unsafe" is just "I don't want to wear a helmet while riding a motorcycle, because it's still unsafe"
Grammar nazi (if I'm actually right): in the opening you said 'and if that underpins Rust's security'. I think you meant to say undermines? AFAIK, underpinning something is making it stronger.
Yes, it's safe if you use it as intended. I hate to be a pain, but when was the last time you used any language as it was intended??? It's my humble opinion that the definition of safe and unsafe languages is irrelevant when the developer writing the code is more unsafe than Typhoid Mary!
I participated in an AI programming contest in Rust recently, and was forced to use an unsafe "global static", which was (to my limited knowledge) the only way to store information between calls to the player_turn function.
Basically, there's a bug in the compiler's type inference which they use to extend the lifetime of a string reference, borrowing its memory after the compiler frees it. It's not something you would do by accident, though. The file "lifetime_expansion.rs" in the cve-rs repo has an explanation; YouTube won't let me link it
I think you are downplaying the language spec bug where lifetimes allow "safe" Rust to access released memory. The problem with having even a convoluted way to do this is that it is unexpected, and probably impossible to catch in reviews, so determined malicious contributors could eventually hijack complex projects where nobody expects it. Also, IMHO, it is not so convoluted that it couldn't appear in code by accident, given that even inexperienced programmers are forced to use one of the most tedious and hard to understand features of the language.
my opinion on rust is that it will be used on online servers and similar things, but on machines that do not need to connect to the internet, people will probably choose c or c++ instead
So calling the Linux API is not wrapped in unsafe? Or is the whole (g)lib(c) rewritten in Rust? Is calling the Linux kernel through fewer functions, but with dynamic parameters, safer? Counting by lines is not fair in this example.
A lot of the libc crate is unsafe FFI calls to glibc functions. I don't think that's what you mean, though. The Rust stdlib implements a lot of the functionality that would be in glibc, in pure Rust, using the libc crate for unsafe bindings to things like kernel calls. I'm not sure the point you're trying to make... that every project which uses stdlib is unsafe because stdlib contains unsafe code?
I don't get the point of this. If some language was more performant, and/or offered a way better or simpler syntax and general usability, then yes I would use it (over C#/C++). But I'm supposed to use this monstrosity just so it can track my pointers or whatever? Git gud and write good code.