Watch the original: "From C ➡️ C++ ➡️ Rust" by the amazing @code_report. Recorded live on Twitch (GET IN): twitch.tv/theprimeagen. My main YT channel (well-edited engineering videos): @theprimeagen.
I don't understand how on earth anyone can think any of the (modern) C++ versions is better. With all the advancements in modern text editors and LSPs to make typing faster, for some goddamn reason some people are obsessed with typing less, even if that means writing the most cryptic expressions known to man. The C version is not only orders of magnitude more readable but also more versatile: you can tweak it here or there if the requirements change, whereas the C++ one... Also, the C version has no function calls and thus translates almost 1 to 1 to equivalent assembly, so the compiler may be able to optimize it much better. Notice the C code had the fewest instructions at O2 and then the most at O3, probably due to loop unrolling. If I had to take a guess, I'd say the C code is noticeably faster than the rest, both at O2 and O3.
@@julealgon How on earth is that less error prone?????? It seriously boggles my mind. The Rust version I can understand, but the C++??? The original C code is as simple as it gets; it is much more explicit in every aspect, and on top of that it is orders of magnitude easier to tweak. How in all hell is that more error prone than a single statement that uses layers upon layers of abstraction, where you have to remember specifically how every piece works, including the return types and the corresponding operator overloads for those types? Absolute insanity. What has happened to software development? And then what if you realised you had to handle a handful of special cases? In the C version you just add a couple of ifs inside the loop or whatever and you are basically done; you can make little adjustments here or there, no problem. The C++ version, on the other hand, is painfully specific and you would probably have to rewrite it just to make a minor modification. Would you rather maintain a 250-line file that uses a different std::nonsense function on every single line, complete with overloaded operators everywhere, or a 1000, or heck, even 2000 line C file that does exactly the same thing in the most explicit, simple way possible? I would take the C codebase any time.
Us Haskell giganerds avoid the possibility of off-by-one errors by not having iteration at all, and instead optimizing the language for recursion. Yes, that means no while loops, no for loops, no flow control made for a toddler. Off-by-one errors are straight up not possible since there's no mutability: index++ can't be an iterative step, because it mutates a variable, and that is not allowed. Instead, we use function guards and define a condition for when recursion should stop. If going across a list, that's when the list is empty. And yes, I can use C, Java, etc. as well. I actually don't work in Haskell, but it has taught me to be a better programmer overall. On the surface this leads to memory usage concerns, but in practice the compiler will compose your functions together such that your entire program is more analogous to an end-to-end stream of data.
You don't have to debug Haskell because either a) you didn't use it in the first place, or b) you got everything right because you wrote it in Haskell and you are a genius
"burst" 😂 There's an old article by Burst lib showing how to refactor a function that calculates the distance between two points to be generic. It's absolute insanity
Also the entire thing is kind of pointless cuz.... MATH. This is the objectively correct version:

int calculate(int bottom, int top) {
    bottom += bottom & 1;
    top -= top & 1;
    if (bottom > top) return 0;
    return (bottom + top) * ((top - bottom) / 2 + 1) / 2;
}
@@random6033 I compiled your code and his code, and after removing the comments and empty lines yours was 105 lines long and his was only 102, so hardly any difference. I decided to use my twisted eldritch mind and write it my way, and for the following code I got only 99 lines of assembly:

int calculate(int bottom, int top) {
    int sum = 0;
m1:
    if (bottom <= top) {
        if (!(bottom & 1)) sum += bottom;
        bottom++;
        goto m1;
    }
    return sum;
}
C++ is a double edged sword, and the further you run down the blade, the sharper it gets. You can either leave legacy code and cut yourself with hours of painful debugging, or you can use modern features and cut yourself with hours of documentation review.
Haskell does require you to pretty much throw away everything you know about imperative programming. It's a completely different paradigm, but once you do learn how it works the code here is very readable. I recommend learning a bit of Haskell just because it requires you to think so differently about solutions.
Also causes your solutions to be painfully slow. Computers are fundamentally imperative; if your functional language doesn't guarantee tail call optimization (the only one I know of is Scheme), it quickly becomes a stack-destroying monster that also ruins cache coherence. Pure functional languages manage to always hit the worst-case scenario for the CPU.
@@marcs9451 I feel like you don't get the idea behind Haskell and company. The language is a tool that is supposed to help you write readable and maintainable code. Code is a medium to communicate with other developers; that a computer can understand the code you've written is merely a side product. Therefore it is better to write succinct code than fast code. The compiler's job is to produce fast code in more cases than a developer could and still be correct, and GHC is doing impressive work in that regard. Scala is available and production ready doing something similar, and it allows you to enforce TCO with @tailrec. There's a bunch of languages that try to push generally available optimizations; Rust is yet another example. For example, the optimisations behind Option are available to all data structures of similar form. Looking at the current state of application development, it's green threads aka async/await all over the place. Your declared worst-case scenario is the current state of every application in any language anyway, so why does it matter? A ton of programs transfer a few bytes between a client and a database. It's not like they actually need tremendous performance.
@@marcs9451 To be fair (and yes, I know this is not pedantic, since languages aren't the same as their implementations), Haskell typically outperforms Python, which is arguably more imperative than functional.
@@thomassynths It's usually said to be in the same ballpark as Java or C#. Not really that surprising given it's compiled and they are JITed; all of them have a runtime system and all do GC. It _can_ perform terribly, of course, if you use data structures not fit for the job, implement a slow algorithm, or your code doesn't play well with the GC or laziness.
@@marcs9451 I can't see my reply anymore. Maybe it got removed or maybe youtube is just having a fit. I'll just reiterate: functional languages are more open to optimization than imperative ones. And if you benchmark them, they perform incredibly fast compared to imperative. Just look at how Ocaml performs against Java. On top of this, the abstraction in which programmers think, doesn't need to concern itself with the underlying machine. That's literally the whole point of high level software. There's a reason compilers exist. Also Scheme is obviously not the only language with TCO lol. That's most functional languages. There are tons of compiler optimizations that a) improve cache locality and b) reduce stack usage. Rust is proof that you can have tons of functional stuff in a language while still keeping it fast. And Rust doesn't even have a runtime/GC. Just look at the logos crate: github.com/maciejhirsz/logos. This crate generates faster lexers than you could ever hope to write by hand and all you have to do is specify regexes for your tokens. You don't need to concern yourself with the algorithm used for lexing.
what i've learned: C is evergreen, can look really nice and is super readable, but will spontaneously transmute into a nuclear bomb if you look away for an instant. rust is the new kid on the block with fancy hair, but is still just C's cousin on mood stabilisers. haskell is an eldritch horror that will annihilate your mind but in return allows you to astrally project your code into a pocket dimension where it runs optimally. c++ is a hulking, grotesque abomination enslaved by sadists to run the entire world at our peril and would benefit from being taken round the back of the shed and shot squarely in the base of the skull.
the problem with C++ is people like the guy who "refactored" the original C code. if you use C++ sensibly, it's pretty great, but that usually requires strict team-wide discipline.

namespaces are very useful; you can hack namespaces in C with structs and headers, but it's not the same. const ref& is extremely useful and allows for a lot of flexibility.

every other feature or library is to be used strategically and not "because it's cool" - the more features of a language you use, the bigger the barrier you create for readability and maintainability. templates, std and boost features are to be used very strategically, with the exception of idiomatic things that will be commonplace anyway (like, say, vector stuff, ranges, safe strings, etc).

a massive problem with C++ is that when you have to use someone else's code it's often completely alien to your internal practices. this happens with C as well, just not as much. generally you don't want to work with other people's code in C++ if it's not a readily usable library that you don't need to touch and that you can trust with your life works as is (namely std, or boost).
"iota avoids off by one errors.... Oh, by the way, don't forget to add one to top if you want an inclusive range" I like the code report channel but he's definitely more into arcane programming aesthetics than pragmatism. No idea why he decides to compare the number of assembly instructions produced; it's just as easy to benchmark the run time.
"iota avoids off by one errors.... Oh, by the way, don't forget to add one to top if you want an inclusive range" That's because every range is exclusive, every single one. It's not a hard thing to remember: you're always working on [a;b[
"iota avoids off by one errors.... Oh, by the way, don't forget to add one to top if you want an inclusive range" How is that not an improvement over the original, where you had to notice that the range was inclusive by looking at a single character?
"readability is a function of experience" this is the truest thing I've heard this week. how have I been programming for 30+ years and never, ever heard any dev say this? I say it all the time, and everyone just smiles and nods when I do, and acts like I need to get back to my fucking nursing home.
Readability is a function of experience divided by how much your language sucks, squared. The C version is readable by an infant with neurological disorders; the Rust version is kinda nice, you just need to explain 2 things and it makes sense. The C++ version just doesn't make any sense. Yes, if you have experience you can read it, but you shouldn't have to when you could have just written it in C or in Rust and any random joe would have been able to read it.
@@marcossidoruk8033 "yes if you have experience you can read it" - this is all programming. that was the point I was agreeing with. don't shave yaks. get used to things that make you uncomfortable. eventually it'll all become easy.
@@blarghblargh You didn't get the point at all. My whole point is that the C version is so much more explicit you barely need any experience or anything at all to understand it; it's beautifully dumb code. Compared with the C++ version it is much clearer to a greater number of people and doesn't gatekeep people by being unnecessarily complicated and language specific, so it is objectively better code. Hence why I said "it is a function of experience divided by how much your language sucks squared"; the whole point is that experience is not the only relevant factor here. Saying "that's all of programming" in response to this is a remarkably stupid answer. It's like saying "suffering is all of life": yes, but does that mean the amount of suffering doesn't matter? Same thing here. With arbitrary experience you can understand anything; with arbitrary experience you can even understand languages like brainfuck or malbolge, yet nobody does that, because why would you. For some reason people don't apply this logic to (modern) C++ and end up writing utterly pointless unmaintainable code like this just because it makes them feel smarter.
@@marcossidoruk8033 "a remarkably stupid answer". keep up this junior mindset and you'll never grow out of it. I never said C++ doesn't suck. I said it's all easy once you have experience. git gud, kid. and keep your negativity to yourself.
That C++ calculate function is an abomination. Here is a non-nested version that doesn't hurt my brain (and doesn't have an early return even though it really should):

int calculate(int bottom, int top) {
    int acc = 0;
    for (int val = bottom; val <= top; ++val) {
        if (val % 2 == 0) acc += val;
    }
    return acc;
}
@@blenderpanzi INT_MAX, INT_MAX should return 0. It just... doesn't return. I'm not sure if INT_MAX, INT_MAX should actually return zero or just throw, because whoever called this probably doesn't actually want to know the sum of even numbers in the range of one odd number.
I can't believe there was a clean C++ program with LITERALLY a "for" and an "if", and the dude turned it into a program with functions where you gotta search what the parameters mean, passed functions as arguments to said functions, declared a namespace (which you also gotta go see what exactly it is), and brought in a lot of new syntax that only works on the latest versions of C++. It also takes 10x longer to understand the "upgraded" version. Btw, love your videos, Prime (curly braces on new line lol) { }
What do you mean? The code was absolute garbage, and the final C++ result was readable like a comment; it literally reads: sum all the numbers from bottom to top that are filtered to be even. You can understand this code in like 5 seconds, where you would still be figuring out the control flow of the original example. Ofc it's not as beautiful as Rust, but it's basically readable the same way.
@@thomasziereis330 I mean, if you prefer it, alright. I still think the original one is cleaner. What about having to check what tf the iota function does?
As a javascript developer I have never used rust before and I have gotta say, I understood the rust version of the sum function really quickly by just looking at it for 30 seconds. I've got to learn rust, such a neat looking language!
sum is also way more descriptive than "accumulate". Accumulation could be done in a number of different ways. Sum does what it says on the tin. Add up and get a result.
@@DBGabriele That doesn't explain why there isn't a sum method in C++ standard library (partial_sum is the closest but requires you to pass a range). Rust also has "accumulate" in forms of fold and reduce for more generalized accumulation (reduction).
Considering how much you love rust I really suggest at least giving Haskell a chance. On its own it's not the most useful language, but it is really elegant and fun when written properly. And for some use cases, such as compilers, it is crazy good
Haskell is brilliant but there's some annoying stuff in it. For example, all the strange operator characters you can define to mean whatever you want. Good luck understanding someone who went crazy using monads and the entire hierarchy of category theory with symbols and one-liners. This is why it will never be really popular. I feel like they need to simplify it a bit, at least the syntax. There are also annoying things like name clashes when importing other Haskell files. Rust is easy and efficient to use, which means a lot.
@@Boxing_Gamer I feel like Haskell is properly a research lang, perfect for exploring programming languages at depth but not for actually programming. If you need something practical right now that feels the same, use OCaml for a pure experience, or Rust if you're a performance junkie. If Unison ever gets off the ground, that's much closer to pure Haskell with a lot of QoL improvements.
@@romannasuti25 I'm not sure, I think the ecosystem, compiler and package manager are very mature. Why not make projects with it? I know there are companies out there who use Haskell.
Kotlin is also very nice:

fun calc(bottom: Int, top: Int) = (bottom..top).filter { it % 2 == 0 }.sum()

Differences to Rust: the return type is inferred, and you can leave out the braces if your function is just one expression. 'it' is always the name of the first unnamed lambda parameter.
To make the Haskell code more explicit, here it is:

calculate :: Int -> (Int -> Int)
calculate = \bottom -> \top -> sum (filter (\i -> even i) [bottom..top])

This is literally equivalent to the code in the video because all functions in Haskell are implicitly curried. I also removed the dollar operator and made the even check more obviously a function used as a callback. Anyway, one major reason higher-order functions are not the hellish nightmare they are in JS is that all variables in Haskell are immutable and all functions are pure. (Haskell is able to model state and impurity through its type system and some primitive language-given types such as IO and ST.)
Another way to read it is just:

Normal way:
calculate bottom top = sum $ filter even [bottom..top]

Way of reading:
calculate(bottom, top) {
    return sum(filter(even, [bottom..top]))
}

The dollar sign is just a way of removing parentheses. So, instead of:
sum (filter even [0..10])
you can write:
sum $ filter even [0..10]

Edit: I'm mostly writing this for Prime
@@soldierbirb Yep, to read it imperatively you go right to left. But it takes on a declarative meaning when read left to right, which is why the apply function ($) has its operands in that specific order. The declarative way of reading it is how Primeagen read it: "The sum of a filter of even elements from a range from bottom to top." The inclusive range is also a declarative approach to ranges since its essentially set notation from mathematics. The idea of the top of a range of integers being non-inclusive is something that's left over from an imperative loop approach using i < top.
@@DryBones111 " The idea of the top of a range of integers being non-inclusive is something that's left over from an imperative loop approach using i < top." Nope. Ranges being non-inclusive is the correct choice for any programming context because 1. the difference between the endpoints equals the length and 2. non-inclusive ranges compose together much more nicely since with two adjacent ranges the endpoint of the first is the first element of the second. Haskell is using the wrong convention.
Expanding a bit on the haskell syntax: indeed, haskell is a language that requires you to think very differently about programming, and as such its syntax tends to look ugly, but once you start using and understanding it, you can see that it is actually quite clean.

in haskell, functions are king, and as such the syntax is built around clean composition and application of functions (to various degrees of success). ignoring type signatures, functions are defined by naming them first, then listing the parameters:

calculate top bottom

then, to apply a function, you do the same:

calculate 1 100

this does cause a problem: filter takes a function to use as a filter, and the list you're gonna filter. if you tried to just list all of the arguments:

sum filter even list

the parser thinks you're trying to apply sum to three arguments, in c-like terms: sum(filter, even, list). to solve this, you could just use parentheses:

sum (filter even list)

but we also have the $ operator, which is simple function application (f $ x = f (x)):

sum $ filter even list

since $ is an operator, the parser now knows that filter is a function, and even and list are its arguments, and now it type checks neatly. this does map directly to the rust solution:

[bottom..top] = (bottom..=top)
filter even [bottom..top] = (bottom..=top).filter(even)
sum $ filter even [bottom..top] = (bottom..=top).filter(even).sum()

as an extra note, the type system in haskell also tends to have an intimidating syntax, but that is because the type system is very rich and expressive. for example, expressing filter in rust would look something like this (ignoring whatever complications might arise from lifetimes and such):

fn filter<T>(f: fn(T) -> bool, vect: Vec<T>) -> Vec<T> { ... }

while in haskell it would be:

filter :: (t -> Bool) -> [t] -> [t]
filter f list = ...
(yes there is a reason for the arrows but this comment is very, very long as is) which one you prefer is up to you, but imo when you move functions left and right, haskell's syntax is quite ergonomic, while the parentheses of c-like languages get in the way
For those wondering, iota is a term borrowed from APL, where there is a glyph ⍳ which generates the range from 1 to n. It is used like ⍳5, and this gives the sequence 1 2 3 4 5. It is represented by iota because the proper name for this is the interval or index generator, iota being the Greek equivalent of the English letter i.
wow cool reference, what is this a marvel crossover or something? xD (why cant cunts just name it "range" or "inclusive range" or at least std::aids::inclrange)
I mean, in the context of APL iota is a beautiful operator. It's just kinda bloated for C++. But the fact that it is even there shows some capability for declarative programming in C++. That's IMO kinda nifty
Because he uses assembly size as his metric: at O3, C and C++ optimize for speed, so the compiler will unroll your loop and inline aggressively if it thinks it can make it faster, and won't really care about the size of the code. The reason adding the various library functions reduces code size at O3 is likely that it adds a lot of things that now have to work with arrays instead of just a few stack variables, so the compiler can't unroll your loop to make it all that much faster. While I can't say for sure, this might be slower when not threaded. So who knows how the speed is actually affected; you would need to actually benchmark it to know.
Heyyyyyyy this was my "Prime should react" video suggestion! I'm so happy that this was on yt now since I missed the stream due to other stuff I had to do. Awesome, had a blast watching it!
Wait, you *don't* like seeing 2 or 3 ternary operators nested together with a bunch of inline lambda expressions? Log debugging is in the past, just guess the problem correctly the first time
30:39 It's semantically equivalent. Yes syntax is different, but semantically you're just composing functions. And yeah, as other commenters said, Haskell can be weird, but it can teach you *a lot* about programming and give you many "WHOA" moments, once you get through initial learning curve.
Honestly, the most cursed thing about the C++ example is the fact that *in C++20* they didn't even have a ranges overload for accumulate... or a general fold function
It's moving a bit slowly: design by committee, and then waiting until tooling actually implements it. It's 2023 and C++20 is still not fully implemented. For instance, the compilers mostly support modules, but the build systems are far behind. The barebones support for coroutines is there, but, again, the library support is off, and one doesn't want to use coroutines without libraries because they are quite low-level and meant for... library creators implementing useful, easy-to-use stuff on top of them. The one thing about C++ I hate most is building. Whereas elsewhere it is a matter of specifying dependencies, in C++ it is a game of makefile generators on top of makefile generators trying to guess where dependencies are; these can come from the OS, some package manager, or manual inclusion.
As the official intern (and future CEO) of The Startup™ I'm legally obligated to say I'm loving this channel. A chance to watch something I missed from my genius CEO??? Perfect way to try harder.
When he mentions that he would need to relearn programming to understand Haskell: well, indeed you kinda do, and that is actually the point. Not a big fan myself of functional programming, but one must recognize it is such an interesting paradigm and worth knowing (especially its theoretical foundation).
So, Haskell can be a bit of a brain bender. Every function in Haskell takes one parameter and returns one value. A function which takes more than one parameter is effectively a function which takes one parameter and returns a function which takes all the other parameters (this happens recursively). So, to use the example in the video, Int -> Int -> Int means that it takes an integer and returns a function that takes an integer and returns an integer. Higher-order functions in Haskell are pretty nice for the simple reason that in order to deal with mutable state, you need to use monads (basically a means of passing the current state as a function parameter and having the new state be returned). This means that if there's a bug involving mutable state, you automatically know which subset of functions it must be in, because you had to declare the use of the monad; much like how unsafe in Rust reduces the surface area for memory or concurrency bugs.
@@isodoubIet MySequel is correct, as SEQUEL was the original name of SQL. But how is stood correct? std:: clearly means "Standard" as in Standard Library!
@@QuantenMagier Actually the MySQL docs state "The official way to pronounce “MySQL” is “My Ess Que Ell” (not “my sequel”), but we do not mind if you pronounce it as “my sequel” or in some other localized way." I hereby declare that "MySqueal" is the localized pronunciation of MySQL in my culture. As for stood, it's just that saying ess-tee-dee- all the time gets old. The "ee" sound tenses your mouth, and it's three syllables instead of one.
@@isodoubIet Who cares what MySQL docs say. SQL just gave up its original name SEQUEL because of alleged copyright infringement. Officially they had to state it is spelled different, but just removing the vocals from the original name implies 'FU' to the fraud claiming copyright, therefore SEQUEL is still the correct pronunciation. And there is no world where stud is the correct pronunciation, you either say "standard" or you spell es-te-de, everything else is just confusing and doesn't make sense, the reason to spell it is for people who listen being able to type it, which is not possible is you say stud and they don't know you mean std.
@@QuantenMagier I don't really care either; it's pronounced squeal. "And there is no world where stud is the correct pronunciation" The world of C++ standard committee meetings happens to disagree with you. When you disagree with reality, it's time to reassess, no?
I would actually enjoy seeing Primeagen trying to learn Haskell. Or well, something smaller like Elm or Idris. (Both are basically Haskell but done from scratch, with the focus placed on a slightly different use)
@@ccgarciab I'd say that they make the type system way simpler. Or maybe sleeker would be a better word. Tho its learning resources are sparse and usually assume you already know Haskell. I guess I would say that while some Idris features are more advanced and harder than what Haskell provides within the language, there are many topics in Haskell that are way more theory-heavy than what Idris adds. Still, these topics usually apply to Idris too. And finally, Idris tooling is basically limited to the compiler|interpreter and some editor integration. Though this includes being able to write code semi-automatically (with a couple of shortcuts to generate partial definitions, case split, search for a correctly-typed implementation, etc.)
i love Haskell. It was one of the first languages i was taught. I am a math person and it just makes sense to me; haskell programs read like proofs. i love rust too, been learning it for 6 years now lol. But I write cpp code for my telecom job, and I must say that cpp code was just a pp move. I work with developers that do this in real life; it just looks downright obscure and ugly to me. It is a nightmare to onboard new people if anyone on the team writes code like that and doesn't write comments. that stood was awful as well. lol.
"that stood was awful as well." Extremely common pronunciation in the community, probably the most common. From your post it seems like you write C++ as C with classes. You should stop resisting the coworkers who want you to do better.
The big problem with `else` and `else if` statements is that the condition to execute the code block is not at the start of the statement like it is for `if`s, so you have to mentally juggle negating the `if` condition and applying the extra conditions from the `else if`. tl;dr: else's run code on an implicit condition, unlike `if`s where the condition is explicit.
About the C++ code: the initial for loop was perfectly fine! Easy to understand, easy for the compiler to optimize.

Anyway, there is a problem with the C and C++ versions when top is the maximum integer value. Assuming it really runs for that long, the C version will overflow (technically undefined behaviour for signed ints, but in practice it wraps) and never stop, cycling through all the numbers forever. The C++ version starts by adding 1 to top and as such will immediately overflow and immediately stop (it will do one loop iteration if bottom is the minimum integer value). In reality that case might never happen, but maybe it can if users control the input, and that situation could cause some security issue or something. So depending on your context it might be a good idea to handle that case. Hope I got that all right in my head. Rust ranges seem to handle that case correctly when testing on the Rust Playground.

The difference between the Rust and Haskell versions is just methods vs functions. The -> syntax for the function type is maybe a bit weird when you see it the first time. In Haskell all functions are curried, meaning if you pass in fewer parameters than possible you just get a function (closure) back that takes the rest of the parameters. The -> basically means "returning". As such, the calculate function is a function that takes an Int and returns a function that takes another Int and returns an Int. A bit weird, yes, but handy if you like this high-level stuff. E.g., IIRC you can define sum like this:

sum = foldr (+) 0

Note that there is no parameter in the definition of sum, because foldr takes 3 parameters but I only pass 2 here, meaning there is one remaining parameter: the list of numbers to sum. The product function would be:

product = foldr (*) 1

foldr is usually called reduce in other languages. There is also foldl; it's about which side the list is processed from. Since addition and multiplication are commutative, that is not important here, though I think foldr has better performance in Haskell because of lazy evaluation. Haven't touched Haskell since university, which was more than a decade ago.

Btw, IIRC Haskell and currying are both named after the same person: Haskell Curry. So of course everything is curried in Haskell! XD
37:16 a call to a library function itself adds an overhead of a few instructions, like stack push/pops, call/jump instructions, returns etc. which can vary based on the calling convention
What the assembly comparison tells is that the early things all compile just fine and the compiler understands exactly what is going on, so when you compile with -O2, it can make things tiny, and when you compile with -O3, it can unroll the fuck out of everything and makes things ~blazingly fast.~ On the other hand, the fancier stuff where there's less of a difference, the compiler no longer understands what is going on and so it can't make the optimizations you would want out of it.
3:41, should the bile come up for swirly braces left at the end of a scope header? What happened to properly spacing out code so that it's readable, as opposed to a giant wall of text? What happened to making scope headers easily distinguishable at a subconscious level by separating them from their code? Every time I look at a scope header with a swirly brace next to it instead of under it, my knee-jerk reaction is always "ugh" instead of straight away having my attention drawn to the code instead of the header.
You can reason manually about what constitutes the count of evens in a range, and find that you don't need loops at all, just a few arithmetic steps: take half the length of the even-aligned subrange, then correct for the endpoints. Simple constant-time goodness. int calculateCompact(int bottom, int top) { return !(bottom % 2) + top / 2 - bottom / 2; }
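The commenter's one-liner can be sanity-checked against a plain loop; a minimal sketch (loop-helper name is mine, and only considered for non-negative inputs, since C++ division and modulo truncate toward zero):

```cpp
#include <cassert>

// The closed-form count of even integers in [bottom, top] from the
// comment above.
int calculateCompact(int bottom, int top) {
    return !(bottom % 2) + top / 2 - bottom / 2;
}

// Straightforward loop version to check it against.
int countEvensLoop(int bottom, int top) {
    int count = 0;
    for (int i = bottom; i <= top; ++i)
        if (i % 2 == 0) ++count;
    return count;
}
```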
Haskell is a purely functional language, while most programming languages are just imperative with some functional features. The thing with such a language is that it basically never sees any practical use outside of academia, where it is used as a proof of concept. It is a very beautiful language, however.
I'm mostly with you on this. I really like C++ though. I don't care about iota or filter much. I do like lambdas and some of the helper functions if they make the code more clear. I don't like operator overloading when it changes meaning depending on the context, it's fine if it's consistent across C++. I guess it's good we're all a little different, find the flaws in each other. XD
This has been an emotional roller coaster as a mathematician (so Haskell feels the most natural for me). First he gets mad at the type declaration thingy and I'm like "wtf programmers don't know currying?" but then he gets it and appreciates it, which is nice. BUT THEN HE SAYS "I wOulD hAndS dOwN haTE debUGGinG haskell". Bruh, everything is pure functions and type-safe. It is literally the debugging dream.
@nan0s500 My point is that if you want to debug, a pure, type-safe function is the dream: you can just test all the values you want to test without having to think about anything other than whether it returns the right values, and you can trace the error back to its exact source since there are no global variables to keep in mind.
Optimisation will typically produce more instructions when dealing with loops (unrolling essentially reduces the number of cmp and jmp instructions by repeating the loop body explicitly with incremented pointer positions or register values). If you're using library functions your assembly will be smaller, because the looping is reduced to a single call instruction (plus a few mov instructions for arguments) and the loop work is done elsewhere, unless the functions are inlined. So the number of lines of assembly does not map to performance for a single function or executable.
Furthermore, with CISC (e.g. x86) you can use complex instructions such as mul (multiply), and how you supply the source and destination data can affect how well this performs. With modern CPUs, how you feed data to certain instructions can affect how the CPU schedules them, and this can affect execution speed. So again, compilers may use a more verbose set of instructions because it helps the CPU map your code more efficiently. How do I know this? Because I code in C! Grrrrrrrr..........................
The funny thing about the C++ example is that new lines and tabs are just for our benefit anyway. You’ll notice he chose to just include a whole block in one line between the curly braces after the auto statement.
Ah, C++; the language everyone loves to hate. True, there are a lot of things that come out of the standards commitee that make you go "what?!". On the other hand, it's ~sad~ funny to see how little people know about the language before criticizing it.
I'm learning Racket here at university, a functional language similar to Haskell, and debugging is a nightmare. Recursion is such a prominent feature of a functional language, and it is very efficient, but also very hard to debug. We have a stepper to step through the code, and it always explodes in lines of code being executed on each recursive call, making it hard to read and actually debug lol.
don't know about Racket, but in my experience with Haskell the type system really helps. It was very difficult at the beginning, but later it isn't an issue, that is, with modularity and clarity about what your functions are doing in your loops
Coming from microcontrollers, I get the C code, and the rest looks horrible to me, probably because of syntax I don't see anywhere else. But in big-O terms, are any of these better than C? Will any of them perform better than the C solution, or worse? P.S. English is not my native language, so apologies.
Honestly, the biggest issue with the C solution is that it will blow up if you make one of the variables INT_MAX. This can easily be resolved by just checking for that; otherwise none of them will perform all that much better.
Hey, one question about the Rust code. Wouldn't it handle the case bottom == top differently than the C version? Wouldn't calculate(4, 4) return 4 in the Rust version and 0 in the C version?
You can't always just rename to .cpp. Things like the lack of an implicit cast from void* break malloc, realloc, etc. There are some other oddities (with function pointers, IIRC) as well, but that's the primary one.
Yeah, they fucked up the C++ standard and compilers so they're no longer synced with the C standard, but in former times almost every C program was also a C++ program, and C++ was just compiled down to C before being compiled to assembly.
The Int -> Int -> Int stuff makes more sense when you learn currying. If we had, say, myMultiply x y = x * y, then myMultiply x would give you a function that takes one parameter, y, and multiplies it by x. That is, it is equivalent to \y -> myMultiply x y. In e.g. Python we would have to write something like lambda y: myMultiply(x, y). It comes from the maths roots of Haskell: things like, if f : X → Y is a function and g : Y → Z is a function, then (g . f) is a function (g . f) : X → Z where (g . f)(x) = g(f(x)). Haskell is a bit of a 'programming language for mathematicians' at times.
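A rough C++ analogue of the currying described above, sketched with a lambda that returns a lambda (the name mirrors the comment's myMultiply example; this is illustrative, not how Haskell actually implements it):

```cpp
#include <cassert>

// myMultiply(x) doesn't multiply yet: it returns a closure still
// waiting for y, just like partially applying myMultiply x in Haskell.
auto myMultiply = [](int x) {
    return [x](int y) { return x * y; };
};
```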
Readable really is a case of what you're used to looking at. I'm fairly used to mapping and filtering over collections in the kind of manner shown around 18:31, so to me it is pretty readable. I find it nice to look at. I would prefer the return early approach to the branching too, but the rest reads easily for me. I read the accumulate body something like this: ACCUMULATE over a RANGE from bottom to top + 1, and then keep the entries that are even.
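That "accumulate, keeping the evens" reading can be sketched with std::accumulate and a lambda (function name is mine, and the range is pre-built here rather than generated lazily as in the video):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Start the sum at 0 and only add n when it is even.
int sumEvens(const std::vector<int>& range) {
    return std::accumulate(range.begin(), range.end(), 0,
        [](int acc, int n) { return n % 2 == 0 ? acc + n : acc; });
}
```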
Can someone explain why curly brackets on new line is so bad? My professors at engineering school all taught me to do that as best practice with c++. I’m genuinely curious, or is it just a hot take?
iota comes from APL, where it means "indices", and generates all the numbers from the index origin (0 or 1) up to the number of indices indicated by the argument.
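For what it's worth, C++ borrowed the name for std::iota in <numeric>; a small sketch (the helper name is mine):

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// std::iota fills a sequence with consecutive values starting from a
// chosen origin, echoing the APL "indices" idea described above.
std::vector<int> firstN(int origin, int count) {
    std::vector<int> v(count);
    std::iota(v.begin(), v.end(), origin);  // origin, origin+1, ...
    return v;
}
```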
once you learn haskell, the code reads almost like poetry, though, the problem is learning haskell… haha gotta say, it’s a bit difficult, but it expands your mind like you never thought it would
Learning Haskell to expand your mind is like diving in to flat earth communities to expand your mind. Like, yeah, you learned new things, but those things are all useless.
@@enzoqueijao I'd suggest that the rust implementation demonstrates that the very few loosely inspired by functional programming concepts are not functional in nature, they're just more often found in FP (ptooey) due to the earlier industry forming around OO (ptooey). One might view compiler level support for tagged unions (Algebraic Data Types) as an obvious step forward from C upon acknowledging that OO (ptooey) is bad, and the same goes for support for functions as first class citizens. Rust also borrows concepts more frequently found in OO (ptooey), such as tight grouping of state and behaviour, as well as their version of encapsulation.
Learning Haskell makes Rust feel that much more clunky lol. Especially when it comes to state and error handling. Though Rust is a lower level language so that is to be expected.
Me at university: I love C++, it gives me so much control! Sure some concepts like pointers are hard for some people, but they're fundamental to programming. Me seeing actual C++ production code: Yeah no I'll never use this language again wtf
@olaniyanayodele5986 depends on your inclination; I don't think the order alters the product here. Maybe learning Rust first might be less daunting, and then later learning Haskell as a step up on your game.
16:12 couldn't you put a function that returns an int but also logs for you into the ternary expression? That allows for more adaptability, and you can just go-to-definition now, so you can focus on your logging process rather than refactoring the current function if it works.
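A minimal sketch of that suggestion, assuming a hypothetical logged helper (name and signature are mine) that passes its value through while writing to stderr:

```cpp
#include <cassert>
#include <iostream>

// Pass-through logger: returns the value unchanged while noting it on
// stderr, so it can sit inside a ternary expression.
int logged(const char* tag, int value) {
    std::cerr << tag << " -> " << value << '\n';
    return value;
}
// hypothetical usage inside the loop:
//   sum += (i % 2 == 0) ? logged("even", i) : 0;
```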
the reality is that i want to make _good_ react content. a 14 minute video -> 20 minute video == very little additional content. i would feel not so happy about this. either i can add with my experience or its not good! (at least that is what i am thinking these days)
@ThePrimeTimeagen hey prime! really love that you put the effort in for your reactions to be more valuable than most reaction content (wayyyyy wayyy more than others). it's noticeable that you don't just "nod your head" with a word or two, but actually add something to what you are reacting to. really love it and thank you, keep those vids coming!
Looking at the generated assembly code he shows, there are some interesting observations I made. Due to the way he used godbolt the compiled rust code didn't even include the calculate function since the results were able to be calculated at compile time. The C and C++ code also calculated the result at compile time, but still included the calculate function. O3 produced more instructions than O2 because it made an attempt to vectorise the code. Using std::accumulate prevented the C++ compiler precomputing the result of calculate(5, 12) when using O2, although it still managed it when using O3. Trying to compare the speed of languages by looking at the number of generated instructions is flaky at best, but even worse when the code examples used can be trivially precomputed.
I liked the cpp refactor, the iota naming is weird though, a `std::view::range` or something would be way better anyway I don't do cpp since school so whatever lol
iota is just a convenience method and views is just a sublibrary of the ranges library. With C++20 and now C++23, there's probably a hundred different ways to write the same loop. :P
From my 2 days of Haskell experience (so I'm probably wrong), the calculate :: Int -> Int -> Int line is declaring that the arguments to the subsequent definition calculate = … will be integers (and therefore not strings or booleans), and that it returns an integer.
Not really. The reason O3 blows up the instruction count is because it unrolls loops more aggressively; and simply executing one instruction after another is gonna be faster than jumps
Maybe it is intentional, but it bothers me a bit that no one mentions that you don't even need the loop for this problem. You can solve this with algebra.
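For the record, an algebraic sketch of the sum of evens via the arithmetic-series formula (names are mine; only considered for non-negative inputs, since C++ division and modulo truncate toward zero):

```cpp
#include <cassert>

// Find the first and last even values in [bottom, top], then apply
// the arithmetic-series formula: n terms averaging (first + last) / 2.
long long sumEvensClosedForm(int bottom, int top) {
    long long first = bottom + (bottom % 2);  // round up to even
    long long last  = top - (top % 2);        // round down to even
    if (first > last) return 0;               // no evens in the range
    long long n = (last - first) / 2 + 1;     // how many evens
    return n * (first + last) / 2;
}
```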
$ in Haskell is an operator that takes the function on its left side and applies it to the expression on its right side. It is often used to avoid parentheses.
Don't program in Rust (want to learn). I'm a C# dev, and that Rust code is 100% readable. We have virtually the same things in C#, so that helps, but any developer should be able to look at that and understand what it's doing regardless of their primary language.
@ThePrimeTime I love your content and very much appreciate these videos, and I would love to watch the whole VOD. I have a request: can we get VODs of every stream from now on, or however you want?