From 2018 to 2019, this channel focused on showing solutions to competitive programming problems. In 2020, I transitioned to focusing more on programming languages and showing how to solve problems in different languages. I also upload prerecordings of the Programming Languages Virtual Meetup, as well as livestreams.
This is all good stuff but the fact that you need to spend so long justifying using glyphs tells you why they are the wrong choice. And typing words which turn into glyphs is very clever but just makes it worse! Just stick to the words!!
This is the thing that bothers me about Haskell: imagine explaining the code to someone, and it comes down to a 5-minute talk about just 3 ASCII characters that do something which could be done with just one extra function 😂😂
Hi man! I'm back here after a couple of years away and I see you completely shifted the channel. It's not about algorithms anymore but about these esoteric languages. Please go back to C++ and algorithms, or "fork" the channel; a lot of ppl here are hostage to the previous content and I don't think that's fair.
Hey, I want to thank you a lot for all of your incredible videos. Recently, while watching them, I got the idea to make a stack-based programming language. Here is a solution I made for this problem in my new language GRP (grape, totally did not steal APL's naming scheme):

MaxParenDepth :: {$> . '(' = {if , 1 else ')' = {if -1 else 0} }} {&> 0 /=} {<# +} {<.> =+=}
:: "1+(3-(4*6)+(7/(2-3))-1)" MaxParenDepth |<<

Here is a version without the custom operators and with documentation:

MaxParenDepth ::
{Map . '(' = {if , 1        ; Maps over the string/char array, and returns 1 if the character is equal to '('. The comma ignores the duplicated argument (current character) to make this solution point-free
 else ')' = {if -1 else 0}  ; If the character is equal to ')', return -1, else 0
}}
{Filter 0 /=}               ; This part filters out the elements that are equal to 0 (keeps 1 and -1 only)
{ScanLeft +}                ; Scans left with the plus operation
{Reduce Max}                ; Self explanatory
:: "1+(3-(4*6)+(7/(2-3))-1)" MaxParenDepth Print  ; Calls the function and prints the result
It makes sense to terminate the contains algorithm early if a prime greater than the number being looked up is seen. Like this:

template <std::ranges::input_range R, typename T>
constexpr auto contains_early_terminate(R&& range, const T& value) -> bool {
    for (const auto& elem : range) {
        if (elem > value) {
            return false; // Terminate early if an element greater than the value is found
        }
        if (elem == value) {
            return true; // Return true if the value is found
        }
    }
    return false; // Return false if the value is not found
}

constexpr auto is_prime(int n) -> bool {
    // return std::ranges::contains(primes, n);
    return contains_early_terminate(primes, n);
}
This came out of a LeetCode question for which the test cases are not known at compile time, so in that context constexpr would only be useful to generate the prime table rather than hardcoding it as literal values. For that case, the most interesting solution is the 100-bools one. By stuffing the bools into individual bits of a 128-bit SIMD register, you can get them out again with a short instruction sequence of shifting, masking, and moving to a general-purpose register. Presumably C# would have the advantage there with its native SIMD types, although we do have `std::experimental::simd` and various non-standard libraries and architecture- and compiler-specific intrinsics.
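The 128-bit SIMD extraction is architecture-specific, but the same packing idea can be sketched portably with two 64-bit words (a sketch of the idea, not the commenter's actual code; names are made up):

```cpp
#include <array>
#include <cstdint>

// Pack the 100 is-prime flags for n in [0, 100) into two 64-bit words,
// computed at compile time via trial division.
constexpr std::array<std::uint64_t, 2> build_prime_bits() {
    std::array<std::uint64_t, 2> bits{0, 0};
    for (int n = 2; n < 100; ++n) {
        bool prime = true;
        for (int d = 2; d * d <= n; ++d)
            if (n % d == 0) { prime = false; break; }
        if (prime) bits[n >> 6] |= std::uint64_t{1} << (n & 63);
    }
    return bits;
}

constexpr auto prime_bits = build_prime_bits();

// Lookup is exactly the shift/mask/move sequence described above.
constexpr bool is_prime(int n) {
    if (n < 0 || n >= 100) return false;
    return (prime_bits[n >> 6] >> (n & 63)) & 1;
}
```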
Scala 3 can run any code at compile time. In fact, I think it provides three different ways of running code at compile time, although one of those, the type system, is roughly decidable rather than arbitrary code.
Smullyan and Priest write some of the most accessible logic books. I don't study much in this area of logic, but I'd definitely pick through the Smullyan text if I wanted a quick comprehensible dive. Good recommendation.
This should work too:

int calculate(int bottom, int top) {
    if (top < bottom) return 0;
    if (top < 0) return -calculate(-top, -bottom);
    if (bottom < 0) return calculate(0, top) - calculate(0, -bottom);
    bottom -= 2;
    bottom += bottom & 1;
    bottom >>= 1;
    top >>= 1;
    bottom *= bottom + 1;
    top *= top + 1;
    return top - bottom;
}

30 instructions with -O2
I'm kinda stupid, it turns out; you can simplify to

int calculate(int bottom, int top) {
    if (top < bottom) return 0;
    if (top < 0) return -calculate(-top, -bottom);
    if (bottom < 0) return calculate(0, top) - calculate(0, -bottom);
    bottom = (bottom - 1) >> 1;
    bottom *= bottom + 1;
    top >>= 1;
    top *= top + 1;
    return top - bottom;
}

Obviously the 2nd and 3rd statements can be left out if the numbers are always positive. Anyway, it's the same length in assembly since the compiler is black magic.
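Assuming the intended semantics are "sum of the even integers in the inclusive range [bottom, top]" (the comment doesn't state this explicitly), the closed form works because 2 + 4 + ... + 2k = k(k+1). A brute-force cross-check sketch:

```cpp
// The simplified closed form from the comment above, assumed to compute
// the sum of even integers in [bottom, top].
int calculate(int bottom, int top) {
    if (top < bottom) return 0;
    if (top < 0) return -calculate(-top, -bottom);
    if (bottom < 0) return calculate(0, top) - calculate(0, -bottom);
    bottom = (bottom - 1) >> 1;  // count of positive even ints below bottom
    bottom *= bottom + 1;        // 2 + 4 + ... below bottom, via k(k+1)
    top >>= 1;
    top *= top + 1;              // 2 + 4 + ... up to top rounded down to even
    return top - bottom;
}

// Naive reference to check the closed form against.
int brute(int bottom, int top) {
    int s = 0;
    for (int i = bottom; i <= top; ++i)
        if (i % 2 == 0) s += i;
    return s;
}
```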
When will people just accept that functional programming is a great way of taking clear and concise code and completely murdering it, obfuscating its meaning for everyone else in the process? Just because someone doesn't like the "normal" way of doing things. Yeah yeah, it's stylish because you are the special one doing it; I don't care. Just write the shortest version that uses the most basic tools and avoids std functions where they aren't really required for anything.
You could also use the shape directly rather than just one side length, i.e. instead of =⌜˜↕≠ you can use =´¨↕≢. Downside: it would no longer error out for rectangular matrices, but would check the diagonals from the top corners.
Python can be made even more intuitive:

def maximum_difference(nums):
    differences = [nums[j] - nums[i]
                   for i in range(len(nums))
                   for j in range(i + 1, len(nums))]
    return max(differences) if differences else -1
I was looking for this upload, but I had to swap to the Live tab in your channel, rather than my subscription feed. Thanks to Kaikalii for making Uiua, and Code Report for putting this podcast together!
The solution I came up with was first noticing that, instead of removing an element, we can just replace it with 1 (neutral element of the multiplication). This can easily be done using the "@" operator. The solution becomes (for example): {(⍳≢⍵){×⌿1@⍺⊢⍵}¨⊂⍵}
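The same "replace with 1 instead of removing" trick carries over outside APL, since 1 is the identity of multiplication. A C++ sketch of the idea (function name made up; O(n²), mirroring the APL expression rather than optimizing it):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// For each index i, return the product of the array with the element at i
// replaced by 1 -- equivalent to the product with that element removed.
std::vector<std::int64_t> products_excluding_each(const std::vector<int>& xs) {
    std::vector<std::int64_t> out;
    out.reserve(xs.size());
    for (std::size_t i = 0; i < xs.size(); ++i) {
        std::int64_t p = 1;
        for (std::size_t j = 0; j < xs.size(); ++j)
            p *= (j == i) ? 1 : xs[j];  // "replace with 1" instead of removing
        out.push_back(p);
    }
    return out;
}
```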
You know, it's very easy to forget that you can use namespaces like that. [That makes using ranges and chrono significantly easier.] Well, I mean the calculations do happen; they just happen at compile time instead.
I like how dynamic languages are trying to be static, like Python type hints and TypeScript, while static languages are trying to be dynamic with var and removing types from definitions.
I would like to see how to enact side effects from these terse array languages, more inline effects than just the implicit return at exit. Like, in order to actually do something with all this efficient syntax, have them run perhaps indefinitely like a program loop, incorporating input from one or more streams of unknown termination.
I’m a stickler for verbosity. I don’t like your Python solution because I’d rather have a much longer but easier-to-read function, yes, even if it’s less performant.
Interesting because I would rank it… exactly opposite. I like the readability of imperative code and despise whatever that 4 character mess is. Er, no offense
Hi! Thanks for the great demonstration of what constexpr does by showing the resulting assembly code! This is the first time the functionality of constexpr has finally sunk in for me. By "Does constexpr make is_prime O(1)?" I totally meant "will it be handled at compile time?". I should have been clearer about my intention so as not to confuse other commenters, but you got it right anyway. ^_^
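One easy way to confirm the "handled at compile time" part is a static_assert, which the compiler must evaluate in a constant expression. A minimal sketch with a trial-division is_prime (not necessarily the one from the video):

```cpp
// A simple constexpr trial-division primality test.
constexpr bool is_prime(int n) {
    if (n < 2) return false;
    for (int d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// static_assert forces compile-time evaluation: if these compile,
// the calls were computed by the compiler, with no runtime cost.
static_assert(is_prime(97));
static_assert(!is_prime(100));
```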
Damn! I got 1.18 right the first time, but because of a syntax error (I didn't define a double procedure, I was doing (* 2 A) because I'm lazy) and I named my main procedure "*", I was getting an infinite loop. Syntax errors will get ya, eh. Thanks for the videos! I'm going through each problem one by one and watching your videos after I complete them!