One second to compute the largest Fibonacci number I can 

Sheafification of G
19K subscribers · 352K views

Published: 14 Oct 2024

Comments: 1K
@SheafificationOfG · 2 months ago
Wow, this really got a lot of attention... thanks everyone!! I'd normally engage with the comments directly, but I'm about as efficient as the naïve Fibonacci algorithm at that sort of thing. Since there are some trends in the comments, I figured I'd at least address the most common questions/concerns that I came across:
1. *Runtime of the "linear" algorithm.* I swept the detail under the rug (didn't seem like the right time, but hindsight is far acuity), but it's explained a bit more in the definitely-not-hard-to-find greyed-out paragraph at 10:53. Briefly, the linear algorithm is O(n^2), where n is the *index* of the Fibonacci sequence, since the nth Fibonacci number has O(n) digits thanks to Binet's formula!
2. *Colour palette.* Classic "works on my machine" moment; the colours looked a *lot* better on my computer before the video got uploaded to YouTube. Sorry about that! I won't be using the same colour scheme in future videos; lesson learned. You can still read the actual source code at github.com/GSheaf/Fibsonicci
3. *Choice of number base.* Speaking of source code, there are some comments regarding my choice of base-256 for my big integers. Just to clarify, I only made this restriction when dealing with Fourier transforms, with the justification being that double-precision floats wouldn't be able to handle larger bases. For the other, simpler algorithms, I used larger bases! This is summarised in the README for the source code. Most algorithms use base-2^32 (so that I could cast to 64 bits to do digit-wise products), and the "linear" algorithm uses base-2^64.
4. *Avoiding precision errors.* Many people mentioned the number-theoretic transform as a correction to the FFT that doesn't suffer from the precision error. This would be a natural next step, at the cost of having to figure out a way of getting a sufficiently large prime p that is equal to 1 modulo the length of the sequences being convolved (a headache I didn't want to get into after 25min of video). Alternatively, you can implement "adjustable fixed-precision floats" to account for this.
5. *Binet's formula* doesn't render this problem "solved": how do you compute phi^n?
6. *Memoisation* is *not* a typo, and I'll die on this hill.
Anyway, definitely enjoying reading all of the comments here!
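Point 1 can be seen directly: the iterative algorithm performs n big-integer additions, and the k-th summand already has O(k) digits, so the total work is O(n^2) digit operations. A minimal Python sketch of such a "linear" algorithm (illustrative only; the video's actual implementation is C++ with large limbs):

```python
def fib_iter(n: int) -> int:
    # Naive iterative ("linear") Fibonacci: O(n) big-integer additions.
    # Each addition near the end costs O(n) digit operations, since the
    # k-th Fibonacci number has O(k) digits, so the total is O(n^2).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```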
@somatia350 · 2 months ago
Hello! I really like this video, but some of the concepts are beyond what I've learned. What would you recommend looking at first to get a better understanding of the video?
@tolkienfan1972 · 2 months ago
@SheafificationOfG Are you sure you can't go bigger than base 256 for the Fourier algo? Doubles have 51-bit mantissas. 256 is 8 bits. Memoisation is correct
@SheafificationOfG · 2 months ago
@somatia350 I think it'll depend on what concepts you want to go into in more detail, but a safe bet is probably The Book on algorithms (i.e., Cormen, Leiserson, Rivest, Stein). Not sure what it says regarding Fourier, but the book is excellent for giving you all the foundations in this kind of stuff, and you can build from there pretty easily.
@SheafificationOfG · 2 months ago
@tolkienfan1972 The reason for reducing the base to 2^8 is to ensure that the mantissa is large enough to hold the (implicit) *sums* of digits in the underlying convolution. Decreasing the digit size allows for larger sums. If I were to use 16-bit digits, I might see FFT break down after only the 47000th Fibonacci number, based on the same rough calculation in the video (granted, this might be too pessimistic a bound). On the other hand, if I used 4-bit digits, I would be able to take the computations much further (past the 48 millionth Fibonacci number)... if I could compute that far.
@tolkienfan1972 · 2 months ago
@SheafificationOfG If you add n 16-bit (16 to hold the products) numbers you need ceil(16+lg(n)) bits. 51 - 16 is 35. That's about 32 billion limbs. Did I make a mistake?
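The bound under discussion can be redone in a few lines. The sketch below assumes a 53-bit double mantissa and that what must fit is a sum of up to n *products* of two digit-sized limbs (each product takes 2*digit_bits bits, and summing n of them adds about lg(n) more). This is also where the "51 - 16" arithmetic above goes astray: products of 16-bit limbs are 32 bits wide, not 16.

```python
def max_safe_limbs(digit_bits: int, mantissa_bits: int = 53) -> int:
    # A convolution entry is a sum of up to n products of limbs.
    # Each product needs 2*digit_bits bits; summing n of them adds ~lg(n).
    # Require 2*digit_bits + lg(n) <= mantissa_bits, i.e.
    # n <= 2**(mantissa_bits - 2*digit_bits).
    return 2 ** (mantissa_bits - 2 * digit_bits)

# 8-bit limbs leave ~2^37 safe limbs; 16-bit limbs only ~2^21.
```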
@karlll3321 · 2 months ago
This video is 25 minutes, not one second
@landsgevaer · 2 months ago
Yeah! If I had 1 second I would shout 89, not turn on a computer and start making a video.
@kingki1953 · 2 months ago
The main problem is finding the best algorithm to calculate Fibonacci numbers in one second
@PC_Simo · 2 months ago
@thekatdev6007 Savantism. That's where it's at.
@tavinyo2 · 2 months ago
Lies
@toricon8070 · 2 months ago
it's in slow motion so we can follow along
@Ganerrr · 2 months ago
diamond medalist: storing the number in code and printing it
@enderfun2852 · 2 months ago
Let's all just appreciate that this man used the DFT, FFT, Binet's formula, Karatsuba multiplication, linear algebra, complex numbers, and Galois groups just to compute some Fibonacci numbers, whereas SIMD just left the chat
@asdfghyter · 2 months ago
SIMD can only give a constant factor improvement though
@pumpkinhead002 · 2 months ago
@asdfghyter True, but that doesn't mean it's going to be slower. The problem statement is how many can be calculated in 1 second, not which algorithm has the most efficient computation logic (although that is what the video is about). Technically a SIMD or GPU solution could be faster even with a naïve implementation
@asdfghyter · 2 months ago
@pumpkinhead002 Not with the most naive solution, no. That would be very impossible, since it's exponential. With any of the better algorithms it could indeed be faster though, as long as it's at least polynomial. Some of the last steps only gave improvements by a constant factor, so those might very well be surpassed by a SIMD or GPU implementation. Though it's not completely obvious how to parallelize this problem, as the key part of the definition is a recursion. The main component that is parallel is the basic arithmetic, which is basically inherently SIMD already, but SIMD might be used to make bigint implementations faster. (And of course the matrix operations, which I forgot when first writing this.)
@mtarek2005 · 2 months ago
@pumpkinhead002 I'd love to see GPU-parallelized multiplication
@BonktYT · 1 month ago
@asdfghyter The fact that the most naive solutions are exponential still does not mean they can't be faster within one second after a constant speedup from SIMD; it is unlikely in this case given the large problem size, but not impossible. You don't understand asymptotic performance measures.
@diskpoppy · 2 months ago
My mind was wandering towards memory management and SIMD... but this is a math channel, and like you said - mathematicians can't program!
@janisir4529 · 2 months ago
@diskpoppy this needs a CUDA implementation somehow
@tolkienfan1972 · 2 months ago
Find the right algo first. The rest is linear speedups.
@janisir4529 · 2 months ago
@tolkienfan1972 Certified "mathematicians don't know how to program" moment (1:55). You are going to run out of memory before your slightly-better-scaling algorithm catches up in speed to one that was actually written to utilize the computer well.
@diskpoppy · 2 months ago
@tolkienfan1972 You'd be surprised how much speed-up and overall efficiency you can gain when you're conscious of things like memory allocations, cache locality, and the hardware in general (which the program must run on in the end, after all). The linear speed-up tends to be several orders of magnitude. Furthermore, there are absolutely cases where low-level details can affect the time complexity itself. On the flip side, even when an algorithm has a lower order, the constants left out of the big-O notation can make it much worse for any inputs of interest. And I'd argue that actually implementing the thing and analysing it can absolutely help with coming up with a better algorithm, especially if you find out all the redundant things a given program is doing.
@luigidabro · 2 months ago
SIMD doesn't implement a carry bit
@michaelearnest1983 · 2 months ago
This video is extremely refreshing! I've seen people claim that you can compute Fibonacci numbers in O(log n) time, because they say that arithmetic operations take constant time, and there are only O(log n) operations. This approximation is often useful, but in the Fibonacci case, you cannot discount the added cost of adding/multiplying large integers. The way you showed the actual runtime increasing with graphs really sells this point.
@Avighna · 2 months ago
I think people are generally talking about the n-th Fibonacci number modulo some integer when they say that.
@palmberry5576 · 2 months ago
@Avighna which is basically useless considering Fibonacci has linear memory requirements. It's similar in nature to the fallacy of O(1) hashmaps (maximum memory access speed is O(n^(1/3)) for n bits, which has practical effects in the case of the various cache levels of a CPU and RAM)
@Avighna · 2 months ago
@palmberry5576 It is a common task in competitive programming. And besides, why is knowing the 2 millionth Fibonacci number not under a modulus useful? It's all theoretical anyway.
@palmberry5576 · 2 months ago
@Avighna I meant it is useless to talk about modulo some integer, considering the Fibonacci numbers' digit-length grows linearly with n
@QuadfishTym · 2 months ago
@palmberry5576 Can you elaborate on the n^(1/3) result? Where does that come from?
@denizgoksu9868 · 2 months ago
This reminds me of the time our discrete math course had a quiz that asked us to "compute f_300" and I, naive and brave, unironically tried doing it by hand. I was and still am pissed off to an unprecedented degree that "compute" apparently meant "express the general term of the sequence as a linear combination of exponentials and substitute 300 into the free variable without doing any reduction" - they could have just told us to do that, yk
@landsgevaer · 2 months ago
I hope you knew how to use f(2n) = f(n+1)^2 - f(n-1)^2 at some point... 😉
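That identity (together with companions like F(2n+1) = F(n)^2 + F(n+1)^2) is the basis of the fast-doubling method. A minimal Python sketch (the helper name is mine; it uses the equivalent form F(2n) = F(n)·(2F(n+1) − F(n))):

```python
def fib_pair(n: int):
    # Fast doubling: returns (F(n), F(n+1)) with O(log n) big-int multiplies,
    # using F(2k) = F(k)*(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2.
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)
    c = a * (2 * b - a)   # F(2k)
    d = a * a + b * b     # F(2k+1)
    return (d, c + d) if n & 1 else (c, d)
```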
@denizgoksu9868 · 2 months ago
@landsgevaer Proving it and using it was my original strategy, but the computation part was left half finished as I handed the paper in. I also attempted it after the fact on a blank sheet and figured it would have taken far too much time at that point anyway. But after this whole ordeal I am now less naive than I used to be regarding how computations scale as numbers grow
@nikplaysgames4734 · 2 months ago
Currently taking a discrete math summer course; our textbook linearized the recursive Fibonacci formula, it looked very complicated, can't imagine doing that on a test lol
@reddmst · 2 months ago
LMAO, I'm pretty sure the TA showed your paper to everyone in their lab and they had a ton of laughs about it xDD
@dank. · 2 months ago
As a Russian, I don't blame you for thinking Karatsuba is Japanese
@SheafificationOfG · 2 months ago
All I had to do was google it smh
@dank. · 2 months ago
Just actually finished watching it - great video!
@filo8086 · 2 months ago
As a Russian, i- i thought it's Japanese too
@treint6751 · 2 months ago
@filo8086 Yeah
@_wetmath_ · 2 months ago
as an asian i thought it was japanese too
@cobo1678 · 2 months ago
Man, that "1 F for you, and 5 F's for your closest friends" joke had me cackling. Solid CS and math humour here lol
@junyong0716 · 2 months ago
i dont get it
@vari1535 · 2 months ago
i don't get it
@shortsornothing4981 · 2 months ago
Jim Rohn once said "You are the average of the five people you spend most of your time with".
@JazzyMaxine · 2 months ago
@vari1535 hexadecimal
@dinhero21 · 2 months ago
1FFFFFF₁₆ = 33554431₁₀ but why these numbers specifically?
@Danylux · 2 months ago
love how i came to the channel for set theory and stayed for computer science
@SteveBakerIsHere · 2 months ago
Noooooo!!!!! Do not use this video as any kind of teaching of computer science. This is a train-wreck as far as computer science is concerned!
@kindlin · 2 months ago
I definitely came for the computer science; a lot of the second half of the math *woosh*
@flsendzz · 2 months ago
@SteveBakerIsHere why?
@aymangani5416 · 2 months ago
@SteveBakerIsHere why?
@janisir4529 · 2 months ago
Do template metaprogramming. Technically it just prints out a number; the compilation taking a very long time doesn't matter, as the task was ill defined.
@SheafificationOfG · 2 months ago
Template metaprogramming is a pathway to many abilities some consider to be unnatural.
@janisir4529 · 2 months ago
@SheafificationOfG constexpr should also work
@JoJoModding · 2 months ago
The real answer is that you hardcode the largest number into the binary. Then the 1s time limit is mostly spent reading a number from disk and printing it again.
@janisir4529 · 2 months ago
@JoJoModding I'm already literally cheating with template metaprogramming, but even I have standards, to not sink that low.
@JoJoModding · 2 months ago
@janisir4529 I mean, it produces the same program, but just gets there faster
@stefanalecu9532 · 2 months ago
For the matrix method, you can have a 4x4 matrix derived from F(n), F(n-1), F(n-2), F(n-3). This is nice because you can express those with coefficients as powers of 2, which means you can use SIMD and process multiple numbers at the same time. You could even reasonably do this for an 8x8 matrix and get to use AVX2, but it's a tradeoff. Asymptotically nothing would change, but having a 4-8x speedup because of SIMD sure is helpful in real life. This is getting deep into the territory of optimizing big numbers (and at that point, why handroll your implementation instead of wrapping GMP?)
@janisir4529 · 2 months ago
I'm not sure SIMD would be helpful; moving data in and out of registers has quite an overhead. But the Number class should have been 2^32-based, that's basically just free speedup, because uint8_t is still done on the same ALU unit.
@SheafificationOfG · 2 months ago
SIMD would be a bit more work to set up since the number class and its arithmetic is nontrivial, but those are ideas (and yeah, reinventing the wheel is definitely never worth it unless you think you're smarter than the many people who developed GMP haha, but where's the fun in that). Also @janisir4529, while I used uint8_t for my FFT-based multiplications (for the sake of controlling the errors), everything else was done with uint32_t (the number class is actually templated)!
@janisir4529 · 2 months ago
@SheafificationOfG Okay, that makes sense.
@austinconner2479 · 2 months ago
Using Schönhage-Strassen multiplication (basically FFT over a finite field, so no using doubles and being limited by precision) and matrix exponentiation by squaring, I get the 67108864th Fibonacci number in 1.02s. Doing the same using arithmetic over the number field gets the same in 993ms, basically not improving. Pari somehow computes the 200000000th number in under a second. Would be interesting to show how this was achieved
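The "matrix exponentiation by squaring" half of that recipe is the easy part; a minimal Python sketch (illustrative, with Python's built-in big integers standing in for a Schönhage-Strassen-backed multiplication):

```python
def mat_mul(A, B):
    # 2x2 matrix product over Python's arbitrary-precision integers.
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def fib_matrix(n: int) -> int:
    # Exponentiation by squaring of Q = [[1,1],[1,0]]; Q^n holds F(n) at [0][1].
    result = ((1, 0), (0, 1))  # identity matrix
    base = ((1, 1), (1, 0))
    while n:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]
```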
@spacebusdriver · 2 months ago
Does it really make sense to compare these numbers when all calculations are (presumably) done on different machines?
@farlaxx219 · 2 months ago
@spacebusdriver If we know the specs of each processor and memory, you could probably make some kind of generic average based off the performance stats. I.e., if your processor is 2GHz and you get the 1,000,000th number, and I have a 3GHz processor and get the 1,800,000th number, we could scale these down to 1GHz on each machine: we would find you get the 500,000th number and I get the 600,000th number, thus my solution is better. In saying that, it wouldn't be that accurate, but it might give a decent estimation of performance comparison. True performance equivalence is to have some standardized machine that people could test it on. Could be a nifty website idea: you slap in your code and see how it performs compared to others.
@DjVortex-w · 2 months ago
If you want to calculate the multiplication of very large integers as fast as possible, use the GMP library. The authors have done a huge amount of work to make it as efficient as possible.
@NoNameAtAll2 · 26 days ago
and at that point you can just use the provided Fibonacci function
@TheOneMaddin · 2 months ago
Your channel is truly one of the best math channels around right now. I know, I know, opinions might vary depending on your level of math, but I can say that it's perfect for me. And you do not lie to your audience that everything is EASY and then hit them with axioms they are supposed to absorb in 5sec. You know your math and you are not afraid to show it.
@ke9tv · 2 months ago
When the lecture got to "an F for you, and F's for your five closest friends", YouTube cut to a commercial beginning "An actual letter to [advertiser]". I stuck around in the ad far too long waiting for your punchline, when I realized that it wasn't your joke, it was a practical joke from The Algorithm. I'm a computer scientist. I'm familiar with Schönhage-Straßen, and with Moler and Van Loan's "nineteen dubious ways to compute the matrix exponential," but your discussion is hilarious, in the flavour of Carl Linderholm's "Mathematics Made Difficult". Bravo!
@subclassify. · 2 months ago
you got played
@johnchessant3012 · 2 months ago
19:00 "a true measure of success is when you manage to unmake a name for yourself" LMAO
@trwn87 · 2 months ago
Yes, 😂.
@XZenon · 2 months ago
Gauss things
@SeanCMonahan · 27 days ago
I expected Euler, to be honest
@suhradpatel2322 · 2 months ago
14:37 I get it! 1FFFFFF in hexadecimal is 33,554,431.
@leglaude6211 · 2 months ago
As soon as you said the problem could be written using matrices I immediately thought "It could be a good idea to diagonalize the matrix!" and kept going crazy because you just wouldn't do it (until the end). Good video!
@itoastpotatoes399 · 2 months ago
Bro, that's what I'm saying, I was tweaking until the end. Ig it's because diagonalization is taught very early compared to the crazy shit he does with FFT, so we all know it and how fast it would be.
@Vaaaaadim · 2 months ago
For an audience that may have people with varying proficiency or knowledge of the subject matter, I think it makes sense to start simple and gradually show increasingly more sophisticated methods. A choice could've been made whether to showcase the FFT or diagonalization first, but ultimately both those ideas are used in the final result.
@jm-alan · 2 months ago
As soon as you started explaining digital multiplication, I immediately realized you were going the Karatsuba route. A few months ago I started working on an arbitrary-precision integer library (for Fun and Profit™), and spent a whole bunch of time benchmarking exactly where the crossover should be to switch back to traditional multiplication vs the crazy allocation cost of recursive Karatsuba
@ghostrider3911 · 2 months ago
Loved this video! I'm a math & CS student; I learned a lot from watching how you connected all of these different areas in math/CS to solve a deceptively simple-sounding problem! Please do more stuff like this; it's invaluable how you seamlessly showcased the usage of linear algebra, complexity analysis, complex numbers, Fourier transforms, bit/byte representation of numbers, and optimizing multiplications (and anything else I missed). I've read and studied these concepts but it was never made THIS clear to me how they could be utilized in practice in such a cohesive video. If you read this, I'm curious: how long did it take you to optimize this and get all the material for the video?
@SheafificationOfG · 2 months ago
Really appreciate the comment! I kinda threw the code together a month ago, and then did some major refactoring halfway through before making it public. I kinda kept things honest (except for the bit-reversal in the FFT implementation), so I tried not to stress myself out with fine-tuning my optimisations, and I was already aware of the algorithms I was going to use before I put the video together.
@bloom945 · 2 months ago
Man, I loved this video! Though I didn't understand much past the linear algebra, it was still interesting to see your analysis of the runtime and the possible solutions to improve it. Kudos!
@andermium · 2 months ago
Thank you for the subs at 11:40 💜! People that just copy their script have spoiled their jokes to me before
@SheafificationOfG · 2 months ago
Glad the effort was worth it!
@bingusbongus9807 · 2 months ago
i remember when and how i was taught the fibonacci sequence. it was year 4 and we were learning about sequences of numbers, and the teacher said that this is a sequence not even mathematicians could figure out until they were told it, and wrote the fibonacci sequence on the board. she gave us an attempt to figure out the pattern and no one did it
@wkingston1248 · 2 months ago
Great video, had a great time watching it. Looking forward to your next one! One piece of feedback though: dark blue text on a black background is very hard to read due to the low contrast. It was difficult to read your code sometimes.
@DiThi · 2 months ago
Not just contrast, but also video compression, where colors have less resolution (particularly pure blue and pure red).
@SheafificationOfG · 2 months ago
Yeah, the video definitely looked better on my computer ("It runs on my computer" moment). I'll be changing my choice of colours for code in the future for sure, thanks!
@BruteZ7957 · 4 days ago
I haven't been as stimulated, entertained, and educated by a video as by this one in the past 6 years. I felt like a kid again, having newly discovered numberphile and minutephysics on YouTube. Love it. Thank you so much. Love you man.
@zeFresk · 2 months ago
I have no idea how I ended up here, but this is one of the best videos I've seen. I look forward to your future videos :) I also loved your editing!
@mujtabaalam5907 · 2 months ago
You don't need to compute every Fibonacci number, only the largest - so your exponential matrix multiplication can just keep doubling for the entire second to get something huge
@Nikarus2370 · 2 months ago
If you want an easy, quick followup video: see how long it takes each other function to hit the number reached by the gold medalist. (Feel free to not calculate it with the recursive function... pretty sure we'll hit the heat death of the universe before that one gets done)
@edwardfanboy · 2 months ago
When multiplying very large numbers using an FFT, doing it over the complex numbers is slow and may also lead to incorrect results due to rounding error. Instead, do the fast Fourier transform over the ring Z/nZ, choosing n such that there exists a principal (FFT size)th root of unity.
@drdca8263 · 2 months ago
How can you pick such an n?
@edwardfanboy · 2 months ago
@drdca8263 I'm not sure of the details, but popular choices are Mersenne primes n=2^p - 1 and Fermat numbers n=2^2^k + 1. Things to look up are "number theoretic transform", "Mersenne number transform" and "Fermat number transform".
@lollo863 · 2 months ago
I thought there would be a joke about constexpr and compile-time calculations to create a constant-time result at runtime. Nice video
@SheafificationOfG · 2 months ago
Missed opportunity
@YEWCHENGYINMoe · 2 months ago
11:02 EXPANDED FIBONACCI NUMBERS INTO THE REALS
@edsaid4719 · 2 months ago
Akchually it's only expanding into the rationals 🤓☝️
@Rando2101 · 2 months ago
@edsaid4719 complex numbers:
@Luca_5425 · 2 months ago
Duuuude, great video quality! I was surprised this channel didn't have more subscribers... got me at the end, though! For sure subscribing
@aronhegedus · 2 months ago
This is a really cool video! In particular, a very obvious reason for why the vanilla "linear" Fibonacci is O(n^2) rather than O(n), which I didn't realise at first. Also, having the direct form of the nth Fibonacci number via diagonalisation is so neat! I knew the proof of it from a different kind of argument (en.wikipedia.org/wiki/Recurrence_relation), but the diagonalisation is much more intuitive. Nicely edited as well! Might have forgotten this, but I would have liked to know a bit more about the specs of your laptop
@TerjeMathisen · 2 months ago
Very nice! I spent the first 20 minutes or so waiting for the Binet formula; did not expect you to get close to the limit for FFT multiplication...
@ricpb · 2 months ago
This is essentially a computer science speedrun. Well done! Imagine having this class on the first day of computer science and then learning all the details about this masterpiece.
@sebastianmestre8971 · 2 months ago
I thought you were going to explain finite-field FFT (a.k.a. the Number Theoretic Transform) at the end. FFT can be suitably modified to work over Z_p instead of C, for certain primes p. The main requirement on p is that 2^k | p-1 for some k > log2(N), because k bounds how many times you can do the FFT trick of splitting into even and odd parts. Not only does the NTT not have precision issues, it is also usually faster because it uses half as much space and basic operations are done on integers.
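For the curious, here is a minimal (unoptimized) Python sketch of the NTT-based convolution the comment describes. It uses the prime p = 998244353 = 119·2^23 + 1 (so 2^23 divides p−1) with primitive root 3, a popular choice in competitive programming; the specific prime and root are my choice, not from the video.

```python
def ntt(a, invert=False, p=998244353, g=3):
    # Radix-2 Cooley-Tukey transform over Z_p; len(a) must be a power of two
    # dividing the 2-adic part of p-1 (here 2^23, since p = 119*2^23 + 1).
    n = len(a)
    a = list(a)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w = pow(g, (p - 1) // length, p)  # primitive length-th root of unity
        if invert:
            w = pow(w, p - 2, p)
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * wn % p
                a[k], a[k + length // 2] = (u + v) % p, (u - v) % p
                wn = wn * w % p
        length <<= 1
    if invert:
        n_inv = pow(n, p - 2, p)
        a = [x * n_inv % p for x in a]
    return a

def convolve(a, b):
    # Multiply two digit sequences exactly: no floating-point error at all.
    n = 1
    while n < len(a) + len(b):
        n <<= 1
    fa = ntt(a + [0] * (n - len(a)))
    fb = ntt(b + [0] * (n - len(b)))
    return ntt([x * y % 998244353 for x, y in zip(fa, fb)], invert=True)
```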
@mgostIH · 2 months ago
Wow, I never considered the field method at the end; it can come in useful for other stuff whenever you know you're working with specific roots! I wonder how one can generalize this to fast diagonalization of any matrix, since eigenvalues will always be roots of polynomials. I will think about it, right after liking and subscribing!
@jamilhaidarahmad4092 · 2 months ago
This has been one of the best videos I've seen on YouTube. While I'm already familiar with all of the steps you've taken, the way you merged them together neatly, while still respecting and addressing the imprecisions introduced when you use the Fourier transform, made the video a very enjoyable and elegant demonstration. By addressing the issues at the end you scratched that itch at the back of my head, and I thank you for that.
@RuslanKovtun · 2 months ago
50 lines of C code with GMP and the matrix approach (4 multiplications) with -O0 can get to the 67'108'864-th Fibonacci number in one second. Life lesson: do not rewrite highly optimized code yourself.
@luigidabro · 2 months ago
Why would you tell us your result by running it in debug mode? Try -O3 (obviously) -march=native
@janisir4529 · 2 months ago
I mean, I have managed to write a better printf than printf. It could only print out integers, but it was very fast.
@SheafificationOfG · 2 months ago
@RuslanKovtun used -O0 to really show me up, since I used -O3 and -march=native. But you're right, there's unfounded hubris in thinking you can outdo the work of well-established large-number libraries; even Python can put my golden output to shame.
@RuslanKovtun · 2 months ago
@SheafificationOfG, you and @luigidabro missed that GMP is a library that is statically linked via -lgmp and has all the optimizations in it. Yes, it is like in Python where your code just calls it, but I will argue that Python is still slow even for a single "for" loop.
@TalkingBook
@TalkingBook 2 месяца назад
This video is packed with easter eggs that are barely visible on top of a rapid if smooth delivery. I have not laughed so hard at a mathy video ever. Nor rewound so many times. Straight talk from a meme master.
@Filup
@Filup 2 месяца назад
I started watching this video shortly after it was posted, and decided to implement this all in Rust using benchmarking. I thought this would be a fun project since I am new to Rust. 6h later, and things are getting off the ground. I'll edit this comment and add a link to my repo when it is finished :)
@SheafificationOfG
@SheafificationOfG 2 месяца назад
Even if (when?) you beat my golden record, I still won't convert to rust 🦀
@Filup
@Filup 2 месяца назад
@@SheafificationOfG I started in python and C# as my main, and then got into Haskell. I'm loving Rust, but it's definitely a chore to learn. It was difficult to find the time to learn between semesters 😩
@MooImABunny
@MooImABunny 2 месяца назад
I was waiting for the Binet algorithm from the start, but the journey was actually interesting, so I stayed. I honestly never considered the fact that if you work with numbers with undetermined bit size, you'd need fft just to compute a product of two integers, that's pretty crazy
@Xanthe_Cat
@Xanthe_Cat 2 месяца назад
The Binét formula was always going to win this competition. However you perhaps ought to have started by examining the Lucas equations to find better quick relations for obtaining large Fibonacci terms.
@saniancreations
@saniancreations 2 месяца назад
Great video, one criticism though: dark blue is really not that legible on a black background, changing the colours of the code highlighting to something more contrasting (e.g. around 3:54) would help a lot!
@CarrotCakeMake
@CarrotCakeMake 2 месяца назад
FFT is usually done using number theoretic transform, rather than real numbers. And there's probably a fast way to do this using Chinese Remainder.
@DarthWho01
@DarthWho01 16 дней назад
You can still use FFT, just do it over a finite field of some kind iirc. Pretty sure you can do it in the ring mod 2^2^n+1 as well which works nicely because you can use 4 as a root of unity or something?
@CarrotCakeMake · 16 days ago
@DarthWho01 Yeah, that's basically what the number theoretic transform is, if I remember right. Though 257 is also a nice prime because it turns things into bytes. And it may be faster just to use a prime near 2^64.
@antarctic214 · 2 months ago
An alternate way to get from 4 to 2 numbers is to realize that calculating Mⁿ is the same as evaluating the polynomial Xⁿ at M. Because M² = M + 1, this evaluation factors through Z[X]/(X² − X − 1), which means you can just calculate X^n in this ring (with exponentiation by squaring) instead and then evaluate at M. The last step is the same as adding the coefficients in front of X⁰ and X¹. The same can easily be done for a general linear recurrence by calculating X^n in R[X]/(p), where p is the characteristic polynomial, and linearly mapping to R by mapping Xⁱ to the i-th starting value.
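A sketch of that trick in Python (names are mine; the linear map here sends X⁰ → F(0) = 0 and X¹ → F(1) = 1, so with X^n ≡ a + bX one reads off F(n) = b):

```python
def fib_via_ring(n: int) -> int:
    # Compute X^n in Z[X]/(X^2 - X - 1) by repeated squaring.
    # A ring element a + b*X is stored as the pair (a, b); the reduction
    # rule X^2 = X + 1 gives (a+bX)(c+dX) = (ac+bd) + (ad+bc+bd)X.
    a, b = 1, 0      # result, the constant polynomial 1
    ca, cb = 0, 1    # current power of the base, starting at X
    while n:
        if n & 1:
            a, b = a * ca + b * cb, a * cb + b * ca + b * cb
        ca, cb = ca * ca + cb * cb, 2 * ca * cb + cb * cb  # square the base
        n >>= 1
    return b
```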
@mateusvmv · 2 months ago
0:44 Aah, I see you have the Manga Guide to Statistics
@justlm228 · 2 months ago
A good optimization is to use the number theoretic transform, which is done over the ring Z_p, where p is a prime such that 2^n | (p - 1). It avoids dumb floating-point arithmetic and precision errors altogether, while also having better complexity of basic arithmetic operations, but there's a tradeoff: a big amount of taking numbers modulo p. A good optimization tactic you could use is Montgomery multiplication, in which we take numbers modulo p only at the start and at the end, while replacing all modulos in the FFT with bitwise operations. Saw this optimization in some blog on codeforces as a cool trick.
@CraigGidney
@CraigGidney 2 months ago
This python code gets past the four millionth Fibonacci number in half a second on my laptop. Normally, python would be disastrous for speed, but most of the time is spent inside CPython's schoolbook(?) multiplication doing the last three squarings. The way I wrote this code was by starting from repeated squaring of {{0,1},{1,1}} and then simplifying by realizing the intermediate matrices always had the form {{a,b},{b,a+b}}.

def fib_power_of_2(exponent: int) -> int:
    a, b = 0, 1
    while exponent:
        a2 = a**2
        b2 = b**2
        ab2 = (a+b)**2
        a = a2 + b2
        b = ab2 - a2
        exponent -= 1
    return b
@fplancke3336
@fplancke3336 2 months ago
Python integer multiplication is quite optimised: it uses Karatsuba when warranted.
@SheafificationOfG
@SheafificationOfG 2 months ago
Yeah, I very conveniently left out how easy it is to outdo my implementation using well-established large number classes like those used in CPython or GMP :^)
@CraigGidney
@CraigGidney 2 months ago
@@fplancke3336 Hey, you're right! I thought Python used schoolbook but I searched "karatsuba" in CPython's github repo and found where they switch to it. They also seem to be making decisions based on whether it's squaring instead of multiplying. They don't seem to be using Schönhage–Strassen or SSE instructions, though.
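For reference, the Karatsuba trick this thread is about, in toy Python form (the threshold and splitting here are simplified; CPython's real cutover logic differs):

```python
def karatsuba(x, y):
    # toy Karatsuba on nonnegative ints; real libraries split on machine words
    if x < 2**64 or y < 2**64:
        return x * y                   # small enough: just multiply directly
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    hh = karatsuba(xh, yh)
    ll = karatsuba(xl, yl)
    # three recursive products instead of four:
    mid = karatsuba(xh + xl, yh + yl) - hh - ll
    return (hh << (2 * m)) + (mid << m) + ll
```

Replacing four half-size products with three is what drops the complexity from O(n²) to O(n^1.585).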
@paintspot
@paintspot 2 months ago
Great video! My only qualm is the choice of "Which axis is which" on the graph. Like, the huge slowdowns in the graph at 9:45 look like huge JUMPS in progress, lol -Paintspot Infez Wasabi!
@csilval18
@csilval18 2 months ago
The analysis at 7:30 is incorrect. The sum of numbers is proportional to the length of the number, not its size, so it doesn't grow with n, rather with log(n). So the algorithm isn't O(n^2) but O(nlog(n)). Huge difference.
@landsgevaer
@landsgevaer 2 months ago
Nah, it is correct, actually. It indeed grows with n if you define n as the number of digits, like you say. Now, since the Fibonacci numbers grow asymptotically exponentially, their number of digits relates linearly to the index (roughly one extra digit every five steps), and that index is used as n in the video. So the video looks correct to me.
@williamjedig7480
@williamjedig7480 2 months ago
He also mentions this at 10:54 , but I agree it should've been mentioned earlier. Stumped me as well
@ddddddbingbong
@ddddddbingbong 2 months ago
Completely agree. I just typed out something similar and then saw your comment
@luminica_
@luminica_ 2 months ago
@@landsgevaer If you define n as the number of digits you perform 2^n sums of n digits so it is n*2^n not n^2.
@landsgevaer
@landsgevaer 2 months ago
@@luminica_ 2^n sums of digits? How is that?
@mateusvmv
@mateusvmv 2 months ago
@23:30 You can use Number Theoretic Transform to achieve the same time complexity as FFT without loss of precision. You just need to find a very large prime number (slow) and hard code it (fast).
@spinachstealer
@spinachstealer 2 months ago
Even knowing everything in the video already, the humour was quite good and I was thoroughly entertained, and seeing the runtime graphs was pleasing. Another banger from my favourite sheaf!
@Cracks094
@Cracks094 2 months ago
Man the world of Math is truly wild. I'd have done a+b=c, then b->a, c->b and repeat. Seeing you use vastly more complex things that I am unable to comprehend was just as fascinating as it was confusing to me. I learned absolutely nothing, understood even less than that and somehow, I was still entertained. Incredible.
@waiitwhaat
@waiitwhaat 2 months ago
Didn't expect a channel doing fast Fibonacci algorithms to be into osu! but somehow I'm not surprised.
@nikplaysgames4734
@nikplaysgames4734 2 months ago
Wait he plays osu as well? What’s the osu channel
@waiitwhaat
@waiitwhaat 2 months ago
@@nikplaysgames4734 My hint was 'wysi' in the chapter name for 17:27
@NTNscrub
@NTNscrub 2 months ago
I saw bionicle and had to like. Moreover, awesome video in regards to the consequences of abiding by ‘Big O’ notation for efficiency while ignoring practical limitations of memory. It also shows a good peek into the depths of optimization for beginners in the realm of coding. Thanks for the treat.
@tolkienfan1972
@tolkienfan1972 2 months ago
Ok, the last method I did not see coming. Nice job!
@jacquev6
@jacquev6 2 months ago
This is great work! I loved seeing more and more complex math theory appear to solve a seemingly simple problem faster and faster. Thank you for taking the time to produce this video and share it with us!
@fernandogaray1681
@fernandogaray1681 2 months ago
3:59 lmao best math joke I've heard lol
@blamethefranchise1473
@blamethefranchise1473 2 months ago
18:30 radiohead reference
@ptitbgdu8040
@ptitbgdu8040 2 months ago
7:26 Just some more details to explain why the number of digits of Fn is O(n): the sequence (Fn) grows like p^n, where p is the golden ratio. So the number of digits of Fn is O(log(p^n)) = O(n·log(p)) = O(n). Having this inside a linear loop gives an algorithm with a complexity of O(n²).
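A quick empirical check of the digit-count claim above (pure Python, independent of the video's code):

```python
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

# F(n) ~ PHI**n / sqrt(5), so digits(F(n)) ~ n * log10(PHI) ~ 0.209 * n,
# i.e. roughly one extra digit every five steps
a, b = 0, 1
for _ in range(1000):
    a, b = b, a + b   # after the loop, a = F(1000)

print(len(str(a)), round(1000 * math.log10(PHI)))  # → 209 209
```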
@kevinosborn3258
@kevinosborn3258 2 months ago
Dude this channel is so good Curious about parallelization 👀
@lih3391
@lih3391 2 months ago
You have to do the computations in order though right?
@luigidabro
@luigidabro 2 months ago
​@@lih3391yes
@janisir4529
@janisir4529 2 months ago
@@lih3391 I kinda want to multithread this, but I don't think it's possible. The matrix multiplication could be parallelized theoretically, but by the time starting a thread for a single multiplication becomes worth it, we no longer fit into memory.
@MPKampersand
@MPKampersand 2 months ago
I love the idea that someone would come across this as their first introduction to the Fibonacci sequence, be able to immediately understand what it means that it's a "recurrence relation," and then make it through the whole video.
@aik21899
@aik21899 2 months ago
5 minutes in, already had to flip a table on the affine joke. Great video, subbed.
@benharris8382
@benharris8382 2 months ago
This is an incredibly neat demonstration of optimisation techniques that typical programmers like myself aren't familiar/comfortable with. Great video, well done!
@Patashu
@Patashu 2 months ago
The 6 million fibbonaci number limit is just because you're using double floating point numbers, right? If you swapped to quads or arbitrary precision you could go past that then. Though that's probably a little bit too much of a rabbit hole for the scope of this video...
@FunctionallyLiteratePerson
@FunctionallyLiteratePerson 2 months ago
The whole time, I was wondering why you weren't using the closed form. Great video!
@Foxtr0t1337
@Foxtr0t1337 2 months ago
I have no idea what you are talking about at 3:47. So I HAVE TO subscribe.🤣🤣🤣🤣🤣🤣🤣
@jgd7344
@jgd7344 2 months ago
What a fascinating number, great video! Happy birthday to your dad from Australia!
@chubphd
@chubphd 2 months ago
“[It] is known as the Cooley-Tukey algorithm, so-called because these insights are due to none other than the same person who discovered the Fourier transform… Gauss.” LMAOOO
@jacobcohen76
@jacobcohen76 2 months ago
Liked it! The last solution was beyond what I knew from school. You've inspired me to start studying maths again because i haven't thought of an eigenvalue in years.
@WoolyCow
@WoolyCow 2 months ago
nah im gonna stick to just doing it the simple way if that's okay with you... also you sound weirdly similar to physics for the birds btw
@SheafificationOfG
@SheafificationOfG 2 months ago
I'll take that as a compliment! I like his content
@jameslew7269
@jameslew7269 2 months ago
you can avoid the precision issues with fft by using the number theoretic transform and a well chosen sufficiently large prime
@JulianBliss
@JulianBliss 2 months ago
what the hell this video is sick so many great jokes, and a lot to learn. I'd never have thought to use Master Theorem to analyze an algorithm like this, and I would never have guessed that Binet's formula actually would turn out to be faster in the end, given how much floating point multiplication I assumed would have to be done
@fibbooo1123
@fibbooo1123 2 months ago
The whole point was that Binet's formula uses an implementation without floating point multiplication by expressing it over a field extension (except that there IS floating point multiplication hidden in the fast Fourier transform)
@lynnwilliam
@lynnwilliam 2 months ago
As someone who did Complexity Analysis in college, I love your video. Brings me back! You're not a YT programmer, you are a computer scientist!
@JohnBukkake
@JohnBukkake 2 months ago
Whats your osu name
@doronbenhadar7583
@doronbenhadar7583 3 days ago
I think all of the FFT stuff can be done with Fourier transforms over finite fields. That removes the precision problems and the overhead of multiplying floats.
@sefron6207
@sefron6207 2 months ago
When you see it
@gilbertboys6719
@gilbertboys6719 2 months ago
when you fucking see it
@syrix5914
@syrix5914 2 months ago
This is completely above my pay grade. But I enjoyed it a lot. After the introduction of the matrix multiplication I was expecting some GPU shenanigans.
@sanoysgamingchannel
@sanoysgamingchannel 2 months ago
i dont remember what my record for 1 second was, but i did manage to calculate the 2^32nd in 5 hours and 30 minutes
@sanoysgamingchannel
@sanoysgamingchannel 2 months ago
Just checked my code again: the highest I got within a second was the 4194304th at 0.345263 seconds, and the 4194303th at 0.561188 seconds. Using both of those to calculate the 8388608th then only completed at 1.032570 seconds; I bet rerunning the code can manage that in a second if I am lucky
@RPG_Guy-fx8ns
@RPG_Guy-fx8ns 2 months ago
I would mix the linear version with a lookup table of precalculated starting points. Then you can start at the nearest 4096th value and its previous value. Also, the linear version could be sped up: you have 3 values being assigned to their neighbors, shifting data around, instead of shifting your perspective. Use 4 values, change which one you overwrite each pass, and unroll a group of 4 passes; then you are never just copying a value to its neighbor, you are always adding 2 numbers to set the oldest of the 4 values. Use more than 4 values, unroll more of it, and use SIMD to blast through the additions. Once you get extremely large values you need to change strategies, but this should be faster for anything that fits in 128-bit numbers.
@GhostGlitch.
@GhostGlitch. 2 months ago
precomputation is cheating. With infinite storage you could get the time to be just one lookup if you allow precomputation. some amount of precomp can be useful in a real application, but it makes the question uninteresting.
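A sketch of the checkpoint idea from this thread (the 4096 stride comes from the comment above; the names and bound are illustrative):

```python
STEP = 4096

# precompute (F(k), F(k+1)) at every multiple of STEP up to some bound
checkpoints = []
a, b = 0, 1
for k in range(3 * STEP + 1):
    if k % STEP == 0:
        checkpoints.append((a, b))
    a, b = b, a + b

def fib_from_checkpoint(n):
    # jump to the nearest precomputed checkpoint at or below n, then walk forward
    a, b = checkpoints[n // STEP]
    for _ in range(n % STEP):
        a, b = b, a + b
    return a
```

Whether this counts as "cheating" depends on whether the precomputation time is on or off the clock, which is exactly the objection raised in the reply.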
@Lord-Sméagol
@Lord-Sméagol 2 months ago
Here's a little problem to test your maths and programming: Discover the null-terminated ASCII string in the least significant bytes of the 13,293,882,455,155,005,192,496,327,309,753,896,264,952,698,387,015,244,135th term of the Fibonacci Sequence. If you found that too easy, find which term is required to produce "You're only supposed to blow the bloody doors off!" (null-terminated without the quotes). * Here are the first 9 digits [124,120,758, ... ] just to prove that I know the answer.
@Psi141
@Psi141 2 months ago
How?
@Lord-Sméagol
@Lord-Sméagol 2 months ago
@@Psi141 Example: the 100th term is 354,224,848,179,261,915,075, which is 13 33DB 76A7 C594 BFC3 in hexadecimal. If you only need the last 8 bytes, you can ignore the other bytes during your calculation to speed things up MASSIVELY. This is "modulo 2^64 arithmetic". I wrote a program in Visual Basic Net that uses System.Numerics.BigInteger to do the calculations in modulo 256^(length of string + 1 [for the null] ). It just takes a string and searches for the earliest term that produces the desired low bytes. It found the term for this challenge very quickly, even though BigInteger doesn't use multiple threads internally for multiplication. I did try to parallelize my search algorithm, but it doesn't branch, so that didn't help.
@Lord-Sméagol
@Lord-Sméagol 2 months ago
Last night I decided to "share my source code" in a crazy way: I removed the redundant whitespace to shrink the problem and speed up the search [The IDE will regenerate it all]. After 90 minutes of searching, my program found the: 298,160,865,648,573,042,064,827,503,271,542,205,751,395,235,936,870,060,903,857,251,779,281,661,082,527,403,626,487,334,683,400,168,959,394,599,516,907,884,789,333,110,520,815,040,876,129,986,239,059,243,803,992,540,416,080,108,847,969,967,166,653,773,995,223,756,939,328,660,593,761,867,075,938,031,627,694,333,752,037,113,642,144,784,401,910,233,209,096,247,809,392,657,060,556,276,352,335,849,519,531,516,812,997,595,374,900,091,434,328,347,292,695,036,242,287,914,162,772,658,493,665,957,462,200,034,095,386,734,140,552,051,971,035,857,485,533,666,546,316,918,588,509,053,322,099,173,208,608,448,181,946,685,636,003,723,864,840,811,032,210,346,526,524,365,445,170,993,816,749,706,175,700,970,530,127,933,542,010,662,328,795,384,478,601,108,982,322,594,120,213,081,249,029,441,907,550,808,692,834,691,534,130,317,114,514,058,697,515,543,925,807,092,093,692,845,143,167,486,014,273,836,729,028,443,095,602,323,977,843,419,997,442,706,957,408,623,170,215,922,154,649,447,416,520,914,468,948,670,728,032,145,843,542,624,634,532,656,484,509,493,342,808,692,613,705,866,266,591,008,684,375,893,231,294,388,657,854,579,900,071,695,115,101,597,892,093,017,831,435,410,201,885,445,317,196,974,173,554,423,413,999,179,297,846,555,656,584,857,346,286,239,826,369,524,579,113,155,346,835,511,192,654,963,525,325,006,998,347,033,171,458,782,352,731,784,854,126,668,652,595,651,736,381,284,500,525,482,407,891,388,836,073,507,692,462,471,444,929,981,346,780,088,965,771,269,247,009,490,279,801,490,007,736,177,761,420,038,454,327,288,910,587,971,079,699,024,448,933,599,281,122,565,969,958,545,954,876,858,592,773,407,042,790,580,208,980,625,305,445,847,743,869,200,833,693,062,716,716,398,784,464,686,464,265,559,639,356,952,824,032,874,750,279,088,810,221,128,750,050,554,888,9
80,604,945,314,799,033,949,495,336,396,085,648,568,677,398,308,703,158,743,247,989,943,003,850,286,103,821,521,773,700,894,341,223,623,027,615,902,698,860,477,958,492,953,934,421,258,207,746,228,726,308,205,085,089,417,445,372,793,681,623,317,779,273,087,940,250,865,087,525,813,325,919,857,384,014,639,496,079,930,024,012,293,805,861,766,241,276,718,594,373,439,879,516,522,444,726,093,698,312,794,382,777,357,065,421,552,130,870,625,720,457,399,591,622,793,786,194,703,126,746,136,484,030,327,773,774,629,455,005,682,618,461,084,353,927,563,500,668,859,303,229,551,108,807,523,080,227,402,434,682,550,787,167,926,557,688,414,309,788,332,054,929,701,008,377,631,863,260,122,448,321,029,790,268,986,109,535,773,535,134,077,692,214,576,123,325,198,222,579,336,656,749,060,639,452,949,208,576,607,975,854,788,979,234,028,581,525,061,095,018,235,100,543,734,886,393,927,930,521,327,556,530,730,343,442,867,331,809,392,576,450,116,886,312,065,636,002,419,417,956,380,772,713,899,706,798,513,866,498,793,910,158,419,346,251,339,151,818,944,899,733,114,330,442,780,835,598,356,690,222,733,414,265,291,112,596,885,617,846,543,391,902,697,747,210,126,587,566,831,243,585,441,792,211,352,624,732,256,451,356,519,456,313,198,011,230,617,387,661,264,418,138,555,116,399,306,675,963,694,661,271,828,387,881,696,263,590,763,300,841,609,232,375,923,223,820,857,674,108,009,724,806,871,367,112,776,651,640,837,987,921,617,832,351,551,679,224,387,725,655,146,013,999,669,089,438,951,556,946,582,883,008,618,285,077,868,689,213,044,052,229,949,946,385,547,367,230,085,952,892,318,954,123,568,012,107,815,147,496,427,250,912,782,486,634,199,615,244,963,923,448,632,576,960,040,163,046,585,780,627,168,512,484,078,881,786,693,157,107,880,746,316,242,622,398,327,802,164,533,186,608,534,750,770,267,813,438,108,395,981,200,862,857,808,084,674,102,699,387,811,787,599,937,916,927,885,175,873,290,488,561,534,822,751,848,060,109,196,505,153,948,054,425,342,680,654,313,607,957,499,514,005,673,675,444,821,046,266,543,417,6
95,820,567,846,931,456,232,176,961,708,089,934,399,587,187,624,327,247,970,877,181,809,242,974,313,040,241,042,115,966,528,228,095,213,623,781,617,344,334,214,193,923,984,364,492,612,765 th term! I didn't include a null byte to mark the end of the text; It should be evident where the listing ends, as those bytes will not be text characters. Also, it ends at "End Module".
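The "modulo 2^64 arithmetic" trick described in this thread, sketched in Python (F(100) = 0x13_33DB76A7C594BFC3, matching the example given above):

```python
def fib_low_bytes(n, nbytes=8):
    # only the low `nbytes` bytes are ever needed, so reduce mod 256**nbytes at
    # every step; under + the high digits never influence the low ones
    mod = 256 ** nbytes
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % mod
    return a

# low 8 bytes of the 100th term
print(hex(fib_low_bytes(100)))  # → 0x33db76a7c594bfc3
```

Keeping only a fixed number of bytes makes every addition O(1), which is what makes searching for a term with a chosen byte suffix feasible.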
@miguelsouto2091
@miguelsouto2091 2 months ago
6:42 yes, the reason why the graph is not linear is the big-number class that is being used, and not the summing of a number with n digits being O(n)
@glorialee-goldthorpe1007
@glorialee-goldthorpe1007 2 months ago
Love your video ❤❤❤
@Baptistetriple0
@Baptistetriple0 2 months ago
the second I saw the 2x2 matrix equation at 5:28, matrix diagonalisation immediatly came to mind. Probably PTSD from math class. I wondered when you will talk about this and I was not disappointed when it came out !
@paulw987
@paulw987 2 months ago
"I'm a pure mathematician. I don't care about the real world." That cracked me up. 😂❤😅
@yves-loic9141
@yves-loic9141 2 months ago
@7:40 some more details on where O(n²) comes from: the algorithm's cost can be estimated as O(n·ln(F(n))), since the number of digits of F(n) grows as ln(F(n)). And F(n) ≈ (golden ratio)^n, so ln(F(n)) = Θ(n), giving O(n²).
@Landee
@Landee 2 months ago
Bro i didnt understand ANYTHING
@informitas0117
@informitas0117 18 days ago
I liked when the lines moved.
@xninja2369
@xninja2369 18 days ago
You need to be both a math major and a CS major to understand this video properly. I'm bad at both: I'm a junior in math, and I don't know much C++, and the bit I do know is old-style, making it even more confusing at first 😳
@Roy-K
@Roy-K 2 months ago
Before watching: My immediate thought would be to use x86 assembly and run the following pseudocode:

int x = 0
int y = 1
while(onTheClock):
    x += y
    y += x
print(lastModified)
print(numIterations*2)

There's obviously some cleanup to be done, but essentially, just adding and storing the result repeatedly between two 64-bit registers. After finishing the video: How can I have forgotten the fast Fourier transform, and the myriad of other things that definitely did not fly over my head! How silly of me (incredible video!!)
@ConFusi0n
@ConFusi0n 2 months ago
Cool video, well edited, engaging, & informative. (the dark blue text on black is hard to read though)
@liesdamnlies3372
@liesdamnlies3372 2 months ago
A fascinating topic marred by the unreadability of the blue-on-black types and reserved words.
@mr.roblox9858
@mr.roblox9858 22 days ago
Honestly the linear non recursive algorithm was the farthest I thought for this Fibonacci algorithm. While I may not understand anything past that ( I have a very vague idea of linear algebra) right now I’m learning calculus. I think this was a really cool thought about optimization. Kinda reminds me of overclockers avoiding usb to save on cpu cycles.
@Jayensee
@Jayensee 2 months ago
Great video! Audio tip: Audacity's noise reduction tool makes it super easy to get rid of the white noise from your mic (if you want help with that feel free to reach out, it's no bother)
@loganjoy-koer5936
@loganjoy-koer5936 2 months ago
I wrote the linear program on my TI-84 Plus CE graphing calculator and it only got to around F(70):

0->A
1->B
startTmr->T
Repeat checkTmr(T):End  "Wait until first (inconsistent) second is over"
Repeat checkTmr(T)=2  "Main Loop"
A+B->C
B->A
C->B
End
Disp C
@reecetilley585
@reecetilley585 2 months ago
When I was first learning to code, and challenged myself to write a program that would calculate the fibonacci sequence, 3:42 was actually pretty much the exact program I came up with after a lot of thinking! I'm glad to hear it was actually pretty efficient
@NubPaws
@NubPaws 2 months ago
This was such an amazing video, thank you so much for making it. I have always wondered where the closed-form formula came from for the Fibonacci numbers.
@trontotoro
@trontotoro 2 months ago
in the future can you make the colorscheme for the on-screen codes more accessible? dark blue keywords like 'uint8_t', 'vector' and 'complex' against black background is hard to see. other than that, thank you for making this excellent video. edit: just caught sight of your pinned comment. thanks for the considerations :)
@thebilliestsilly
@thebilliestsilly 10 days ago
This is the least I've ever understood either a math or a coding video and I still enjoyed it thoroughly
@Лаврентьев-б9ф
@Лаврентьев-б9ф 2 months ago
if you are interested in a slightly faster version of the matrix multiplication version do some research on the fast Fibonacci transform
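What this usually refers to is the "fast doubling" identities F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)² + F(k+1)², which replace the four entries of the matrix with two numbers. A Python sketch:

```python
def fib_pair(n):
    # returns (F(n), F(n+1)) via fast doubling
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n >> 1)     # a = F(k), b = F(k+1) with k = n // 2
    c = a * (2 * b - a)         # F(2k) = F(k) * (2F(k+1) - F(k))
    d = a * a + b * b           # F(2k+1) = F(k)^2 + F(k+1)^2
    return (d, c + d) if n & 1 else (c, d)

def fib(n):
    return fib_pair(n)[0]
```

Like the matrix method it does O(log n) doubling steps, but with two big-number squarings and one multiplication per step instead of a full 2x2 matrix product.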