"As you can see, this young male stack frame is prowling through the forest of dense, tangled registers calling for a mate. Alas, with the stack currently empty, his efforts are unfortunately futile." *cue safari music*
Basically it's interesting if you like studying theoretical software. There are some interesting concepts in it, but it's utterly useless for actually writing software. This would do essentially the same thing:

#include . . .
double time;
increase(time, time);
sleep( time );
. . .
double increase( double time, double temp_time ) {
    if (temp_time <= 0) return time; /* base case so the recursion actually stops */
    time = pow(2, increase(time, temp_time - 1));
    return time;
}

It's clunky but I'm not going to bother optimizing and cleaning it up. It basically sleeps for 2^time!, or two raised to time factorial. Edit: at least as far as wasting cycles it would behave similarly
I once tried to compute a slightly bigger Ackermann call: ack(3,3) computed in an instant, but ack(4,3), I think, exhausted the call stack. So I wrote the expansion to disk, to parse the deferred chain. The whole computation, even though it was just text, created a 10 GB text file, while the smaller call ack(3,3) needed only about 15 lines of deferred chains. That gives you a feeling of how insanely fast the output grows. This is because the Ackermann function actually abstracts hyperoperation: ack(3,n) is exponentiation, while ack(4,n) is tetration. To put it simply: it is raising exponentiation itself to a power. Then ack(5,n) would be even more extreme, a pentation - raising the raising of exponentiation to a power, or just raising tetration to a power.
Sir, thank you for this. From app engineers to web developers, I’ve always felt that it is vitally important for programmers to study the mathematical properties that make our projects work. To do otherwise, to me, is like trying to be a materials engineer without having an understanding of the physical and chemical properties of your raw materials.
+Herp Derpington Actually, I still think it would terminate, because once m and n get small enough, the sign bit will flip and suddenly we're back at a positive number.
@@EmptyKingdoms For the purpose of explaining, consider this: in principle, you can represent a large number by its digits, and turn the pencil-and-paper multiplication method ('schoolbook multiplication') into a program. It can handle anything that your computer can store (280 lines of characters is like *nothing* compared to GBs of RAM). It won't be fast, but it shouldn't be too slow either. Now you can just start from 3 and keep multiplying by 2, which is again slow but shouldn't be unbearable. (You can even specialize and just write a 'multiply by single digit number' function.)

There are faster ways to do multiplication (google 'integer multiplication algorithm'), and also for exponentiation. I'm not really familiar with those, but they are pretty well researched, so maybe just consult what others already wrote. But actually, big integer implementations, or arbitrary-precision integer implementations (look ma, no precision loss in the final digits), use binary internally (bytes and words are just chunks of binary bits). 3 * 2**65533 is really neat in binary, so the real work is converting it to decimal. Again, someone figured that out ages ago - just google 'radix conversion algorithm'.

So why can't computers sometimes handle large numbers? Because really they just can't handle them *fast*. Like *real* fast. Your CPU has dedicated *hardware* circuitry to handle not-too-large integers and not-too-wild floats, so that a program which doesn't need 'weird' computation (like a very large number) ends up blazingly fast - and most programs actually do just need normal numbers.
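To make the schoolbook idea concrete, here's a minimal Python sketch (function name and layout are my own; the number is kept as a list of decimal digits, least significant first):

def double_decimal(digits):
    # decimal digits, least significant first; returns the digits of 2x
    carry, out = 0, []
    for d in digits:
        carry, r = divmod(2 * d + carry, 10)
        out.append(r)
    if carry:
        out.append(carry)
    return out

# 3 * 2**65533 by repeated doubling -- pure schoolbook arithmetic.
# (Takes a few minutes in plain Python; the result has 19728 digits.)
n = [3]
for _ in range(65533):
    n = double_decimal(n)
print(len(n), 'digits, starting with', ''.join(map(str, reversed(n[-5:]))))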
Hi Professor Brailsford! I've really enjoyed all your videos, thank you for taking the time to make them. It has been a couple of years for me since graduating, and I miss my senior-level computer science classes. Your lectures remind me of why I got into and love the field... and not for a day job
It's been 15 years that I've been working professionally in computer science, writing code and doing something useful with it, thinking that I kind of understand how it all works, from the theory of electronics to the end user using it. I'm mind-blown that some mathematician from the 1920s had already thought of all of this. Plus kudos to Mr Brailsford for explaining it like it's some basic course, with his concise explanation. Love from France
It seems that some reductions can be made in the levels of recursion by adding cases for the following: ack(1, n) = n + 2, ack(2, n) = 2*n + 3, ack(3, n) = 2^(n+3) - 3
Another optimization is to convert one of the tail-calls into a loop, which leaves us with just 1 recursive call. But thanks to your comment I realized there's a simpler way to optimize the A function: define it in terms of a single Hyper-Operation call: A(m, n) = HyperOP(m, 2, n + 3) - 3, where the 0th argument of HP is the degree or "order" of the HyperOP we want to use, the 1st arg is what we want to hyper-operate, and the 2nd arg is the number of times to apply the "sub-HP". To implement HP efficiently, define it like so:
HP(0, a, b) = b + 1 //add 1
HP(1, a, b) = a + b
HP(2, a, b) = a × b
HP(3, a, b) = a ^ b //a ** b
The general case HP(n, a, b) requires recursion, with an implicit or explicit stack. It's essentially HP(n - 1, a, HP(n - 1, a, ...)), with b copies of HP calls, IIRC. HP(4, a, b) is tetration, HP(5, a, b) is pentation, etc... Before I realized this optimization, I just used the Wikipedia table with all the closed-form expressions that didn't require recursion, and used a memoization hash table when recursion was needed. After implementing the HP optimization I realized Wikipedia already mentioned it in the "Definition" section, but I didn't understand the notation before... *bruh*
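For the curious, a quick Python sketch of that identity (hp and ack_via_hp are my own names; only sanity-checked against the small values from the video):

def hp(n, a, b):
    # hyperoperation: level 0 is successor, 1 is +, 2 is *, 3 is ^, 4 is tetration...
    if n == 0: return b + 1
    if n == 1: return a + b
    if n == 2: return a * b
    if n == 3: return a ** b
    if b == 0: return 1          # hp(n, a, 0) = 1 for the levels above ^
    return hp(n - 1, a, hp(n, a, b - 1))

def ack_via_hp(m, n):
    # the identity from the comment above: A(m, n) = hp(m, 2, n + 3) - 3
    return hp(m, 2, n + 3) - 3

print(ack_via_hp(3, 3), ack_via_hp(4, 1))   # 61 65533, both instant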
You know you're a true mathematician when you get scared by how big a number can get. Also makes you appreciate how big infinity actually is. It's bigger than any ack out there.
Well, infinity isn’t bigger than any number. Infinity is an undefined value, so it cannot be compared to numbers. For example, you may be tempted to say that 1/0 is infinity because division by a value extremely close to zero results in an extremely huge number, but it’s technically undefined. The reason why it’s crazy to try to comprehend the “hugeness” of infinity is only that for any number you come up with, you can always add one to it (or do some crazy operation) and it’s even bigger. It goes from incomprehensible to even more incomprehensible.

Just wanted to say that infinity isn’t comparable to numbers, but is more just an undefined thing that *does not exist in the set of real numbers.* (That’s an obvious fact, but with that in mind, it should also be obvious that saying infinity is bigger than a number is just as senseless as saying infinity is smaller than a number, because they’re apples and oranges.)

Moreover, since 1/x has the limit of “positive” and “negative” infinity as x approaches zero from the right and left side respectively, that should clearly indicate that there is only one infinity, and that is the state of being undefined, not existing within the set of real numbers. When 1/x shoots down toward “negative” infinity from the left and comes back down from “positive” infinity after x passes through zero, that’s the function 1/x escaping the set of real numbers to a single place, called undefined, and returning to the set of real numbers in the same direction in which it exited.

That said, infinity seems like it exists on either “end” of the real number line, so that would mean it’s just as correct to say that infinity is smaller than every single number you can think of. But if that were true, it would be a contradiction: infinity cannot be both bigger than and smaller than every real number, and hence it does not exist on either end of the real number line. The existence of infinity is entirely outside the set of real numbers.

So, finally, infinity cannot be compared to real numbers. It’s not bigger than any ack out there. It’s just that you can construct a number arbitrarily large, big enough to be bigger than any ack out there, and that’s what is mind-boggling. :)
Computerphile is such a wonderful treat for me, thank you so much Brady and all of your interviewees for the time and effort putting this together (numberphile as well!). You are doing a service to all mankind, you deserve a trophy!
g64 is Graham's number (see one of Brady's other videos). I first saw Ack(g64, g64) in an xkcd comic, though. So you're looking at a number created through superexponential means, fed into a function that has superexponential time complexity.
I've been a jobbing business-focused programmer for over thirty years. When I was at college, recursion was kind of worshipped by my tutors - 'oh, look at the elegant code,' they would say. When I got into the business world, the number of times recursion was the best solution was about twice in thirty years!
You guys never sorted anything? I would assume most sort functions are recursive (of course I could be totally wrong - but then again I could be totally right!). Or do you mean you never had to write any recursive code?
Strangely, no, because whenever I've had data it's been inside a database, so you get the database to sort the data before you get it out. Or if you get it into a webpage, you use methods from a library like jQuery DataTables to handle the sorting for you. I've never needed to get my hands dirty sorting. In fact I remember doing all that sorting stuff at college and pretty much never having to use it, just like I never used truth tables or Karnaugh maps!
Steve Gould haha yeah - it's the curse (?) of software engineering: learn the nuts and bolts, but unless you plan to be an expert on, say, sorting, you are better off using someone else's sorting routines - so why learn the nuts and bolts in the first place? My answer would be: for job interviews! lol! cheers!
Spot on DjDedan, and in fact some of the most successful 'programmers' I know hardly write a line of code. They are just very good at cobbling together solutions from other people's work.
Minox sta Some would argue that summing the natural integers to infinity would be an integer as well... : ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-w-I6XTVZXww.html
Minox sta Summing positive integers is not supposed to give a negative number. But 1 + 2 + 3 + 4 + 5 + ... = -1/12 (see Numberphile). So if string theory can incorporate that, why not?
You could try writing a memoized version of this code, where it remembers the result of a previous computation and just plops that result in so it won't have to spend time working it out again, cuz it seems to me that's what's slowing it down so much: it has to go through the same set of computations again and again. EDIT: PEOPLE, before commenting on why it wouldn't work, please read the other comments! THEY MAY ALREADY HAVE COME TO THE SAME CONCLUSION! Please don't waste precious keystrokes on repeated information, thank you
Nikolaj Lepka As far as I know your approach is known as "memoization" (not backtracking), so for the people saying it won't work: it will. It will indeed make it faster.
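For concreteness, here is roughly what that memoized version might look like in Python (a sketch; the recursion-limit bump is needed because the call chain for ack(4,1) is about 33,000 frames deep, and on some systems you may also need a larger OS stack):

import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)   # the memoized call chain is still very deep

@lru_cache(maxsize=None)
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 3))   # 61
print(ack(4, 1))   # 65533 -- quick with the cache; ack(4, 2) still blows the stack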
9:45 I reproduced the function in Python and believe I got it right as it gives me the same results, except for ack(4, 1): _"RecursionError: maximum recursion depth exceeded while calling a Python object"_ lulz at Python
His computer's a bit slow. I compiled his function pretty much verbatim, and it calculates ack(4,1) in 27 seconds. With optimized compilation, it only takes 5.7 seconds. I don't think I'll bother calculating ack(4,2) though...
+moveaxebx The Ackermann function is not tail recursive. That is essentially what the professor is explaining. Tail call optimization is, in a sense, converting a recursion into a for loop (making it iterative), and you can not do this for functions like ack.
All the cores in the world won't help you since this C code will run in a single thread. Also the first thing I thought was "wouldn't memoisation help improve performance?" After a quick google search, I discovered that "When a software developer learns about the Ackermann function, he will try to see how much of improvement function memoisation does if any". LOL
God I wish we had teachers like you guys back when I was at school; I might have actually learned something more than whether Tom crosses the road with Sally or Jill. I wanna go back and absorb all this glorious input, but since creating a time machine isn't on my to-do list I think I'll just watch more of these videos. Imho this channel, your Numberphile channel and the PBS Space Time channel are the best places for input on YouTube. Keep it up fellas, it's well appreciated.
The only regret I have in my life is learning this so late, and from Professor Brailsford no less - how lucky I am. What an honor to stand on the shoulders of the masters of the computer coding & math sciences resident in my grandfather's home country. Thank you all at Computerphile. Cheers... Dale Robertson, professional student, College of Marin
The most important question remained unanswered: why can't the Ackermann function be put into for loops? A sketch of the proof would be fantastic for Prof. Brailsford to give, with his great teaching skills - maybe in a next Computerphile episode! To many people it may look like you only need to write a program that keeps nesting for loops to calculate Ackermann's. Best, and great series!
Essentially, using recursion is a way of dynamically nesting "for" loops, by virtue of your chosen programming language creating *dynamically* as many stack frames as are needed. There is in principle no limit, except that, as the stack grows huge, you will eventually run out of memory. The problem with static for loops, explicitly written into your program, is that every compiler will put a compile-time limit on how deeply nested they may be.
Ok, so to go half way there, make a language structure that takes an array of triples which specify the set of nested loops, and a pointer to a function to call inside, which is passed a state array of the current values of all the loop variables. Several challenge levels:
1) The array is static, i.e. it is just shorthand for writing out all the loops. The compiler or preprocessor can generate them.
2) The array is dynamic but contains constants. I.e., the loop system is known at the start and can be allocated as a static data structure.
3) The dynamic array can contain algebraic expressions which need to be evaluated to determine loop criteria. I.e., the structure is known at the start but the loop limits can vary.
4) The array is returned by a function which itself can contain functions to define what the substructure of loops will be at each level. I believe this is called recursion, and is no longer a language feature.
Because of one of the defining equations of the Ackermann function, A(m+1, n+1) = A(m, A(m+1, n)), the loop depth of the Ackermann function depends on m: you'd have to give one for loop to the case m=1, two loops to the case m=2, three for loops to m=3, and so on. Though this is without using a heap-allocated stack to imitate a function call stack.
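Another way to see it: ack(m, n) is just ack(m-1, ·) applied n+1 times starting from 1, so each increment of m adds one more level of looping. A Python sketch of that view (my own formulation, checked only on small values):

def ack(m, n):
    # ack(m, n) = ack(m-1, .) applied (n+1) times to 1, so for any *fixed* m
    # this unrolls into m nested loops -- but each loop's bound depends on
    # results computed inside it, which is why no static nest of for loops
    # with pre-known bounds can cover every m
    if m == 0:
        return n + 1
    result = 1
    for _ in range(n + 1):
        result = ack(m - 1, result)
    return result

print(ack(2, 3), ack(3, 3))   # 9 61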
This is so so interesting... as a computing and business student, I am really amazed and I can actually admire and appreciate all these programs and contributions made!!!! AMAZING!
The reliability of the view that the expansion of the universe is speeding up is exaggerated. The Nobel Prize was awarded prior to their data being made public, and when it finally was made public recently there have been tons of papers criticizing it, both in methodology and by using more data from more modern telescopes.
The thing that makes the Ackermann function irreducible is that the number of unique (m,n) pairs you need to compute is actually greater than the Ackermann number at (m,n-1). So even if you were to compute bottom-up, while it might be more efficient than the naive implementation (no pair calculated twice), it would still require a very large run time, and a great deal of memory as well.
I tried caching m=3, but it turned out that was more expensive than the multiplication the direct computation uses. Not sure what my next step in optimization will be. There are a few avenues I can explore for it.
The salad is strong in this one. You are already at the magnitude of TREE(3); doing an Ackermann function on it (which is EXTREMELY weak by comparison) won't really do anything.
The TREE function is way stronger than the Ackermann function. However, the follow-up to this video was about a function even stronger than TREE - the Busy Beaver function.
If anyone remembers Knuth's arrow notation from Numberphile's video on Graham's number, it is worth noting that Ackermann's function can be represented in arrow notation as well. If we define an arrow operation 2 {↑^m} n, then that is identical to ack(m+2, n-3) + 3. So, inversely, if we have ack(m,n) we can write its arrow notation as 2 {↑^(m-2)} (n+3) - 3. So, as in the video, ack(3,1) = 2{↑^1}4 - 3, which is 2^4 - 3 = 16 - 3 = 13, same as in the video. (Note that this method only easily works for m > 2, otherwise you have no arrow operator.) So ack(4,2) may be *infeasible* to compute recursively, but it can be mathematically represented as 2{↑^2}5 - 3 with these hyperexponentiation operators. If you want to know what ack(4,2) is exactly, well, the best estimates put it approximately at 2*10^19728, or 10^10^4.295. So, yes, far bigger than universal timescales or particle numbers. ^_^ en.wikipedia.org/wiki/Knuth's_up-arrow_notation has some more information. Love the videos!
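A quick sanity check of that identity in Python (naive definitions of my own, small arguments only, since the naive recursion can't go deep):

def ack(m, n):
    if m == 0: return n + 1
    if n == 0: return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

def arrow(a, k, b):
    # a {up^k} b: one arrow is exponentiation, each extra arrow iterates the last
    if k == 1: return a ** b
    if b == 0: return 1
    return arrow(a, k - 1, arrow(a, k, b - 1))

for m, n in [(3, 0), (3, 1), (3, 2), (4, 0)]:
    assert ack(m, n) == arrow(2, m - 2, n + 3) - 3   # ack(m,n) = 2 {up^(m-2)} (n+3) - 3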
Most of us in the UK, where he teaches, wish we had a professor like this as well. He comes from a very elite university that most don't have the money to pay for. A quote from the Wikipedia article for the UoN: "Nottingham has about 45,500 students and 7,000 staff, and had an income of £694 million in 2021"
After breaking various online 'big number calculators' I found one that works, and it turns out 2^65533 * 3 minutes rounds out to, unless I've screwed up somewhere, about 1x10^19712 times the age of the universe. Not terribly helpful. (((((2^65533 * 3) / 60) / 24) / 365) / 13700000000) = 1.0434008320186794 × 10^19712
If you want help with big powers like that, try making logarithms of everything; multiplying and dividing stuff like that turns into adding and subtracting more manageable numbers.
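In Python that log trick looks something like this (same numbers as the comment above; the variable names are mine):

import math

log_minutes = 65533 * math.log10(2) + math.log10(3)    # log10 of (2**65533 * 3) minutes
log_years = log_minutes - math.log10(60 * 24 * 365)    # minutes -> years
log_ages = log_years - math.log10(13.7e9)              # years -> ages of the universe
print(log_ages)   # ~19712.0, i.e. about 10**19712 ages of the universe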
I have barely the slightest idea of what is going on in the "guts" of this, but I just really enjoy Professor Brailsford talking about it. So thanks, Professor!
Would caching make this function more tractable? By remembering old results you do not need to recalculate them. Since it only calls the function with reduced arguments, it doesn't seem that hard to build your way up? I am probably missing something :P
I downloaded the program files and ran ack(6,6) on my quad core 3.2 ghz machine running Arch Linux. It used 100% of one of my cores, took 1 minute 45 seconds, and ended in a segmentation fault. ack(4,4) also segfaults. ack(3,3) returns in under a second.
I remember when in school we made a program that solved the "Tower of Hanoi" problem, and then we made a fractal tree that moved in the wind - it was so beautiful! Since that day I fell in love with recursion and fractals!
@7:05 he claims, "But what I would like to draw your attention to, because this is important, is that every time M and N are altered, they are reduced." This statement is wrong. At 9:50 you can see that ackermann(m,n) is always greater than both m and n. Because the code contains "ans = ack(m-1, ack(m,n-1))", whenever neither m nor n is 0, n in fact increases. It would be better to say that either m stays the same and n decreases, or n increases while m decreases. Eventually, when m reaches 0, an answer is produced immediately.
I see an issue in the second line as well: if n=0, the answer is ack(m-1, 1). The second argument (n) is back to 1 even if it was 0 before, which will make it keep going.
pvic However, m decreases during the call; basically, as you do recursive calls, n decreases to 0, then m decreases as n increases, then n runs down to 0 again, then m decreases again as n is reset, and so on, so m will become 0 eventually, causing the calls to stop recursing. It will take a VERY long time, since n will increase by a crazy amount each time m decreases, but it will run out eventually.
See, the thing is, that's barely bigger than G64 already. It's actually smaller than G65: Ack(G64, G64) looks to be 2 ↑^(G64 - 2) (G64 + 3) - 3, whereas G65 = 3 ↑^(G64) 3. (↑^(G64) means G64 Knuth arrows.)
"Recursively enumerable" includes undecidable problems. The halting problem is undecidable and recursively enumerable (just run the Turing machine and accept if it halts). What you meant to put at the end is unrecognizable problems. An example of an unrecognizable problem: the set of all Turing machine/input pairs that do not halt.
I think this function is easier to understand when defined in a functional language, such as Haskell:

ack(0, n) = n + 1
ack(m, 0) = ack(m - 1, 1)
ack(m, n) = ack(m - 1, ack(m, n - 1))

After defining it like this, try to run, for example, in ghci: [ack(m, n) | m <- [0..3], n <- [0..3]]
I have a different idea - what if we remembered the results of every call, so we could look them up in a table rather than recalculating? Would that simplify the problem? Somewhat, at least: we are calling ack with the same values over and over. But is it enough? Interesting. By remembering the previous values, my program is able to calculate ack(4,1) instantaneously as 65,533. So remembering the previous values does simplify the problem, but I still get a stack error before I get to ack(4,2). Here's a question: since there is an obvious pattern to the answers, what if we just rewrite ack to return the value directly? Is that possible? Or is recursion still necessary in order to calculate the n^n . . . ^n . . . ^n?
Yeah, I was thinking of turning this into a math problem to look for shortcuts for ack(10,10). Maybe if they understood the math it could be written more easily. Then you'd only have a fraction of the work to do.
As this problem is computable via µ-recursion, it is also computable by while-loops, which may speed your calculation up. For the problem of n^n^...^n^n (a power tower) this can actually be done with for-loops or primitive recursion, so the computational time is not that bad. In Python:

def pt(x, y):
    n = x
    for i in range(y):
        n = x ** n
    return n

The main problem here is that the numbers also become quite huge and may not fit into memory.
That's not at all what superexponential means in the context of computational complexity. In computational complexity theory, superexponential means anything with a time complexity (or space complexity) greater than O(c^n), where c is a constant and n is the problem size. For example, the factorial function is superexponential. There are a lot of problems with factorial complexity, for example in combinatorial optimisation.
You've discussed algorithms that are necessarily recursive. Given that multi-cores and threads are now mainstream, I wonder if there are any algorithms that are necessarily parallel.
Jonathan Park I meant that you can always emulate a parallel machine on a serial one. This means that there are no problems that are solvable by a parallel machine but not by a serial one.
Jonathan Park You can call it this way. I answered to "I wonder if there are any algorithms that are necessarily parallel." And since you can convert any parallel algorithm to a serial one (even if that means emulating some parallel machine), then parallel algorithms can't solve any problem that a serial algorithm can't.
I believe that for loops and recursion are only needed when working with indexes and such, but not when doing math... there is always an equation that will do it next to instantly.
steven johnston No. Elementary functions can be defined recursively, and there is no closed-form solution to them. Some functions are defined as infinite sums of basis functions. Most differential equations have no closed-form solutions and must be solved numerically. In all these cases, loops or recursions are absolutely necessary. And it's definitely not true that it can be computed instantly. I currently have 3 computers working for almost a month on one problem ;)
Wuuut? I disagree - math is all about recursion... the simplest elements in math revolve around recursion, like, say, counting. But before we can count we have to define the natural numbers; how do you define a number? Well, a number is either zero or the increment of a number, so you can practically create a function called "number 5" which is just inc^5(zero). See? Recursion.
Example of enumerable recursion with occasional output: "display all files in drive C". Example of undecidable recursion: "write a text file that contains the hash value of the file when the file is hashed."
For small specific values of m and variable n, we have ack(1, n) = n+2, ack(2,n) = 2n+3 and ack(3,n) = 2^(n+3)-3. We can see from this that ack(4,n) is extremely hard to compute.
There seems to be an interesting pattern if you run the function where m = 3 and n >= 0 In that: ack(3,n) = 2^(n+3) - 3 Example: ack(3,0) = 5 | 2^3 = 8 ack(3,1) = 13 | 2^4 = 16 ack(3,2) = 29 | 2^5 = 32 ... and so on
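Putting those closed forms to work gives a fast ack for small m. A rough Python sketch (fast_ack is my own name; only checked against the values in the video):

def fast_ack(m, n):
    # closed forms for m <= 3, as noted above; plain recursion otherwise
    if m == 0: return n + 1
    if m == 1: return n + 2
    if m == 2: return 2 * n + 3
    if m == 3: return 2 ** (n + 3) - 3
    if n == 0: return fast_ack(m - 1, 1)
    return fast_ack(m - 1, fast_ack(m, n - 1))

print(fast_ack(4, 1))              # 65533
print(len(str(fast_ack(4, 2))))    # 19729 digits, still instant
# fast_ack(4, 3) would be 2**(2**65536) - 3: the closed form reaches it,
# but the number itself is far too big to materialize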
Technically speaking, all recursion is carried out by implementing a recursion stack, hence you could use a for loop to implement any kind of recursion. Hence Ackermann's function too could be computed using only for loops.
If my understanding is correct, he is not talking about a for loop in the general sense, but rather the inability to compute an arbitrary Ackermann number, given a set of previously solved Ackermann numbers, in a for loop whose iteration count we know in advance. In other words, we cannot get an upper bound for Ackermann numbers ahead of time. E.g. for Fibonacci, to get the 1000th Fibonacci number we can use a for loop that runs for 1000 iterations, but for Ackermann numbers, on the other hand, we are unsure how many iterations ack(1000, 2) will take.
It's interesting to see how far computers and processing power have come since then. I tried to run the same code (with a few tweaks to the stack size to avoid overflows), and what took 3 minutes back then on a fairly well-specced machine took about 2.6 seconds on my midrange i5-12400. Fascinating, though, that NOBODY will ever see this comment.
At school, while learning VB6 a long time ago, I made a horse race game project. It had to assign random values to the horses' power, but I felt the PC's random function was not random enough for me. So during the race the speed was randomly changed by a random factor for a random time. That is how you can make a 200 MHz computer completely freeze. At that moment I understood that my code was not optimized.
Joseph Harrietha Since the Ackermann function is a classic in theoretical computer science, QC being only theoretical at the time being shouldn't matter all that much for the question at hand ;)
Such a simple function, yet amazing how long it takes to compute. But, given infinite memory, I wonder how much the runtime could be brought down using dynamic programming (memoization).
There's some subtlety the video had to skip about what primitive recursive functions are, which I had to look up. Without that further clarification, of course I can rewrite any recursive function without recursive calls: using loops, I can keep track of my stack manually as a resizeable list, and as my loop condition iterate so long as the stack is not empty. In C I could even do that with for loops, where my loop condition can be anything, not just i < something.

Primitive recursive functions specifically restrict you to loops with a known upper bound on the number of iterations, but importantly they also don't allow all manner of other things that would let you simulate recursion, like while loops or goto. The definition lists all the things you're allowed to do, instead of what you're not, because otherwise I could come up with an infinite number of other language features that let me simulate recursion.
As some people are wondering what ack(4,2) is like: the number has 19729 decimal digits, begins with 20035.. and ends in ...6733. One can compute it by exploiting the knowledge that the special case ack(3,x) is always 2^(x+3)-3. So a programming language with unbounded integers, like Haskell, can do that for you. However, ack(4,3) is hopeless anyway, because even representing the resulting number in binary would need far more computer memory than could fit in the known universe.
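Python's unbounded integers make this quick to check (using ack(4,2) = ack(3, 65533) = 2^65536 - 3; the 19729/20035/6733 figures above match):

x = 2 ** 65536 - 3            # ack(4, 2)
s = str(x)
print(len(s), s[:5], s[-4:])  # 19729 20035 6733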
I noticed that all those numbers are very close to a power of two; in fact, starting at ackermann(2,5) they're all 2^p - 3 for some p. Is there a pattern to this?
Yeah. Try doing it by hand - you won't get the exact numbers, obviously, but you will get those power-tower representations of them. Mind you, once you see it, you can't unsee it.
I worked out the fast Ackermann function :)

function fAck(m, n){
    var ack = [[1,2,3,4,5,6],
               [2,3,4,5,6,7],
               [3,5,7,9,11,13],
               [5,13,29,61,125,253],
               [13,65533]];
    if( m < ack.length && n < ack[m].length )
        console.log( ack[m][n] );
}
Yes, I know! I intended to reveal, at the end of the video, that values for Ackermann can be inferred, thereby sidestepping the need for an eternity of recursive calculation. However, I ran out of time, and, as the video is already almost 15 mins long, what little I did say probably ended up on Sean's cutting-room floor ... :-)
Mark, thank you! I was thinking "I like this guy, but his cosmology is a couple of decades out of date." Glad I'm not the only one to go "Big Crunch?, srsly?"
One question about this: when I wrote this in Python I got an error that more than 1000 recursive calls are not possible. While this might be different in other programming languages, I'm a bit baffled that this can run for four weeks on their computer and not lead to some memory problem... Because isn't the thing about recursion that we need absurd amounts of memory?
+Ezechielpitau > " isn't the thing about recursiveness that we need absurd amounts of memory?" Tail-call optimization completely eliminates the memory problem, if the programmer makes an effort to lay out the function in a certain way.
+Paulina Jonušaitė I believe that will only work for tail-recursive functions. In this case, one branch has a recursive call in argument position that cannot be unrolled.
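The shape of the call is the whole story. A tiny Python illustration (the sum-to-n function is just a toy example of my own):

# tail call: the recursive call's result is returned unchanged, so the
# frame can be reused (this is what tail-call optimization exploits)
def total(n, acc=0):
    return acc if n == 0 else total(n - 1, acc + n)

# in Ackermann, the inner call is an *argument* of the outer call:
#     return ack(m - 1, ack(m, n - 1))
# the current frame must survive to pass the inner result along,
# so it cannot be flattened into a loop the same way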
Now, here is the nasty thing. You can program Ackermann's function iteratively. It requires setting up a stack and it is not a pretty picture. But you can do it. Those of you aware that no machine language actually has any recursion in it could probably guess this fact.
That's not the point. The point is you can't convert a general recursive function into a FOR loop. Of course, you can convert it into a while loop; you just cannot know the maximum possible number of iterations, and that's why you can't use a for loop. Also, machine languages do have recursion in them. Recursion basically means calling the currently executing subroutine, and what prevents you from doing that? In x86 assembly, for example, you can just use the call instruction.
@@aim2986 Yes you can use the call instruction. But it is not innately recursive, as you will find out when you mess up your stack. It is an iterative instruction. It saves some data to memory, updates a general purpose register and updates the instruction pointer. I stand by my statement. No machine language _actually_ has recursion in it. It has tools you can use to implement recursive algorithms.
@@PvblivsAelivs By that logic no machine language actually has functions in it, either. We can think of all non-recursive functions as loops which iterate only once, like a do..while(false) loop.
@@PvblivsAelivs OK, I get you. Machine languages don't have functions. But I think assembly languages do. I know that's not the point here, but I just wanted to make it clear, because, you know, we can define labels which we can jump to later.
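For anyone who wants to see the "explicit stack" version mentioned a few comments up, here's a rough Python sketch (ack_iterative is my own name; only spot-checked on small values):

def ack_iterative(m, n):
    # replace the call stack with an explicit list of pending m values;
    # note the loop is a while, not a for: we can't bound it in advance
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1
        elif n == 0:
            stack.append(m - 1)
            n = 1
        else:
            stack.append(m - 1)   # outer call, waiting on the inner result
            stack.append(m)       # inner call ack(m, n - 1)
            n -= 1
    return n

print(ack_iterative(3, 3))   # 61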
I think your argument for why the Ackermann function must stop is incorrect. Indeed, Ack(m, n-1) is passed as the n argument in the last case, so we could end up with an n far larger than the one we started with.
Because all recursion is just an iterative loop through a stack, a deeper recursion requires more stack space. When the maximum recursion depth is reached, you get a stack overflow error/exception in modern languages. But even this algorithm can be expressed with a loop and a stack for the variables. In fact I used this technique to overcome the maximum recursion depth of a language, and it was working faster than with recursion. As for the most difficult problem to compute, I was expecting to actually see Fibonacci done recursively, because the worst way to compute the Fibonacci sequence is naive recursion. Calculating the 40th Fibonacci number by hand on paper can beat a naive recursive algorithm on a 4 GHz CPU, and with each additional Fibonacci number the work grows exponentially (roughly by a factor of the golden ratio each time).
int ack(int m, int n){
    if(m == 0) return n + 1;
    if(n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}
Here's a simple Java version for anyone wanting to play with it - overflows pretty easily.
Nice, I tried the function myself before he started showing his results and I thought I did something wrong because it seemed to get stuck at ack(4,1). But apparently it's just so tough to compute.
This video taught me how to crash a C++ compiler!

#include <cstddef>
using std::size_t;

template <size_t m, size_t n>
struct Ack { static constexpr size_t value = Ack<m - 1, Ack<m, n - 1>::value>::value; };

template <size_t m>
struct Ack<m, 0> { static constexpr size_t value = Ack<m - 1, 1>::value; };

template <size_t n>
struct Ack<0, n> { static constexpr size_t value = n + 1; };

template <>
struct Ack<0, 0> { static constexpr size_t value = 1; };