I have been out of school for 10 years. I re-read CLRS and multiple other sources about this substitution method, but I had just forgotten too much about induction to follow those other explanations. This video did a great job of explaining it convincingly and clearly without assuming any prior knowledge from the audience. Great job, kudos, and I appreciate your work so much!
Hello... I have exams in 2 days. Did you find a better explanation of how to guess the answer, or any other video you can link? Please, it would help a lot 🙏🙏
Man, I wish I had seen this prior to my algorithms midterm exam. The prof just uses the examples from the textbook, but the way you explain the steps makes it so much clearer to understand. I will definitely be using you as a resource from now on.
I read this before watching, so I assumed it would be a bad video, but it actually explains things well. I think you just don't have the prerequisite knowledge.
@emperor8716 It is not bad, but he definitely jumps around a lot. Most people go fast with explanations because they expect a fundamental understanding of core concepts (such as induction). That usually just leads to confusion, since most people come to these videos precisely because they lack that understanding. A slower, much longer explanation free of such expectations is far superior…
As I understand it, he is saying that the whole term (cn log n - cn + n) is less than or equal to cn log n because we are subtracting (c-1)n from cn log n. We then use the relation T(n) ≤ cn log n - (c-1)n ≤ cn log n.
@clashingwithsahib But this argument is not valid if c is somewhere between 0 and 1. Instead, I think the reason he discarded the linear term is that asymptotically it is trivial compared to n lg n.
@sarahli2933 Actually, for the proof to be correct, he needs to state in the video that c must be greater than or equal to 1, because by the big-O definition we are free to pick any positive value for c.
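For reference, the inductive step being discussed can be written out like this (a sketch, assuming the standard recurrence T(n) = 2T(n/2) + n and the guess T(n) ≤ cn lg n, with lg denoting log base 2):

```latex
\begin{align*}
T(n) &= 2\,T(n/2) + n \\
     &\le 2 \cdot c\frac{n}{2}\lg\frac{n}{2} + n \\
     &= cn\lg n - cn\lg 2 + n \\
     &= cn\lg n - (c-1)n \\
     &\le cn\lg n \qquad \text{provided } c \ge 1.
\end{align*}
```

The last inequality is exactly where the condition c ≥ 1 is needed: it makes -(c-1)n ≤ 0, so the term can be dropped.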
Not sure if I entirely follow your question, but basically cn is another way of giving bounds for O(n). For all intents and purposes cn = O(n), but we ignore the c because we are only looking for an upper bound when using big-O notation, so we just drop the constant. It is also worth noting that the master method has a few fringe cases where it does not work: namely, when the cost at the leaves (divide step) is almost equal, but not equal, to the cost at the root (conquer step), because case 1 and case 3 require them to be polynomially smaller and larger, respectively. Case 2 of the master method can generally absorb a difference of about log n, but if the gap doesn't fit in that log n difference and the recurrence doesn't fit case 1 or 3, the method can't be used. Hope this helped, and I'm sorry if I confused you more!
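For reference, here are the three cases being referred to, for a recurrence of the form T(n) = aT(n/b) + f(n) with a ≥ 1, b > 1 (as stated in CLRS):

```latex
\begin{align*}
\text{Case 1: } & f(n) = O\!\left(n^{\log_b a - \epsilon}\right) \text{ for some } \epsilon > 0
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\right) \\
\text{Case 2: } & f(n) = \Theta\!\left(n^{\log_b a}\right)
  \;\Rightarrow\; T(n) = \Theta\!\left(n^{\log_b a}\lg n\right) \\
\text{Case 3: } & f(n) = \Omega\!\left(n^{\log_b a + \epsilon}\right) \text{ for some } \epsilon > 0,
  \text{ and } a f(n/b) \le c f(n) \text{ for some } c < 1 \\
  & \;\Rightarrow\; T(n) = \Theta\!\left(f(n)\right)
\end{align*}
```

The "fringe cases" mentioned above are exactly the gaps between these cases, where f(n) is close to n^(log_b a) but not polynomially smaller or larger.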
For anyone having a hard time guessing the solution in one go, I recommend going through the master method first before stepping into this one. Btw, great lesson!
You can guess any complexity and then check if it is true; he used the correct guess to teach us. If you use O(n^2), it will also be true, but it's a loose bound.
Hi, in mathematical induction we typically have a base case. I was wondering why we did not use one here? For example, do we need to show that it holds for T(1) or T(2), etc.?
Recurrence relations will usually have a recurrence T(n) specified for values greater than some n, and a single value specified for T(1) or whatever the "base case" is (you might have seen the notation where T(n) = { with two lines for the two cases). That T(1) is the case where the recursion "bottoms out"; you would have, for example, just T(1) = 1. If you imagine a recursion tree, that is the cost of a single leaf at the bottom of the tree.
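As a quick sanity check (a hypothetical sketch, not from the video), you can compute the recurrence together with its base case directly and compare it against the guess:

```python
import math

def T(n):
    """Exact cost of the recurrence T(n) = 2*T(n/2) + n with base case T(1) = 1.
    Assumes n is a power of 2, for simplicity."""
    if n == 1:
        return 1  # the base case where the recursion "bottoms out"
    return 2 * T(n // 2) + n

# Check the guess T(n) <= n*lg(n) + n (i.e. c = 1, plus an n term that
# covers the base case, since lg(1) = 0 but T(1) = 1) on powers of 2.
for k in range(11):
    n = 2 ** k
    assert T(n) <= n * math.log2(n) + n
```

In fact, for powers of 2 this recurrence works out to exactly n lg n + n, which is why the O(n lg n) guess is tight.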
The proof is incomplete. The result we proved holds for n/2 >= n_0, i.e., for n >= 2n_0. What we needed was to prove the guess for all n >= n_0, so we still aren't done with the problem: we also need to show that the guess is true for the remaining values starting at n_0.
If we get a new recurrence, how will we guess correctly? There are other videos on substitution where they substitute the values and find the series... that's easier, I guess. Here it's more about proving the answer rather than finding it.
This helped a lot, thanks. That being said, does it have to be exact? The reason cn + n is not just cn is that T(n) could be anywhere between cn and n(c+1). I have a similar problem where the f(n) in the recurrence is log n, and you end up with something like n lg^2 n - n lg n + n (without the c's), where the solution is O(n lg^2 n). That makes sense: even though n lg n grows faster than n, it is being subtracted here, and n lg^2 n clearly grows faster than n. They do give T(1) = 1. Are you supposed to use that somehow to finish the proof? NVM, I see.
At 8:12, you said log 2 is log base 2 of 2 and changed it to 1. How did you know it was base 2? When I graph kx(log x - log 2) and kx(log x - 1), I get two different graphs. How/why did you replace log 2 with 1?
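For what it's worth, CLRS uses lg to mean log base 2 (natural for a recurrence that halves n), and with base 2 the substitution lg 2 = 1 checks out numerically; this is just a small sanity-check sketch:

```python
import math

# With base-2 logs, lg 2 = 1, so the identity used in the derivation
#   n * lg(n/2) = n * lg(n) - n * lg(2) = n * lg(n) - n
# holds exactly. With any other base, lg 2 != 1 and the graphs differ,
# which is why base 10 or base e plots don't match.
for n in [2, 16, 100, 1024]:
    lhs = n * math.log2(n / 2)
    rhs = n * math.log2(n) - n
    assert math.isclose(lhs, rhs)
```

In asymptotic terms the base doesn't matter (logs of different bases differ by a constant factor), but for the algebra at that timestamp to come out as shown, the log must be base 2.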
Thanks for this video. I learned about inductive proofs just a while ago, where the idea was proving that P(k+1) holds using P(k) (P = proposition), so the explanation of using T(n/2) to show that the bound on T(n) holds was helpful. (Well, not T(n) itself, but the inequality involving T(n).) *Correct me if I interpreted something incorrectly here. Have you perhaps also made a video about repeated backward substitution? I learned about it in a lecture, but CLRS didn't cover it. If not, can you recommend somewhere it's explained as well as you did here?
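For context, repeated backward substitution (sometimes called the iteration method) on the same recurrence looks roughly like this (a sketch, assuming T(n) = 2T(n/2) + n, T(1) = 1, and n a power of 2):

```latex
\begin{align*}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &= 8\,T(n/8) + 3n \\
     &\;\;\vdots \\
     &= 2^k\,T(n/2^k) + kn \\
     &= n\,T(1) + n\lg n \qquad (\text{taking } k = \lg n) \\
     &= n\lg n + n = \Theta(n\lg n).
\end{align*}
```

Instead of guessing and verifying by induction, you unroll the recurrence until the base case appears and then sum up the work; the substitution method in the video is the rigorous way to confirm the pattern you spot here.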
At 8:41, you say that the expression is less than or equal because -(c-1)n is negative. Should we specify that c must be greater than or equal to one for this to hold? Or is it enough to note that there exists a c ≥ 1 such that the proof goes through?