Thank you for opening with what q and r are and how you get them. A good few articles I've read don't actually say what they are, which is annoying, because for a number-phobe like myself a more granular introduction is often needed. For me, just looking at the equation does not intuitively tell me that q and r are generated by dividing a by b.
Was it a typo at 5:25? You wrote: d | b - r1*q1 implies d | r2. I think it should have been: d | b - r1*q2 implies d | r2, since equation 2 is b = r1*q2 + r2.
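The step being questioned above can be checked numerically. This is a small sketch (the values 252, 105 and the common divisor 21 are just an illustrative choice, not from the video) following the equations a = b*q1 + r1 and b = r1*q2 + r2:

```python
a, b = 252, 105
q1, r1 = divmod(a, b)       # equation 1: a = b*q1 + r1
q2, r2 = divmod(b, r1)      # equation 2: b = r1*q2 + r2

d = 21                      # a common divisor of a and b
assert a % d == 0 and b % d == 0
assert r1 % d == 0          # d | r1 follows from equation 1

# Since d | b and d | r1, d divides b - r1*q2, and that quantity IS r2
assert b - r1 * q2 == r2
assert r2 % d == 0
```

So the implication only makes sense with q2, which supports the typo reading.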
Hi, I'm afraid you have so many views that no one will answer my question, but still: I don't understand how you can deduce d | a - bq from d | a and d | b. Can someone explain the intermediate steps to me?
In the previous "Divisibility Basics" part, there is a proposition: if a|b and a|c, then for any integers x and y, a | bx + cy. So according to this, we have d|a and d|b, hence d | ax + by for any integers x and y; taking x = 1 and y = -q1 gives d | a - b*q1.
If d|a and d|b then we can write a = dx and b = dy (all integers). Any linear combination pa + qb (p and q integers, again) can then be written as pdx + qdy, which factorizes to d(px + qy), which of course is a multiple of d, the thing we initially sought to prove.
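The argument above can be exercised on concrete numbers. A minimal sketch (d = 7 and the ranges are arbitrary choices for illustration):

```python
d = 7
a, b = d * 6, d * 11        # a = d*x, b = d*y with x = 6, y = 11
for p in range(-5, 6):
    for q in range(-5, 6):
        # p*a + q*b = p*d*x + q*d*y = d*(p*x + q*y), a multiple of d
        assert (p * a + q * b) % d == 0
```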
Does d | r_(n-1) imply r_(n-1) / d = 1, and hence that d = r_(n-1)? Read: does the last statement, that d is a divisor of the last remainder, imply that the remainder divided by d equals 1, and does that mean d is equal to the last remainder?
Here we are using the definition of the gcd. We start by showing that r_(n-1) is a common divisor of a and b, then suppose we have another common divisor d of a and b and show that it must divide r_(n-1). This is precisely the definition of gcd(a,b). That d in the proof is only a divisor of the gcd; it may not be the gcd itself.
@xRabidz I hope you see this comment :P as it drives me nuts :S We know that r_(n-1) is some common divisor, but the proof doesn't tell us it is the greatest one. So we take some divisor d, which may or may not be the greatest. If we pick a d that is not the greatest, and r_(n-1) may not be the greatest either, then showing d | r_(n-1) doesn't obviously settle anything. How does this prove that r_(n-1) is the greatest one?
@mateuszpachulski6211 r_(n-1) MUST be the greatest; we are not trying to prove it by contradiction here. The proof shows that ANY common divisor d of a and b divides r_(n-1), and a divisor of a positive number can't exceed it, so d <= r_(n-1); since r_(n-1) is itself a common divisor, it is the greatest. Think about it: any common factor of 30 and 40 must divide 10.
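The 30-and-40 example in the reply above can be checked directly. A small sketch using the standard library's `math.gcd`:

```python
from math import gcd

a, b = 30, 40
g = gcd(a, b)               # the gcd of 30 and 40 is 10
common = [d for d in range(1, min(a, b) + 1)
          if a % d == 0 and b % d == 0]

# Every common divisor of 30 and 40 divides their gcd,
# so none of them can exceed it.
assert g == 10
assert common == [1, 2, 5, 10]
assert all(g % d == 0 for d in common)
```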
r_(n-2) is not a divisor of r_(n-3), because r_(n-3) / r_(n-2) has a remainder ≠ 0. For example, take r_(n-3) = 12, r_(n-2) = 8 and r_(n-1) = 4. Then 12 = 1*8 + 4, i.e. 12/8 = 1.5. Not a divisor.
r1 < b, otherwise the choice for q1 was wrong. r2 < r1, otherwise the choice for q2 was wrong. And so on and so forth. This is a consequence of how the remainder upon division is defined: in b = r1*q2 + r2 we require 0 <= r2 < r1.
Why can't you prove this shit works by using real-world things like a field or something useful and meaningful? Why does it have to be bulk lines of gibberish symbols?