My name's Brian. I hold Master's + Bachelor's degrees in Mathematics and currently work as an instructor of mathematics at the community college level. I have a passion for teaching and sharing the joy of math with the world.
If you would like to work with me, please contact me at the email address below.
BY REQUEST. Infinity is never-ending, INCOMPLETE. Infinity + 3 = ∞, Infinity + 5 = ∞. Rearranging: ∞ − ∞ = 3, ∞ − ∞ = 5. Canceling the infinities: 3 = 5. INCONSISTENT. Infinity, again, when referring to numeral digits, is countless, therefore IMPRECISE in any algebraic operation (this shows especially in multiplication and power operations).
Knuth's opinion is that if the exponent is viewed as an integer then 0^0 = 1 (because we don't have to worry about a bunch of annoying special cases as you noted) but if the exponent is viewed as a real number 0^0 is undefined (because the function x^y has an essential discontinuity at x = y = 0).
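A quick way to see the essential discontinuity Knuth is pointing at is to compare limits of x^y along different approach paths to (0, 0). A minimal sketch using sympy (assumed available), plus Python's own integer exponentiation, which takes Knuth's side:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# Along the path y = x (i.e. x^x), the limit is 1:
print(sp.limit(x**x, x, 0, "+"))   # 1

# Along the path x = 0 (i.e. 0^y with y > 0), the limit is 0:
print(sp.limit(0**y, y, 0, "+"))   # 0

# Integer exponentiation agrees with the "0^0 = 1" convention:
print(0**0)                        # 1
```

Two different limits along two different paths is exactly why x^y has no continuous extension to (0, 0), while the integer convention 0^0 = 1 is still consistent.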
well I think you could make a convincing argument to most people (not mathematicians, ofc) that ln(0) = -infinity, simply by showing them a graph of ln(x) on Desmos. From there the problem just becomes what sin(-infinity) and cos(-infinity) are; the angle could be anything from 0 to 360 degrees.
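The sin(-infinity) point can be made concrete numerically: ln(x) decreases without bound as x → 0+, while sin(ln(x)) keeps oscillating and never settles on a value. A small sketch in plain Python (no special libraries):

```python
import math

# ln(x) diverges to -infinity as x -> 0+ ...
for x in (1e-1, 1e-5, 1e-10):
    print(x, math.log(x))

# ... so sin(ln(x)) just keeps bouncing around in [-1, 1]:
samples = [math.sin(math.log(10.0**-k)) for k in range(1, 8)]
print(samples)
```

The printed samples take both positive and negative values, which is the numerical face of "sin(-infinity) is undefined".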
Yes it is. They are Abraham Robinson's infinitesimals, and the hyperreals form a field containing R. So you're wrong. And you're a hypocrite for continuing to use the notation.
this only happens because the standard notation is not quite right. Don't use "differentiation": the d[·] operator is an implicit differential. You can really see where it went wrong by implicitly differentiating 1/dx. See Jonathan Bartlett's notation change:

d[c] = 0 means "c is constant"
d[d[t]] = d^2[t] = 0 means "t is a line"
d[a + b] = d[a] + d[b]
d[a * b] = d[a]*b + a*d[b]
d[a^b] = b*a^(b-1)*d[a] + ln(a)*a^b*d[b]
d[log_a[b]] = ... complicated, but derivable from d[a^b]

The second derivative is where the standard notation goes wrong. This is the actual second derivative: apply d first, and only divide by dx as a separate step.

d[ d[y]/d[x] ]/dx = d[ dy * dx^(-1) ]/dx
                  = ( d[dy]*dx^(-1) + dy*(-dx^(-2))*d[dx] )/dx
                  = d^2y/dx^2 - (dy/dx)(d^2x/dx^2)

The thing most people won't calculate right is d[1/dx]. When people think of acceleration, they assume that d^2x = 0. This is true when x is a line, e.g. when the independent variable is t. The third derivative is even more complicated. But you can check this for the second derivative and see that you can use it to solve for dy/dx. That subtracted term is usually zero, but you need to keep it around for the algebra to work.

z = x^2 + y^2
dz = 2x dx + 2y dy

One partial is to set dy = 0 (with dx*dx = 0 and dx > 0):
dz/dx = 2x dx/dx + 2y dy/dx = 2x
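The second-derivative identity above can be checked symbolically. A sketch with sympy, modelling d[·] as d/dt applied to parametric functions x(t), y(t), so that dx = x′(t) dt and the dt factors cancel in every ratio:

```python
import sympy as sp

t = sp.symbols("t")
x = sp.Function("x")(t)
y = sp.Function("y")(t)

# First derivative as a ratio of differentials: dy/dx = y'(t)/x'(t)
dydx = sp.diff(y, t) / sp.diff(x, t)

# "Actual" second derivative: apply d (here d/dt), THEN divide by dx
lhs = sp.diff(dydx, t) / sp.diff(x, t)

# The expanded form: d^2y/dx^2 - (dy/dx)*(d^2x/dx^2)
rhs = (sp.diff(y, t, 2) / sp.diff(x, t)**2
       - dydx * sp.diff(x, t, 2) / sp.diff(x, t)**2)

print(sp.simplify(lhs - rhs))  # 0, so the identity holds
```

The subtracted term vanishes exactly when x″(t) = 0, i.e. when "x is a line" in the comment's sense, which is why the usual d²y/dx² notation works in that special case.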
I mean, it works because in separable-variables problems we have in general y′(x) = g(x)·f(y), so dividing by f(y) we get that ∫ y′(x)/f(y(x)) dx is just an antiderivative of 1/f evaluated at y(x) (it's ln(y(x)) in the common case f(y) = y). Differentiating the result you indeed get, by the chain rule, (1/f(y(x)))·y′(x), as we wanted.
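That chain-rule justification is exactly what a CAS does when it separates variables. A sketch with sympy for the case g(x) = x, f(y) = y, where the logarithm shows up:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Separable equation y' = x * y, i.e. g(x) = x and f(y) = y
eq = sp.Eq(y(x).diff(x), x * y(x))

# "Divide by f(y), integrate both sides" is how this gets solved:
sol = sp.dsolve(eq, y(x))
print(sol)   # y(x) = C1*exp(x**2/2)
```

Substituting the solution back in, y′ − x·y simplifies to 0, confirming that the fraction-style manipulation produced a genuine solution.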
I wish you gave an example where it does not work as a fraction, I'm curious what kind of cases I should be wary of when treating derivatives as fractions.
Early in your math career: a derivative is NOT a fraction, do NOT treat it that way. Later in your math career: I differentiate both sides and divide one infinitesimal over to find the derivative. Actually I divide it over and then take the reciprocal of both sides, cause that's how you flip a fraction 😂
oh, so you include actual Brilliant content in your subject matter, meaning i can't just skip the sponsored part of the video? that's .... that's brilliant, damn you
1) Define the derivative as the limit of the ratio (Δy)/(Δx) as Δx goes to zero.
2) Define the differential of a function of x as d(f(x)) = f′(x)·Δx.
3) Thus d(x) = 1·Δx. Note that dx = Δx, which is not equal to 0.
4) We have dy = f′(x)·Δx = f′(x)·dx.
5) Now recover f′(x) by dividing dy by dx.
Thank me later.
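The recipe above can be run mechanically, treating dx as an ordinary nonzero symbol. A sketch with sympy for f(x) = x²:

```python
import sympy as sp

x, dx = sp.symbols("x dx")

f = x**2
# Step 2/4: the differential is d(f) = f'(x) * dx
df = sp.diff(f, x) * dx        # 2*x*dx
# Step 5: since dx != 0, recovering f'(x) is plain algebra
print(sp.simplify(df / dx))    # 2*x
```

Because dx here is a finite nonzero quantity (dx = Δx), dividing by it is legitimate fraction arithmetic, which is the whole point of the comment.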
@@johnlabonte-ch5ul KarenTheBonehead: "Infinity is a dangerous concept as it is incomplete, inconsistent, and imprecise." Me: Explain what the nonsense is supposed to mean, then prove it.
@mateusz... Duh! You can't have a 1 at the end of an endless string of 0s. If I ignore that for a moment, surely 1 - 0.(0)1 = 0.(9)9, not 0.(9). Alternatively, surely 0.(9) + 0.(0)1 would be 0.(9)1, not 0.(9) or 1.
'i still disagree' Well, you are still wrong, then.

'0.(0)1 would be a solution' Obviously not. 0.000...01 = 1/10^n, where n is natural and corresponds to the position of the digit '1'. 1 - 0.999... is a real number (because it is a sum of the real numbers 1 and -0.999...) and is non-negative (because 0.999... is not greater than 1), and it is also less than 1/10^n for all natural n (because 0.999... is greater than 0.999...9 = 9/10 + 9/100 + 9/1000 + ... + 9/10^n for all natural n, and 1 - 0.999...9 = 1/10^n). There is only one real number with those properties, and it is 0.

'we have no problem adding 1 to infinity' This is obviously nonsense. In most contexts, addition is not defined on pairs of points one of which is named 'infinity'.
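The key fact in that argument, that the gap 1 − 0.999…9 (with n nines) is exactly 1/10^n, can be checked with exact rational arithmetic rather than floats:

```python
from fractions import Fraction

# Partial sums 0.9, 0.99, 0.999, ... computed exactly:
def partial(n):
    return sum(Fraction(9, 10**k) for k in range(1, n + 1))

# The gap to 1 is exactly 1/10^n for every n:
for n in (1, 5, 20):
    print(n, 1 - partial(n), Fraction(1, 10**n))
```

Since the gap is 1/10^n for every n, and 0 is the only non-negative real below all of those, the full infinite string 0.999… has gap 0, i.e. equals 1.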
I took calculus twice: once with a professor who stressed that dy/dx IS NOT A FRACTION, and then with one who said you can use it as a fraction; the only difference is what happens where dividing by 0 would occur when the equations are applied. I got a C in the first professor's class and a B in the second. I felt lost when I couldn't approach it as a fraction, but it all made sense when I did.