I think it's useful to realize that for ANY epsilon > 0, 1 + 1/n is closer to 1 than 1 + epsilon for all but finitely many n. Intuitively, we determine limiting behavior by what happens for all but finitely many terms. Therefore, since 1 + 1/n is arbitrarily close to 1 for all but finitely many terms, and Σ 1/n^1 diverges, it's fairly unsurprising that Σ 1/n^(1+1/n) diverges: its exponent is arbitrarily close to that of a diverging series for all but finitely many n.
If you had sum of 1/(n^(1 + f(n))) , for f(n) going to zero from above sufficiently slowly, I would think it could converge? So I think this explanation probably isn’t the whole picture?
Another way is to use the Cauchy condensation test, which says Σ a(n) converges iff Σ 2ⁿ a(2ⁿ) converges. This yields Σ 2ⁿ/2^(n(1+2⁻ⁿ)) = Σ 2^(-n·2⁻ⁿ), whose terms tend to 1, so it diverges by the divergence test. Thus the series diverges.
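A quick numeric sanity check of that condensation step (my own sketch, not a proof): the condensed terms 2^(-n·2^(-n)) really do tend to 1, so the condensed series fails the divergence test.

```python
# Condensed terms 2^(-n * 2^(-n)) from the Cauchy condensation test.
# Since n * 2^(-n) -> 0, these terms approach 1, so the condensed
# series cannot converge (its terms don't go to 0).
terms = [2 ** (-n * 2 ** (-n)) for n in range(1, 51)]
print(terms[0], terms[-1])  # first term is 2^(-1/2) ~ 0.707; later terms approach 1
```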
The argument can also be made via the integral comparison test. With the usual (for harmonic-type series) substitution u = ln x, one gets I = \int_a^\infty e^{-ue^{-u}} du, which is divergent since the integrand's limit exists and is nonzero: it equals 1.
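For anyone who wants a quick numerical look at that substituted integrand (my own check, not from the video): e^(-u e^(-u)) really does approach 1, which is what makes the integral diverge.

```python
import math

# The integrand after the substitution u = ln x.
def integrand(u):
    return math.exp(-u * math.exp(-u))

# u * e^(-u) -> 0 as u grows, so the integrand -> 1: the tail of the
# integral behaves like integrating a constant, hence divergence.
print(integrand(1.0), integrand(10.0), integrand(50.0))
```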
A convoluted way to show this is to use Abel's test: assume the series in question converges. Notice that for n >= 3 the sequence n^(1/n) is monotonically decreasing; it is also bounded. Putting this together, by Abel's test, Σ n^(1/n) · 1/n^(1+1/n) = Σ 1/n would converge, i.e. the harmonic series converges, so we get a contradiction.
But Abel's test also requires that the decreasing sequence, in this case n^(1/n), goes to 0 as n→∞, which is not the case. Also, if A and B imply C, it doesn't mean that C and A imply B. Am I missing something?
@lucacastenetto1230 The "Abel's test" I'm thinking of goes: "Suppose Σ b_n converges and {a_n} is a monotone, bounded sequence. Then Σ a_n·b_n converges." I think the version you're thinking of is maybe the one for power series? The naming is a bit confusing. Also, this is just a proof by contradiction.
That seems like the right way to go, doesn't it? Or if you want to make it unnecessarily complicated in a fun way, use Cantor's proof that the cardinality of a set A is always less than the cardinality of the power set of A. 🙂
2^n > n seems like one of those facts you could just assume people know. Shortly after, he assumes that the x^(1/n) function is increasing, which is, if anything, less trivial.
It's maybe more confusing when you know that the sum from n=2 to infinity of 1/(n*log^2(n)) converges even though n*log^2(n) grows much more slowly than n^(1+epsilon) for all epsilon greater than 0
Just wanted to note a small mistake in the binomial expansion at 3:30, it should be (n-1) rather than (n+1). Otherwise, it was really interesting to see these two different ways of solving the question, I definitely would not have thought to use the first method!
Thank you, I thought I was going crazy lol. I plugged in n=2 and got 2^2=6 and felt like I must just be misunderstanding somehow
A similar problem I was shown, and which might also be interesting to cover, is the convergence/divergence of the integral from 1 to infinity of 1/x^(1+{x}), where {x} is the fractional part of x, or if you prefer: {x} = x - floor(x).
n^(1/n) tends to 1 and ln n tends to infinity. Therefore 1/(n ln n) < 1/(n^(1 + 1/n)) for all sufficiently large n. But 1/(x ln x) is the derivative with respect to x of ln ln x, which goes to infinity (very slowly). So the sum over n of 1/(n ln n) diverges by the integral test, and therefore so does the original series, by the comparison test. 1/(n ln n) goes to zero faster, but it still diverges.
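A small spot-check of the comparison direction (my own sketch, not a proof): once ln n exceeds n^(1/n), the inequality 1/(n ln n) < 1/n^(1+1/n) holds, and it does so from n = 5 onward.

```python
import math

def small(n):  # the comparison series term, 1/(n ln n)
    return 1.0 / (n * math.log(n))

def orig(n):   # the original series term, 1/n^(1 + 1/n)
    return 1.0 / n ** (1 + 1.0 / n)

# The inequality needs ln n > n^(1/n); it fails at n = 4 but holds
# for every n from 5 up (ln 5 ~ 1.61 already beats the max ~1.44).
ok = all(small(n) < orig(n) for n in range(5, 10_000))
print(ok)
```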
A related problem can be derived from this: if we have a sum S = Σ 1/n^(1 + ε/n) where |ε| < 1 and the sum goes from n=1 to ∞, which value of ε (if any) will make the series converge?
This might be hazardous reasoning, but I figure the long-run behaviour of the harmonic series is what causes its divergence: it doesn't taper off fast enough. And since as n increases this series acts more and more like the harmonic series, it ought to also diverge. Basically, even though the series acts like the sum of reciprocal squares early on, that cannot save it later.
Hey Mike, can you please go one step further and find the limit as n→∞ of Σ k^(-1-1/k) / log(n), where k ranges from 1 to n in the numerator? This seems way more challenging and helpful, as we would then know the asymptotic growth rate of today's series.
Dr. Penn does one of my pet peeves from when I teach second-semester calculus: applying L'Hopital's Rule to a limit with a discrete variable. If n represents positive integers, then how can we take derivatives with respect to n? I tell my students that they somehow have to address this. Usually they convert it over to a real variable x, or, in rarer cases, they say that n is to be considered as a real variable.
You could use Stolz-Cesaro (I'm not sure if it's normally covered in second semester calculus). That immediately gives us lim ln(n)/n = lim ln(n/(n-1)) = 0. Or it's easy to prove that if lim f(x) = L as x→∞ and f(n) = a_n for all n, then lim a_n = L as well (this is more of a real analysis thing I guess).
Wow! I have never encountered the S-C theorem, even in my Real Analysis classes. (I'm an algebraist.) Fascinating. Not something we would encounter in 2nd semester calculus.
@@DrR0BERT Does it actually matter though? 2nd semester calculus students are not math majors 95% of the time so understanding this vague difference is of no matter to them, nor will there EVER be a situation in their lives as an engineer, scientist, programmer, whatever where it will matter. They need to know how to use calculus, not the nitty gritty details of why it works that arouse us. I can and will take a number that is meant to be an integer and pretend it is a real variable, hell even a complex variable, so that I can use derivatives as I please. Ain't no one gonna stop me or hold me back like you mathematicians try to! /s
@@natevanderw I'm sorry that you don't like the standards I hold for my class. Too bad there isn't a math for engineers, where they only get the part that they need. Or physics for engineers, where they only get the part that they need. Or English for engineers, where they only get technical writing. Let's restructure the entire curriculum to give engineering students only what they need, basically vocational school. Forget about processes. It's more important to show them how to circumvent the rules to get to an answer, usually a wrong answer. Taking shortcuts willy-nilly is what put a man on the moon. On a more serious note, teaching shortcuts or skipping the nuances of a problem only creates mathematical illiteracy. It's one thing to discuss the process in depth and then address the shortcut. I trust that Michael knows that the variable needed to be continuous in order to use L'H. And I am confident that he knows he's skipping some steps. I trust his process, even though it is a pet peeve of mine. He's not learning the concept for the first time. He has the experience to know why the shortcut works. If I had a student who can articulate that, then I'm fine. But when I have a student (on a test I graded this morning) trying to use the integral test on an alternating series and during the integration process writing (-1)^x/ln(-1) as part of the antiderivative, there's a major disconnect. Tl;dr, shortcuts can work, if you understand why they work. I don't care what field you are going into.
Great example! I hope to look into how fast f must grow without bound in order to make the series converge if we replace n^(1+1/n) with n^(1+1/f(n)) in this problem. I think that can be interesting.
Probably the limit N to infinity of (sum n =1 to N 1/(n^(1+1/n)) - int_1^N 1/(x^(1+1/x)) dx) exists and gives a constant quite close to the Euler Mascheroni constant?
For the fact that 1/n^(1/n) > 1/2: couldn't you just divide both sides by n to get 1/n^(1+1/n) > 1/(2n) for all integers n > 0? Then the sum from 1 to infinity of the first is greater than that of the second, but the second diverges because it's 1/2 the harmonic series, so the original sum diverges. Seems a much simpler solution than either shown in the video.
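A quick check of the bound behind this (my own sketch, not a proof, since the inequality itself is the whole argument): n^(1/n) stays below 2 for every n, so the term-by-term comparison with the halved harmonic series goes through.

```python
# n^(1/n) < 2 for all n >= 1 (its maximum is at n = 3, about 1.44),
# hence 1/n^(1 + 1/n) > 1/(2n) term by term.
ok = all(n ** (1.0 / n) < 2 for n in range(1, 100_000))
print(ok)
```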
Another way:
ln(n) < n ==> 1/n < 1/ln(n) ==> n^(1 + 1/n) < n^(1 + 1/ln(n)).
Notice that n^(1 + 1/ln(n)) = n * e.
As a result, 1/(n^(1 + 1/n)) > 1/(n * e). The series diverges.
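The key identity in this comment, checked numerically (my own sketch): n^(1/ln n) is exactly e for n > 1, so n^(1 + 1/ln n) = e*n.

```python
import math

# n^(1/ln n) = e^(ln n / ln n) = e exactly; float rounding keeps the
# computed value within a few ulps of math.e.
for n in (2, 10, 1000):
    assert abs(n ** (1 / math.log(n)) - math.e) < 1e-12
print("n^(1/ln n) = e confirmed on sampled n")
```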
Yes, it's true. It's one of Michael's favorite identities. He has made at least one video about it already. https://www.youtube.com/watch?v=GEZISWekbGU
I have an intuitive argument for this. But it’s not formal at all. I’m writing this before watching the whole thing, but I’ve read that the series diverges. We could consider the behaviour at large n of the stuff in the exponent, which tends to 1, so effectively in the large n limit, which is what we care about, the series “becomes” some kind of “harmonic series”, which diverges by default.
Pragmatic insight approach: plug it into a spreadsheet, Maxima, wxMaxima... say, for the first 30 terms. What does the finite sum appear to do? Why?
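In that spirit, a minimal sketch of the experiment (any tool works; Python here):

```python
# Partial sum of 1/n^(1 + 1/n) over the first 30 terms. It grows slowly
# but steadily; a table this short can't settle convergence by itself,
# since the terms eventually behave like a constant times 1/n.
partial = 0.0
for n in range(1, 31):
    partial += 1.0 / n ** (1 + 1.0 / n)
print(round(partial, 4))
```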
It can be solved more simply and quickly if you use a comparison test for series convergence :)
The limit comparison test is fairly straightforward. If the limit of the ratio of two sequences is a finite number greater than 0, then both series converge or both series diverge. If the limit is 0 or infinity, then you may still draw a conclusion depending on the setup of the ratio and the known convergence/divergence of one of the series. If the problem is from an intro calculus/analysis course, you probably are looking for a simple-looking sequence that has some structural similarity to the problem sequence (making 1/n a very good choice for Michael here).
By comparison test:
a_n = 1/(n^(1 + 1/n)), b_n = 1/(n^(1 + 1/ln(n))).
n > ln(n) ==> 1/n < 1/ln(n) ==> n^(1/n) < n^(1/ln(n)) ==> 1/n^(1/n) > 1/n^(1/ln(n)) ==> a_n > b_n.
But we can determine that 1/b_n = n*exp(ln(n)/ln(n)) = e*n, so b_n = 1/(e*n), and the sum of b_n diverges. Therefore, the sum of a_n also diverges.
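A numerical spot-check of that chain (my own sketch, not a proof): a_n > b_n on sampled n, and b_n is exactly 1/(e*n).

```python
import math

def a(n):  # original series term
    return 1.0 / n ** (1 + 1.0 / n)

def b(n):  # comparison series term, which simplifies to 1/(e*n)
    return 1.0 / n ** (1 + 1.0 / math.log(n))

# a_n > b_n holds because n^(1/n) <= e^(1/e) ~ 1.44 < e = n^(1/ln n).
for n in (2, 10, 10_000):
    assert a(n) > b(n)
    assert abs(b(n) - 1.0 / (math.e * n)) < 1e-15
print("a_n > b_n = 1/(e*n) on sampled n")
```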