Physics student here! For the people wondering when this integral pops up: I have seen it mainly in statistical mechanics when dealing with distributions of bosons. For example, this is the kind of integral one has to solve when integrating Planck's black-body radiation law to get the total radiated energy (the Stefan–Boltzmann law).
Man, the more I watch your vids, the more I get insights on nuclear tools to evaluate other similar integrals, I'm about to preach bout you to all the homies in the e-hood.
It is interesting to learn how the geometric series, gamma function and zeta function are used to arrive at a solution to this quantum integral. Brilliant analysis and explanation. This is real and yet complex problem solving.
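The identity those three tools combine into, ∫₀^∞ x^(s-1)/(eˣ - 1) dx = Γ(s) ζ(s), is easy to sanity-check numerically. Here's a rough sketch in pure Python (stdlib only) using composite Simpson's rule on a truncated domain and a partial sum for ζ(3); the cutoffs (40 for the integral, 200 000 terms for zeta) are arbitrary choices for illustration:

```python
import math

def integrand(x, s=3):
    # x^(s-1) / (e^x - 1), the integrand of the quantum integral
    return x ** (s - 1) / (math.exp(x) - 1)

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Truncate [0, inf) to [1e-9, 40]; the tail beyond 40 is ~e^(-40), negligible.
numeric = simpson(integrand, 1e-9, 40.0, 200_000)
zeta_3 = sum(1 / n ** 3 for n in range(1, 200_000))  # partial sum for ζ(3)
print(numeric, math.gamma(3) * zeta_3)  # both ≈ 2.40411
```

Both sides agree to several decimal places, which is a nice confidence check on the derivation.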
Great connection between two awesome functions. And really cool applications too. Maths 505 is getting more and more awesome by the day. Would really love this channel to go from 8.63 K subscribers to 10^6 subscribers. Really cool stuff here.
Actually, the s = 4 case is exactly what you get by integrating Planck's formula for the spectral density of radiation over the whole range of frequencies to obtain the total radiation density (the Stefan–Boltzmann law).
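For that s = 4 case: expanding 1/(eˣ - 1) as a geometric series and integrating term by term gives ∫₀^∞ x³ e^(-nx) dx = 3!/n⁴, so the whole integral is Γ(4) ζ(4) = 6 ζ(4) = π⁴/15. A quick stdlib-only check (the 100 000-term truncation is an arbitrary choice):

```python
import math

# Term-by-term integration of the geometric-series expansion:
# each ∫₀^∞ x³ e^(-n x) dx = 6 / n⁴, summed over n ≥ 1.
series = 6 * sum(1 / n ** 4 for n in range(1, 100_000))
exact = math.pi ** 4 / 15  # Γ(4) ζ(4)
print(series, exact)  # both ≈ 6.49394
```

This π⁴/15 is the dimensionless factor that ends up inside the Stefan–Boltzmann constant after the physical prefactors are restored.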
Evaluating the integral using contours to get the reflection formula for zeta was some of the most fun I had in complex analysis. The integral that appears in the reflection formula for gamma was almost as fun.
For the mathematicians: this identity is known as a functional equation, and Riemann used a variation of it to analytically continue the zeta function to all inputs s such that s is not 1.
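For the curious, the functional equation in question is ζ(s) = 2ˢ π^(s-1) sin(πs/2) Γ(1-s) ζ(1-s). A small sketch of how it extends zeta: evaluating the right side at s = -1 only needs ζ(2), which the ordinary series supplies, and it returns the famous continued value ζ(-1) = -1/12:

```python
import math

s = -1.0
# ζ(1 - s) = ζ(2) = π²/6, computable from the convergent series:
zeta_2 = sum(1 / n ** 2 for n in range(1, 100_000))
# Right-hand side of the functional equation at s = -1:
rhs = (2 ** s * math.pi ** (s - 1) * math.sin(math.pi * s / 2)
       * math.gamma(1 - s) * zeta_2)
print(rhs)  # ≈ -0.08333, i.e. the continued value ζ(-1) = -1/12
```

The original Dirichlet series diverges at s = -1, but the right side is perfectly finite there, which is exactly the point of the continuation.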
This is because a geometric series converges if and only if the absolute value of its common ratio is strictly less than one. You can think of it like this: the n-th term, r^n, has to go to zero as n approaches infinity, and that only happens when |r| < 1. If |r| were equal to one, you'd just be summing a never-ending string of ones (essentially one times infinity), and if it were greater than one, the terms would grow exponentially and the sum would blow up.

Now, e^x diverges as x approaches the upper bound (infinity) and is never less than one on the interval (0, infinity), so you can't use it as a common ratio; the geometric series would diverge and you would not get a finite result. To remedy this, multiply and divide by e^(-x), which is a really convenient choice: it goes to zero as x approaches infinity, and it is strictly less than one everywhere on (0, infinity). That turns 1/(e^x - 1) into e^(-x)/(1 - e^(-x)), a geometric series in e^(-x).

Why is e^(-x) strictly less than one here? Because of how integrals are defined: the bounds are approached as limits, so the lower bound (zero) is approached from the positive direction. You never actually plug in x = 0, but rather something slightly larger, like 10^(-5); plug that into e^(-x) on your calculator and you'll see the result is slightly less than one. And e^(-x) never reaches or exceeds one on (0, infinity) as long as x stays positive. Hence e^(-x) is strictly less than one throughout the entire interval, and the geometric series converges.
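The expansion described above, 1/(eˣ - 1) = e^(-x)/(1 - e^(-x)) = Σ_{n≥1} e^(-nx) for x > 0, is easy to spot-check at a sample point (x = 0.5 and 200 terms are arbitrary choices):

```python
import math

x = 0.5
direct = 1 / (math.exp(x) - 1)                       # closed form
series = sum(math.exp(-n * x) for n in range(1, 200))  # geometric series
print(direct, series)  # the two values agree to machine precision
```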
@@daddy_myers I am curious about something, and it has been too long since I took calculus... Is there any problem with the fact that for 1 / (1 - exp(-x)), the value of exp(-x) is, in fact, greater than 1 for SOME values in the interval of integration, which is (0, inf)? If you take a really small value of x, exp(-x) can certainly be as big as one wants, right? How does this not screw up the |x| < 1 requirement for the geometric series expansion?