Here's another video on Feynman integration where we've used the technique to get a result that is absolutely beautiful! In case you haven't seen the video on the Dirichlet integral: • One of the coolest int...
Anyone who might be sophisticated enough to do this Feynman trick would already know how to do the contour integral, which gives the result in just a couple of lines.
Yes, I do agree that contour integration and even the Laplace transform will derive the result more efficiently. However, the purpose of this video is to demonstrate the use, power and beauty of differentiation under the integral sign.
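As a quick numerical sanity check of the result being discussed (an editorial sketch, not part of the video; assumes SciPy is available), the integral can be compared against the claimed closed form π/e:

```python
# Check numerically that ∫_{-∞}^{∞} cos(x)/(1+x^2) dx = π/e ≈ 1.15573.
# SciPy's quad with weight='cos' on [0, ∞) uses the oscillatory QAWF routine.
import numpy as np
from scipy.integrate import quad

# ∫_0^∞ cos(1·x)/(1+x^2) dx via the oscillatory-weight algorithm
half, err = quad(lambda x: 1.0 / (1.0 + x**2), 0, np.inf, weight='cos', wvar=1.0)

total = 2 * half          # the integrand is even, so double the half-line value
expected = np.pi / np.e   # the closed form derived in the video

print(total, expected)
```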
@@maths_505 I think a better example then is to do the full exercise: cos(ax)/(1+x^2) . Since cos is harmonic, it is easy to follow your derivation, if 'a' is far enough from 0.
I personally had no trouble understanding this and could potentially have solved it myself. But I have no idea how to even begin understanding complex integration and Cauchy's residue theorem.
Amazing result, by the way, you can actually get a better one by applying the fundamental theorem of engineering saying that π=e therefore getting I=1.
Great video Kamal! Isn't that simply the real part of the Fourier Transform of 1/(1+x^2) evaluated at 1? This Fourier Transform is well known (exp of abs) 😊
I am in my first year of a bachelor's degree in mathematics and I love these integrals, but I don't like the way that Feynman does these. Not for me; it looks like a physicist thing.
I've uploaded a video on this integral solved using the Laplace transform instead of Feynman's technique. Check it out, I think you'll like that better.
Honestly this seems like overkill to me; I think it is more straightforward to solve this using complex analysis + the residue theorem (with a contour around the pole at i).
I = ∫ cos x dx/(1+x^2) = ∫ cos x d(arctan x) = arctan x · cos x - ∫ arctan x · (-sin x dx) = arctan x · cos x + ∫ arctan x d(cos x) = 2 arctan x · cos x - ∫ cos x d(arctan x). Then 2 ∫ cos x dx/(1+x^2) = 2 arctan x · cos x, so I = cos x · arctan x + C.
I don’t think you can switch the differential and the integral, because the integral of the derivative doesn’t absolutely converge. In order to do that, you would need to do an integration by parts to increase the degree of the denominator, so that when you differentiate you get something absolutely integrable.
I was wondering why at 4:58, you can take the constant "a" inside the differential? I've never seen this done before so any explanation for why this works would be much appreciated. Thanks!
@@maths_505 I also just tried u-substitution, letting u = ax, and was able to get the same result (equals integral of sin(u) / u du). So that works too 👍
You do need to be careful about the bounds of integration when doing this. In this case they didn't change because they were 0 and infinity and we're assuming that a≥0, but in general they will change by a factor of a.
Because it’s the same process as thinking “ax is inside the sine. it would be nice if I had ax on the outside in the numerator so that a u-substitution would get rid of it, but I must multiply the top and bottom by a to produce ax”
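The u-substitution being discussed can be checked symbolically (an editorial sketch assuming SymPy; not from the video): for any a > 0, the substitution u = ax leaves the bounds 0 and ∞ unchanged and reduces the parametric integral to the plain Dirichlet integral.

```python
# For a > 0, ∫_0^∞ sin(ax)/x dx should equal ∫_0^∞ sin(u)/u du = π/2,
# since u = ax only rescales the variable on the half-line.
import sympy as sp

x = sp.symbols('x', positive=True)
a = sp.symbols('a', positive=True)

lhs = sp.integrate(sp.sin(a * x) / x, (x, 0, sp.oo))  # parameter inside the sine
rhs = sp.integrate(sp.sin(x) / x, (x, 0, sp.oo))      # classic Dirichlet integral

print(lhs, rhs)
```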
My thoughts exactly! Also, when calculating I'(0), he used I'(a) = -pi/2 + int (0 to inf) sin(ax)/(x(1+x^2)) dx, so he got I'(0) = -pi/2. But I'(a) also equals int (0 to inf) -x sin(ax)/(1+x^2) dx; if you apply this when calculating I'(0), shouldn't I'(0) = 0?
Amazing integration! Just want to point out that when solving a differential equation you must first solve the characteristic equation for the homogeneous part, and then find the particular solution. In this specific case the equation is strictly homogeneous, so there is only the characteristic equation to solve: r^2 - 1 = 0, hence r = 1 or r = -1, and the solution has the form c1 e^(r1 a) + c2 e^(r2 a). This only works when r1 and r2 are distinct and real; for repeated roots and complex roots the form of the solution changes. By the way, r1 = 1 and r2 = -1.

It's also important that the solution of the second-order differential equation is not 0, because that would imply our original integral equals 0. Zero is a possible solution of the differential equation, but when graphing cos(x)/(x^2+1), the area is clearly not 0.
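The characteristic-equation step described above can be reproduced symbolically (an illustrative sketch using SymPy's dsolve, not the video's own working):

```python
# Solve the ODE from the video, I''(a) = I(a), symbolically.
# The general solution should combine exp(a) and exp(-a), matching the
# characteristic roots r = 1 and r = -1 described in the comment.
import sympy as sp

a = sp.symbols('a')
I = sp.Function('I')

sol = sp.dsolve(sp.Eq(I(a).diff(a, 2), I(a)), I(a))
print(sol)   # general solution with two arbitrary constants C1, C2
```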
Here's what's bugging me about the derivation. Let's look again at I'(a). Before we manipulate it, it's the integral of -x sin(ax)/(x^2 + 1) dx. Now, if we take the limit as a --> 0, it looks like we get 0, since sin(0) = 0. But in your derivation, you get I'(0) = -pi/2.
Normally, using the dominated convergence theorem, we can justify differentiating under the integral sign by showing for all a in some interval, the partial derivative is dominated by some positive function whose integral converges. But the integral of |(sin ax)*x /(x^2+1)| from 0 to infinity diverges for any a>0. So how do we justify differentiating under the integral sign here?
I'm still very curious to know how we can justify differentiating under the integral sign to get I'(a). I've been thinking about it for over a week, but I still haven't figured it out.
@roderictaylor this is one of my earlier videos and definitely not one of my best. I approached this using a different method in another video which I liked a lot more.
@@maths_505 Thank you. I enjoy your channel, and I will check out your other video, but I've been studying differentiation under the integral sign recently, and I'm interested in when it works and doesn't work for its own sake. If we could show it works in this case, I'd be very curious to see it, as I'd be learning something new. At this point I don't think it does.

Let F(a,x) = cos(ax)/(x^2+1) and let F_a be its partial derivative with respect to a, F_a(a,x) = -x sin(ax)/(x^2+1). To justify differentiating under the integral sign, I believe we'd need to show that the integral from -infty to infty with respect to x of [(F(a+h,x) - F(a,x))/h - F_a(a,x)] goes to zero as h goes to zero. After some manipulation, I believe this is equivalent to showing the integral from -infty to infty with respect to x of [x sin(ax) (1 - sin(hx)/(hx))]/(1+x^2) goes to 0 as h goes to 0, and I don't think this is the case.
How do we know, in instances like these, to make the numerator match the denominator by multiplying by 1 and adding 0, as sometimes seems to be necessary? Is there an alternative way that doesn't require this?
We're not exactly taking a = 0; we're actually taking the limits of I(a) and I'(a) as a approaches zero. As far as the confusion about I'(a) is concerned, it can be proved with more mathematical rigor that the expression for I'(a) at the 2:48 mark isn't defined for a = 0, which is why I pulled out the Dirichlet integral to consider the limit of I'(a) as a approaches zero.

This issue was also raised in another comment, and it got me thinking about uploading an alternate solution that still uses the Leibniz rule. Unfortunately I forgot to upload it... I'll upload that solution tomorrow, as it won't create ambiguities that would force us into being extra rigorous. Thank you so much for reminding me via this comment!
Yes indeed, that is quite disturbing. Here's an article that explains the rigor behind our solution (I can't explain it properly in a YouTube comment 😂). It's the last example in the text: kconrad.math.uconn.edu/blurbs/analysis/diffunderint.pdf
@@maths_505 I do like the solution you give here. Just because there are other arguably easier ways to find the answer, doesn't mean we can't also appreciate a solution like this. I'd just like to figure out why it works (and perhaps in the process get a better understanding of when differentiating under the integral sign works). And I just now discovered the paper you linked above which treats this problem, acknowledges there are several invalid steps, and promises to derive it rigorously. I'll need to spend some time studying that.
Except the switch-up of the integration and differentiation was not justified, and the mentioned "trickery" was actually done to avoid getting a non-convergent integral, without mentioning why one does that.
To be frank, it’s an extremely simple ODE and a standard result. You can recover the result with a little intuitive thought if you forget it; I guessed it before he wrote it.
Is there not a contradiction between your statement about I'(a) = -pi/2 at 8:55 and your initial statement about I'(a) at 2:30, which would surely collapse to 0 when a = 0?
I agree that this is not the best way to tackle this integral using Feynman's trick. I solved it using the same technique but with an adjustment to take care of the irregularities in this video.
9:03 Our solution for I’(a) relies on the Dirichlet integral evaluating to pi/2; however, that doesn’t work for a = 0, which would give \int_{0}^{\infty} sin(0x)/x dx = 0, making I’(a) discontinuous at a = 0.
At 4:54 he doesn't use a = 0 yet when he does the suspicious "bring the constant a into the differential" step, so a being zero hadn't been used at that point. That's why he got the right answer at the end despite what you're saying being true.
Excellent question! We can justify the solution better by considering the behavior of the general solution I(a) = c1 e^(-a) + c2 e^(a) as a approaches positive infinity. The only way to get a bounded solution for all positive values of a is for c2 to be zero, which agrees with the general result of the integration with the parameter different from 1 (the general result can also be proved using the Laplace transform).

We can actually prove the result more rigorously while still using the Feynman technique, by taking into account the fact that I(a) is not differentiable at a = 0. So the value I obtained at the 5:56 mark is actually a limit as a approaches zero from the right. That's why I pulled out the Dirichlet integral, to make things more clear.
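The boundedness argument above (c2 = 0, leaving I(a) = (π/2)e^(-a) for a > 0) can be spot-checked numerically; a sketch assuming SciPy, not taken from the video:

```python
# For several a > 0, compare I(a) = ∫_0^∞ cos(ax)/(1+x^2) dx against (π/2)e^{-a},
# the bounded solution of I'' = I singled out in the comment above.
import numpy as np
from scipy.integrate import quad

def I(a):
    # oscillatory-weight quadrature for ∫_0^∞ cos(a x)/(1+x^2) dx
    val, _ = quad(lambda x: 1.0 / (1.0 + x**2), 0, np.inf, weight='cos', wvar=a)
    return val

for a in (0.5, 1.0, 2.0, 5.0):
    print(a, I(a), (np.pi / 2) * np.exp(-a))
```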
I think the first and last expressions for I’ have equal value for non-zero values of a, but their values are different for a=0. It’s not clear to me how that happened. It’s been almost 30 years since I studied calculus.
At 1:10 you show I(a) equal to an integral (from 0 to infinity) with "a" plugged in, but you dropped the factor of 2, which you showed immediately above. As this percolates down to the end, this would make the final solution 2*pi/e rather than pi/e. Right?
A very beautiful demonstration of the power, the efficiency, and the immense possibilities offered by this ingenious method for getting out of apparently intractable situations. Thank you for your work.
Why is it a partial derivative when it's after the integral sign, but an ordinary derivative when it's before the integral sign? What difference does the order of these operators make?
7:08 That is indeed a solution to the differential equation, but how do you know that it's the correct one? Put another way: the desired function I satisfies I''(a) = I(a), but that does not mean that every function satisfying this condition is necessarily I. Here, you would also have to argue that NO function other than the one you give solves I''(a) = I(a). Can this be done? If so, how?
1. This integral can be found with residues in one line. 2. If you do not know complex integration, you may know the Fourier transform: the parametric integral is (pi/2) exp(-|a|), obtained by taking the Fourier transform of a symmetric decaying exponential. Your solution ignores the case a < 0.
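The one-line residue computation mentioned in point 1 can be sketched with SymPy (illustrative only; closing the contour in the upper half-plane around the pole at z = i is assumed, not shown):

```python
# ∫_{-∞}^{∞} e^{iz}/(1+z^2) dz = 2πi · Res_{z=i} e^{iz}/(1+z^2); the real part
# of this is ∫_{-∞}^{∞} cos(x)/(1+x^2) dx, which should come out to π/e.
import sympy as sp

z = sp.symbols('z')
res = sp.residue(sp.exp(sp.I * z) / (1 + z**2), z, sp.I)  # residue at z = i

integral = 2 * sp.pi * sp.I * res
print(sp.simplify(integral))   # should simplify to pi*exp(-1)
```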
The Fourier transform is immediate. Proving the Fourier transform of e^(-|t|) is super easy, and then the inverse Fourier transform at t = 1 is exactly the integral in question, constants notwithstanding. The Fourier transform is such an amazing tool; shame that none of the math channels use it very much.
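The Fourier-transform fact cited in the comments above, that the transform of the symmetric decaying exponential e^(-|t|) is 2/(1+ω^2), can be verified numerically; an editorial sketch assuming SciPy:

```python
# Check that ∫_{-∞}^{∞} e^{-|t|} cos(ωt) dt = 2/(1+ω^2) for several ω.
# Inverting this relation at t = 1 recovers, up to constants, the video's integral.
import numpy as np
from scipy.integrate import quad

def ft(w):
    # real (cosine) part of the Fourier transform of e^{-|t|} at frequency w
    val, _ = quad(lambda t: np.exp(-abs(t)) * np.cos(w * t), -np.inf, np.inf)
    return val

for w in (0.0, 1.0, 3.0):
    print(w, ft(w), 2.0 / (1.0 + w**2))
```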
Nicely done! But what will learning integrals of such specific mathematical operators lead us to in the real world? I request you to correlate these integrals with real-world applications. That would make this video even more explosive!
Not sure how it's normally proved, but thinking about power series it makes sense. You will of course also have f''' = f', f'''' = f'', etc., so when you expand f(x) = f(0) + x f'(0) + (1/2)x^2 f''(0) + ..., all the coefficients will be determined by the first two. Then f(x) = f(0)[1 + x^2/2 + x^4/24 + ...] + f'(0)[x + x^3/6 + x^5/120 + ...] (i.e. f(0) cosh x + f'(0) sinh x), and it should follow from there.
But I don't think the converse holds; that is, I = I'' doesn't necessarily mean I = I'. Hence the more general expression initially deduced in the working. @@cottawalla
Good day. You are multiplying both the numerator and denominator by x, and at the same time calculating the integral from 0 to infinity, which means that 0 is included... can you do that?
@@maths_505 Sorry, it's not only approaching zero; zero is included. You say limits and I say domain of integration. ...Yes sir, you can, just to give you some headache. Kindest regards.
The technique is elegant, no shadow of a doubt about it. But would it be possible to identify the cases where this technique would be the best approach, something like what we do when we learn other integration techniques such as integration by parts, variable substitution, or trigonometric integration?
If you had something like "cosaxsinax", then would you treat it as "cos(ax sin (a)x)", "cos(a) x sin(ax)", "cos(a) sin(a) x^2" etc. or something else? Even if here one can 'guess' that you secretly mean that "sin ax = sin(ax)", it's often problematic, if people just skip brackets like this and we are working with many different variables or expressions that are multiplied together... I never understood why some physicists purposefully keep skipping brackets to create ambiguous expressions and sometimes even technically invalidate correctness of what they do, as if they loved to see the world BURN! :D :D Edit: 5:15 why not just "dax" instead of d(ax)? Let's be consistent in skipping the brackets 🙂
@@florisv559 if even meant what they thought it meant, only a small portion of functions would be even, and a general word like even wouldn’t be used for them
That's a fantastic computation and an amazing result. I wonder if this result could be used to prove something about the number π/e, for example, if it's transcendental or not.