Agreed! This is the best explanation of what the convolution integral is, along with its Fourier transform and Laplace transform. Clear concepts explained in plain language with easy function plots. Skipping those nasty algebra expressions saves the audience's energy for listening to the core ideas. Superb.
I've been through all four calc courses and am on Linear Circuits 2, and this is the first time anyone's written the first part of the definition. And it makes sense now.
"I won't be spending the next 18 minutes taking the convolution of sin and cosine in an effort to show you that the convolution of two functions is an actual quantity." Savage. I agree though--that Khan Academy video was a waste of time.
I had to go back to the Khan Academy video to understand how it happens. I think both videos are very useful and needed. However, it was rude of the Faculty of Khan to say so.
@@dania5426 I agree with that. Educators should maintain mutual respect for one another; true professionalism shows in how you present these videos. Salman from Khan Academy never says anything that is not pertinent to the topic of the video, whereas Faculty of Khan, even in his intro video, felt the need to defame other educators who have put out their own lectures in the past. I am an engineer and an educator myself, and I believe that for the sake of this channel's growth it would be better for Faculty of Khan not to defame other educators. For roasting and dissing, we already have tonnes of other entertainment channels.
This is such a MONUMENTALLY important idea in electrical engineering, I don't understand why so many other videos and teachers are so bad at explaining this topic
You're actually sweeping across values of tau, not t. t is a constant inside the integrand, which is why integrating results in a function of t, y(t).
Boundary conditions between the negative side and the positive side can be handled with the Laplace Transform too. The Fourier Transform is just a special case of the Laplace Transform.
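For anyone curious, here's a quick numeric sketch of that last claim: for a causal signal whose Laplace transform converges on the imaginary axis, evaluating the one-sided Laplace transform at s = i*omega reproduces the Fourier transform. The example signal f(t) = e^(-t)u(t) is my own illustrative choice, not from the video; its Laplace transform is 1/(s+1) and its Fourier transform is 1/(i*omega + 1).

```python
import cmath
import math

def laplace_numeric(s, n=200_000, upper=60.0):
    """Approximate the one-sided Laplace transform of f(t) = e^{-t}u(t),
    i.e. the integral of e^{-t} * e^{-s t} over t from 0 to infinity,
    with a left Riemann sum truncated at `upper`."""
    d = upper / n
    return sum(math.exp(-t) * cmath.exp(-s * t)
               for t in (i * d for i in range(n))) * d

omega = 2.0
via_laplace = laplace_numeric(1j * omega)  # Laplace transform on s = i*omega
fourier_exact = 1 / (1j * omega + 1)       # closed-form Fourier transform
```

The two values agree to within the discretization error of the Riemann sum, which is the sense in which the Fourier transform is the Laplace transform restricted to the imaginary axis (when the region of convergence allows it).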
Could you do a video about the Fourier transform (definition, purpose and derivation). Also, what is the difference between a Fourier Transform and a Fourier Series. Thanks!
Sure, as I continue my series on PDEs, I'll do some videos on Fourier! Also, a Fourier Transform is an operation that converts a function of time to a function of frequency (in a sense, it's like the Laplace Transform), while a Fourier series is a way to express a function as a sum of sines and cosines. Hope that helps!
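To make the series half of that distinction concrete, here is a minimal sketch using a textbook example (the odd square wave, my choice rather than anything from the video): a Fourier *series* expresses a periodic function as a sum of sines and cosines, and the partial sums converge to the function away from its jumps.

```python
import math

def square_wave_partial_sum(t, terms=2000):
    """Partial sum of the Fourier series of the odd square wave sgn(sin t):
    (4/pi) * sum over k of sin((2k+1) t) / (2k+1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(terms))

# At t = pi/2 the square wave equals 1, and the partial sum should be close.
value = square_wave_partial_sum(math.pi / 2)
```

A Fourier *transform*, by contrast, takes a (generally non-periodic) function of time to a function of a continuous frequency variable, as described above.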
I've always learned that the upper bound of the integration was 't' for the Laplace convolution, not 'inf'. One gives you a function of t, the other gives you a number. How do we distinguish between these two?
"I won't be spending the next 18 minutes showing you the convolution of sine and cosine in an effort to demonstrate that the convolution of two actual functions is an actual quantity" damn, some harsh words for Sal
In your last video from almost 10 years ago, you said the upper limit of the integration was t and the lower limit was zero, leading to a totally different result. Can you explain the reason behind these two different operations?
At 2:02, isn't it the increasing value of Tau, rather than t, that causes the g function to sweep to the right? If you have y = (x - Tau)^2 and you increase the value of Tau, you will cause the function to shift rightwards.
It's a bit different in this case; to use your analogy, we're increasing the value of x and not tau (this is the same as increasing t in g(t-tau)). If you draw y = (x-tau)^2 (y vs. tau as your axes), then increasing x will make your function move rightward. For instance, if x = 0, then y = tau^2 (i.e. the vertex of the parabola will be at tau = 0). However, if x = 1, then y = (1-tau)^2: now, the vertex of the parabola will be at tau = 1 (i.e. you've moved your function to the right). Same idea in 2:02. Hope that helps!
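The same point can be checked numerically in a few lines (a sketch of my own, just sampling tau on a grid): treat y = (x - tau)^2 as a function of tau and find where its minimum (the vertex) sits for two values of x. Increasing x moves the vertex, and hence the whole curve, to the right.

```python
def vertex_location(x, taus):
    """Return the tau on the sampled grid that minimizes (x - tau)^2."""
    return min(taus, key=lambda tau: (x - tau) ** 2)

taus = [i / 100 for i in range(-300, 301)]  # tau grid from -3 to 3

v0 = vertex_location(0.0, taus)  # for x = 0, the vertex sits at tau = 0
v1 = vertex_location(1.0, taus)  # for x = 1, the vertex has shifted to tau = 1
```

That rightward drift of the vertex as x grows is exactly what the sweeping of g(t - tau) at 2:02 looks like along the tau axis.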
Hi sir, if the upper limit of the convolution integral for the Laplace transform is infinity, then why is LaplaceInverse(F(s)*G(s)) = int f(Tau)*g(t-Tau) dTau from Tau=0 to "Tau=t" (and not infinity), where F(s) and G(s) are the Laplace transforms of f(t) and g(t)? Thanks.
First time seeing "Faculty of Khan", after coming from Khan Academy, also thought it was a robot talking and couldn't help but think- is this an incredibly advanced neural network, trained on Khan Academy neural net tutorials to output simpler neural net tutorials? Is this a weak AGI primitively reaching out and asking us to bring it to full capacity? If so, uh... *I'm here to help* Cheers! 🍺
No, you're right, but this was just a way to explain the idea behind convolutions. Using positive functions is more intuitive for teaching purposes than using negative functions.