The video is very nice, thank you! Just a small remark: the indexing of f and f-hat in the matrix-vector multiplication is wrong. It should count up to f_{n-1}, not f_n.
@@Eigensteve Or conversely, shouldn't you simply make the summation run from 0 to n? Since for f_0 to f_n you now have n+1 sample points, and x is a vector of size n+1. By making the summation j = 0:n, you are summing over n+1 points, which is the standard notation used in approximation theory.
@@iiillililililillil8759 You can change the summation range if you pull out the j = 0 term and add it in front of your sum :) similar to how it is done in series solutions of certain differential equations.
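For anyone following this indexing discussion, here is a quick numerical sketch (variable names are just illustrative) showing that with n samples the DFT sum runs over j = 0, ..., n-1, matching what NumPy's FFT does:

```python
import numpy as np

# n samples f_0, ..., f_{n-1}; the DFT sum runs over j = 0, ..., n-1.
n = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(n)

# Manual DFT: f_hat_k = sum over j = 0..n-1 of f_j * exp(-i 2 pi j k / n)
j_idx = np.arange(n)
f_hat = np.array([np.sum(f * np.exp(-2j * np.pi * j_idx * k / n))
                  for k in range(n)])

# Matches NumPy's FFT, which also indexes samples and coefficients 0..n-1.
assert np.allclose(f_hat, np.fft.fft(f))
```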
I can't agree with you more! The so-called Discrete Fourier Transform is in fact a numerical method for computing a Fourier series, not a Fourier transform. What a ridiculous name!
The amount of free, useful, precise information coming from this channel is remarkable and something to be grateful for. It legitimizes YouTube education.
It is not "free". Most likely, Professor Brunton has these lectures as one of the deliverables of many of his NSF grants. Thus, this is paid by the US taxpayer. :)
Dear Steve, I really enjoy your teaching format and your wonderful explanations. Just one suggestion: it would be great if you could have at least one practical lecture at the end of each series. For example, for the Fourier series lectures, one lecture working through a real problem would greatly enhance understanding. Stay motivated, and many thanks for your consideration.
I like your insight that this should really be called the Discrete Fourier SERIES. Thank you for your way of relating the matrix to the computation. Your perspective helps me see how the matrix relates to tensors and quantum mechanics.
Excellent video! The video was conceptually very clear and to the point. You are an amazing teacher, Prof Brunton! I loved your control systems videos too!
Oh my goodness! I stumbled onto video 1 in this playlist this evening and I can't stop. Steve, you're amazing. I finally feel like I understand what a Fourier series is and why it works. Can't wait to get to the end. This is easily the best set of lectures on this topic I've ever experienced. HUGE thanks!
@@samarendra109 I had to pause the video and look in the comments to see if he was writing backwards. It was driving me crazy, a small obsessive-compulsive attack XD
He is writing on a piece of glass, and the video is flipped afterward. He is a lefty, which you can see in his early unflipped videos. His hair part is also on the other side.
I have absolutely no clue what you're talking about but I love listening. Even without understanding it's very evident you're a talented and efficient teacher.
Heya! I really enjoy the pacing of your lectures. It's also nice for me to get a quick recap of some signal processing before assembling my own lectures. It is also helping me fill in the gaps of knowledge I have around data science, where my training is in Functional Analysis and Operator Theory. This past fall I dug through the literature for my Tomography class looking for a direct connection between the Fourier transform and the DFT. Mostly this is because in Tomography you talk so much about the Fourier transform proper, that abandoning it for what you called a Discrete Fourier series seemed unnatural. There is indeed a route from the Fourier transform to DFT, where you start by considering Fourier transforms over the Schwartz space, then Fourier transforms over Tempered Distributions. Once you have the Poisson summation formula you can take the Fourier transform of a periodic function, which you view as a regular tempered distribution, and split it up over intervals using its period. The Fourier integral would never converge in the truest sense against a periodic function, but it does converge as a series of tempered distributions in the topology of the dual of the Schwartz space. Hunter and Nachtergaele's textbook Applied Analysis (not to be confused with Lanczos' text of the same name) has much of the required details. They give their book away for free online: www.math.ucdavis.edu/~hunter/book/pdfbook.html
Your ability to explain something this abstract in such a simple manner is simply astounding. However, I was even more impressed by your mirror-writing skills. Hats off, sir. Very, very good video. Subscribing.
Some of your lectures are very good, but please be more specific when teaching. It's fine when talking to people already familiar with the concepts, but keep in mind that you are teaching new learners.
As far as I understand, when we take the inverse discrete Fourier transform, we end up with the function values at x_0, x_1, x_2, ..., x_n, but how would you determine what the values of x_0, x_1, ..., x_n are? I need to know this for my master's thesis; please help me if you can.
The last time I tried to give a similar lecture, I messed up the indexing far worse than this, so it was a little comforting to see you do it too. It made me wonder whether it is worth always counting from 0 when teaching linear algebra (probably not).
Thanks for the feedback... yeah, I know that when I make mistakes in class, it actually resonates with some of the students. I hope some of that comes through here.
Thank you for the presentation, with clarity and intuition. I have a question: at 9:14 you mentioned the fundamental frequency w_n. If we are given a piece of signal like the one you drew, how do we decide what frequencies to look for in that signal, and hence what fundamental frequency to set w_n to? In other words, how do we know whether we should look for frequency content at 10-20 Hz instead of 100-110 Hz?
Very useful lecture. Thank you so much, Steve! One question, by the way: why does the number of f-hat values equal the number of f values? I can't really understand that point. In my opinion, the number of calculated Fourier coefficients could differ from the number of sampling points.
Sounds like a good question to me. Maybe some of the values are so small that they can be neglected? I'd be interested for him, or someone else who knows this math, to talk about it here in the comments.
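One way to see why the counts match (a sketch, with an arbitrary small n): the DFT matrix is n × n and unitary up to a scaling, hence invertible, so n samples determine exactly n coefficients and vice versa. For a general complex signal none of them is redundant:

```python
import numpy as np

n = 6
w = np.exp(-2j * np.pi / n)                      # the fundamental w_n
F = w ** np.outer(np.arange(n), np.arange(n))    # DFT matrix, entries w^(j*k)

# F / sqrt(n) is unitary, so F is invertible: n samples <-> n coefficients.
U = F / np.sqrt(n)
assert np.allclose(U.conj().T @ U, np.eye(n))
```

(For real-valued data, nearly half the complex coefficients are conjugate mirror copies, which is presumably why small values can look negligible.)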
Hello! Thanks for your video. I had a question: suppose you start with data from a periodic analog signal x(t) with period T and frequency w, and you discretize it with sampling frequency f_s. I know you use the DFT, but how do you link the frequencies of your discrete and analog signals? Is the frequency w_n you're showing here the frequency of the continuous signal? Thank you!
Good question! There are deep connections between the discrete and continuous Fourier transforms, and you can derive the discrete from the continuous and vice versa (taking the limit of infinitesimal data spacing).
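To make the bin-to-Hz link concrete, here's a small sketch (the 50 Hz tone and sampling rate are made up for illustration): bin k of the DFT corresponds to the physical frequency k * f_s / n, which is exactly what `np.fft.fftfreq` computes.

```python
import numpy as np

fs = 1000.0                     # sampling frequency in Hz (illustrative)
n = 1000                        # number of samples -> 1 second of data
t = np.arange(n) / fs

x = np.sin(2 * np.pi * 50 * t)          # sampled 50 Hz analog tone
X = np.fft.fft(x)
freqs = np.fft.fftfreq(n, d=1 / fs)     # maps DFT bin k to Hz: k * fs / n

# The largest-magnitude bin (in the positive-frequency half) sits at 50 Hz.
peak = freqs[np.argmax(np.abs(X[: n // 2]))]
assert abs(peak - 50.0) < 1e-9
```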
Mr. Brunton. Thank you for clear, concise, organized presentation of DFT. Appreciative of how much time and effort such a presentation / explanation takes to create and deliver. Appreciative of the format you use and precision in getting explanation correct. Explanation of terms and where terms originate has always been helpful in your presentations. Going through the whole DFT, FFT series again to refresh my thinking on the topics. Thanks again. (Erik Gottlieb)
It's crazy how Gauss discovered the FFT algorithm and didn't publish it, probably because he didn't think it was significant enough. Meanwhile, it took the rest of humanity more than a century to rediscover it.
I think Prof. Steve either hates or doesn't remember the 26 letters of the alphabet 😂. He made the exponential term in the summation at 6:40 so confusing that I had to leave the video... 😂
Some videos write this equation e^(i*2*pi*j*k/n) without the i, as e^(2*pi*j*(k/n)*m), e.g. this one: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Iz6C1ny-F2Q.html&ab_channel=BarryVanVeen. Is there a difference?
I have to give you credit for giving the absolute best educational videos I have ever seen. The screen is awesome, the audio is great, you explain thoroughly and clearly, you write clearly, your voice is not annoying and everything makes sense. Thank you mr sir Steve.
Dear Prof. Steve, I think there are n+1 data points (from index 0 to n), but you calculated frequencies for (f1, f2, f3, ..., fn), a total of n points. Is one point missing? Is something wrong?
Hi Steve, at 13:07, if you increase your sample data to 2n, then the first row of your DFT matrix will be 2n ones, and f0_hat will be doubled, is that right? Thank you!
One thing I don't really understand is why there is a j in the exponential e^{i2πjk/n}. Aren't the e^{i2πk/n} terms sort of like the basis vectors we are projecting onto? Why do we need to raise each of them to the power j?
At 14:55, shouldn't the last value be w_n^(n(n-1)) instead of w_n^((n-1)^2), since that value is in the nth row and (n-1)th column?
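For what it's worth, with zero-based indexing the bottom-right entry really is w^((n-1)^2): the last row and last column both have index n-1, not n. A quick NumPy check (n = 5 is arbitrary):

```python
import numpy as np

n = 5
k = np.arange(n)
w = np.exp(-2j * np.pi / n)
F = w ** np.outer(k, k)   # entry (j, k) is w^(j*k), with j, k = 0, ..., n-1

# Bottom-right entry: row index n-1, column index n-1,
# so its exponent is (n-1)*(n-1) = (n-1)^2.
assert np.isclose(F[-1, -1], w ** ((n - 1) ** 2))

# Sanity check: this matrix agrees with NumPy's FFT applied to the identity.
assert np.allclose(F, np.fft.fft(np.eye(n), axis=0))
```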
Nice try, but you should know that swapping i and j confuses audiences unfamiliar with the material. Additionally, both the time and frequency domains are denoted by f, which is not appropriate notation for teaching the Fourier transform.
In any kind of complex math explanation, I value precision the most. This guy has good visualization but should have prepared better if he wants the video to be helpful.
I'm thinking the same thing. If it is mirrored, then the professor is left-handed and writing normally, and nothing is unusual here. If not, it is very hard to do. So I think it is mirrored. Very good video, clear explanation; the 4K image quality really helps me focus. Thank you very much, professor!
I believe it's the same technique as professor Matt Anderson uses on his physics videos. He explains this method on the video: "Learning Glass - What is he writing on?" link: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-CWHMtSNKxYA.html
To understand how important the FFT algorithm is: it helps nations detect when other countries are performing underground nuclear tests, from anywhere in the world. Hope that helps :)
Sorry, I just could not concentrate on the DFT while trying to figure out how on earth you do the writing: right hand, writing right to left. Is there a mirror involved (I can't see how), or did you really learn to write the other way?! Thanks!
Hi Steve, do you have a lecture on the connection between the Fourier series and the DFT? Their forms seem so alike. Are they actually connected, interpretation-wise? Many thanks!
They do! The important thing to notice is that the continuous FT is described by an integral (an infinite sum), whereas the DFT is defined as a finite sum. Otherwise they're almost identical. I'd recommend 3blue1brown's video on this.
If I am not wrong, we collect samples of x(t) in the time domain, so the elements of the second vector (the red one) are not the signal frequencies but just the amplitudes of our signal at times t?
At 5:56, if it only goes up to f_n (the coefficients), and thus that many weighted signals, how is it an infinite sum of sinusoids? I'm a bit confused.
I'm not quite sure I understand your question. If you are asking why the DFT/FFT has multiple "mirror" copies, this is because the DFT/FFT is complex-valued, and so there is redundancy in going from "n" real valued data points to "n" complex valued Fourier coefficients.
It's really complicated. When we use binary algebra, we can get a formula for a function almost immediately. This is a formula for the first 16 prime numbers: y(n) = 5 a3 a2 a1 a0 + 5 a3 a2 a1 + 5 a3 a2 a0 + 9 a3 a2 + 1 a3 a1 a0 + 5 a3 a1 + 5 a3 a0 + 21 a3 + 1 a2 a1 a0 + 3 a2 a1 + 1 a2 a0 + 9 a2 + 1 a1 a0 + 3 a1 + 1 a0 + 2
How did you write it??? I mean, it seems you are standing behind clean glass, which means you must write everything from right to left, a sort of mirror image of normal writing... that's so cool, I really want to know if that's how you did it??? (Also, yeah, I'm supposed to be concentrating on the DFT instead of the mirror-image writing, but that's me, I can't help it...)
It's called a "lightboard." The lecturer writes normally on glass while being recorded. You then have a choice: capture the work in a mirror and film the mirror, so everything looks like normal writing, OR record them writing through the glass and then flip the video horizontally in editing software (e.g., Microsoft Movie Maker). The glass is low-iron, so there are no stray reflections; LEDs at the top and bottom edges trap light inside the glass, and the marker ink gives that light a path to escape, making the writing glow. There are also black backdrops behind both the writer and the camera. Easy once you know the trick behind the magic.
@@JohnVKaravitis OOOHHHH, thanks brother! I thought he must have trained his brain to write in reverse, which would have been pretty impressive, but this is cool too. Thanks!
13:06 The number of 1s in the first row of the matrix will be the same as the number of data points in the signal (n), right?
Is there a difference between the Discrete Fourier Transform and the Discrete-Time Fourier Series? They seem like the same thing, including their formulas. Even at the beginning of the video you said the DFT should really be called a Fourier series.
In the DFT, you can tell there's a linear system of equations being solved through inner products: all terms except one in each equation are eliminated, because the complex basis vectors are orthogonal to each other. That's pretty straightforward and intuitive. However, when f is continuous, Fourier treats it exactly the same way, which seems wrong, since e^(iωx) and e^(i(ω+dω)x) are no longer orthogonal to each other, so even if we use the inner product, there will still be some non-zero "remainders" in each equation that we can't get rid of. Also, any Fourier transform of a function f on the domain (-∞, +∞) is problematic, since the inner product of any pair of basis vectors diverges. Do we then assume that we extend our domain to (-∞, +∞) in such a way that the inner product remains 0? Unfortunately, no one explains this.
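The discrete half of this is easy to verify: the sampled exponentials really are exactly orthogonal under the discrete inner product, even though their continuous counterparts over a finite interval are not. A numerical sketch (n = 16 is arbitrary; the continuous case is usually handled with distributions, as the Tomography comment above describes):

```python
import numpy as np

n = 16
idx = np.arange(n)

def basis(k):
    # k-th discrete Fourier mode, sampled at the n grid points
    return np.exp(1j * 2 * np.pi * k * idx / n)

# Distinct sampled modes are exactly orthogonal under the discrete
# inner product; each mode has squared norm n.
for k in range(n):
    for m in range(n):
        ip = np.vdot(basis(k), basis(m))   # conjugates the first argument
        expected = n if k == m else 0.0
        assert np.isclose(ip, expected)
```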
How would an efficient DFT look if I have a series of n coefficients λ0, λ1, λ2, λ3, ..., λn which are prime numbers (2, 3, 5, 7, ..., P(n)) times factors (f0, f1, f2, f3, ..., fn), where each factor is a non-negative integer?
Thank you, Steve! I am still not 100% clear on how we get from the Fourier series coefficients to the DFT coefficients (f-hat_k). If someone could explain that or share a relevant resource, I would greatly appreciate it.
Hey, great video and super clear explanation! I have a question regarding the indexing. Since we are indexing from 0, shouldn't the data and Fourier coefficient vectors run up to n-1 instead of n? Otherwise the data vector would have n+1 entries. I understand it's just indexing; however, the dimensions of the matrix and vector then wouldn't match for the multiplication. I think as it stands it's an n × n matrix times an (n+1) × 1 vector.
Great video, one of the better ones. I wish you had explained the exact meaning of the coefficient in the exponent, though; e.g., I never really understood the relationship between the sampling frequency and the number of data points (N). It seems like they will always be the same.
When I try to discretize f_hat from the continuous Fourier transform, I can't figure out how the dx disappears. Shouldn't some Δx be part of the f_hat expression?
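You're right that a Δx belongs there if you want the discrete sum to approximate the continuous integral; the DFT simply omits it as a convention. A sketch (assuming the convention f_hat(ω) = ∫ f(x) e^{-iωx} dx, with a Gaussian whose transform is known in closed form):

```python
import numpy as np

# Riemann-sum view: f_hat(w) = integral of f(x) e^{-i w x} dx
# is approximated by dx * sum_j f(x_j) e^{-i w x_j},
# so an explicit dx factor is needed to match the continuous transform.
L, n = 20.0, 4000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

f = np.exp(-x**2)                            # Gaussian test function
w = 1.5                                      # evaluate at one frequency
approx = dx * np.sum(f * np.exp(-1j * w * x))
exact = np.sqrt(np.pi) * np.exp(-w**2 / 4)   # known FT of e^{-x^2}

assert abs(approx - exact) < 1e-6
```

The bare DFT drops the dx because for recovering the samples (the inverse transform) the factor cancels; it only matters when you interpret the coefficients as approximations of the continuous f_hat.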
I think I'm just going to watch all your videos for my machine learning course this semester instead of my professor's lectures, which were so painful and frustrating...