Cool vid! 3:30 I would like to note that, while inversion of the matrix is probably the best way to go about this for a proof, practically speaking, inverting the matrix is usually the worst way to solve a matrix equation. The point of the proof is that if the matrix is invertible, the solution A not only exists but is unique. And if A weren't unique, the interpolating polynomial wouldn't be unique either. Anyhow, this matrix depends only on the choice of node locations, not on the values at the nodes, so whether the equation has a unique solution depends only on whether the matrix is invertible, not on the y values. And that's why it's useful to do the proof this way.
Your comment is absolutely correct. The matrix equation can never result in multiple solutions for A, because of the uniqueness of the interpolating polynomial. Therefore, the polynomial doesn't exist if the Vandermonde determinant is zero. Using the Gauss-Jordan method to solve for A would be better than inverting the Vandermonde matrix. Matrix inversion by cofactors is a very computationally expensive and wasteful process, and I cannot recall another way of doing it with an acceptable time complexity, polynomial for instance.
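A quick sketch of the "solve, don't invert" point in Python with NumPy (the nodes and y values here are made up for illustration):

```python
import numpy as np

# Made-up nodes and values for a cubic interpolation problem.
x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 5.0])

# Vandermonde matrix with columns 1, x, x^2, x^3.
V = np.vander(x, increasing=True)

# Preferred: solve the linear system directly (LU factorization internally).
a_solve = np.linalg.solve(V, y)

# Discouraged in practice: form the inverse explicitly, then multiply.
a_inv = np.linalg.inv(V) @ y

# Both give the same coefficients here, but solve() does less work and
# is numerically better behaved than forming V^{-1} explicitly.
```

Gauss-Jordan (or plain Gaussian) elimination on the system directly is O(n³), the same order as what `np.linalg.solve` does under the hood.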
Thanks for the video, this was a very interesting demonstration of an application of Vandermonde matrix I'd never heard of before, and all the steps were clear.
Yes, this is long-winded. I just write the 4 equations and solve for the unknowns. This is simple if the points are equally spaced. Sometimes one must use unequal intervals, like t01 for the time between x0 and x1. I get equations in terms of the time intervals, then substitute the time intervals so they aren't calculated over and over again.
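Working in terms of interval differences like this is essentially what Newton's divided-difference form does: only differences between nodes ever appear, and each is computed once. A rough sketch (the times and values are made up):

```python
import numpy as np

def divided_differences(t, y):
    """Newton divided-difference coefficients, computed in place.
    Only interval differences t[j] - t[j-k] are ever used."""
    c = np.array(y, dtype=float)
    n = len(t)
    for k in range(1, n):
        for j in range(n - 1, k - 1, -1):
            c[j] = (c[j] - c[j - 1]) / (t[j] - t[j - k])
    return c

def newton_eval(t, c, x):
    """Evaluate the Newton-form polynomial at x, Horner style."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (x - t[k]) + c[k]
    return result

# Unequally spaced times, as in the comment.
t = [0.0, 0.5, 2.0, 3.0]
y = [1.0, 2.0, 0.0, 4.0]
c = divided_differences(t, y)
```

The interpolant reproduces every node, and adding one more point only appends one coefficient instead of re-solving the whole system.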
For the general formula of the Vandermonde determinant, a double product (pi) notation could be easier to understand, like a nested for loop, with the lower limit written below and the upper limit above: ∏_{j=1}^{n-1} ∏_{i=0}^{j-1} (x_j − x_i) is then the correct representation.
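Read as a nested for loop, that double product is straightforward to check against the determinant of the Vandermonde matrix itself; a small sketch with arbitrary distinct nodes:

```python
import numpy as np

x = [1.0, 2.0, 4.0, 7.0]  # arbitrary distinct nodes
n = len(x)

# The double product: for j = 1..n-1, for i = 0..j-1, multiply (x_j - x_i),
# exactly the nested-loop reading of the formula.
prod = 1.0
for j in range(1, n):
    for i in range(j):
        prod *= x[j] - x[i]

# Vandermonde matrix with rows [1, x_k, x_k^2, x_k^3].
V = np.vander(x, increasing=True)

# prod and np.linalg.det(V) agree (up to floating-point error).
```

Since the product is over pairwise differences of distinct nodes, every factor is nonzero, which is the invertibility argument in one line.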
Great video. The Vandermonde determinant is really cool, but I don't think it was necessary to show invertibility. If we know that Lagrange interpolation (or whatever) works for every choice of vector y, that means the Vandermonde matrix is surjective, hence invertible.
The framework is essentially the same as what's in linear algebra textbooks (slightly more handwaving on the arithmetic than what I remember) from back in the day, but presented more succinctly. (I got tired of having to code it anew in Fortran every time I needed it... damned kids and their libraries for everything.)
Nice, I always saw the existence proof given by explicitly constructing a polynomial (using Lagrange polynomials), and then Taylor polynomials or something in actual practical applications. Though now that I'm thinking about it, if all we want is an existence proof, doesn't that follow immediately from uniqueness? The Vandermonde matrix (once you have fixed the x-values) represents a linear map on a finite-dimensional vector space, where injectivity is equivalent to surjectivity.
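For comparison, the explicit Lagrange construction mentioned here can be sketched directly (the nodes and values are made up):

```python
def lagrange_interpolate(x_nodes, y_nodes, x):
    """Evaluate the Lagrange-form interpolant at x.

    Existence is by construction: each basis polynomial L_k equals 1 at
    x_nodes[k] and 0 at every other node, so the weighted sum passes
    through every data point."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(x_nodes, y_nodes)):
        L = 1.0
        for i, xi in enumerate(x_nodes):
            if i != k:
                L *= (x - xi) / (xk - xi)
        total += yk * L
    return total

# Made-up data: the interpolant reproduces every node exactly.
xs = [0.0, 1.0, 3.0]
ys = [2.0, 0.0, 4.0]
vals = [lagrange_interpolate(xs, ys, t) for t in xs]
```

No linear system is solved at all, which is why this route gives existence without touching invertibility.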
At 7:15, when you subtract the columns, I can't seem to understand why the second row after it only has x0 to the power of 1. Shouldn't subtracting x0 times the previous column affect the whole column? Sorry, that part got me a bit confused.
Note the blue text: we are subtracting "x0 * the previous column", for all rows. First we look at the effect that has on the first row, namely that it turns every element after the first into a zero. Then we look at what it does for the second row, so we start with [1, x1, x1², x1³, ...] and apply the same operation: the 1 stays the same; the next column gets "x0 * previous column = x0 * 1 = x0" subtracted, yielding x1 - x0; the next column gets x0 * x1 subtracted, yielding x1² - x0x1; the next column gets x0 * x1² subtracted, yielding x1³ - x0x1², and so on. The next row [1, x2, x2², x2³, ...] gets the same treatment, turning into [1, x2 - x0, x2² - x0x2, x2³ - x0x2², ...], and so on all the way down the matrix.
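The column operation described above can be checked numerically; a small sketch with arbitrary nodes, applying C_k ← C_k − x0·C_{k−1} from the last column backwards so that every step sees the original previous column:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0])       # arbitrary nodes x0, x1, x2
V = np.vander(x, increasing=True)   # rows [1, xk, xk^2]
x0 = x[0]

W = V.copy()
# Subtract x0 * (previous column) from each column, rightmost first,
# so each subtraction uses the unmodified previous column.
for k in range(W.shape[1] - 1, 0, -1):
    W[:, k] -= x0 * W[:, k - 1]

# First row becomes [1, 0, 0]; second row becomes
# [1, x1 - x0, x1^2 - x0*x1], matching the description above.
```

Note the right-to-left order: done left to right in place, each column would be subtracting an already-modified neighbour instead of the original one.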
But it doesn't say whether, if a node is repeated, the polynomial doesn't exist or multiple polynomials exist... so how do I find the polynomial if a node is repeated? I could apply the method to the non-repeating nodes and then multiply by (x - xr)^m, where xr is the repeated node and m is the number of repetitions. Is this correct?
If a node is repeated, then you have two output values for a single input, which is not a function and so not a polynomial. So what you are doing is not interpolation. You are probably looking for polynomial regression.
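A least-squares fit is the usual way to handle that case; a rough sketch with NumPy (the data is made up, including one repeated input with two conflicting outputs):

```python
import numpy as np

# A repeated input 1.0 with two different outputs: no polynomial can
# pass through both points, but a least-squares fit still exists.
x = np.array([0.0, 1.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 2.5, 2.0, 4.0])

# Degree-2 regression polynomial (degree below the number of points).
coeffs = np.polyfit(x, y, deg=2)
fitted = np.polyval(coeffs, x)

# The fit minimizes the sum of squared residuals rather than matching
# every point exactly; with conflicting y values it can never be zero.
residual = np.sum((fitted - y) ** 2)
```

(If the repeated node has the *same* y value each time, one can simply drop the duplicate and interpolate the distinct nodes as usual.)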