At 7:10 there is a correction. The notation isn't consistent with the matrix shown at 5:44. x_1 should pass through phi_{11}, phi_{21}, ..., phi_{51}; and x_2 should pass through phi_{12}, phi_{22}, ..., phi_{52}. Basically, the activation functions should be labeled in this order: phi_{11}, phi_{21}, phi_{31}, phi_{41}, phi_{51}, phi_{12}, phi_{22}, phi_{32}, phi_{42}, phi_{52}. Credit to @bat.chev.hug.0r for pointing it out!
This is an excellent explanation of the paper (now I can ease into reading the paper). Learnable activations are new and exciting, and most researchers would be kicking themselves saying, "Why didn't I think of that?" The next step (for the authors of the paper) may be to work with "attention", because as far as we know, that's "all you need".
Agreed! In theory, they could probably do some attention stuff when aggregating the outputs of the activation functions at each layer. Instead of a regular addition, just do an (attention-)weighted addition. It'll be interesting to see for sure - Kolmogorov Arnold Attention Networks (KAAN) has a nice ring to it. That said, I think they should prioritize making it highly parallelizable and fast first.
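A minimal PyTorch sketch of that weighted-aggregation idea (the names and shapes here are my own guesses, nothing from the paper):

```python
import torch
import torch.nn.functional as F

# Sketch of the idea above: instead of plainly summing the per-edge
# activation outputs phi_ij(x_j), weight them with learned attention
# scores before adding. Everything here (names, shapes) is hypothetical.
def attention_weighted_aggregate(phi_outputs, scores):
    # phi_outputs: (batch, out_dim, in_dim) -- phi_ij(x_j) for every edge
    # scores:      (out_dim, in_dim)        -- learnable logits per edge
    weights = F.softmax(scores, dim=-1)          # normalize over inputs
    return (phi_outputs * weights).sum(dim=-1)   # (batch, out_dim)

# Usage: a layer with 2 inputs and 5 outputs, batch of 8
phi_out = torch.randn(8, 5, 2)
logits = torch.nn.Parameter(torch.zeros(5, 2))   # trained with the rest
y = attention_weighted_aggregate(phi_out, logits)
```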
There have been attempts to make activation functions learnable. In my opinion, one of the most successful attempts is the radial basis function neural network. It's quite an interesting mechanism, but it is now considered outdated.
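For anyone curious, a minimal sketch of what an RBF network looks like (my own toy version, just to illustrate the "learnable activation" flavor, not any particular library's implementation):

```python
import torch
import torch.nn as nn

# Minimal sketch of an RBF network. The centers and widths are learned,
# so the shape of each hidden unit's response is itself trainable --
# the same spirit as KANs' learnable activations.
class RBFNet(nn.Module):
    def __init__(self, in_dim, num_centers, out_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, in_dim))
        self.log_gamma = nn.Parameter(torch.zeros(num_centers))
        self.linear = nn.Linear(num_centers, out_dim)

    def forward(self, x):
        # squared distance from every input to every center
        d2 = torch.cdist(x, self.centers) ** 2        # (batch, num_centers)
        h = torch.exp(-self.log_gamma.exp() * d2)     # Gaussian responses
        return self.linear(h)

model = RBFNet(in_dim=2, num_centers=16, out_dim=1)
y = model(torch.randn(8, 2))
```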
This reminds me of harmonics in sound, where the function is one-dimensional (the strength of the sound depends on time), but we can say a sound wave is also a complex function composed of simpler functions, namely the different frequencies or harmonics of the wave. That's the analogy I have in my head.
I think that's a fair analogy. I saw some stuff on Hacker News (news.ycombinator.com/item?id=40219205) where someone implemented a KAN layer in PyTorch with Fourier coefficients (github.com/GistNoesis/FourierKAN/).
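Not that repo's actual code, but a rough sketch of the Fourier idea: parameterize each learnable edge function as a truncated Fourier series, so training adjusts the strength of each "harmonic" directly:

```python
import torch
import torch.nn as nn

# Sketch only: one learnable edge function phi(x) as a truncated Fourier
# series. The coefficients a_k (cosine) and b_k (sine) are the trainable
# parameters -- each one is literally the weight of a harmonic.
class FourierEdge(nn.Module):
    def __init__(self, num_harmonics=8):
        super().__init__()
        self.a = nn.Parameter(torch.zeros(num_harmonics))  # cosine coeffs
        self.b = nn.Parameter(torch.zeros(num_harmonics))  # sine coeffs

    def forward(self, x):
        # x: (batch,) -> phi(x): (batch,)
        k = torch.arange(1, self.a.numel() + 1, dtype=x.dtype)
        angles = x[:, None] * k[None, :]                   # (batch, K)
        return (torch.cos(angles) * self.a + torch.sin(angles) * self.b).sum(-1)

phi = FourierEdge()
y = phi(torch.linspace(-3.14, 3.14, 10))
```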
This is the best explanation of the theorem I've found so far. I think I understood most of it when going through the paper, but this has really solidified and clarified what the proof is about.
I get an itch in the back of my brain that KANs should be able to use some support-vector tricks. In particular, there should be a subset of training examples that supports the learned splines, with the others being hit "well enough" by interpolation. It's a bit like learning the support vectors and the kernel at the same time. It should perhaps be possible to train an independent KAN per minibatch with a really restricted number of free params, and use this to a) drop the non-supporting training examples, and b) concat/combine the learned parameters recursively.
Do you plan on making long mathematical breakdowns and derivations of ML papers at some point in the future? An example of what I mean is the mathematical explanation of diffusion models by the "Outlier" YouTube channel. The suggestion is basically to have two versions of each major ML topic: an overview like this video, and another that goes deeper into the derivations and simplifies them.
@AdmMusicc Thanks for the suggestion, sounds like a good idea. I might consider doing more in-depth math videos in the future. Most of my videos right now focus on the more practical and intuitive aspects of ML algorithms with some visual cues and illustrations.
Thank you for this great, extremely clear video. KAN networks seem to be a much more sensible approach than MLPs for physics, as the basis functions can be selected based on prior knowledge of the field... But without GPU support it will be complicated to scale them to large models.
Great explanation, thanks. I am just a bit confused about what the learnable function at the edge level contains, and how these local parameters are updated during the backpropagation phase. Thanks!
Most importantly, it is a theoretically proven formula for representing any multidimensional function. The Kolmogorov-Arnold theorem plus the control points of a B-spline form a basis for any continuous function. Training, speed, and propagation are technical problems that should be solvable.
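To make that concrete (and to answer the backprop question above): each edge stores the control-point coefficients of a B-spline, and those coefficients are ordinary parameters, so autograd updates them like any other weight; only the spline basis is fixed. A rough sketch with fixed uniform knots (my own toy code, not the paper's implementation):

```python
import torch
import torch.nn as nn

# One learnable edge function: phi(x) = sum_i c_i * B_i(x), where the
# B_i are fixed B-spline basis functions (Cox-de Boor recursion) and the
# coefficients c_i are ordinary parameters updated by backprop.
def bspline_basis(x, knots, k):
    # x: (batch,), knots: (num_knots,), degree k -> (batch, num_bases)
    x = x[:, None]
    B = ((x >= knots[:-1]) & (x < knots[1:])).to(x.dtype)  # degree 0
    for d in range(1, k + 1):
        left = (x - knots[: -d - 1]) / (knots[d:-1] - knots[: -d - 1])
        right = (knots[d + 1 :] - x) / (knots[d + 1 :] - knots[1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

class SplineEdge(nn.Module):
    def __init__(self, grid_size=8, k=3, lo=-1.0, hi=1.0):
        super().__init__()
        self.k = k
        self.register_buffer("knots", torch.linspace(lo, hi, grid_size + 2 * k + 1))
        self.coef = nn.Parameter(torch.zeros(grid_size + k))  # control points

    def forward(self, x):
        return bspline_basis(x, self.knots, self.k) @ self.coef

phi = SplineEdge()
loss = phi(torch.rand(16)).pow(2).mean()
loss.backward()   # phi.coef.grad is now populated, like any MLP weight
```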
Aren't the indices of the $\phi$ functions inverted at 7:10 when you describe the KAN layer? I think $x_1$ should be passed through $\phi_{11}, \phi_{21}, \ldots, \phi_{51}$, and the same goes for $x_2$? Great video anyway! Thanks
So basically, "promising results" on very simple function approximation. What about classifying the MNIST digits? It's not even considered a meaningful test nowadays, but at least it would show that the method works (on 28*28 = 784 input dimensions). It doesn't take much time to test.
Thank you for your very clear explanation. I have a question: please tell me about the formula at the bottom around 3:40. In this formula, isn't "Price" the result of adding up all the prices in the rows above? Or is "Price" in this expression a vector?
Thanks a lot! So in the dataset (like the Boston Housing Dataset), each row stands for one house/property with all of its different features and its price. The task is to train an ML model that inputs the features (bedrooms, sq footage, etc.) and predicts the price. So basically the model can be used later to predict the price when it is "unknown" and the features are known. So yeah, we won't be adding up prices from other rows because a) they are all for different houses, and b) the price is the quantity we want to predict from those other attributes. Hope that answers the question!
@avb_fj Thank you for your reply, I am glad to hear that. Well, I think the data in the first and second rows of the Boston Housing Dataset are the input data for f1 and f2, respectively. Then what exactly does "Price" on the left side of this equation contain? Or is the idea that only one row is entered per calculation?
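Maybe a tiny toy example helps (made-up numbers, not the real dataset): "Price" is one scalar per row, and each calculation uses exactly one row.

```python
# Toy version of the reply above (made-up numbers, not real Boston data).
# Each row is ONE house: its features and that house's single scalar price.
rows = [
    # (bedrooms, sqft) -> price
    ((3, 1500.0), 320_000.0),
    ((2,  900.0), 210_000.0),
]

def f(bedrooms, sqft, w=(50_000.0, 120.0), b=0.0):
    # A stand-in linear model; these weights are made up for illustration.
    return w[0] * bedrooms + w[1] * sqft + b

# "Price" on the left-hand side is one number per row, never a sum over rows:
for (bedrooms, sqft), price in rows:
    print(f(bedrooms, sqft), "vs actual", price)
```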
How do KANs work on MNIST? Since an MLP can get 92% test accuracy with around 20k connections, it should be easy to zip MNIST through a KAN and get a result?
Hmm, why is it good that all the functions are univariate? Why not make bivariate functions based on Bezier surfaces, or few-variable hypersurfaces? They could pack some extra power by combining inputs to a higher degree, while retaining some of the advantages of weight changes having local impact...
I'm sure as time goes on, someone will try to squeeze more accuracy and performance out of KANs with multivariate functions. I guess philosophically it makes sense to keep them univariate, because according to the Kolmogorov-Arnold theorem, any multivariate continuous function is just sums of univariate functions composed with addition.
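For reference, the theorem's statement: any continuous $f$ on $[0,1]^n$ can be written as

$$f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)$$

where every outer $\Phi_q$ and inner $\phi_{q,p}$ is univariate, so the only genuinely multivariate operation left is plain addition.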
That's true. Any vector representation of a data point is its embedding. The outputs of a KAN layer are indeed an embedding of the input. It just computes this embedding in a different way than MLPs do.
@avb_fj Thank you so much! And thank you for your breakdown of the math. HUGE anxiety trigger for me, and your fantastic presentation skills did wonders for that.
For starters, they will probably try to flatten the 28x28 images into a 784-length vector and run the current version of KAN on it, similar to how standard MLPs train on images. To do a CNN-like implementation, they will have to do more work, like summing up representations over a rolling window/kernel, which will probably come later.
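Roughly, the flattening step would look like this (`KANLayer` here is a hypothetical stand-in, not an actual API from the paper's code):

```python
import torch

# Sketch of the flattening approach described above. The point is only
# the 28x28 -> 784 reshape, done exactly the way MLPs do it.
images = torch.rand(32, 28, 28)          # a batch of fake MNIST images
x = images.flatten(start_dim=1)          # (32, 784)

# layer1 = KANLayer(in_dim=784, out_dim=64)   # hypothetical KAN layers
# layer2 = KANLayer(in_dim=64, out_dim=10)
# logits = layer2(layer1(x))
print(x.shape)                           # torch.Size([32, 784])
```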
I still don't understand why this paper is important. Yes, of course you can replace anything in an existing network architecture with some other type of function, yay. There are innumerable arrangements and styles of things you could do to modify networks. Whee. I haven't seen a single example that shows why we should care about this particular arrangement; if KANs were good, they'd have more evidence than the very synthetic and useless examples in the paper.
I think the degree of the B-splines is a hyperparameter that is predetermined at the start of training, so no, it's not dynamic during training. That said, one can increase or decrease the number of control points of the splines after training (look at Grid Extension in the video or the paper) to create a new model with more/less complexity.
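As a sketch of what grid extension amounts to (using SciPy here; the paper's exact procedure may differ in details): densely sample the trained coarse spline, then least-squares fit a spline on a finer knot grid:

```python
import numpy as np
from scipy.interpolate import BSpline, make_lsq_spline

# Sketch of the grid-extension idea: take a "trained" coarse spline,
# sample it densely, then refit on a finer knot grid by least squares.
k = 3
t_coarse = np.concatenate(([0] * (k + 1), np.linspace(0, 1, 6)[1:-1], [1] * (k + 1)))
c_coarse = np.random.randn(len(t_coarse) - k - 1)   # stand-in trained coeffs
coarse = BSpline(t_coarse, c_coarse, k)

xs = np.linspace(0, 1, 500)                          # dense sample points
t_fine = np.concatenate(([0] * (k + 1), np.linspace(0, 1, 12)[1:-1], [1] * (k + 1)))
fine = make_lsq_spline(xs, coarse(xs), t_fine, k)    # refit on finer grid

print(np.abs(fine(xs) - coarse(xs)).max())           # small approximation error
```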
Very clear. Subscribed. Question though: your B-spline visualization showed the curve doubling back under itself, but wouldn't that make it not a function (i.e., more than one output per x coordinate)?