The "universal approximation theorem" is a catch-all term for a bunch of theorems regarding the ability of the class of neural networks to approximate arbitrary continuous functions. How exactly (or approximately) can we go about doing so? Fortunately, the proof of one of the earliest versions of this theorem comes with an "algorithm" (more or less) for approximating a given continuous function to whatever precision you want.
(I have never formally studied neural networks.... is it obvious? 👉👈)
The original manga:
[LLPS93] M. Leshno, V. Y. Lin, A. Pinkus, and S. Schocken. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Networks, 6(6):861–867, 1993.
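
For concreteness, the networks in play here are one-hidden-layer ("shallow") networks: linear combinations of sigma(w·x + theta). A minimal numpy sketch (the names and the tanh default are mine, not the paper's):

import numpy as np

def shallow_net(x, W, theta, a, sigma=np.tanh):
    """One-hidden-layer network: x -> sum_i a[i] * sigma(W[i] @ x + theta[i]).

    [LLPS93], roughly: the set of all such networks (any width, any
    parameters) is dense in C(R^n) uniformly on compact sets if and only
    if sigma is not a polynomial (plus mild regularity assumptions).
    """
    return a @ sigma(W @ x + theta)

# e.g. a width-3 network on R^2
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # hidden weights, one row per neuron
theta = rng.normal(size=3)    # hidden biases
a = rng.normal(size=3)        # output weights
print(shallow_net(np.array([0.5, -0.5]), W, theta, a))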
________________
Timestamps:
00:00 - Intro (ABCs)
01:08 - What is a neural network?
02:37 - Universal Approximation Theorem
03:37 - Polynomial approximations
04:26 - Why neural networks?
05:00 - How to approximate a continuous function
05:55 - Step 1 - Monomials
07:07 - Step 2 - Polynomials
07:33 - Step 3 - Multivariable polynomials (buckle your britches)
09:35 - Step 4 - Multivariable continuous functions
09:47 - Step 5 - Vector-valued continuous functions
10:20 - Thx 4 watching
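________________
Bonus: if you want to play along at home, here is a rough numpy sketch of Steps 1, 2, and 4 in the nicest possible case, sigma = exp (smooth, with every derivative nonzero at 0). The helper names and the parameter choices (h, degree) are mine; the paper's actual proof handles arbitrary non-polynomial activations, which takes considerably more care.

import numpy as np
from math import comb

def monomial_net(x, k, h=1e-2):
    # Step 1: a (k+1)-neuron one-hidden-layer net with activation exp that
    # approximates x**k.  The k-th finite difference of w -> exp(w*x) at
    # w = 0, divided by h**k, equals ((exp(h*x) - 1) / h)**k -> x**k as
    # h -> 0.  Each exp(j*h*x) is one hidden neuron (weight j*h, bias 0);
    # the finite-difference coefficients are the output-layer weights.
    # Caution: the h**(-k) factors cause float cancellation for large k,
    # so this is a proof of concept, not a practical method.
    out = np.zeros_like(x, dtype=float)
    for j in range(k + 1):
        out += (-1) ** (k - j) * comb(k, j) / h**k * np.exp(j * h * x)
    return out

def continuous_net(f, degree=5, h=1e-2):
    # Steps 2 & 4: fit f on [-1, 1] by a polynomial (Weierstrass guarantees
    # one exists for any continuous f and any tolerance), then realize each
    # monomial with the Step 1 gadget.  A sum of networks is one network.
    xs = np.linspace(-1.0, 1.0, 400)
    coeffs = np.polynomial.polynomial.polyfit(xs, f(xs), degree)  # low -> high
    return lambda x: sum(c * monomial_net(x, k, h) for k, c in enumerate(coeffs))

# sanity check: a continuous, non-polynomial target
f = lambda x: np.sin(3 * x)
net = continuous_net(f)
xs = np.linspace(-1.0, 1.0, 9)
print(np.max(np.abs(net(xs) - f(xs))))  # crude, but shrinks with h and degree

Steps 3 and 5 are the same ideas with more bookkeeping: multivariable monomials come from expanding (w·x)**k over varying w, and vector-valued targets are approximated one output coordinate at a time.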