H.B. Keller Colloquium
At the heart of modern machine learning (ML) is the approximation of high-dimensional functions. Traditional approaches, such as approximation by piecewise polynomials, wavelets, or other linear combinations of fixed basis functions, suffer from the curse of dimensionality (CoD): their complexity grows exponentially with the dimension. This does not appear to be the case for neural network-based ML models. To quantify this, we need to develop the corresponding mathematical framework. In this talk, I will report on the progress made so far and the main remaining issues, within the scope of supervised learning.
I will discuss three major topics: the approximation theory and error analysis of modern ML models, the qualitative behavior of gradient descent algorithms, and ML from a continuous viewpoint.