Rigorous Systems Research Group (RSRG) Seminar
This talk is one in a series given by Professor Tarokh as part of the Moore Scholar program.
Abstract: I will begin with a very brief overview of our work over the last 2-3 years, and then focus on our results on the limits of learning (LoL).
Let a sequence of data points X1, X2, …, Xn be given. A fundamental question is: how much can be learned from this data? An approach was set forth by the great mathematician A. N. Kolmogorov, based on Kolmogorov complexity. However, Kolmogorov complexity is not computable in general.
We propose (what appears to be) a new approach. We take the point of view that the LoL depends on the learner's objectives, the learner's limitations, the potential existence of tutors, the cost of learning, prior knowledge, the model classes employed by the learner, etc. There are "soft" analogies between this formulation and Shannon's information theory, which we will also briefly discuss. However, these analogies do not yield new tools for the calculation of the limits of learning.
We then measure the amount of learning by the ability to predict the immediate future (e.g., Xn+1 from X1, X2, …, Xn), where the goodness of prediction depends on the objective (loss function) of the learner. We review a conjecture of Takeuchi (TIC), generalize it (GTIC), and prove it under some standard assumptions. This yields an asymptotically accurate algorithm (ALGO-0) for calculating the limits of learning for batch data.
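As a toy illustration of this measure (not the talk's ALGO-0, which is not specified in this abstract), one can score how much a model class "learns" from X1, …, Xn by its cumulative one-step-ahead prediction loss; the running-mean predictor and squared-error loss below are hypothetical stand-ins for the learner's model class and objective:

```python
import numpy as np

def squared_loss(y_pred, y_true):
    # The learner's objective: squared-error loss (an illustrative choice).
    return (y_pred - y_true) ** 2

def mean_predictor(history):
    # A toy model class: predict X_{n+1} by the running mean of X_1..X_n.
    return float(np.mean(history)) if len(history) > 0 else 0.0

def cumulative_prediction_loss(data, predictor, loss):
    # Score learning by one-step-ahead prediction: for each n,
    # predict X_{n+1} from X_1..X_n and accumulate the resulting loss.
    total = 0.0
    for n in range(1, len(data)):
        total += loss(predictor(data[:n]), data[n])
    return total

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=200)
print(cumulative_prediction_loss(x, mean_predictor, squared_loss))
```

A smaller cumulative loss indicates that the chosen model class extracts more predictive structure from the same data.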
We then extend ALGO-0 to sequential settings. By combining ideas from Kolmogorov-Tikhomirov ε-entropy and expert learning, we obtain a provable algorithm (ALGO-1) for computing the LoL in sequential settings. This algorithm has significant practical value and can be used for prediction.
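ALGO-1 itself is not given in the abstract; the sketch below shows only the standard expert-learning ingredient it draws on, an exponentially weighted average forecaster over a set of candidate predictors (the learning rate eta and the expert set are illustrative assumptions):

```python
import numpy as np

def exponential_weights_loss(data, experts, eta=2.0):
    # Exponentially weighted average forecaster: maintain a weight per
    # expert, predict with the weighted average, and down-weight each
    # expert in proportion to its instantaneous squared loss.
    weights = np.ones(len(experts))
    total_loss = 0.0
    for n in range(1, len(data)):
        preds = np.array([e(data[:n]) for e in experts])
        forecast = float(np.dot(weights, preds))
        total_loss += (forecast - data[n]) ** 2
        weights *= np.exp(-eta * (preds - data[n]) ** 2)
        weights /= weights.sum()  # renormalize for numerical stability
    return total_loss

experts = [
    lambda h: float(np.mean(h)),  # running-mean expert
    lambda h: float(h[-1]),       # last-value expert
    lambda h: 0.0,                # constant-zero expert
]
rng = np.random.default_rng(1)
x = rng.normal(size=100)
print(exponential_weights_loss(x, experts))
```

The aggregate's cumulative loss tracks that of the best expert in hindsight, which is what makes such forecasters usable for sequential prediction.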
We then further modify ALGO-1 to allow for smooth transitions between candidate models for prediction in sequential settings. This gives ALGO-2, which has various practical applications. An interesting application of ALGO-2 is in designing neural networks that change their structure with incoming data. These neural networks are similar to the human brain: for example, newborn children can only do simple tasks, but gradually they succeed at more complicated ones. Biological evidence was found in a recent study on how children reason about other people's minds, where cognitive scientists observed that the same brain regions of children become more functionally specialized as they grow older. These organic neural networks are being investigated and provide robustness to variations in data statistics and to the lack of large amounts of training data.