Cornelis Wiersma Visiting Professor of Neurobiology Talk
Title: "Can brain computation be compressed enough for us to understand it?"
Abstract: Understanding how the brain works is undeniably one of the big questions we humans ask. A mainstream view of computational neuroscience is built on three demands. First, we want to (jointly) understand the whole brain, or at least large subsets thereof: how the nervous system converts stimuli and internal states into behavior. Second, we seek answers that can be communicated to another scientist, answers that can thus be compactly expressed. Third, we seek answers that describe the full causal mechanism: how information is processed, stored, and used. Here we argue that these demands probably cannot be simultaneously satisfied. Our argument is based on the idea that learning to high performance in a complicated world will force the learner to be complicated, making it not compressible.
After all, we cannot even compress deep learning systems such as networks trained on ImageNet, or AlphaGo. We discuss how many popular approaches in neuroscience stem from relaxing one of the three demands. We sketch how an alternative that relaxes the third demand in favor of a focus on learning might work, and how the answers it may deliver would qualitatively differ from those we currently seek.