Title: "Understanding human reinforcement learning on a deeper level"
Abstract: Recent research in reinforcement learning (RL) has demonstrated the ability to succeed at a number of demanding tasks. However, fundamental questions remain about how the human brain develops the ability to handle a wide variety of tasks and to learn from only a few observations. This talk introduces our research team's twofold approach to advancing the understanding of human RL by combining insights from neuroscience and AI.
The first line of research (AI to neuroscience) investigates the neural computations underlying human RL. Recent findings support the view that the brain implements multiple distinct types of learning: model-based and model-free RL, and incremental and one-shot inference. This idea forms the basis for the prefrontal meta-control theory, which holds that one key function of the human prefrontal cortex is to allocate behavioral control among the brain's learning subsystems so as to adapt quickly to changing environments. I will present some of our ongoing research aimed at advancing this theory.
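The meta-control idea above can be sketched in code. The following is a minimal, hypothetical simplification (not the authors' actual model): a model-free learner caches action values from reward prediction errors, a model-based learner estimates values from a learned reward model, and an arbitrator hands behavioral control to whichever system currently has the smaller running prediction error, i.e. the higher reliability. All class and variable names here are illustrative assumptions.

```python
class Arbitrator:
    """Toy sketch of reliability-based meta-control between a model-free
    and a model-based value system (hypothetical simplification)."""

    def __init__(self, n_actions, alpha=0.2, tau=0.1):
        self.n_actions = n_actions
        self.alpha = alpha              # model-free learning rate
        self.tau = tau                  # reliability-tracking rate
        self.q_mf = [0.0] * n_actions   # model-free cached values
        self.r_sum = [0.0] * n_actions  # model-based reward statistics
        self.r_cnt = [0] * n_actions
        self.err_mf = 1.0               # running |prediction error| per system
        self.err_mb = 1.0

    def q_mb(self, a):
        # Model-based value: expected reward under the learned reward model
        return self.r_sum[a] / self.r_cnt[a] if self.r_cnt[a] else 0.0

    def act(self):
        # Meta-control step: the more reliable controller (smaller running
        # error) supplies the values that drive action selection
        if self.err_mf <= self.err_mb:
            q = self.q_mf
        else:
            q = [self.q_mb(a) for a in range(self.n_actions)]
        return max(range(self.n_actions), key=lambda a: q[a])

    def update(self, a, r):
        pe_mf = r - self.q_mf[a]        # reward prediction error (model-free)
        self.q_mf[a] += self.alpha * pe_mf
        pe_mb = r - self.q_mb(a)        # prediction error of the reward model
        self.r_cnt[a] += 1
        self.r_sum[a] += r
        # Track each system's reliability as a decaying average of |error|
        self.err_mf += self.tau * (abs(pe_mf) - self.err_mf)
        self.err_mb += self.tau * (abs(pe_mb) - self.err_mb)


# Usage: a two-armed bandit where arm 1 pays 1.0 and arm 0 pays 0.0
agent = Arbitrator(n_actions=2)
for _ in range(200):
    a = agent.act()
    agent.update(a, 1.0 if a == 1 else 0.0)
    # Also sample the other arm so both value systems see data
    agent.update(1 - a, 1.0 if (1 - a) == 1 else 0.0)
print(agent.act())  # the arbitrated policy prefers arm 1
```

The design choice worth noting is that arbitration is driven by prediction-error reliability rather than by the values themselves, echoing the idea that prefrontal cortex monitors how trustworthy each subsystem currently is before delegating control.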
The second line of research (neuroscience to AI) focuses on brain-inspired AI. I will present two examples demonstrating how neuroscience can inform the design of better AI:
- An RL algorithm to control human RL
- Designing an experiment without a human experimenter
Detailed insight into these issues not only enables advances in AI, but also helps us understand the nature of human intelligence on a deeper level.