Behavioral Social Neuroscience Seminar
When we learn from feedback, prediction errors allow us to estimate the value of stimulus-action associations. However, in a structured world where different rules may be valid in distinct contexts, or where distinct contexts may instead require the same rule, prediction errors may also signal when to create new rules, when to transfer known rules to new contexts, and when to generalize newly learned associations to other, equivalent contexts. Here, I present computational modeling and experimental results showing that healthy human adults build structure and generalize knowledge during reinforcement learning. Trial-by-trial, model-based analysis of EEG signals supports the model's hypothesis that subjects incorporated generalized reward expectations derived from their inferred hierarchical structure. In a second set of experiments, I show that neural constraints on representations of motor choices predict the nature of the structure subjects build. This highlights the dual influence of top-down priors that support abstraction and of low-level, bottom-up motor constraints in hierarchical rule learning. These results further our understanding of how humans learn and generalize flexibly by building abstract, behaviorally relevant representations of a complex, high-dimensional sensory environment.