Value contextualization in reinforcement learning: modeling, imaging and development during adolescence.
Context-dependency of option values has proven useful in explaining adaptive coding and range adaptation. Here we show that value contextualization is also necessary for successful punishment avoidance learning. However, this adaptive function is traded against the acquisition of potentially maladaptive preferences. Both effects (adaptive and maladaptive) are well accounted for by a novel learning model, in which the context (or state) value sets the reference point to which an outcome is compared before the option value is updated.
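The reference-point mechanism described above can be sketched as a simple delta rule in which a context value V re-centres outcomes before the option value Q is updated. This is a minimal illustrative sketch, not the authors' exact model; the function name, learning rates, and outcome values are assumptions.

```python
# Sketch of a contextual ("relative") value-learning update, assuming a
# delta rule in which the context value v serves as the reference point
# for outcomes. Names and learning rates are illustrative only.

def contextual_update(q, v, outcome, alpha_q=0.3, alpha_v=0.3):
    """Update an option value q and its context value v after one outcome."""
    relative = outcome - v              # outcome re-centred on the context value
    q = q + alpha_q * (relative - q)    # option value tracks relative outcomes
    v = v + alpha_v * (outcome - v)     # context value tracks raw outcomes
    return q, v

# Punishment context: option A yields -1 (punished), option B yields 0 (avoided).
q_a = q_b = v = 0.0
q_a, v = contextual_update(q_a, v, -1.0)  # punishment drives v below zero
q_b, v = contextual_update(q_b, v, 0.0)   # avoidance is compared to v, not to zero
# With v < 0, the neutral "avoided" outcome has a positive relative value,
# so q_b becomes positive and successful avoidance is reinforced.
```

This illustrates the key consequence discussed next: in a context with negative expected value, the neutral outcome of a successful avoidance lies above the reference point and therefore acts as a reinforcer.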
Consequently, in contexts with an overall negative expected value, successful punishment avoidance acquires a positive value, thus reinforcing the response. This is mirrored at the neural level by a shift in negative outcome encoding from the anterior insula to the ventral striatum, suggesting that value contextualization also limits the need to mobilize an opponent punishment learning system (neural efficiency). In a second experiment we aimed to trace the developmental time-course of the computational module responsible for value contextualization.
Adolescents and adults carried out the same experimental task as in the fMRI study. The computational strategy changed during development: whereas adolescents' behavior was better explained by a basic reinforcement learning algorithm, adults' behavior integrated the value contextualization module. As a consequence, adolescents learned from rewards but were less likely to learn from punishments. Together, our findings shed new light on punishment avoidance learning at the computational, neural, and developmental levels.
References:
Palminteri S, Kilford EJ, Coricelli G, Blakemore SJ. The computational development of reinforcement learning during adolescence. PLOS Computational Biology (2016).
Palminteri S, Khamassi M, Joffily M, Coricelli G. Contextual modulation of value signals in reward and punishment learning. Nature Communications (2015).