This post is based on a draft dated July 10, 2015, “Learning in Dynamic Probabilistic Environments: A Parallel-constraint Satisfaction Network-model Approach,” written by Marc Jekel, Andreas Glöckner, and Arndt Bröder. The paper includes experiments that contrast Parallel Constraint Satisfaction with the Adaptive Toolbox approach; I have chosen to look only at the update of the PCS model with learning. The authors develop an integrative model of decision making and learning by extending previous work on parallel constraint satisfaction networks with backward error-propagation learning algorithms. The Parallel Constraint Satisfaction Theory for Decision Making and Learning (PCS-DM-L) conceptualizes decision making as a process of coherence structuring in which learning is achieved by adjusting network weights from one decision to the next. PCS-DM-L predicts that individuals adapt to their environment through gradual changes in cue weighting.
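The two ideas in that paragraph, coherence structuring and trial-by-trial weight adjustment, can be sketched in code. What follows is a minimal illustration, not the authors' implementation: the settling rule is a generic interactive-activation update, the parameter values are my own, and I substitute a simple delta rule for the paper's backward error-propagation step.

```python
import numpy as np

def pcs_settle(w, a0, source_idx, decay=0.05, floor=-1.0, ceiling=1.0,
               tol=1e-4, max_iter=1000):
    """Spread activation through the constraint network until it settles
    into a stable, coherent state; the settled state is the decision."""
    a = a0.astype(float).copy()
    for _ in range(max_iter):
        net = w @ a                                  # net input to each node
        # positive input pushes activation toward the ceiling,
        # negative input pushes it toward the floor
        growth = np.where(net >= 0, ceiling - a, a - floor)
        a_new = np.clip(a * (1 - decay) + net * growth, floor, ceiling)
        a_new[source_idx] = 1.0                      # source node stays clamped
        if np.max(np.abs(a_new - a)) < tol:
            return a_new
        a = a_new
    return a

def delta_rule_update(cue_weights, cue_values, feedback, lr=0.1):
    """Nudge cue weights after outcome feedback (an illustrative delta
    rule standing in for the paper's error-propagation learning step).

    cue_values: +1 if a cue points to option A, -1 if it points to B
    feedback:   +1 if option A turned out correct, -1 if option B did
    """
    prediction = np.sign(cue_weights @ cue_values)
    error = feedback - prediction
    return cue_weights + lr * error * cue_values
```

Run over a sequence of trials, the learning step shifts weight toward the cues that predicted the correct option, which is exactly the gradual adaptation in cue weighting that PCS-DM-L predicts.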
This post is based on the paper “What is adaptive about adaptive decision making? A parallel constraint satisfaction account,” written by Andreas Glöckner, Benjamin E. Hilbig, and Marc Jekel, which appeared in Cognition 133 (2014) 641–666. The paper is quite similar to the one discussed in the post Swiss Army Knife or Adaptive Tool Box. However, it reflects an updated model that the authors call PCS-DM (parallel constraint satisfaction for decision making). From what I can tell, this model attempts to address past weaknesses by describing the network structure more fully, and it does so at least partially by setting up a one-free-parameter implementation that can accommodate individual differences and differences between tasks.
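The flavor of a one-free-parameter implementation can be sketched as a single transform from cue validities to network weights, with one sensitivity parameter controlling how sharply validity differences are amplified for a given person or task. The power-transform form and the parameter name below are my own assumptions for illustration, not the paper's exact equation.

```python
import numpy as np

def validities_to_weights(validities, p=1.9):
    """Map cue validities (assumed to lie in [0.5, 1.0]) onto cue-option
    link weights using a single free sensitivity parameter p.

    Larger p exaggerates the differences between high- and low-validity
    cues; p near zero flattens them. Fitting p per person is one way a
    one-parameter model can capture individual differences.
    """
    v = np.asarray(validities, dtype=float)
    return (v - 0.5) ** p
```

The appeal of this setup is that everything else in the network is fixed, so individual and task differences reduce to a single fitted number rather than a re-specified model.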