This post looks at Parameter P, a specific construct of the PCS-DM model, as elaborated in “What is adaptive about adaptive decision making? A parallel constraint satisfaction account,” by Andreas Glöckner, Benjamin E. Hilbig, and Marc Jekel (Cognition 133 (2014) 641–666). (See post Revisiting Swiss Army Knife or Adaptive Tool Box.)
Glöckner et al. state that the transformations in Eqs. (3)–(5) (see figure at top of post) are commonplace, and sensitivity analyses have shown that the selection of specific values has little influence on predictions as long as inhibitory connections are relatively strong compared to excitatory ones. PCS-DM predictions, however, depend strongly on Eq. (2). In this equation for calculating connection weights, validities are corrected for chance level (.50) so that irrelevant cues receive no weight. Parameter P allows PCS-DM to capture individual differences in subjective sensitivity to differences in cue validities. Low sensitivity is captured by low values of P. By contrast, high sensitivity to cue validities is captured by large values of P, with very high values as a special case in which less valid cues cannot overrule more valid ones. P captures sensitivity at the level of individuals; that is, it determines how an individual transforms explicitly provided or learned information about a cue’s predictive power (i.e., cue validity) into a weight. Glöckner et al. suggest that P describes a core property of a psychological transformation process that precedes decision making.
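That non-compensatory special case can be made concrete with a quick sketch. This is my own illustration, not from the paper: it uses the weight transformation implied by Eq. (2) as described later in this post (validity minus chance level, raised to the power P), and the function name is mine. At a low P, three moderately valid cues can jointly outweigh one highly valid cue; at a high P, they cannot.

```python
# My own illustration of the non-compensatory special case for large P.
# Assumes Eq. (2) has the form w = (validity - .5) ** P, as described in the post.

def cue_weight(validity, p):
    """Transform a cue validity into a connection weight, corrected for chance (.50)."""
    return (validity - 0.5) ** p

for p in (1.0, 3.0):
    three_weak = 3 * cue_weight(0.7, p)   # three cues of validity .7, summed
    one_strong = cue_weight(0.9, p)       # one cue of validity .9
    verdict = "compensatory" if three_weak > one_strong else "non-compensatory"
    print(f"P={p}: 3 x w(.7)={three_weak:.3f} vs w(.9)={one_strong:.3f} -> {verdict}")
```

At P = 1 the three weaker cues together (0.6) overrule the strong cue (0.4), but at P = 3 they sum to only 0.024 against 0.064, so the most valid cue can no longer be outvoted.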
To find the value of P that maximizes the overlap between PCS choice predictions and the rational Naïve Bayesian solution, the authors used Monte Carlo simulations. They found that for randomly generated tasks and sets of validities in a four-cue environment, the optimum is P = 1.9. My understanding of this is questionable. I would assume that this particular P value is not valid across many environments, but I really do not know. I plugged numbers into Equation 2 to do a mini sensitivity analysis. With a cue validity of .6, subtracting the chance level of .5 and raising the result to the 1.9 power, I got a weight of about .013, compared to a weight of about .175 for a cue validity of .9. Thus the weight for the more valid cue is about 14 times that of the less valid cue. For P = 1.2 the weight for the more valid cue is only about 5 times that of the less valid cue. Since 1.9 is optimal in this situation, the calculations show that an individual with a parameter P of 1.9 would rely much more heavily on the more valid cue.
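The mini sensitivity analysis above can be reproduced in a few lines. This is a sketch under the assumption that Eq. (2) takes the form described in the post (validity minus .5, raised to the power P); the function name is my own.

```python
# Reproduce the mini sensitivity analysis of Eq. (2):
# compare the weights of a .6-validity cue and a .9-validity cue at two P values.
# Assumes w = (validity - .5) ** P, as described in the post.

def cue_weight(validity, p):
    """Transform a cue validity into a connection weight, corrected for chance (.50)."""
    return (validity - 0.5) ** p

for p in (1.2, 1.9):
    w_low = cue_weight(0.6, p)    # less valid cue
    w_high = cue_weight(0.9, p)   # more valid cue
    print(f"P={p}: w(.6)={w_low:.3f}, w(.9)={w_high:.3f}, ratio={w_high / w_low:.1f}")
```

At P = 1.9 the ratio is roughly 14, versus roughly 5 at P = 1.2, which is the sense in which a higher P makes an individual rely much more heavily on the more valid cue.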
Additionally, they implemented a second, fitted version of PCS-DM (PCS fitted), which estimates one individual P parameter per participant, representing that participant’s sensitivity to differences in cue validities. They found that participants were insufficiently sensitive to differences in cue validities, even though the validities were explicitly provided. The authors state that by capturing individual differences in sensitivity through the parameter P, PCS-DM can describe and predict choice behavior better than other models, even when cue weighting is suboptimal from a rational point of view.
For environments with stable cue validities, the findings hint that adaptivity is achieved through adapting weights as suggested by PCS-DM. Glöckner et al. note that environments are often unstable, and it remains unclear whether PCS-DM can also capture individual adaptation following a change in cue validities. Research indicates that people stick to previously learned strategies, and this stickiness is particularly strong when there is a shift from a compensatory to a non-compensatory environment, indicating that individuals have a hard time learning to ignore less valid evidence. According to PCS-DM, such stickiness would be reflected in suboptimal adaptation of (and thus insufficient differences in) the P parameter. One would expect to find lower P parameters after switching from a compensatory to a non-compensatory environment, indicating insufficiently adapted sensitivity to differences in cue validities. Indeed, participants seemed to be insufficiently sensitive to differences in cue validities, as the P parameter for PCS fitted was significantly below 1.9, and cue weights were insufficiently adjusted once the environmental structure changed. Specifically, individuals may differ in how they translate information about the world into their mental representation of the decision task.
Parameter P is difficult for me to characterize. Glöckner et al. state that:
“According to the PCS model for decision making, participants translate cue validities into subjective weights in a mental representation corresponding to their individual sensitivity captured in the parameter P.”
Recent posts that looked at prediction error minimization propose that the brain tries to slow down the onslaught of sensory information. Parameter P might capture some individual differences in this. It seems to me, and I am getting way over my skis here, that it might be partly a “slowness factor”. People with a lower Parameter P might be “slower” to respond to new information from the environment, or might trust information from the environment less. The particular experiments probably do not translate well to many real-world situations: it is probably not typical or adaptive to quickly trust that one cue has a validity of .6 and another of .9. This slowness might lead to large differences in how we respond to the world and thus in who we are. Slowness might change over time within individuals, so that it is also a developmental factor. Clearly the stability of the environment would help determine the adaptivity of a particular Parameter P. Parameter P might also respond to the blend of analytical and intuitive activities. The idea that a significant amount of our personality or cognitive style or perceived intelligence might be based on our differences in a single parameter may seem crazy, but aggregation modeling shows us the complexity that can be generated by a simple rule. If individuals do have stable Parameter P values, at least under certain conditions, it might be possible to improve our individual decision making based on them. Parameter P might also vary with expertise. (I should note that Glöckner et al. introduce lambda in the appendix as a parameter governing the steepness of the choice function. That was beyond me, at least as presented.)