This post looks at a paper by Andreas Glockner and Thorsten Pachur, “Cognitive models of risky choice: Parameter stability and predictive accuracy of prospect theory,” which appeared in *Cognition* in 2012. The paper examines the adjustable parameters in prospect theory and tries to determine both their explanatory value and the extent to which individuals have stable parameters. It also tests a number of heuristics, along with expected value and expected utility theory, by studying the responses of 66 college students at the University of Bonn.

In CPT (cumulative prospect theory), the so-called S-curve embodies the model, but different people may have differently shaped curves and different weights. The authors fit a CPT model with seven adjustable parameters. These parameters include separate decision weights for gains and losses, the concave curvature of the gain value function and the convex curvature of the loss value function, the relative weighting of losses versus gains, the overweighting of small-probability events, the underweighting of large-probability events, and the elevation of the weighting functions.
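A minimal sketch may make these parameters concrete. The code below uses the standard Tversky–Kahneman (1992) functional forms for the value and weighting functions; the parameter values are conventional illustrations, not estimates from this paper, and the paper's seven-parameter variant further separates several of these components for gains versus losses.

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Value function: concave for gains, convex (and steeper, via lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """One-parameter weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value(outcome, p):
    """Subjective value of a simple gamble: `outcome` with probability p, else 0."""
    return weight(p) * value(outcome)
```

With these illustrative parameters, a loss of 10 weighs more than a gain of 10 (loss aversion), and a 5% chance is treated as if it were larger while a 95% chance is treated as if it were smaller.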

The study found that, at least over a one-week period, individual preferences were stable, with 79% of participants making the same choice at the two sessions. It also found that individual parameters were more predictive than median parameter values. Figure 2 below presents the results. The three-parameter model, which included a common parameter for utility curvature across gains and losses, a loss-aversion parameter, and a one-parameter weighting function fitted to minimize the percentage of mismatching choices, was nearly the most accurate while being much simpler.
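The fit criterion just mentioned (minimizing the percentage of mismatching choices) can be sketched with a toy grid search over the three parameters. This is an illustrative reconstruction under simplifying assumptions, not the authors' actual fitting procedure: each gamble is reduced to a single nonzero outcome, the same weighting function is applied to gains and losses, and the parameter grid is arbitrary.

```python
import itertools

def v(x, alpha, lam):
    # Common curvature alpha across gains and losses; lam is loss aversion.
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def w(p, gamma):
    # One-parameter probability weighting function.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt(gamble, alpha, lam, gamma):
    # gamble = (outcome, probability); the complementary outcome is zero.
    x, p = gamble
    return w(p, gamma) * v(x, alpha, lam)

def fit(choices):
    """Pick (alpha, lam, gamma) minimizing the number of mismatched choices.

    choices: list of (gamble_a, gamble_b, chose_a) triples.
    """
    grid = itertools.product([0.5, 0.7, 0.9, 1.0],   # alpha
                             [1.0, 1.5, 2.0, 2.5],   # lam
                             [0.4, 0.6, 0.8, 1.0])   # gamma
    def mismatches(params):
        alpha, lam, gamma = params
        return sum((cpt(ga, alpha, lam, gamma) > cpt(gb, alpha, lam, gamma)) != chose_a
                   for ga, gb, chose_a in choices)
    return min(grid, key=mismatches)
```

Given a set of observed binary choices, `fit` returns the grid point whose CPT rankings contradict the fewest of them, which is the spirit of a percentage-of-mismatches criterion.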

The authors say their study showed that people reliably differ in how they make decisions, even when given the same set of choice problems. How should these individual differences be accommodated in models of risky choice? Their results suggest that CPT’s multi-parameter framework offers one viable way. Alternatively, they suggest, individual differences in risky choice could be modeled by assuming that different people rely on different heuristics. However, an additional analysis focusing on the heuristics alone showed that this approach is rather limited, at least for the set of heuristics they investigated. First, only 48% of participants were best described by the same heuristic at both sessions; second, using the best-performing heuristic at the first session to predict choices at the second session (and vice versa) yielded, on average, only 62.4% correct predictions, far below the predictive accuracy achieved with CPT.

Glockner and Pachur point out that although CPT emerges as the superior model in predicting the outcome of people’s choices, this does not mean that it also provides a good description of the information processing steps underlying people’s choices. (This criticism is that the CPT model is paramorphic.) In fact, in light of people’s limited capacity for carrying out complex processing deliberately, it seems unlikely that they explicitly calculated weighted sums, as described in CPT’s algebraic formulation. In addition, process tests clearly speak against this explanation.

Glockner unsurprisingly points to his own model, Parallel Constraint Satisfaction Theory (which he has at least adopted, if not originated), as one possible explanation, with choices resulting from automatic processes that can lead to compensatory information integration, as modeled by evidence accumulation or coherence-maximizing parallel constraint satisfaction mechanisms. Glockner and Pachur do also point out that, by having multiple adjustable parameters, current models of automatic processes face the potential problem of overfitting; in addition, they have not yet been subjected to rigorous quantitative tests in risky choice. Nevertheless, they contend that qualitative tests have shown that models of automatic processes can account well for choices, decision times, and patterns in information acquisition.

Another possibility is that people rely on heuristic principles for information search and integration. Despite the negative results for existing models of heuristics from the outcome tests reported in the paper, evidence from neuroimaging, verbal protocol studies, eye tracking, and information-board studies suggests that people often process “information about the gambles in ways inconsistent with compensatory models of risky decision making.” These results highlight that a challenge for future research is to reconcile the apparently conflicting lines of evidence from process tests and outcome tests.

Glockner and Pachur note that a key objective of cognitive modeling should be to make things as simple as possible, but not simpler. Various approaches have been proposed to predict people’s risky choices, ranging from multi-parameter frameworks such as cumulative prospect theory to simple heuristics that ignore part of the information and cannot accommodate individual differences within one model. The authors show in this paper that simpler implementations of cumulative prospect theory yielded more stable parameter estimates and were as robust in prediction as considerably more complex implementations. However, they clearly believe that Gigerenzer, the prime proponent of heuristics, makes things too simple and leaves out too much. They were willing to give Kahneman’s paramorphic model a pat on the back to make that point.
