This post is based on the paper “Heuristic and Linear Models of Judgment: Matching Rules and Environments,” by Robin M. Hogarth and Natalia Karelaia (Psychological Review, 2007, Vol. 114, No. 3, 733–758), which predated Hogarth and Karelaia’s meta-analysis “What Has Brunswik’s Lens Model Taught?” and provides the underpinnings for that study.
Two classes of models have dominated research on judgment and decision making over the past several decades. In one, explicit recognition is given to the limits of information processing, and people are modeled as using simplifying heuristics (the Gigerenzer, Kahneman, and Tversky school). In the other (the Hammond school), it is assumed that people can integrate all the information at hand and that this information is combined and weighted as if by an algebraic—typically linear—model.
Hammond and his colleagues depicted Brunswik’s lens model within a linear framework that defines both judgments and the criterion being judged as functions of cues in the environment. Thus, the accuracy of judgment depends on both the inherent predictability of the environment and the extent to which the weights humans attach to different cues match those of the environment. In other words, accuracy depends on the characteristics of the cognitive strategies that people use and those of the environment.
According to Hogarth and Karelaia, the linear model has been the workhorse of judgment and decision-making research from both descriptive and prescriptive viewpoints. Despite its widespread use in representing human judgment, its psychological validity has been questioned for many decision-making tasks. First, when the amount of information increases (e.g., more than three cues in a multiple-cue prediction task), people have difficulty executing linear rules and resort to simplifying heuristics. Second, the linear model implies trade-offs between cues or attributes, and because people find these difficult to execute—both cognitively and emotionally—they often resort to trade-off-avoiding heuristics.
How is heuristic performance evaluated? One approach is to identify instances in which heuristics violate coherence with the implications of statistical theory. The other considers the extent to which predictions match empirical results. These two approaches, labeled coherence and correspondence, respectively, may sometimes conflict in the impressions they imply of people’s judgmental abilities. In this paper the authors follow the second because their goal is to understand how the performance of heuristic rules and linear models is affected by the characteristics of the environments in which they are used.
At a theoretical level, the performance of heuristic rules is affected by several factors: how the environment weights cues, that is, noncompensatory (no effort to weigh positives and negatives together), compensatory, or equal weighting; cue redundancy; the predictability of the environment; and loss functions (based on the penalty for wrong answers and the exactingness of the task). Heuristics work better when their characteristics match those of the environment. Thus, EW (equal weighting) predicts best in equal-weighting situations and TTB (take the best) in noncompensatory environments. As environments become more predictable, all models perform better, but differences between models also increase.
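This matching claim is easy to illustrate with a toy simulation. The sketch below is purely illustrative—the binary-cue setup, the weights (4, 2, 1 for a noncompensatory environment, where each cue outweighs all those below it, versus 1, 1, 1 for equal weighting), and the function names are my own assumptions, not anything from the paper:

```python
import random

def take_the_best(cues_a, cues_b, cue_order):
    # TTB: check cues in order of validity; decide on the first cue
    # that discriminates between the two alternatives.
    for i in cue_order:
        if cues_a[i] != cues_b[i]:
            return 0 if cues_a[i] > cues_b[i] else 1
    return random.choice([0, 1])  # no cue discriminates: guess

def equal_weighting(cues_a, cues_b):
    # EW: add the cues up with equal weights and pick the larger total.
    sa, sb = sum(cues_a), sum(cues_b)
    return random.choice([0, 1]) if sa == sb else (0 if sa > sb else 1)

def accuracy(model, env_weights, n_trials=2000, seed=1):
    # The criterion is a weighted sum of binary cues; the model must
    # pick the alternative with the higher criterion value.
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        a = [rng.randint(0, 1) for _ in env_weights]
        b = [rng.randint(0, 1) for _ in env_weights]
        ya = sum(w * c for w, c in zip(env_weights, a))
        yb = sum(w * c for w, c in zip(env_weights, b))
        if ya == yb:
            correct += 1  # tie on the criterion: either choice is fine
        elif model(a, b) == (0 if ya > yb else 1):
            correct += 1
    return correct / n_trials

ttb = lambda a, b: take_the_best(a, b, [0, 1, 2])
print(accuracy(ttb, [4, 2, 1]), accuracy(equal_weighting, [4, 2, 1]))  # TTB wins
print(accuracy(ttb, [1, 1, 1]), accuracy(equal_weighting, [1, 1, 1]))  # EW wins
```

With weights 4, 2, 1 the first discriminating cue settles the comparison, so TTB is exactly right, while EW occasionally errs (it prefers cues (0, 1, 1) to (1, 0, 0) although the criterion says otherwise); with equal weights the roles reverse.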
An important conclusion from the theoretical analysis of Hogarth and Karelaia is that unless linear cognitive ability is high, people are better off relying on trade-off-avoiding heuristics than on linear models. At the same time, however, the application of heuristic rules can involve error (e.g., variables not used in the appropriate order in SV (single variable) and TTB). This raises the issue of estimating linear cognitive ability from empirical data and determining when it is large enough to do without heuristics.
Hogarth and Karelaia suggest that there is a trend for people to be more consistent in executing strategies when these are more valid. Perhaps more valid strategies lead to better feedback and are self-reinforcing? However, there is no relation between how predictable an environment is and people’s judgmental strategies.
An interesting feature of most tasks studied in the decision-making literature is that they are difficult because people lack the experience necessary to take action without explicit thought and thus are unable to invoke valid, automatic processes. Hogarth and Karelaia note, for example, that the illuminating work conducted by Payne et al. demonstrated clear effort–accuracy trade-offs (involving models with different numbers of mental operations). However, these investigations were limited to relatively unfamiliar choices in which processing would have been deliberate rather than automatic.
This issue emphasizes the need to understand the natural ecology of decision-making tasks. Judgmental strategies can be characterized not only by apparent analytical complexity but also by the extent to which they are executed in a tacit or deliberate manner (see the post Learning, Feedback and Intuition), where the latter undoubtedly depends on the level of past experience as well as on human evolutionary heritage.
Based on the empirical analysis of Hogarth and Karelaia, judgmental performance using the LC (linear combination) models is roughly equal to that of using heuristics with error, that is, of SV and TTB under random cue ordering. However, is there a relation between linear cognitive ability and the knowledge necessary to know when and how to apply heuristic rules? Given their results, how should a decision maker approach a predictive task? Much depends on prior knowledge of task characteristics and thus on how the individual acquired the necessary knowledge. Basically—at one extreme—if either all cues are approximately equally valid or one does not know how to weight them because there is an absence of knowledge about the structure of the environment, EW should be used.
Similarly—at the other extreme—when facing a noncompensatory weighting function, TTB or SV would be hard to beat with LC. The problem lies in tasks that have more compensatory features. The key, therefore, lies in assessing linear cognitive ability. How likely is the judge to know the relative weights to give the variables? We expect that a minority of persons can meet these conditions, but that much also depends on the nature of the task and the individual’s experience.
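These prescriptions can be caricatured as a small decision procedure. The function below is my own hypothetical sketch—the name `choose_strategy`, the formal noncompensatory test, and the fallback to EW under low linear cognitive ability are assumptions layered on the paper’s argument, not the authors’ own rule:

```python
def choose_strategy(weights_known, env_weights=None, linear_ability_high=False):
    # Hypothetical sketch of Hogarth and Karelaia's prescriptions.
    if not weights_known:
        return "EW"  # no knowledge of environmental structure: weight cues equally
    # A weighting function is noncompensatory when each weight exceeds the
    # sum of all smaller weights, so later cues can never overturn earlier ones.
    w = sorted(env_weights, reverse=True)
    if all(w[i] > sum(w[i + 1:]) for i in range(len(w) - 1)):
        return "TTB"  # or SV; hard to beat with LC here
    # Compensatory case: the answer turns on linear cognitive ability.
    return "LC" if linear_ability_high else "EW"
```

For example, `choose_strategy(True, [4, 2, 1])` returns `"TTB"`, while `choose_strategy(True, [2, 2, 1])` falls back to `"EW"` unless linear cognitive ability is high.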
Overall, the results suggest that for many tasks, the errors incurred by using LC strategies are greater than those implicit in using heuristics. Thus, judgmental performance could be improved if people explicitly used appropriate heuristics instead of relying on what is often their untested and unaided judgment. However, that people resist doing so has been documented many times. Hogarth and Karelaia believe that a high level of sophistication is needed to understand when to ignore information and use a heuristic. Perhaps LC strategies are psychologically attractive precisely because they allow people to feel they have considered all information.