What has Brunswik’s Lens Model Taught?

This post examines a paper by Robin Hogarth and Natalia Karelaia that represents a mountain of work: a meta-analysis of about 250 studies, conducted over five decades, that use the lens model to study how humans make judgments about criteria that are only probabilistically related to cues. Hogarth’s work has been the subject of the posts Robin Hogarth on Expertise and Learning, Feedback, and Intuition. Hogarth suggests the examples of an analyst examining financial indicators to predict corporate bankruptcy, a manager using behavior in interviews to assess job candidates, or a physician looking at symptoms that indicate the severity of a disease. In all of these cases, the simple beauty of Brunswik’s lens model lies in recognizing that the person’s judgment and the criterion being predicted can be thought of as two separate functions of cues available in the environment of the decision. The accuracy of judgment therefore depends, first, on how predictable the criterion is on the basis of the cues and, second, on the extent to which the function describing the person’s judgment matches its environmental counterpart.
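
Concretely, this is captured by the lens model equation (due to Tucker), which underlies the statistics the meta-analysis aggregates:

$$r_a = G\,R_e\,R_s + C\,\sqrt{1 - R_e^{2}}\,\sqrt{1 - R_s^{2}}$$

where $r_a$ is achievement (the correlation between judgments and the criterion), $R_e$ is environmental predictability (how well a linear model of the cues predicts the criterion), $R_s$ is the consistency of the judge (how well a linear model of the cues predicts the judgments), $G$ is the matching between the two linear models, and $C$ is the correlation between their residuals.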

Typically, the lens model, with its multiple fallible indicators, has been seen in terms of judgmental accuracy or correspondence as opposed to logical coherence. An advantage of lens model research is that, following the development of statistical methods in the 1960s, many researchers have used the same measures for capturing the contribution of different factors that determine the accuracy of judgment. Thus, it is possible to aggregate these measures across many studies and make statements that reflect the accumulation of results. One major assumption that goes with the lens model is that both environments and judges are often well modeled by linear functions. I have been trying to find where I have dealt with the lens model directly, and apparently I have not, so the figure below gives the basic idea.
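
Because so many studies report these same measures, it helps to see how they come out of the data. Here is a minimal sketch; the array layout and variable names are my own assumptions, not anything from the paper:

```python
import numpy as np

def lens_model_stats(cues, criterion, judgments):
    """Compute the standard lens model statistics from paired data.

    cues:       (n_trials, n_cues) array of cue values
    criterion:  (n_trials,) environmental criterion values
    judgments:  (n_trials,) the judge's ratings of the same trials
    """
    X = np.column_stack([np.ones(len(cues)), cues])  # cues plus intercept

    # Fit separate linear models of the environment and of the judge.
    b_env, *_ = np.linalg.lstsq(X, criterion, rcond=None)
    b_jud, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    pred_env, pred_jud = X @ b_env, X @ b_jud

    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return {
        "ra": corr(judgments, criterion),   # achievement
        "Re": corr(pred_env, criterion),    # environmental predictability
        "Rs": corr(pred_jud, judgments),    # consistency of the judge
        "G":  corr(pred_env, pred_jud),     # matching of the two models
        "C":  corr(criterion - pred_env, judgments - pred_jud),  # residual correlation
    }
```

With least-squares fits, plugging these components into the equation above reproduces the achievement correlation exactly, which is how the equation partitions accuracy into knowledge and consistency.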

[Figure: the basic lens model]

To do the meta-analysis, it was first necessary to ask what factors affect the accuracy of human judgment. Hogarth provides the following listing:

1. Tasks vary in the number of cues. Given well-established limitations on human information processing, it is often argued that the linear model does not provide a good description of judgment when the number of cues is large.

2. In addition to combining information, an important dimension of many tasks involves identifying and assessing levels of relevant information. Therefore, the meta analysis distinguishes between studies where cues are “given” as opposed to “achieved.” For the former, decision makers are provided with the explicit values of the cues by the investigator. For the latter, the values of the cues need to be inferred—and often even identified—by decision makers. What are the effects of achieving cue values prior to making judgments?

3. Inter-cue redundancy is an important functional element of decision environments. In particular, it facilitates the interchangeability of cues. Redundancy thus contributes to improving the reliability of overall judgments and can help limit information search without significant reductions in accuracy.

4. The meta analysis distinguishes between linear and nonlinear forms of functional relations between the criterion and cues in the ecology.  According to Hogarth, learning nonlinear relations is a difficult task and even when people acquire such knowledge, they experience difficulty in applying this knowledge consistently.

5. An additional important characteristic in some situations is the dispersion of weights in the ecology. Three cue-weighting schemes are utilized. First, a weighting function is additive non-compensatory if, when cue weights are ordered in magnitude, the weight of each cue exceeds the sum of those smaller than it. Second, all other weighting functions are additive compensatory. However, third, among the latter, the special case of equal weighting is separated out (a short sketch after this list makes the three categories concrete).

6. An important dimension of the Brunswikian research philosophy is the concept of representative design. The idea behind this concept is that greater generalizability of experimental results can be achieved by conducting experiments under conditions that are representative of people’s natural ecologies.  In field studies, cue and criterion values are, by definition, representative of the natural ecology of the tasks studied, that is, sampled from naturally occurring stimuli. In contrast, laboratory studies that use simulated (i.e., hypothetical) values of cues and criterion are typically not representative of natural ecologies.

7. Whereas field studies are contextually situated, laboratory experiments have involved both contextual and abstract tasks. It is possible that judgmental achievement and learning may be more effective in meaningful, contextual tasks by enhancing judges’ interest in getting things right.

8. Initial level of expertise in the task domain (i.e., familiarity with the task and having made similar judgments before) is important for achievement, and thus inexperienced judges and experts are analyzed separately. However, experts’ achievement is not always good.

9. Learning has been an important topic within the lens model paradigm, where numerous studies have focused on how people learn to utilize cues that are only probabilistically related to a criterion. In the so-called multiple-cue probability learning studies, judgmental accuracy is measured over several blocks of trials, and feedback is often provided over the course of these trials (a toy simulation after this list illustrates the setup). Task information feedback has been shown to be effective and to work better than cognitive process feedback. In addition, experience seems to improve a judge’s ability to benefit from feedback.
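
To make the three weighting categories in point 5 concrete, here is a small classifier; it is my own illustration of the definitions, not anything from the paper:

```python
def classify_weights(weights, tol=1e-9):
    """Classify an additive cue-weighting scheme.

    Non-compensatory: with weights ordered by magnitude, each weight
    exceeds the sum of all smaller ones. Equal: all weights identical.
    Otherwise: compensatory.
    """
    w = sorted(abs(x) for x in weights)  # ascending magnitudes
    if max(w) - min(w) <= tol:
        return "equal weighting"
    if all(w[i] > sum(w[:i]) for i in range(1, len(w))):
        return "non-compensatory"
    return "compensatory"

# Weights 4, 2, 1 are non-compensatory (4 > 2 + 1 and 2 > 1),
# while 3, 2, 2 are compensatory (3 < 2 + 2).
print(classify_weights([4, 2, 1]))  # non-compensatory
print(classify_weights([3, 2, 2]))  # compensatory
```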
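
And to illustrate the multiple-cue probability learning setup in point 9, a toy simulation, entirely my own construction: a learner nudges its cue weights after outcome feedback on each trial, and achievement is scored per block.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([0.6, 0.3, 0.1])   # ecological cue weights (invented here)
w = np.zeros(3)                      # the learner starts ignorant
lr, n_blocks, block_size = 0.05, 10, 50

for b in range(n_blocks):
    preds, crits = [], []
    for _ in range(block_size):
        cues = rng.normal(size=3)
        # The criterion is only probabilistically related to the cues.
        criterion = true_w @ cues + rng.normal(scale=0.5)
        judgment = w @ cues
        preds.append(judgment)
        crits.append(criterion)
        # Outcome feedback: adjust weights to reduce the error (delta rule).
        w += lr * (criterion - judgment) * cues
    print(f"block {b + 1}: achievement r = {np.corrcoef(preds, crits)[0, 1]:.2f}")
```

Achievement climbs across blocks but plateaus below 1.0, since the noise in the criterion caps what any judge, human or model, can attain.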

The meta-analysis of the studies Hogarth and Karelaia examined demonstrates several findings:

  • The evidence accumulated  is consistent with the conclusion that linear models can provide good representations of both human judgment and task environments.  However, there are clearly situations where human decisions are better described by nonlinear processes. For example, Hogarth notes, under time pressure, experts may rely more on intuitive judgment, consider few (if more than one) alternatives, simulate scenarios using imagination, and engage in experience-based pattern matching. According to Hogarth, the extent to which such tacit decision processes can be represented by linear models remains an open question. Human achievement is lower when there are nonlinearities in the ecology.
  • People learn best from feedback that instructs them about the characteristics of the tasks they face. When the number of cues is large, judges are less effective at matching environmental models. Inter-cue redundancy also makes it more difficult to match environmental models and thereby reduces achievement. The meta analysis also found that people respond to feedback more when redundancy is low. These results suggest that individuals may have preconceived, simplified expectations of decision environments and try to apply decision strategies that are coherent with these expectations. In the data, redundancy-free and equal-weighting environments are most favorable to the strategies that judges use. Perhaps, within the class of linear strategies, equal weighting is most attractive psychologically because it guarantees that the judge considers all information. The correct application of decision strategies that rely heavily on a single cue or a few cues requires a certain level of expertise.
  • The authors also analyzed empirically under what conditions the application of bootstrapping (replacing judges by their linear models) is advantageous; a minimal sketch follows below. The inconsistency that people exhibit in making judgments is sufficient for models of their judgments to be more accurate than the judges themselves (i.e., eliminating inconsistency outweighs the benefits of idiosyncratic knowledge that is not captured by linear models). Hogarth and Karelaia found that, even after controlling for task differences between laboratory and field studies, field studies tend to report higher linear matching, obtain lower residual correlation between the linear models of the judge and the environment, and suggest greater advantages of bootstrapping models over unaided human judgment. However, some of the results highlighted the importance of identifying the task and judge characteristics that favor bootstrapping. The advantage of bootstrapping is smaller when cues are highly correlated or equally weighted and when judges have some experience.
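
Here is a minimal sketch of what bootstrapping means in this literature, on simulated data of my own construction (the weights and noise levels are arbitrary): fit a linear model of the judge’s own judgments, then let the model judge instead.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
cues = rng.normal(size=(n, 3))
# Environment: criterion driven by the cues plus noise.
criterion = cues @ np.array([0.5, 0.3, 0.2]) + rng.normal(scale=0.6, size=n)
# A judge with roughly the right policy but substantial inconsistency.
judgments = cues @ np.array([0.45, 0.35, 0.20]) + rng.normal(scale=0.8, size=n)

# Bootstrapping: fit a linear model OF the judge, use its predictions instead.
X = np.column_stack([np.ones(n), cues])
b, *_ = np.linalg.lstsq(X, judgments, rcond=None)
model_judgments = X @ b

corr = lambda a, c: np.corrcoef(a, c)[0, 1]
print("judge achievement:", round(corr(judgments, criterion), 2))
print("model achievement:", round(corr(model_judgments, criterion), 2))
```

On runs like this the model’s achievement beats the judge’s, because the regression strips out trial-to-trial inconsistency while keeping the judge’s policy; there is no valid idiosyncratic knowledge hiding in the noise here for the model to lose.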

Karelaia, N., & Hogarth, R. M. (2008). Determinants of linear judgment: A meta-analysis of lens model studies. Psychological Bulletin, 134(3), 404–426.
