The Robust Beauty of Ordinary Information

I have been doing ordinary things lately, but working on this blog has not been one of them. Ordinary things have included fixing a garage door, cutting down a tree, buying a car, harvesting two hundred pounds of grapes, figuring out what to do with them, and getting a new roof. You would not believe that such things would keep me so busy for over a month, but they did. My mind seems more ready for heuristics than analysis. Konstantinos V. Katsikopoulos has written many interesting papers, but somehow I have not included them before. This paper takes a slightly different look at a familiar topic.

This post is based on a 2010 paper titled “The Robust Beauty of Ordinary Information,” which appeared in Psychological Review, written by Katsikopoulos along with Lael J. Schooler (Cognitive Niches) and Ralph Hertwig (Affect Gap and Dialectical Bootstrapping). Katsikopoulos et al. note that there is a general belief in the effort–accuracy tradeoff: only if people invest more cognitive effort do they stand to achieve more accuracy in their choices and judgments. More effort can take the form of searching for information exhaustively, spending plenty of time on the problem, or performing complex computations. Demonstrations that fast and frugal heuristics using limited information search and noncompensatory processes can lead to more accurate inferences than models using more information and complex computations challenge the effort–accuracy tradeoff. But defenders of the tradeoff suggest that the heuristics’ success rides on complex computations. For example, many heuristics, such as take-the-best, do not use all available cues but instead order them, look them up one by one, and stop searching as soon as a discriminating cue is encountered. Computing that cue order appears to require a lot of effort. The authors want to know whether simple heuristics really freeload on the effort hidden in the computation of cue orders.
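The take-the-best procedure described above is simple enough to sketch in a few lines. This is a hedged illustration with made-up cue names, not code from the paper; it assumes binary cues where 1 means the cue is present:

```python
# A minimal sketch of the take-the-best heuristic (illustrative names).
# Cues are binary (1 = present, 0 = absent) and are checked in order of
# validity, best first; search stops at the first discriminating cue.

def take_the_best(cues_a, cues_b, cue_order):
    """Infer which of two objects scores higher on the criterion.

    cues_a, cues_b: dicts mapping cue name -> 0/1 value for each object.
    cue_order: cue names sorted from highest to lowest validity.
    Returns 'a', 'b', or 'guess' if no cue discriminates.
    """
    for cue in cue_order:                    # look cues up one by one
        if cues_a[cue] != cues_b[cue]:       # first discriminating cue...
            return 'a' if cues_a[cue] > cues_b[cue] else 'b'  # ...decides
    return 'guess'                           # no cue discriminates

# Example: which city has the higher homelessness rate?
a = {'unemployment': 1, 'rent_control': 0}
b = {'unemployment': 1, 'rent_control': 1}
print(take_the_best(a, b, ['unemployment', 'rent_control']))  # -> 'b'
```

Note that the loop itself is trivially cheap; the effort the critics point to lies in producing `cue_order` in the first place, which is exactly what the paper examines.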

Ordinary information refers to a person’s limited knowledge, or to a person’s knowledge of the sign of the correlation between a cue and a criterion, called the cue’s direction. Cue directions often follow people’s basic conceptions about how the world works: for example, cities with a high unemployment rate (cue) tend to have a high homelessness rate (criterion). The authors present two investigations. One looked at the performance of heuristics when using small samples, and the other at heuristics relying on people’s intuitions about cue directions.

In both investigations the benchmarks were linear regression and a Bayes model. These benchmarks use more complex computations and more sophisticated information than the heuristics: they use the precise values of regression weights and cue validities, whereas take-the-best needs only the order of cue validities, and minimalist and tallying require no cue validities or orders at all, only cue directions.
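Tallying, the least demanding of these models, can be sketched as follows; again the cue names and data are illustrative, not from the paper:

```python
# A minimal sketch of tallying, which uses only cue directions (the sign
# of each cue-criterion correlation), never validities or orders. The
# related minimalist heuristic instead checks cues one by one in random
# order, take-the-best style. All names here are illustrative.

def tally(cues_a, cues_b, directions):
    """Count the directed cues favoring each object; cues are 0/1 valued.

    directions: cue name -> +1 if a higher cue value suggests a higher
    criterion value, -1 if it suggests a lower one.
    """
    score = sum(sign * (cues_a[cue] - cues_b[cue])
                for cue, sign in directions.items())
    return 'a' if score > 0 else 'b' if score < 0 else 'guess'

# Which city has more homelessness? Higher unemployment points up (+1),
# higher median income points down (-1).
a = {'unemployment': 1, 'income': 0}
b = {'unemployment': 0, 'income': 1}
print(tally(a, b, {'unemployment': +1, 'income': -1}))  # -> 'a'
```

The only knowledge tallying needs is the sign attached to each cue, which is precisely the "ordinary information" of the title.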

The investigation using small samples of available data involved computer simulations, run thousands of times, that examined how the benchmarks and heuristics perform with between 3% and 50% of the overall information. Figure 1 at the top of the post plots mean predictive accuracy, defined as a model’s proportion of correct inferences given the reduced information. Clearly, the heuristics perform as well as or better than the benchmarks, especially when information is very limited. With the most limited information, tallying performed best.
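For context, cue validity, the quantity take-the-best orders cues by, is the proportion of discriminating pairs in which the cue points to the object with the higher criterion value; estimated from a small sample it can be quite noisy, which is why the heuristics' insensitivity to precise validities matters. A rough sketch with invented data:

```python
# A hedged sketch of estimating a cue's validity from a (possibly small)
# training sample: validity = fraction of pairs where the cue both
# discriminates and points to the object with the higher criterion value.
# Data and names are invented for illustration.
from itertools import combinations

def cue_validity(objects, cue, criterion):
    """objects: list of dicts holding cue values and a criterion value."""
    right = discriminating = 0
    for a, b in combinations(objects, 2):
        if a[cue] == b[cue]:
            continue  # cue does not discriminate this pair
        discriminating += 1
        # the cue 'points to' the object with the larger cue value
        if (a[cue] > b[cue]) == (a[criterion] > b[criterion]):
            right += 1
    # with no discriminating pairs, fall back to chance level
    return right / discriminating if discriminating else 0.5

cities = [
    {'unemployment': 1, 'homeless': 9.0},
    {'unemployment': 0, 'homeless': 2.0},
    {'unemployment': 1, 'homeless': 5.0},
]
print(cue_validity(cities, 'unemployment', 'homeless'))  # -> 1.0
```

With only a handful of training objects, estimates like this jump around, yet the simulations show the heuristics that depend on them least are hurt least.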

The authors conclude that fast and frugal heuristics do not necessarily need good cue orders, relying instead on good cue directions to make accurate inferences. In the second investigation, they ask: How good or bad are people’s intuitions about cue directions?

To do this, they elicited people’s intuitions about the directions of cues in 10 of the data sets used before. To see whether these intuitions can lead heuristics to perform well, they tested the performance of bootstraps of the heuristics. That is, they used the cue directions judged by participants (aggregate judgments) as input to take-the-best and tallying.

Table 3 shows the performance of the three take-the-best and three tallying models. In counterintuitive environments, for both take-the-best and tallying, the so-called calibrated model (taken from the investigation in Table 1) does much better than the bootstrap models. In intuitive environments, however, the bootstrap models, especially the social bootstraps, do well and catch up with or even outperform the calibrated model. I should note that the authors decided which environments were intuitive and which were counterintuitive after looking at the performance. Hogarth might instead call them linear and nonlinear (Nonlinear ecology).

The authors speculate that people could arrive at intuitions about cue directions on the basis of their causal knowledge about how the world works. People seem to have a natural capacity to form causal representations, based in part on regularities such as the fact that causes typically precede their effects in time. (What has Brunswik’s Model Taught?)

The effort–accuracy tradeoff carries the ring of a general law of cognition: investing less effort is tantamount to achieving lower accuracy. However, it is not always true. Research on fast and frugal heuristics has demonstrated that less information and computation can yield better performance. Countering the argument that the heuristics’ success rides on the effort put into calculating cue validities and orders, the authors showed that information limitations that reduce effort do not always hurt accuracy. Simple heuristics can be robust even when simplicity is secured through ordinary information about cue directions, garnered from limited knowledge or found in people’s intuitions.