This post is based on a paper: “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73. It is not a new topic, but the paper adds to the evidence of our shortcomings with nonlinearity.

In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.

Hogarth and Soyer’s experiments were based on presenting participants with the following scenario and question:

Imagine that Abyz is a popular skill-based computer game that you like to play. The game is played enthusiastically by thousands of young people like you. You know that different people have different playing skills; some are experts while there are others who are just learning the game. You estimate your own skill level to be better than 50% and worse than 50% of the Abyz playing population. Suppose there is an Abyz competition where 10 contestants are selected at random by lottery from the large number of people who play the game and you are one of the selected contestants. Estimate your chances of winning the competition when 3 of the 10 contestants are winners.

This task violates people’s prior expectations. That is, although the appropriate information aggregation rule is nonlinear, people’s prior beliefs are that it is linear.  The inferences Hogarth and Soyer examine are estimates of chances of success in competitions where there are specific numbers of winners and competitors and people know their relative ability level in the population of potential entrants. These estimates are relevant to decisions taken to enter many different types of competitions, for example: places in educational institutions (e.g., universities); jobs; grants; and many other situations where resources are limited and can only be allocated to a subset of those who enter the competitions. The results of the experiments suggest that kind learning environments are necessary but not sufficient for learning from experience. The participants showed difficulty in learning under conditions where they received immediate and accurate feedback involving either naturalistic outcomes (information on winning/losing and ranking) or the normatively correct probabilities. However, when the task was reformulated to have a linear probabilistic structure, feedback helped participants learn to make more accurate assessments.
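Under the simplifying assumption that skill fully determines the ranking, the normatively correct answer to the scenario can be worked out with a short binomial calculation. Here is a sketch (the function name and the deterministic-skill assumption are mine, for illustration):

```python
from math import comb

def win_probability(percentile, contestants, winners):
    """Chance that a player at skill `percentile` finishes among the top
    `winners` of `contestants` randomly drawn players, assuming skill
    fully determines the ranking (an illustrative assumption)."""
    others = contestants - 1      # your randomly drawn opponents
    p_better = 1 - percentile     # chance a random opponent outranks you
    # You win iff fewer than `winners` opponents are better than you,
    # so sum the binomial probabilities of 0..winners-1 better opponents.
    return sum(
        comb(others, k) * p_better**k * (1 - p_better)**(others - k)
        for k in range(winners)
    )

# A median player (50th percentile), 3 winners among 10 contestants:
print(round(win_probability(0.5, 10, 3), 3))  # 0.09, not the "linear" 0.3
```

Under this assumption a median player’s chance is about 9%, far below the linear intuition of 3 out of 10 — exactly the gap between prior beliefs and the nonlinear aggregation rule that the experiments exploit.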

Although linear aggregation appears to act as a default strategy, people do not realize its limitations. Larrick and Soll, for example, have illustrated the common error of comparing the gas consumption of automobiles by treating the (multiplicative) ratio ‘‘miles-per-gallon” linearly, instead of using the more appropriate linear measure of fuel consumed over a given distance (e.g., gallons per 100 miles). A possible explanation is that people do not receive feedback on these kinds of judgments. However, even with feedback, several studies show that people have problems in learning the properties of dynamic, nonlinear systems. For example, Cronin, Gonzalez, and Sterman have demonstrated the failure to understand simple principles of flows and stocks in a number of naturally occurring environments. Instead, people persist in relying on a linear ‘‘correlation heuristic” in making their judgments. (My classic weakness is discussed in the post Medical Decisions – Risk Savvy.)
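The miles-per-gallon point is easy to verify in a couple of lines: fuel used over a fixed distance is the linear quantity, and equal-looking MPG gains save very different amounts of fuel. (A toy illustration of the arithmetic, not Larrick and Soll’s materials.)

```python
def gallons_used(mpg, miles=100.0):
    """Fuel consumed over a fixed distance -- the linear measure."""
    return miles / mpg

# Upgrading 10 -> 20 MPG saves more fuel than 25 -> 50 MPG,
# even though the second jump adds more "miles per gallon":
print(gallons_used(10) - gallons_used(20))  # 5.0 gallons saved per 100 miles
print(gallons_used(25) - gallons_used(50))  # 2.0 gallons saved per 100 miles
```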

Given the difficulties of processing nonlinear relations, some researchers have suggested that problems with nonlinear structures be transformed to linear versions where possible. However, it would be a mistake to assume that people cannot learn to handle any nonlinear tasks. In particular, Juslin and his colleagues have explored the use of exemplar-matching strategies. That is, instead of abstracting linear cue-criterion relations, people compare stimuli with exemplars of profiles available in memory and make inferences based on similarity between the stimuli and exemplars. Evidence suggests that people can recognize the need to use exemplar strategies and successfully learn to handle some nonlinear tasks. However, there are limitations. First, much experience is needed to establish exemplars. Second, whereas categorical and binary data facilitate exemplar-based processing, continuous data can be a handicap. Third, it is impossible to use exemplar-based models to extrapolate beyond what has been experienced.
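The exemplar idea — and its extrapolation limit — can be illustrated with a toy similarity-weighted model. This is a generic sketch of the approach, not the specific model Juslin and colleagues tested:

```python
import math

def exemplar_predict(probe, exemplars, sensitivity=1.0):
    """Similarity-weighted average of stored exemplar outcomes.
    Similarity decays exponentially with cue distance (a common
    choice in exemplar models; illustrative only)."""
    weight_sum, total = 0.0, 0.0
    for cues, outcome in exemplars:
        dist = sum(abs(a - b) for a, b in zip(probe, cues))
        sim = math.exp(-sensitivity * dist)
        weight_sum += sim
        total += sim * outcome
    return total / weight_sum

# Stored (cue-profile, outcome) pairs from a nonlinear task (outcome = x * y):
memory = [((1, 1), 1), ((2, 2), 4), ((3, 3), 9), ((2, 3), 6)]

# Within experienced territory, the prediction hugs the matching exemplar:
print(exemplar_predict((2, 2), memory))  # pulled toward the stored outcome of 4

# Outside it, the model cannot extrapolate: the true value is 25,
# but a weighted average can never exceed the largest stored outcome (9):
print(exemplar_predict((5, 5), memory))
```

The second call shows the third limitation above: because the prediction is an average of remembered outcomes, it is bounded by what has been experienced.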

The results speak to whether judgmental tasks are perceived as relatively simple or complex and how this perception interacts with learning from experience. For example, the participants’ judgments did correlate (albeit imperfectly) with outcomes, and it is possible that they thought they were performing adequately. The results also raise the normative issue of defining how information should be presented for decision making. Possible principles for reformulating problems include defaults, loss aversion, and reference points. Moreover, work by Larrick and Soll and by Juslin et al. has demonstrated the advantages of transforming problems that require nonlinear aggregation into formats where this can be achieved in linear fashion. Hogarth and Soyer emphasize the importance of problem formulation when people learn to make decisions through experience.

Hogarth and Soyer suggest that a strong case can be made that instead of attempting to assess one’s chances of success, one should think of the ability level (or resources) needed to have a good chance of winning. Hence, the focus should be on ‘‘How good do I need to be to achieve an aspired level of success?” as opposed to ‘‘What are my chances of success?”
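That inverted question can be made concrete. Assuming, for illustration, that skill fully determines the ranking, one can search for the ability percentile that yields a desired chance of winning (the function names and the simple search are my own sketch, not the authors’ procedure):

```python
from math import comb

def win_probability(percentile, contestants, winners):
    """Chance of finishing in the top `winners` of `contestants`,
    assuming skill fully determines the ranking (illustrative)."""
    others = contestants - 1
    p_better = 1 - percentile   # chance a random opponent outranks you
    return sum(
        comb(others, k) * p_better**k * (1 - p_better)**(others - k)
        for k in range(winners)
    )

def required_percentile(target, contestants, winners, step=0.001):
    """Smallest skill percentile giving at least a `target` chance
    of winning, found by a simple linear scan."""
    p = 0.0
    while win_probability(p, contestants, winners) < target and p < 1.0:
        p += step
    return round(p, 3)

# How good do I need to be for a 50% chance of being among
# the 3 winners out of 10 contestants?
print(required_percentile(0.5, 10, 3))
```

Under these assumptions, a roughly even chance of winning requires being around the 71st percentile of players — a concrete answer to ‘‘How good do I need to be?” that the linear reading of ‘‘3 out of 10” obscures.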

This makes me think of the AHCA, the healthcare bill recently passed by the House of Representatives. One key to its approval seems to have been a little feint with respect to the idea of preexisting conditions. People would understand if the federal government said that we will cover preexisting conditions only up to 8 billion dollars over the next five years. But instead we string a couple of conditions together and voila, those who voted for it have plausible deniability. We might call it the nonlinear feint. People are able to see through intentions when the scenario is transparent or linear, but not so much when it is nonlinear.

“Learning from experience in nonlinear environments: Evidence from a competition scenario,” Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73.

