Hogarth on Description

 

 

This post is based on “Providing information for decision making: Contrasting description and simulation,” Journal of Applied Research in Memory and Cognition 4 (2015) 221–228, written by Robin M. Hogarth and Emre Soyer. Hogarth and Soyer propose that providing information to help people make decisions can be likened to telling stories. First, the provider – or storyteller – needs to know what he or she wants to say. Second, it is important to understand the characteristics of the audience, as this affects how the information is interpreted. Third, the provider must match what is said to the needs of the audience. Finally, when it comes to decision making, the provider should not tell the audience what to do. Although Hogarth and Soyer do not mention it, good storytelling draws us into the descriptions so that we can “experience” the story. (See post 2009 Review of Judgment and Decision Making Research.)

Hogarth and Soyer state that their interest in this issue was stimulated by a survey they conducted of how economists interpret the results of regression analysis. The economists were given the outcomes of a regression analysis in a typical, tabular format, and the questions involved interpreting the probabilistic implications of specific actions given the estimation results. The participants had all the information necessary to provide correct answers, but in general they failed to do so. They tended to ignore the uncertainty involved in predicting the dependent variable conditional on values of the independent variable, and as a result they vastly overestimated the predictive ability of the model. Another group of similar economists, who saw only a bivariate scatterplot of the data, answered the same questions accurately. These economists were not as easily blinded by numbers as some in the general population, but they still needed the visually presented frequency information.
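To make the underlying statistical point concrete, here is a minimal sketch in Python. It is not the survey material itself; the simulated data, the threshold, and the question asked are illustrative assumptions. The point it shows is that a probabilistic question about a single new outcome depends on the residual scatter around the fitted line, not just on the fitted coefficients.

```python
# Illustrative sketch (assumed data, not the survey's): the fitted line alone
# overstates predictive ability; the residual scatter drives the uncertainty
# about any single new outcome.
import math
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 1.5 * x + rng.normal(0, 4.0, 200)       # assumed true noise sd = 4

# Ordinary least squares fit of y on x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma = resid.std(ddof=2)                          # residual standard error

x_new = 5.0
y_hat = beta[0] + beta[1] * x_new                  # point prediction at x_new
# Probability that a single new outcome at x_new exceeds 15 (normal-error approximation)
z = (15.0 - y_hat) / sigma
p_above = 0.5 * math.erfc(z / math.sqrt(2))
print(f"point prediction {y_hat:.1f}, residual sd {sigma:.1f}, P(y > 15) ~ {p_above:.2f}")
```

Reading only the coefficient table amounts to using y_hat and forgetting sigma, which is exactly the error the surveyed economists tended to make.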

The authors consider the communication of probabilistic forecasts, meaning that the decision maker is provided with a probability distribution over possible future outcomes of a variable of interest. These can cover many different types of applications: consider, for example, simple forecasts involving the weather (e.g., “Will it rain tomorrow?”) as opposed to more complicated issues such as the effects of climate change. In particular, the meaning of a probability is not clear to many people, because it does not necessarily map onto their experience. For example, imagine that a decision maker is told that the probability of rain tomorrow is 0.3. Now, let’s assume it does not rain the next day. How should she interpret the meaning of the forecast? Was it correct or incorrect? Given these issues, should analysts simply forget about numerical estimates and instead use verbal statements that describe feelings of uncertainty? Indeed, several studies show that verbal expressions of probability (e.g., phrases such as “unlikely”, “almost certain”, and so on) can be relatively effective. However, verbal expressions do not have exactly the same meaning for different people, and it is problematic to generalize from these results.
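One standard way to give the 0.3 forecast an operational meaning is calibration over many occasions rather than over a single day. The short sketch below uses assumed numbers (not from the paper): a single dry day cannot show the forecast was wrong, but across many days on which “0.3” was announced, roughly 30% should turn out rainy.

```python
# Illustrative sketch (assumed setup): checking the calibration of a 0.3 forecast
# across many days rather than judging it from a single outcome.
import numpy as np

rng = np.random.default_rng(1)
n_days = 1000
forecast = 0.3
rain = rng.random(n_days) < forecast        # outcomes consistent with the stated probability

print(f"forecast probability: {forecast:.2f}")
print(f"observed rain frequency over {n_days} such days: {rain.mean():.2f}")
# A well-calibrated forecaster's "0.3" days are rainy about 30% of the time.
```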

Factors in human information processing according to Hogarth and Soyer:

  • Attention – We can only perceive a small fraction of what is actually in our visual field. Thus, anything that attracts attention is important, and the reality in which we operate is bounded by this attentional filter. (See post Invisible Gorilla.)
  • Knowledge of relevant information – We often do not realize that we do not have the correct information to make a good decision. Imagine, for instance, a physician contemplating whether to prescribe a certain drug. Say that, among the 20 publications that contain valid tests of the drug’s effectiveness, 17 demonstrate positive results. The verdict seems clear (17 vs. 3). If, however, there are also 15 unpublished manuscripts with valid tests, of which 13 show no effects, the story changes (19 vs. 16) and the decision becomes harder (the sketch after this list works through the arithmetic). People treat the data they see as representative of the processes that generate them; that is, they are “naïve” statisticians. As a consequence, they make inferential errors due to “small sample” effects and the failure to realize that samples can be biased in different ways. People are typically unaware of this and inclined to take such available yet potentially biased information as the whole story.
  • Sequence of information – Another important dimension is whether the information is presented all at once or acquired across time. Hogarth and Soyer provide the example of deciding between two restaurants that are well known to you. In essence, you already have estimates of “how good” both restaurants are based on past experiences; all you need to remember is your last overall impression. People are remarkably effective at extracting aggregate frequencies from single events that they have previously experienced and encoded sequentially.
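Here is the arithmetic behind the physician example above, worked through in a few lines of Python. The drug and the counts are the authors’ hypothetical, not real data.

```python
# Arithmetic for the hypothetical drug example (counts taken from the post).
published_positive, published_null = 17, 3          # 20 published, valid tests
unpublished_positive, unpublished_null = 2, 13      # 15 unpublished manuscripts

# What the physician sees (published studies only):
print(f"published only: {published_positive} positive vs. {published_null} null")

# What the full evidence base says (published plus unpublished):
total_positive = published_positive + unpublished_positive
total_null = published_null + unpublished_null
print(f"all valid tests: {total_positive} positive vs. {total_null} null")   # 19 vs. 16
```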

Almost all decision aids involve changing how problems are described to help people make “better” decisions. The range of aids varies from complex decision analytic techniques (involving decision trees, multi-attribute utility functions, and so on) to simply reframing problems in order to direct attention in particular ways. Figure 1 (from Gerd Gigerenzer; also see post Bounded Rationality-the Adaptive Toolbox) shows a typical description of such problems: it gives the component probabilities that should be combined by Bayes’ theorem, i.e., the prior probability of having the disease and the sensitivity and specificity of the test. Most people, however, are quite confused about how to combine this information correctly. I am certainly among those who get confused. The problem shown in Figure 1 gets me every time if I try to think in probabilities. If you tell me that a woman has tested positive and that 90% of the women who do have breast cancer test positive, I will get it wrong every time unless I tell myself to think of 1,000 women. Thus, if problems were described in terms of “natural frequencies,” people would both understand the data better and be able to perform the necessary calculations more easily.
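Since Figure 1 itself is not reproduced here, the sketch below uses Gigerenzer-style illustrative numbers: the 90% sensitivity is mentioned in the text, while the 1% prevalence and 9% false-positive rate are assumptions. It shows how the “think of 1,000 women” reframing turns the Bayes’ theorem calculation into simple counting.

```python
# Natural-frequency reframing of the screening problem. Only the 90% sensitivity is
# stated in the post; the prevalence and false-positive rate are illustrative assumptions.
n_women = 1000
prevalence = 0.01            # assumed: 10 of 1,000 women have the disease
sensitivity = 0.90           # from the post: 90% of women with cancer test positive
false_positive_rate = 0.09   # assumed: 9% of healthy women also test positive

with_disease = n_women * prevalence                                 # 10 women
true_positives = with_disease * sensitivity                         # 9 women
false_positives = (n_women - with_disease) * false_positive_rate    # about 89 women

ppv = true_positives / (true_positives + false_positives)           # Bayes' theorem, in counts
print(f"of {n_women} women, about {true_positives:.0f} + {false_positives:.0f} test positive")
print(f"P(cancer | positive test) is roughly {ppv:.2f}")            # about 0.09
```

In the next post, Hogarth and Soyer look at another decision aid, simulation.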

 
