Category Archives: Analysis

Hogarth on Simulation

This post is a continuation of the previous blog post Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: descriptions of the particulars of the situation involved and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions, and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate “outcomes” through a process that they call “simulated experience”.

Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described as opposed to experienced is related to Brunswik’s distinction between the use of cognition and the use of perception. With cognition, people can be quite accurate in their responses, but they can also make large errors. I note that this is similar to Hammond’s distinction between coherence and correspondence. With perception and correspondence, people are unlikely to be highly accurate, but their errors are likely to be small. Simulation, perception, and correspondence tend to be robust.
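To make “simulated experience” concrete, here is a minimal sketch, assuming a simple risky investment described by a mean and a standard deviation. The numbers and the code are my illustration, not Hogarth and Soyer’s tool:

```python
import random

# Hypothetical illustration of "simulated experience": rather than describing an
# investment as "mean return 4%, standard deviation 12%", let the decision maker
# click through simulated yearly outcomes and experience how often losses occur.

def simulate_outcomes(mean=0.04, sd=0.12, n_trials=1000, seed=1):
    random.seed(seed)
    return [random.gauss(mean, sd) for _ in range(n_trials)]

outcomes = simulate_outcomes()
loss_frequency = sum(1 for r in outcomes if r < 0) / len(outcomes)
print(f"Experienced frequency of a losing year: {loss_frequency:.1%}")
```

The point is simply that the frequency information arrives one outcome at a time, which is the format the authors argue people encode well.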

Continue reading

Hogarth on Description

 

 

This post is based on “Providing information for decision making: Contrasting description and simulation,” Journal of Applied Research in Memory and Cognition 4 (2015) 221–228, written by Robin M. Hogarth and Emre Soyer. Hogarth and Soyer propose that providing information to help people make decisions can be likened to telling stories. First, the provider – or storyteller – needs to know what he or she wants to say. Second, it is important to understand the characteristics of the audience, as this affects how information is interpreted. Third, the provider must match what is said to the needs of the audience. Finally, when it comes to decision making, the provider should not tell the audience what to do. Although Hogarth and Soyer do not mention it, good storytelling draws us into the descriptions so that we can “experience” the story. (See post 2009 Review of Judgment and Decision Making Research.)

Hogarth and Soyer state that their interest in this issue was stimulated by a survey they conducted of how economists interpret the results of regression analysis. The economists were given the outcomes of the regression analysis in a typical tabular format, and the questions involved interpreting the probabilistic implications of specific actions given the estimation results. The participants had available all the information necessary to provide correct answers, but in general they failed to do so. They tended to ignore the uncertainty involved in predicting the dependent variable conditional on values of the independent variable, and as a result they vastly overestimated the predictive ability of the model. Another group of similar economists who saw only a bivariate scatterplot of the data were accurate in answering the same questions. These economists were not blinded by numbers, as some in the general population are, but they still needed the visually presented frequency information.
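A toy example may help show the mistake. The regression below is invented for illustration (it is not the authors’ survey material); the point is that any probabilistic statement about an individual outcome has to use the residual scatter around the fitted line, which a table of coefficients makes easy to ignore and a scatterplot makes hard to miss:

```python
import random
import statistics

random.seed(0)
# invented data: a noisy linear relationship
x = [random.uniform(0, 10) for _ in range(200)]
y = [1.0 + 0.5 * xi + random.gauss(0, 3.0) for xi in x]

# ordinary least squares by hand
mx, my = statistics.mean(x), statistics.mean(y)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
residual_sd = statistics.stdev([yi - (intercept + slope * xi) for xi, yi in zip(x, y)])

x_new = 2.0
point_prediction = intercept + slope * x_new   # what the tabular output emphasizes
print(f"Point prediction at x = {x_new}: {point_prediction:.2f}")
print(f"Residual SD (the neglected uncertainty): {residual_sd:.2f}")
```

Judging how likely a particular outcome is requires both numbers; using the point prediction alone is exactly the overconfidence the survey found.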

Continue reading

Single Strategy Framework and the Process of Changing Weights

 

This post starts from the conclusion of the previous post that the evidence supports a single strategy framework, looks at Julian Marewski’s criticism, and then piles on with ideas on how weights can be changed in a single strategy framework.

Marewski provided a paper for the special issue of the Journal of Applied Research in Memory and Cognition (2015) on “Modeling and Aiding Intuition in Organizational Decision Making”: “Unveiling the Lady in Black: Modeling and Aiding Intuition,” authored by Ulrich Hoffrage and Julian N. Marewski. The paper gives the parallel constraint satisfaction model a not-so-subtle knock:

By exaggerating and simplifying features or traits, caricatures can aid perceiving the real thing. In reality, both magic costumes and chastity belts are degrees on a continuum. In fact, many theories are neither solely formal or verbal. Glöckner and Betsch’s connectionist model of intuitive decision making, for instance, explicitly rests on both math and verbal assumptions. Indeed, on its own, theorizing at formal or informal levels is neither “good” nor “bad”. Clearly, both levels of description have their own merits and, actually, also their own problems. Both can be interesting, informative, and insightful – like the work presented in the first three papers of this special issue, which we hope you enjoy as much as we do. And both can border re-description and tautology. This can happen when a theory does not attempt to model processes. Examples are mathematical equations with free parameters that carry no explanatory value, but that are given quasi-psychological, marketable labels (e.g., “risk aversion”).

Continue reading

Strategy Selection — Single or Multiple?

This post tries to do a little tying together on a familiar subject. I look at a couple of papers that provide more perspective than typical research papers do. First is the preliminary dissertation of Anke Söllner. She provides some educated synthesis, which my posts need but rarely get. Two of her papers, which are also part of her dissertation, are discussed in the posts Automatic Decision Making and Tool Box or Swiss Army Knife? I also look at a planned special issue of the Journal of Behavioral Decision Making intended to address “Strategy Selection: A Theoretical and Methodological Challenge.”

Söllner’s work is concerned with the question: which framework – multiple strategy or single strategy – best describes multi-attribute decision making? In multi-attribute decision making we have to choose among two or more options. Cues can be consulted, and each cue has some validity in reference to the decision criterion. If the criterion is an objective one (e.g., the quantity of oil), the task is referred to as probabilistic inference, whereas a subjective criterion (e.g., preference for a day trip) characterizes a preferential choice task. The multiple strategy framework is most notably the adaptive toolbox that includes fast and frugal heuristics as individual strategies. Single strategy frameworks assume that instead of selecting one of several distinct decision strategies, decision makers employ the same uniform decision making mechanism in every situation. The single strategy frameworks include the evidence accumulation model and the connectionist parallel constraint satisfaction model.
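As a rough illustration of the contrast (cue values and validities invented for the example; this is not Söllner’s material), a toolbox heuristic such as take-the-best stops at the first discriminating cue, while a single uniform mechanism in the spirit of evidence accumulation always integrates all of the weighted evidence:

```python
# Illustrative cue profiles for two options and made-up cue validities.
cues_a = {"cue1": 1, "cue2": 0, "cue3": 1}
cues_b = {"cue1": 0, "cue2": 1, "cue3": 1}
validities = {"cue1": 0.80, "cue2": 0.70, "cue3": 0.60}

def take_the_best(a, b, validities):
    """Multiple-strategy framework: one tool from the adaptive toolbox.
    Check cues in order of validity and decide on the first one that discriminates."""
    for cue in sorted(validities, key=validities.get, reverse=True):
        if a[cue] != b[cue]:
            return "A" if a[cue] > b[cue] else "B"
    return "guess"

def weighted_integration(a, b, validities):
    """Single-strategy framework (evidence-accumulation flavour):
    every cue is always consulted and weighted by its validity."""
    evidence = sum(v * (a[c] - b[c]) for c, v in validities.items())
    return "A" if evidence > 0 else "B" if evidence < 0 else "guess"

print(take_the_best(cues_a, cues_b, validities))        # stops at cue1 -> "A"
print(weighted_integration(cues_a, cues_b, validities)) # 0.8 - 0.7 = +0.1 -> "A"
```

Both rules pick option A here; the frameworks differ in how the decision is reached, which is part of why choices alone often cannot tell them apart.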

Continue reading

Coherence from the default mode network

This post starts with the paper “Brains striving for coherence: Long-term cumulative plot formation in the default mode network,” authored by K. Tylén, P. Christensen, A. Roepstorff, T. Lund, S. Østergaard, and M. Donald. The paper appeared in NeuroImage 121 (2015) 106–114.

People are capable of navigating and keeping track of all the parallel social activities of everyday life, even when confronted with interruptions or changes in the environment. Tylén et al. suggest that even though these situations present themselves as series of interrupted segments, often scattered over long time periods, they tend to constitute perfectly well-formed and coherent experiences in conscious memory. However, the underlying mechanisms of such long-term integration are not well understood. While brain activity is generally traceable within the short time frame of working memory, these integrative processes last for minutes, hours, or even days.

Continue reading

Superforecasting

This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.

In Expert Political Judgment: How Good Is It? How Can We Know?, Tetlock found that the average expert did little better than guessing. He also found that some experts did considerably better. In Superforecasting he discusses the study of those better performers and how they achieved their results.

Continue reading

The Mixed Instrumental Controller

This is more or less a continuation of the previous post based on Andy Clark’s “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). It further weighs in on the issue of changing strategies or changing weights (see post Revisiting Swiss Army Knife or Adaptive Tool Box). Clark has brought to my attention the terms model-free and model-based, which seem to roughly equate to intuition/System 1 and analysis/System 2, respectively. With this translation, I am better able to tie these ideas into concepts like cognitive niches and parallel constraint satisfaction. Clark writes in a footnote:

Current thinking about switching between model-free and model based strategies places them squarely in the context of hierarchical inference, through the use of “Bayesian parameter averaging”. This essentially associates model-free schemes with simpler (less complex) lower levels of the hierarchy that may, at times, need to be contextualized by (more complex) higher levels.
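Read this way, the “switch” is less a change of tools than a change of weights. Here is a minimal sketch of that idea, assuming a simple precision-weighted mixture of the two predictions; this is my reading of the footnote, not Clark’s formal model:

```python
# Hypothetical illustration: the overall prediction is a precision-weighted average
# of a cheap model-free estimate and a costly model-based one, so "switching"
# amounts to shifting the relative weights rather than swapping mechanisms.

def combine(pred_free, prec_free, pred_based, prec_based):
    """Precision-weighted average of two predictions."""
    total = prec_free + prec_based
    return (prec_free * pred_free + prec_based * pred_based) / total

# familiar context: the habitual (model-free) estimate is trusted (high precision)
print(combine(pred_free=10.0, prec_free=4.0, pred_based=14.0, prec_based=1.0))  # ~10.8
# novel context: the deliberate (model-based) estimate dominates
print(combine(pred_free=10.0, prec_free=0.5, pred_based=14.0, prec_based=4.0))  # ~13.6
```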

As humans, we have been able to use language, our social skills, and our understanding of hierarchy to extend our cognition. Multiplication of large numbers is an example: we cannot hold enough digits in our heads, so we created a way to do any multiplication on paper, or its equivalent, provided we learn our multiplication tables. Clark cites the example of the way that learning to perform mental arithmetic has been scaffolded, in some cultures, by the deliberate use of an abacus. Experience with the patterns thus made available helps to install appreciation of many complex arithmetical operations and relations. We structure (and repeatedly re-structure) our physical and social environments in ways that make available new knowledge and skills. Prediction-hungry brains, exposed in the course of embodied action to novel patterns of sensory stimulation, may thus acquire forms of knowledge that were genuinely out of reach prior to such physical-manipulation-based re-tuning of the generative model. Action and perception thus work together to reduce prediction error against the more slowly evolving backdrop of a culturally distributed process that spawns a succession of designed environments whose impact on the development and unfolding of human thought and reason can hardly be overestimated.

Continue reading

When to Quit

Hiking toward the snow

This post is based on the paper “Multi-attribute utility models as cognitive search engines,” by Pantelis P. Analytis, Amit Kothiyal, and Konstantinos Katsikopoulos, which appeared in Judgment and Decision Making, Vol. 9, No. 5, September 2014, pp. 403–419. This post does not look at persistence (post Persistence) or delay (post Decision Delay) when you believe that you need more alternatives, but at when to quit your search and stop within the available alternatives.

In optimal stopping problems, decision makers are assumed to search randomly to learn the utility of alternatives; in contrast, in one-shot multi-attribute utility optimization, decision makers are assumed to have perfect knowledge of utilities. The authors point out that these two contexts represent the boundaries of a continuum whose middle remains uncharted: how should people search intelligently when they possess imperfect information about the alternatives? They pose the example of trying to hire a new employee when faced with several dozen applications listing skills and credentials. You need interviews to determine each candidate’s potential. What is the best way to organize the interview process? First, you need to decide the order in which you will invite candidates. Then, after each interview, you need to decide whether to make an offer to one of the interviewed candidates, thus stopping your search. The first problem is an ordering problem and the second a stopping problem. If credentials were fully informative, you would not need an interview, and if credentials were worthless, you would invite people for interviews randomly.
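A small sketch of the two problems, with invented numbers and a deliberately myopic stopping rule (not the authors’ model): credentials supply a predicted utility that fixes the interview order, and search stops when the expected improvement from the next interview no longer covers its cost.

```python
# Invented example: predicted utilities from credentials set the interview order;
# a myopic rule stops when the next interview is not expected to pay for itself.
predicted = {"Ann": 0.90, "Bo": 0.70, "Cem": 0.40}   # utility predicted from credentials
interview_cost = 0.05

order = sorted(predicted, key=predicted.get, reverse=True)   # the ordering problem
best = float("-inf")
for i, name in enumerate(order):
    observed = predicted[name]      # stand-in for what the interview actually reveals
    best = max(best, observed)
    remaining = order[i + 1:]
    # the stopping problem: myopic expected improvement from one more interview
    expected_gain = max(0.0, predicted[remaining[0]] - best) if remaining else 0.0
    if expected_gain < interview_cost:
        break

print(f"Interviewed {i + 1} candidate(s); best utility found: {best:.2f}")
```

With perfectly predictive credentials the rule stops almost immediately, and with worthless predictions the ordering is arbitrary; the interesting cases lie in between, which is the middle of the continuum the authors explore.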

Continue reading

Slippery slope hypocrites

This post looks at a paper, “Rational Hypocrisy: A Bayesian Analysis Based on Informal Argumentation and Slippery Slopes,” Cognitive Science 38 (2014) 1456–1467, written by Tage S. Rai and Keith J. Holyoak (posts Metaphor, Bidirectional Reasoning), that uses the slippery slope argument to draw a connection between what may look like moral hypocrisy and the categories we select for cases with weak arguments. Moral hypocrisy is typically viewed as an ethical accusation: someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. The authors provide the following example:

“I respect the jury’s verdict. But I have concluded that the prison sentence given to Mr. Libby is excessive.” With these words, former President George W. Bush commuted the sentence of I. Lewis “Scooter” Libby, Jr., for obstruction of justice and leaking the identity of CIA operative Valerie Plame. Critics of the decision noted that Libby had actually received the minimum sentence allowable for his offense under the law and that many of Libby’s supporters, including the Bush administration, were actively pressing for mandatory minimum sentencing laws at a national level. Accordingly, critics of the decision saw it as a textbook case of moral hypocrisy: Different rules were being applied to Bush’s underling, Libby, than to everyone else in the United States.

The implicit assumption is that the hypocrite is being dishonest, or at least self-deceptive, because the hypocrite must be aware (or should be aware) of the logical inconsistency and is therefore committing a falsehood. Rai and Holyoak have extended the analysis of Corner et al. concerning slippery slope arguments (post Slippery Slope) to moral hypocrisy and suggest that the alleged hypocrite may be both honest and rational.

Continue reading

Heuristic and Linear Models

This post is based on the paper “Heuristic and Linear Models of Judgment: Matching Rules and Environments,” written by Robin M. Hogarth and Natalia Karelaia, Psychological Review, 2007, Vol. 114, No. 3, 733–758, which predated Hogarth and Karelaia’s meta-analysis (post What has Brunswik’s Lens Model Taught?). It includes the underpinnings for that study.

Two classes of models have dominated research on judgment and decision making over the past decades. In one, explicit recognition is given to the limits of information processing, and people are modeled as using simplifying heuristics (the Gigerenzer, Kahneman, and Tversky school). In the other (the Hammond school), it is assumed that people can integrate all the information at hand and that this information is combined and weighted as if using an algebraic—typically linear—model.
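As a reference point, the algebraic model the second school has in mind is the familiar weighted additive form (generic notation, not the paper’s specific lens-model equations):

$$\hat{Y}_j = w_0 + \sum_{k=1}^{K} w_k x_{jk}$$

where $x_{jk}$ is the value of cue $k$ for alternative $j$ and $w_k$ is its weight. Many simplifying heuristics can be written as special cases of this form; for instance, with binary cues and non-compensatory weights such as $w_1 > w_2 + w_3 + \dots$, the weighted sum reproduces the choices of a lexicographic rule like take-the-best.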

Continue reading