Category Archives: Heuristics

The Mind is Flat

Nick Chater is the author of The Mind Is Flat: The Remarkable Shallowness of the Improvising Brain, Yale University Press, New Haven, 2019. He is a professor of behavioral science at the Warwick Business School. The book has two parts, and overall it is as ambitious as it is simple. The first part is the most convincing. He shows how misguided we are about our perceptions, emotions, and decision making. Our vision seems to provide us with a full-fledged model of our environment, when we can really focus on only a very small area, with our furtive eye movements providing the impression of a complete, detailed picture. Our emotions do not well up from deep inside, but are the results of in-the-moment interpretations based on the situation we are in and highly ambiguous evidence from our own bodily state. Chater sees our beliefs, desires, and hopes as just as much inventions as our favorite fictional characters. Introspection does not work, because there is nothing to look at. We are imaginative creatures with minds that pretty much do everything on the fly. We improvise, so our decision making is inconsistent, as are our preferences.

Continue reading

Taming Uncertainty

Taming Uncertainty by Ralph Hertwig (see posts Dialectical Bootstrapping and Harnessing the Inner Crowd), Timothy J. Pleskac (see post Risk Reward Heuristic), Thorsten Pachur (see post Emotion and Risky Choice), and the Center for Adaptive Rationality, MIT Press, 2019, is a new compendium that I found accidentally in a public library. There is plenty of interesting reading in the book. It takes the adaptive toolbox approach as opposed to the Swiss Army knife. The book gets back-cover raves from Cass Sunstein (see posts Going to Extremes and Confidence, Part 1), Nick Chater, and Gerd Gigerenzer (see post Gigerenzer–Risk Savvy, among others). I like the pieces, but not the whole.


Continue reading

Kind and Wicked Learning Environments

This post is based on a paper, “The Two Settings of Kind and Wicked Learning Environments,” written by Robin M. Hogarth, Tomás Lejarraga, and Emre Soyer, which appeared in Current Directions in Psychological Science, 2015, Vol. 24(5), 379–385. Hogarth created the idea of kind and wicked learning environments, and it is discussed in his book Educating Intuition.

Hogarth et al. state that inference involves two settings: in the first, information is acquired (learning); in the second, it is applied (predictions or choices). Kind learning environments involve close matches between the informational elements in the two settings and are a necessary condition for accurate inferences. Wicked learning environments involve mismatches.
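To make the distinction concrete, here is a toy Python sketch (my own illustration, not from the paper) of a wicked environment: outcomes are only observed for cases we accept, yet we must predict for everyone, so the estimate learned in the first setting does not match the second.

```python
# Toy wicked learning environment: feedback is filtered by our own choices.
import random

random.seed(1)

def outcome(quality):
    """True success probability equals an applicant's quality score."""
    return random.random() < quality

population = [random.random() for _ in range(10_000)]  # quality scores

# Kind setting: we observe outcomes for the same cases we predict on.
kind_estimate = sum(outcome(q) for q in population) / len(population)

# Wicked setting: we only observe outcomes for accepted cases (quality > 0.7),
# yet we must predict the base rate for the whole population.
accepted = [q for q in population if q > 0.7]
wicked_estimate = sum(outcome(q) for q in accepted) / len(accepted)

print(f"kind estimate of base rate:   {kind_estimate:.2f}")   # ~0.50
print(f"wicked estimate of base rate: {wicked_estimate:.2f}") # ~0.85
```

In the kind setting the learning and application settings coincide and the estimate is accurate; in the wicked setting the selection filter creates exactly the mismatch Hogarth et al. describe.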

Continue reading

Fuzzy-Trace Theory Explains Time Pressure Results

This post is based on a paper: “Intuition and analytic processes in probabilistic reasoning: The role of time pressure,” authored by Sarah Furlan, Franca Agnoli, and Valerie F. Reyna. Valerie Reyna is, of course, the primary creator of fuzzy-trace theory. Reyna’s papers tend to do a good job of summing up the state of the decision-making art and fitting in her ideas.

The authors note that although there are many points of disagreement, theorists generally agree that there are heuristic processes (Type 1) that are fast, automatic, unconscious, and require low effort. Many adult judgment biases are considered a consequence of these fast heuristic responses, also called default responses, because they are the first responses that come to mind. Type 1 processes are a central feature of intuitive thinking, requiring little cognitive effort or control. In contrast, analytic (Type 2) processes are considered slow, conscious, deliberate, and effortful, and they place demands on central working memory resources. Furlan, Agnoli, and Reyna assert that Type 2 processes are thought to be related to individual differences in cognitive capacity, while Type 1 processes are thought to be independent of cognitive ability, a position challenged by the research presented in their paper. I was surprised that typical dual-process theories take it as a given that intuitive abilities are unrelated to overall intelligence and cognitive abilities.

Continue reading

Not that Irrational

This post is based on a paper: “The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated,” written by Andreas Glöckner, which appeared in Judgment and Decision Making, Vol. 11, No. 6, November 2016, pp. 601–610. Danziger, Levav, and Avnaim-Pesso (DLA) analyzed 1,112 legal rulings of Israeli parole boards, covering about 40% of the parole requests of the country. They assessed the effect of the serial order in which cases are presented within a ruling session, taking advantage of the fact that the ruling boards work on the cases in three sessions per day, separated by a late-morning snack and a lunch break. They found that the probability of a favorable decision drops from about 65% to 5% from the first ruling to the last ruling within each session, which is equivalent to an odds ratio of 35. DLA argue that these findings provide support for extraneous factors influencing judicial decisions and speculate that the effect might be driven by mental depletion. Glöckner notes that the article has attracted considerable attention and that the supposed order effect is widely cited in psychology.
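The odds ratio arithmetic is easy to check (a quick sketch of my own, not Glöckner's simulation code):

```python
# A favorable-ruling probability of 65% versus 5% implies an odds ratio of ~35:
# (0.65/0.35) / (0.05/0.95) = 35.3.
def odds(p):
    return p / (1 - p)

p_first, p_last = 0.65, 0.05
print(f"odds ratio: {odds(p_first) / odds(p_last):.1f}")  # 35.3
```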

Continue reading

Strategy Selection — Single or Multiple?

This post tries to do a little tying together on a familiar subject. I look at a couple of papers that provide more perspective than typical research papers do. First is the preliminary dissertation of Anke Söllner. She provides some educated synthesis, which my posts need but rarely get. Two of her papers, which are also part of her dissertation, are discussed in the posts Automatic Decision Making and Tool Box or Swiss Army Knife? I also look at a planned special issue of the Journal of Behavioral Decision Making addressing “Strategy Selection: A Theoretical and Methodological Challenge.”

Söllner’s work is concerned with the question: which framework–multiple strategy or single strategy–best describes multi-attribute decision making? In multi-attribute decision making, we have to choose among two or more options. Cues can be consulted, and each cue has some validity in reference to the decision criterion. If the criterion is an objective one (e.g., the quantity of oil), the task is referred to as probabilistic inference, whereas a subjective criterion (e.g., preference for a day trip) characterizes a preferential choice task. The multiple-strategy framework is most notably the adaptive toolbox that includes fast and frugal heuristics as individual strategies. Single-strategy frameworks assume that instead of selecting one from several distinct decision strategies, decision makers employ the same uniform decision-making mechanism in every situation. The single-strategy frameworks include the evidence accumulation model and the connectionist parallel constraint satisfaction model.
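A toy sketch (my own, with invented cues and validities) shows how the two families can part ways on the same probabilistic inference: a take-the-best heuristic from the adaptive toolbox decides on the single best discriminating cue, while a uniform validity-weighted evidence mechanism integrates all cues.

```python
# Toy two-option, three-cue inference problem. Cue values and validities
# are invented for illustration; 1 means the cue favors that option.
cues_a = [1, 0, 0]
cues_b = [0, 1, 1]
validities = [0.8, 0.7, 0.7]

def take_the_best(a, b, v):
    """Adaptive-toolbox heuristic: decide on the first cue that discriminates."""
    for va, vb, _ in sorted(zip(a, b, v), key=lambda t: -t[2]):
        if va != vb:
            return "A" if va > vb else "B"
    return "guess"

def weighted_evidence(a, b, v):
    """Single uniform mechanism: accumulate validity-weighted evidence."""
    evidence = sum(w * (va - vb) for va, vb, w in zip(a, b, v))
    return "A" if evidence > 0 else "B" if evidence < 0 else "guess"

print(take_the_best(cues_a, cues_b, validities))      # A (most valid cue favors A)
print(weighted_evidence(cues_a, cues_b, validities))  # B (0.8 - 0.7 - 0.7 < 0)
```

Here the heuristic picks A on the strength of the single most valid cue, while the integrating mechanism picks B because the two lesser cues jointly outweigh it.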

Continue reading

The Mixed Instrumental Controller

This is more or less a continuation of the previous post based on Andy Clark’s “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). It further weighs in on the issue of changing strategies or changing weights (see post Revisiting Swiss Army Knife or Adaptive Tool Box). Clark has brought to my attention the terms model-free and model-based, which seem to roughly equate to intuition/System 1 and analysis/System 2, respectively. With this translation, I am helped in trying to tie this into ideas like cognitive niches and parallel constraint satisfaction. Clark, in a footnote:

Current thinking about switching between model-free and model-based strategies places them squarely in the context of hierarchical inference, through the use of “Bayesian parameter averaging”. This essentially associates model-free schemes with simpler (less complex) lower levels of the hierarchy that may, at times, need to be contextualized by (more complex) higher levels.
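Here is how I picture that footnote in code: a minimal sketch (my own construal, not Clark's formalism) of Bayesian averaging over a cheap model-free prediction and a costlier model-based one, with each scheme weighted by how well it has predicted recently. All the numbers are hypothetical.

```python
import math

def posterior_weights(log_likelihoods, priors):
    """Turn prior * likelihood into normalized posterior model weights."""
    unnorm = [math.exp(ll) * p for ll, p in zip(log_likelihoods, priors)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Hypothetical numbers: the model-based scheme has explained recent data
# better (higher log-likelihood), so it dominates the averaged prediction.
model_free_pred, model_based_pred = 0.2, 0.8
w_free, w_based = posterior_weights([-3.0, -1.0], [0.5, 0.5])
prediction = w_free * model_free_pred + w_based * model_based_pred
print(f"weights {w_free:.2f}/{w_based:.2f} -> prediction {prediction:.2f}")
# weights 0.12/0.88 -> prediction 0.73
```

When the simpler scheme has been predicting well, the weights shift the other way, which is one reading of how the lower level runs the show until a higher level needs to contextualize it.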

As humans, we have been able to use language, our social skills, and our understanding of hierarchy to extend our cognition. Multiplication of large numbers is an example. We cannot remember enough numbers in our heads, so we created a way to do any multiplication on paper, or its equivalent, if we can learn our multiplication tables. Clark cites the example of the way that learning to perform mental arithmetic has been scaffolded, in some cultures, by the deliberate use of an abacus. Experience with patterns thus made available helps to install appreciation of many complex arithmetical operations and relations. We structure (and repeatedly re-structure) our physical and social environments in ways that make available new knowledge and skills. Prediction-hungry brains, exposed in the course of embodied action to novel patterns of sensory stimulation, may thus acquire forms of knowledge that were genuinely out of reach prior to such physical-manipulation-based re-tuning of the generative model. Action and perception thus work together to reduce prediction error against the more slowly evolving backdrop of a culturally distributed process that spawns a succession of designed environments whose impact on the development and unfolding of human thought and reason can hardly be overestimated.

Continue reading

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark: “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a very well educated philosopher in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. It also seems quite compatible with Parallel Constraint Satisfaction, but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but embodied prediction seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint-satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction and Grounded Cognition. There are likely to be posts that follow on the same general topic.

Continue reading

Risk Reward Heuristic

This post is based on a paper: “Ecologically Rational Choice and the Structure of the Environment,” which appeared in the Journal of Experimental Psychology: General, 2014, Vol. 143, No. 5. The authors are Timothy J. Pleskac and Ralph Hertwig. The paper starts from the observation that decision-making theory has largely ignored the possibility that risk and reward are tied together, with payoff magnitudes signaling their probabilities.

How people should and do deal with uncertainty is one of the most vexing problems in theorizing about choice. The researchers suggest a process that is inferential in nature and rests on the notion that probabilities can be approximated from statistical regularities that govern real-world gambles. In the environment there are typically multiple fallible indicators to guide your way. When some cues become unreliable or unavailable, the organism can exploit this redundancy by substituting or alternating between different cues. This is possible because of what Brunswik called the mutual substitutability or vicarious functioning of cues. It is these properties of intercue relationships and substitutability that Pleskac and Hertwig suggest offer a new perspective on how people make decisions under uncertainty. Under uncertainty, cues such as the payoffs associated with different courses of action may be accessible, whereas other cues—in this case, the probability with which those payoffs occur—are not. This missing probability information has been problematic for choice theories, as typically both payoffs and probabilities are used in determining the value of options and in choosing. However, if payoffs and probabilities are interrelated, then this ecological property can permit the decision maker to infer hidden or unknown probability distributions from the payoffs themselves, thus easing the problem of making decisions under uncertainty.
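A minimal sketch of the inference (my own toy, assuming a fair-bet environment where stake ≈ payoff × probability, the kind of ecological regularity the authors analyze; the $1 stake is an illustrative assumption):

```python
# If gambles in the environment are roughly fair, p * payoff ≈ stake,
# so a missing probability can be inferred from the payoff alone.
def inferred_probability(payoff, stake=1.0):
    """Infer p from the fair-bet regularity p * payoff ≈ stake."""
    return min(1.0, stake / payoff)

for payoff in (2, 10, 100):
    print(f"payoff ${payoff}: inferred p = {inferred_probability(payoff):.2f}")
# payoff $2:   inferred p = 0.50
# payoff $10:  inferred p = 0.10
# payoff $100: inferred p = 0.01
```

Larger payoffs signal smaller probabilities, which is exactly the risk-reward relationship the title refers to.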

Continue reading

Emotion and Risky Choice

This post is based on the paper: “The Neural Basis of Risky Choice with Affective Outcomes,” written by Renata S. Suter, Thorsten Pachur, Ralph Hertwig, Tor Endestad, and Guido Biele, which appeared in PLOS ONE (journal.pone.0122475), April 1, 2015. The paper is similar to one discussed in the post Affect Gap, which also included Pachur and Hertwig, although that paper did not use fMRI. Suter et al. note that both normative and many descriptive theories of decision making under risk have typically investigated choices involving relatively affect-poor, monetary outcomes. This paper compared choice in relatively affect-poor, monetary lottery problems with choice in relatively affect-rich medical decision problems.

The paper is notable in that it not only examined behavioral differences between affect-rich and affect-poor risky choice, but also watched the brains of the people making the decisions with fMRI. The researchers assert that the traditional notion of a mechanism that assumes sensitivity to outcome and probability information and expectation maximization may not hold when options elicit relatively high levels of affect. Instead, qualitatively different strategies may be used in affect-rich versus affect-poor decisions. This is not much of a leap.

In order to examine the neural underpinnings of cognitive processing in affect-rich and affect-poor decisions, the researchers asked participants to make choices between two options with relatively affect-rich outcomes (drugs that cause a side effect with some probability) as well as between two options with relatively affect-poor outcomes (lotteries that incur monetary losses with some probability). The monetary losses were matched to each individual’s subjective monetary evaluation of the side effects, permitting a within-subject comparison between affect-rich and affect-poor choices in otherwise monetarily equivalent problems. This was cleverly done. Specifically, participants were first asked to indicate the amount of money they considered equivalent to specific nonmonetary outcomes (here: side effects; Fig 1A). The monetary amounts indicated (willingness-to-pay; WTP) were then used to construct individualized lotteries in which either a side effect (affect-rich problem) or a monetary loss (affect-poor problem) occurred with some probability. For example, consider a participant who specified a WTP of $18 to avoid insomnia and $50 to avoid depression. In the affect-rich problem, she would be presented with a choice between drug A, leading to insomnia with a probability of 15% (no side effects otherwise), and drug B, leading to depression with a probability of 5% (no side effects otherwise). In the corresponding affect-poor problem, she would be presented with a choice between lottery A, leading to a loss of $18 with a probability of 15% (nothing otherwise), and lottery B, leading to a loss of $50 with a probability of 5% (nothing otherwise). This paradigm allowed the authors to compare the decision mechanisms underlying affect-rich versus affect-poor risky choice on the basis of lottery problems that were equivalent in monetary terms (Fig 1A and 1B).
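The construction is simple enough to sketch in a few lines (using the example WTP values from the text; the code itself is my illustration, not the authors' materials):

```python
# Each participant's willingness-to-pay (WTP) to avoid a side effect is
# reused as the monetary loss in an otherwise identical lottery, yielding
# monetarily equivalent affect-rich and affect-poor problems.
wtp = {"insomnia": 18, "depression": 50}  # example values from the text

affect_rich = [("insomnia", 0.15), ("depression", 0.05)]        # drug A vs. drug B
affect_poor = [(-wtp[effect], p) for effect, p in affect_rich]  # lottery A vs. lottery B

print(affect_rich)  # [('insomnia', 0.15), ('depression', 0.05)]
print(affect_poor)  # [(-18, 0.15), (-50, 0.05)]
```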

[Figure 1: construction of monetarily equivalent affect-rich and affect-poor choice problems (Fig 1A: WTP elicitation; Fig 1B: matched lotteries)]


To assess whether experiencing a side effect was rated as evoking stronger negative affect than losing the equivalent monetary amount, Suter et al. analyzed the ratings; Fig 2 presents them.

[Figure 2: affect ratings for side effects versus equivalent monetary losses]
Were the differences in affect associated with different choices? Despite the monetary equivalence between affect-rich and affect-poor problems, people reversed their preferences between the corresponding problems in 46.07% of cases, on average. To examine the cognitive mechanisms underlying affect-rich and affect-poor choices, the researchers modeled them using Cumulative Prospect Theory (CPT). On average, CPT based on individually fitted parameters correctly described participants’ choices in 82.45% of affect-rich choices and in 90.42% of affect-poor choices. Modeling individuals’ choices using CPT, they found that affect-rich choice was best described by a substantially more strongly curved weighting function than affect-poor choice, signaling that the psychological impact of probability information is diminished in the context of emotionally laden outcomes. Participants seemed to avoid the option associated with the worse side effects, irrespective of the probabilities, and therefore often ended up choosing the option with the lower expected value.
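For the curious, the standard one-parameter CPT weighting function (Tversky & Kahneman, 1992) makes the finding easy to picture. In this sketch the two gamma values are illustrative, not the fitted parameters from the paper; a smaller gamma means a more strongly curved function.

```python
# w(p) = p^g / (p^g + (1-p)^g)^(1/g); lower g flattens sensitivity to p.
def w(p, gamma):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.05, 0.15, 0.50, 0.95):
    print(f"p={p:.2f}  gamma=0.7: w={w(p, 0.7):.2f}  gamma=0.3: w={w(p, 0.3):.2f}")
# With gamma = 0.3, the decision weight barely moves as p goes from .05 to .50
# (roughly 0.14 to 0.16): probability information has little psychological impact.
```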

The neural testing was complicated and used extensive computational modeling analysis. Neuroimaging analyses further supported the hypothesis that choices between affect-rich options are based on qualitatively different cognitive processes than choices between affect-poor options; the two triggered qualitatively different brain circuits. Affect-rich problems engage more affective processing, as indicated by stronger activation in the amygdala. The results suggested that affect-poor choice is based on calculative processes, whereas affect-rich choice involves emotional processing and autobiographical memories. When a choice elicits strong emotions, decision makers seem to focus instead on the potential outcomes and the memories attached to them.

According to Suter et al., on a theoretical level, models assuming expectation maximization (and implementing the weighting of some function of outcome by some function of probability) may fail to accurately predict people’s choices in the context of emotionally laden outcomes. Instead, alternative modeling frameworks (e.g., simplifying, lexicographic cognitive strategies) may be more appropriate. On a practical level, the researchers suggest that to the extent that people show strongly attenuated sensitivity to probability information (or even neglect it altogether) in decisions with affect-rich outcomes, different decision aids may be required to help them make good choices. For instance, professionals who communicate risks, such as doctors or policy makers, may need to pay special attention to refocusing people’s attention on the probabilities of (health) risks by illustrating those risks visually.
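One such simplifying strategy can be stated in a couple of lines. This is my own illustration of the general idea (a minimax-like lexicographic rule), not the specific model the authors fitted:

```python
# Pick the option whose worst outcome is least bad, ignoring probabilities.
def minimax_choice(options):
    """options: {name: [(outcome, probability), ...]}; outcomes are losses < 0."""
    return max(options, key=lambda name: min(o for o, _ in options[name]))

lotteries = {
    "A": [(-18, 0.15), (0, 0.85)],  # expected value -2.70
    "B": [(-50, 0.05), (0, 0.95)],  # expected value -2.50 (better)
}
print(minimax_choice(lotteries))  # A: its worst outcome (-18) beats B's (-50)
```

The rule picks A even though B has the better expected value, mirroring the pattern reported above in which participants avoided the worse side effect irrespective of its probability.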

This paper does not present things in ways that I have seen often. It focuses on the most compensatory analytic strategies, like prospect theory, and says that these strategies do not reflect how we make decisions that are emotionally laden. It suggests that simplifying lexicographic strategies may be more appropriate. Other studies, using decision times and eye tracking instead of fMRI, have also suggested that compensatory analytic strategies do not reflect actual decision making, although not as definitively. We also know it from our own experiences. However, from my understanding, this does not necessarily push us to lexicographic strategies. There are compensatory strategies, like parallel constraint satisfaction, that might also be the explanation. It may be that this is just part of the debate between cognitive niches and parallel constraint satisfaction or evidence accumulation decision models. Fuzzy-trace theory is another candidate that is not a lexicographic strategy.