In the spirit of fuzzy-trace gist intuition, I will summarize posts here, with the newest posts on top.
Stemmler’s doctoral dissertation, “Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach” (post Simultaneous Feeling and Deciding), also made me aware of the concept of grounded cognition. Stemmler indicates that his findings are mainly in line with current approaches to grounded cognition. Models of grounded cognition assume that cognitive processing and conceptual knowledge are grounded in the perceptual and action systems. According to Stemmler, in these approaches the experience of an emotion is based on the way a situation is temporarily conceptualized or categorized. Conceptualizations of emotions are abstract conceptual constructs that aggregate information from different perceptual and action systems.
The post is based on a commentary, “Is Anything Sacred Anymore?” that appeared in Psychological Inquiry, 23: 155-161, 2012. The authors are Peter H. Ditto, Brittany Liu, and Sean P. Wojcik. The commentary examines the paper “The Moral Dyad: A Fundamental Template Unifying Moral Judgment,” by Gray, Waytz, and Young, that appeared in Psychological Inquiry: An International Journal for the Advancement of Psychological Theory, 23:2, 206-215. I have found commentary articles easier to understand since they have to examine two or more positions.
This post is based on a doctoral dissertation: “Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach,” written by Thomas Stemmler at the University of Würzburg. Stemmler does a good job of fitting together some ideas that I have been unable to fit together. Ideas of Haidt, Glockner, Lerner, and Holyoak are notably connected. He conducted five experiments examining guilt and cooperation to test, in the simplest terms, the hypothesis that making a moral judgment is closer to making an aesthetic judgment than to reasoning about the moral justifications of an action, and that moral intuitions come from moral emotions. The hypothesis is based on Jonathan Haidt’s idea that the role of reasoning is literally to provide reasons (or arguments) for the intuitively made judgment if there is a need to communicate it. Part of the hypothesis is also that emotional intuitions in moral decision making are the result of compensatory information processing that follows principles of parallel constraint satisfaction (PCS).
My most examined post, What Has Brunswik’s Lens Model Taught?, was based on a paper authored by Karelaia and Hogarth. It only seems right to look at some of Karelaia’s other work. This post is based on a literature review built around the premise that even when it comes to making decisions, an activity that is often quite conscious, deliberate, and intentional, people are typically not as aware as they could be. Karelaia and Reb argue that as a result, decision quality may suffer, and that mindfulness, the state of being openly attentive to and aware of what is taking place in the present, both internally and externally, can help people make better decisions.
This post looks at the medical/health component of decision making as addressed in Gerd Gigerenzer’s new book, Risk Savvy: How to Make Good Decisions. He points out both the weaknesses of screening tests and the weaknesses in our understanding of the results. We have to overcome our tendency to see linear relationships where they are nonlinear. Doctors are no different. The classic problem is an imperfect screening test for a relatively rare disease. You cannot think in fractions or percentages; you must think in absolute frequencies. Breast cancer screening is one example. Generally, it can catch about 90% of breast cancers, while about 9% of women who do not have breast cancer still test positive. So if you have a positive test, chances are you have breast cancer? No! You cannot let your intuition get involved, especially when the disease is rarer than the test’s mistakes. If we assume that 10 out of 1000 women have breast cancer, then 90%, or 9, will be detected, but about 90 of the 1000 women who do not have the disease will also test positive. Thus only 9 of the roughly 99 who test positive actually have breast cancer. I know this, but give me a new disease or a slightly different scenario, let a month pass, and I will still be tempted to shortcut the absolute frequencies and get it wrong.
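The natural-frequency arithmetic above can be checked with a few lines of code. This is only an illustrative sketch using the round numbers from the post (10 in 1000 prevalence, 90% detection, 9% false positives), not anything from Gigerenzer’s book itself:

```python
# Natural-frequency sketch of the screening example above (illustrative numbers).
population = 1000           # women screened
sick = 10                   # 10 in 1000 actually have breast cancer
sensitivity = 0.90          # the test catches ~90% of true cancers
false_positive_rate = 0.09  # ~9% of healthy women test positive anyway

true_positives = sick * sensitivity                   # 9 detected
healthy = population - sick                           # 990 women without cancer
false_positives = healthy * false_positive_rate       # ~89 healthy positives

# Chance that a positive test really means cancer:
p_cancer_given_positive = true_positives / (true_positives + false_positives)
print(round(p_cancer_given_positive, 2))  # about 0.09, not 0.90
```

The counterintuitive part is visible in the denominator: the roughly 89 false positives swamp the 9 true positives, which is exactly what happens when the disease is rarer than the test’s mistakes.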
Gerd Gigerenzer has a 2014 book out entitled Risk Savvy: How to Make Good Decisions, which is a refinement of his past books for the popular press. It is a little too facile, but it is worthwhile. Gigerenzer has taught me much, and he will likely continue to. My discussion of the book will be divided into two posts. This one will be a general look, while the next post will concentrate on Gigerenzer’s take on medical decision making.
As in many books like this, the notes provide insight. Gigerenzer points out his disagreements with Kahneman over heuristics all being part of the unconscious system. As he notes, heuristics, for instance the gaze heuristic, can be used consciously or unconsciously. This has been a major issue in my mind with Kahneman’s System 1 and System 2. Kahneman throws heuristics exclusively into the unconscious system. I also side with Gigerenzer against Kahneman, Ariely, and Thaler in their tendency to equate the unconscious system with bias. As Gigerenzer states: “A system that makes no errors is not intelligent.”
This post is a reaction to the column by Bret Stephens that appeared in the October 21, 2014, Wall Street Journal, entitled: “What the Ebola Experts Miss.” The column starts out:
Of course we should ban all nonessential travel from Liberia, Guinea, Sierra Leone and any other country badly hit by the Ebola virus.
I have been doing ordinary things lately, but working on this blog has not been one of them. Ordinary things have included fixing a garage door, cutting down a tree, buying a car, harvesting two hundred pounds of grapes, figuring out what to do with them, and getting a new roof. You would not believe that such things could keep me so busy for over a month, but they did. My mind seems more ready for heuristics than analysis. Konstantinos V. Katsikopoulos has written many interesting papers, but somehow I have not included them before. This paper takes a slightly different look at a familiar topic.
I first read Lindblom’s paper, “The Science of Muddling Through,” in a comparative systems political science class. It appealed to me then, and after more than forty years it still does. He did a good job of exposing the logical extreme of the rational model as ridiculous, at least in government. At the same time, he used terminology for his incremental model that made it difficult to publicly embrace. As a city planner, I could decry “disjointed incrementalism” to try to get elected officials to look a bit further into the future, but had he called it coherent accumulation, I would not have ever had a chance. After fifty-five years, many of his examples still seem quite relevant.
This post is based on a July 2009 paper, “Strategic Decision Making Paradigms: A Primer for Senior Leaders,” written by Col. Charles D. Allen and Dr. Breena E. Coates, both of the Army War College. Although much in the paper has been touched upon in prior posts, it summarizes several models for strategic decision making, including some with which I am unfamiliar. It is public sector oriented and includes some good examples related to the defense of the nation. It also sets the stage for another post on muddling through. Strategic decisions entail “ill-structured,” “messy,” or “wicked” problems that do not have quick, easy solutions. They often end in the so-called “error of the third kind,” where a complex problem is addressed with a correct solution to the wrong problem.
This post is based on the paper “The Affect Gap in Risky Choice: Affect-Rich Outcomes Attenuate Attention to Probability Information,” authored by Thorsten Pachur, Ralph Hertwig, and Roland Wolkewitz, which appeared in Decision, 2013, Volume 1, No. 1, pp. 64-78. This is a continuation of the affect/emotion theme. It is more of a valence-based idea than Lerner’s Appraisal Tendency Framework. It is more about thinking about emotion than actually experiencing it, although the two can come together.
Often risky decisions involve outcomes that can create considerable emotional reactions. Should we travel by plane and tolerate a minimal risk of a fatal terrorist attack or take the car and run the risk of traffic jams and car accidents? How do people make such decisions?
This post is based on a paper by Rebecca Ferrer, William Klein, Jennifer Lerner, Valerie Reyna, and Dacher Keltner: “Emotions and Health Decision-Making: Extending the Appraisal Tendency Framework to Improve Health and Healthcare,” in Behavioral Economics and Public Health, 2014.
The authors use the appraisal tendency framework (ATF) to predict how emotions may interact with situational factors to improve or degrade health-related decisions. The paper examines four categories of judgments and thought processes as related to health decisions: risk perception, valuation and reward-seeking, interpersonal attribution, and depth of information processing. They illustrate ways in which a better understanding of emotion can improve judgments and choices regarding health.
This is the second post based on a paper: “Emotion and Decision Making,” that is to appear in the 2014 Annual Review of Psychology. It was written by Jennifer S. Lerner, Ye Li, Piercarlo Valdesolo, and Karim Kassam.
David Hume: “Reason is, and ought only to be, the slave of the passions, and can never pretend to any other office than to serve and obey them.”
Still, most of us have made some bad decisions under the influence of emotion. There are unwanted effects of emotion on decision making, but as Lerner et al note, they can only sometimes be reduced.
This post is based on a paper, “Emotion and Decision Making,” that is to appear in the 2014 Annual Review of Psychology. It was written by Jennifer S. Lerner, Ye Li, Piercarlo Valdesolo, and Karim Kassam. It is a review article. This post will present seven themes that Lerner et al draw from the literature. I will be examining the remainder of the review article in a post to follow. Previous posts have dealt with stress, regret, feeling is for doing, etc., but this post looks at the topic in a general way. I have made the mistake of thinking of emotion as just feeding intuition; this paper reemphasizes how big a mistake that is.
The last post, Persistence, looked at persistence as being partially determined by the distribution of the waiting times for the reward. A fat-tailed distribution might rationally steer one toward giving up after a short waiting period. Robin Hogarth (Educating Intuition) has recently published a paper with Marie Claire Villeval, “Ambiguous Incentives and the Persistence of Effort: Experimental Evidence,” in the Journal of Economic Behavior & Organization, Volume 100, April 2014, pages 1-19, that looks at economic activities where the reward is mundane–money. It is more aimed at what determines our persistence from the employer’s point of view, but I believe it could be more broadly applicable.
This post is based on a 2013 paper, “Rational Temporal Predictions Can Underlie Apparent Failures to Delay Gratification,” by Joseph T. McGuire and Joseph W. Kable, that appeared in Psychological Review, Vol. 120, No. 2, 395–410. An important category of seemingly bad decisions involves failure to postpone gratification. A person pursuing a desirable long-run outcome may abandon it in favor of a short-run alternative that has been available all along. The authors’ account recognizes that decision makers generally face uncertainty about the time at which future outcomes will materialize. When timing is uncertain, the value of persistence depends crucially on the nature of a decision maker’s prior temporal beliefs–the expected distribution of waiting times. If you expect an exponential or normal distribution of waiting times, you will not typically expect your remaining wait to grow. However, in fat-tailed distributions, once you have waited a while, a delay’s predicted remaining length increases as a function of time already waited. In this type of situation, the rational, utility-maximizing strategy is to persist for a limited amount of time and then give up. They conclude that delay-of-gratification failure, generally viewed as a manifestation of limited self-control capacity, can instead arise as an adaptive response to the perceived statistics of one’s environment.
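The distributional point above is easy to demonstrate with a simulation. This is my own minimal sketch, not the authors’ model: draw waiting times from a memoryless exponential distribution and from a fat-tailed Pareto distribution, then ask how much longer you should expect to wait given how long you have waited already.

```python
# Sketch: how prior temporal beliefs change the value of continued waiting.
import random

random.seed(1)

def expected_remaining(waits, already_waited):
    """Mean extra wait among simulated delays that lasted past `already_waited`."""
    survivors = [w - already_waited for w in waits if w > already_waited]
    return sum(survivors) / len(survivors)

n = 200_000
exponential = [random.expovariate(1.0) for _ in range(n)]  # mean wait of 1
pareto = [random.paretovariate(1.5) for _ in range(n)]     # fat-tailed waits

# Exponential waits are memoryless: the expected remaining wait stays near 1
# no matter how long you have already waited.
print(expected_remaining(exponential, 0.5), expected_remaining(exponential, 2.0))

# Fat-tailed waits behave differently: the longer you have waited, the longer
# you should expect to keep waiting, so limited persistence can be rational.
print(expected_remaining(pareto, 2.0), expected_remaining(pareto, 8.0))
```

For the Pareto draws the expected remaining wait roughly doubles when the time already waited doubles, which is the situation where giving up after a limited wait maximizes expected utility.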
This post is a continuation of the theme of when decisions are made and how we delay or wait or decide not to decide. This post is based on a 2014 paper by Teichert, Ferrera, and Grinband, “Humans Optimize Decision-Making by Delaying Decision Onset,” in PLoS ONE. Again, this paper is beyond my understanding at least as to the details. It has some excellent figures and graphics that are pretty, but I do not think that I really understand them. These are my shortcomings. What interests me about this is the contrast with my previous post Deciding not to Decide. This paper examines decision onset and nondecision time while “Deciding not to Decide” suggested an explicit decision to inhibit the decision. I find making an inhibitory decision a more satisfying explanation than delaying decision onset, although they could be the same thing or the situations may be so different that there is no real comparison.
This post is an executive summary of a 2013 paper about deciding not to decide (“Deciding Not to Decide: Computational and Neural Evidence for Hidden Behavior in Sequential Choice,” by Sebastian Gluth, Jorg Rieskamp, and Christian Buchel, which appeared in PLoS Computational Biology 9(10)). Quite frankly, the details of the paper are beyond me, but the general ideas are interesting.
Many decisions are not triggered by a single event but based on multiple sources of information. When purchasing a new computer, for instance, we certainly look at the price, but not without accounting for further aspects like capabilities, quality and appearance. According to Gluth et al, usually, these multi-attribute decisions evolve sequentially, that is, as long as the collected evidence is insufficient to motivate a particular choice we search for more information to resolve our uncertainty. Importantly, such ‘‘decisions not to decide’’ are not directly observable but can promote significant changes in behavior.
David Brooks has a way of irritating me. For some reason, he seems like a very serious person, so I cannot dismiss him out of hand. But on June 16, 2014, he wrote “The Structures of Growth—Learning is no Easy Task” in the New York Times, about certain human activities having logarithmic learning functions and others having exponential ones. I realize that I am envious of his being able to push such sloppy work out the door to millions of readers. It is just a column, but read it for yourself.
His basis was a blog by Scott H. Young in early 2013, who as far as I can tell made much less outlandish representations about learning or domains of growth. Young explains that anything that you try to improve will have a growth curve, and that it is a mistake to assume that it will be linear. Young says that athletic performance, productivity, and mastery of a complex skill tend to be logarithmic. Early progress on logarithmic growth activities can make you overconfident if you do not realize that the curve will soon flatten. He notes that exponential functions tend to be limited to ranges and apply to technological improvement, business growth, wealth, and rewards to talent.
Continuing on the delay theme, this post is based on the paper: “Delay, Doubt, and Decision: How Delaying a Choice Reduces the Appeal of (Descriptively) Normative Options” written by Niels Van de Ven, Thomas Gilovich, and Marcel Zeelenberg, that appeared in Psychological Science in 2010.
The authors examined whether choosing to delay making a choice between a focal option and an alternative tends to make people subsequently less likely to choose what they would otherwise have chosen. They based their efforts on a regularity in elections in the United States that is known as the incumbent rule.
As you get older even those of us not labeled as procrastinators realize that some decisions never have to be made. You can wait a little bit and it becomes irrelevant or the decision becomes obvious. Using my adaptation of the parallel constraint satisfaction model, your intuitive processing often does not come up with a clear cut answer and sends the analytic system out for more information. This is a common point for us to insert delay if we can. Other times we make a decision and then get an opportunity to change it without any real penalty. Frank Partnoy’s book Wait- The Art and Science of Delay examines the overall issue mostly with a series of anecdotes. The book provides some insights.
This post is based on “A spiral model of musical decision-making,” written by Daniel Bangert, Emery Schubert, and Dorottya Fabian, which appeared in Frontiers in Psychology on April 22, 2014. Research has shed light on how both intuition and deliberation are used by musicians. Bangert et al. refer to Hallam, who interviewed twenty-two performers about their practice habits and found differences between “intuitive/serialists,” who allowed their interpretation to evolve unconsciously, and “analytic/holists,” who relied on deliberate, conscious analysis of the piece. Other research has shown that while performing, musicians pay deliberate attention to certain specific musical aspects (performance cues) and also have spontaneous performance thoughts.
This post is based on a paper by Amy L. Baylor, “A U-Shaped Model for the Development of Intuition by Expertise,” that appeared in New Ideas in Psychology in 2001. I am bringing her ideas up now because they are important for my next post. Although today intuition seems to have become unconscious thinking, Baylor saw it as closer to insight, far more special, in this paper. I notice this more because I just finished reading Seeing What Others Don’t by Gary Klein, which is about insight. Baylor’s questions are: Does a more naive view of a field lead to greater new insights? Or does expertise facilitate one’s capability for intuition in a given field? How can these two positions be reconciled? It is also interesting that Baylor’s references do not duplicate authors that I have seen before.
This is the next step in my continuing trip to look at what a dual process theory means and whether or not it is a useful distinction. This post looks at the 2013 paper that appeared in Perspectives on Psychological Science, “Dual-Process Theories of Higher Cognition: Advancing the Debate,” written by Jonathan Evans and Keith Stanovich. Their paper divides up their ideas somewhat, but for simplicity, I am pretending they speak in unison. From what I can tell, these are the current dual-process literature review guys.
Intuition and deliberation do not seem to be completely distinct processes. To account for the underlying processes of intuitive and deliberate decision making, models that postulate a common underlying process, such as a parallel constraint satisfaction mechanism, seem to be more suitable. Overall, according to the authors, the reported experiments add to the accumulating body of evidence that automatic information integration plays a crucial role in decision making, independent of whether people decide intuitively or deliberately. Eyetracking technology seems to be a promising approach for investigating these automatic processes.
I should note that this paper indicates the primacy of intuitive automatic processes which seems to differ from Hammond’s idea that quasi-rationality is where decisions begin. My interpretation is, of course, suspect.
Hammond’s cognitive continuum theory proposes that different forms of cognition (intuitive, analytical, common sense) are situated in relation to one another along a continuum that places intuitive processing at one end and analytical processing at the other. The properties of reasoning (e.g., cognitive control, awareness of cognitive ability, speed of cognitive activity) vary in degree, and the structural features of the tasks that invoke reasoning processes also vary along the continuum, according to the degree of cognitive activity they are predicted to induce.
This post attempts to summarize: “Toward a computational theory of conscious processing” by Stanislas Dehaene, Lucie Charles, Jean-Remi King and Sebastien Marti that appeared in Current Opinion in Neurobiology 2014, 25:76–84. Even more than normally in my posts, I should note that if this were a research paper, everything should probably be in quotations or have a footnote. None of the ideas are mine except for my mistakes. The paper is a review of the research done so far. The post The Global Neuronal Workspace is based on Dehaene’s work and might also be of interest. Connectome–How the Brain’s Wiring Makes Us Who We Are, by Sebastian Seung is extremely readable.
I have ignored group decision making to a large extent, but bootstrapping has somehow brought me back to it–especially dialectical bootstrapping which seems to be one person group decision making. Obviously, group decision making is important. This post will focus on political decision making. Two books from 2007, Scott Page’s: The Difference — How the Power of Diversity Creates Better Groups, Firms, Schools and Societies and Bryan Caplan’s: The Myth of the Rational Voter–Why Democracies Choose Bad Policies look at it from far apart.
How can a set of individually mediocre estimates become superior when averaged? The secret is a statistical fact that, although well known in measurement theory, has implications that are often not intuitively evident. A subjective quantitative estimate can be expressed as an additive function of three components: the truth (the true value of the estimated quantity), random error (random fluctuations in the judge’s performance), and systematic error (i.e., the judge’s systematic tendency to over- or underestimate the true value). Averaging estimates increases accuracy in two ways: it cancels out random error, and it can reduce systematic error.
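The decomposition above can be sketched in a few lines. This is my own toy simulation with invented bias and noise numbers, not anything from the paper: each judge’s estimate is truth plus a systematic bias plus random noise, and the averaged estimate beats the typical individual.

```python
# Toy sketch: estimate = truth + systematic bias + random error.
# Averaging cancels random error and can reduce systematic error
# when the judges' biases point in different directions.
import random

random.seed(0)
truth = 100.0
biases = [-8.0, 3.0, 6.0]   # invented: each judge's systematic over/underestimate
n_rounds = 10_000

individual_sq_err = [0.0] * len(biases)
average_sq_err = 0.0
for _ in range(n_rounds):
    estimates = [truth + b + random.gauss(0, 10) for b in biases]
    for i, e in enumerate(estimates):
        individual_sq_err[i] += (e - truth) ** 2
    avg = sum(estimates) / len(estimates)
    average_sq_err += (avg - truth) ** 2

mean_individual = sum(individual_sq_err) / (len(biases) * n_rounds)
mean_average = average_sq_err / n_rounds

# The averaged judge has a much lower mean squared error than the
# typical individual judge.
print(mean_individual, mean_average)
```

Note that the averaging helps twice here: the noise variance shrinks by a factor of the group size, and the biases (-8, +3, +6) partly cancel because they bracket the truth. If every judge were biased in the same direction, averaging would only remove the random error.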
This is a mild revolution for me. I was always irritated when someone suggested that someone should pull himself up by his bootstraps. This seemed quite impossible to me. But apparently even my computer is bootstrapping when it is booting. ‘‘Bootstrapping’’ alludes to Baron Munchhausen, who claimed to have escaped from a swamp by pulling himself up by, depending on who tells the story, his own hair or bootstraps.
The checklist is a heuristic. Gigerenzer explains that there needs to be something between mere intuition and complex calculations, and those somethings might often be called rules of thumb. Although a checklist can be many things, it also fits between mere intuition and a pile of analytic reasoning. The best checklists are like Gigerenzer’s fast and frugal tree, where you ask yes-or-no questions starting with the most important and work your way down the tree to the decision. Gigerenzer talks about “ecological rationality”–the match between the structure of a heuristic and the structure of an environment.
I have mentioned Michael Mauboussin’s book The Success Equation before, but this will be the closest I come to a review. However, his notes and bibliography somehow miss both Ken Hammond and Robin Hogarth which frankly seems unlikely. Hogarth’s books Educating Intuition (post Learning, Feedback and Intuition) and Dance with Chance (post Dancing with Chance) have much in common.
Hogarth reminded me of this when he noted our poorer performance when dealing with nonlinear relationships: “Human achievement is lower when there are nonlinearities in the ecology.” (What Has Brunswik’s Lens Model Taught?). This reminds me of derivative financial instruments. My intuition cannot handle a straddled put (I think I made that up). I can learn and feed it in, but give me 15 seconds to figure it out and my performance will be worse than chance. This is kind of a big deal if John von Neumann’s analogy is correct: that studying nonlinear relationships is similar to studying non-elephants.
This post is based on the paper, “Fuzzy Trace Theory and Medical Decisions by Minors: Differences in Reasoning between Adolescents and Adults,” by Evan Wilhelms and Valerie Reyna that appeared in the June 2013, Journal of Medical Philosophy. This is an application of Fuzzy Trace Theory to the medical decision setting. The concept is more generally addressed in the first of three posts: FTT Meaning, Memory, and Development.
The Take the Best heuristic probably deserves its own post, and this is it. Heuristic decision-making models like Take the Best rely on environmental regularities. They conduct a limited search and ignore available information by assuming there is structure in the decision-making environment. Take the Best relies on at least two regularities: diminishing returns, which says that information found earlier in search is more important than information found later; and correlated information, which says that information found early in search is predictive of information found later.
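The stopping rule that makes Take the Best frugal is easy to show in code. This is a minimal sketch with made-up cues, not a reimplementation from any particular paper: cues are examined in order of validity, search stops at the first cue that discriminates between the two alternatives, and everything after that cue is ignored.

```python
# Minimal sketch of Take the Best for a two-alternative forced choice.
# Cue values are 1 (cue present), 0 (absent), or None (unknown), ordered
# from most valid to least valid.
def take_the_best(cues_a, cues_b):
    for a, b in zip(cues_a, cues_b):
        if a is not None and b is not None and a != b:
            # First discriminating cue decides; later cues are never examined.
            return "A" if a > b else "B"
    return "guess"  # no cue discriminates: choose at random

# Which of two cities is larger? Invented cues: national capital?
# major airport? university?
print(take_the_best([1, 1, 1], [1, 0, 1]))  # "A": the second cue decides
```

The second regularity in the paragraph above (correlated information) is what justifies the early exit: if early cues predict later ones, the ignored cues would mostly have agreed with the decision anyway.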
A simple but common type of decision requires choosing which of two alternatives has the greater (or the lesser) value on some variable of interest. Examples of these forced-choice decisions range from the everyday (e.g., deciding whether a red or a green curry will taste better for lunch), to the moderately important (e.g., deciding whether Madrid or Rome will provide the more enjoyable holiday), to the very important (e.g., deciding whether cutting the red or the black wire is more likely to lead to the destruction of the world).
This post is based on the paper “Single-process versus multiple-strategy models of decision making: Evidence from an information intrusion paradigm,” written by Anke Söllner, Arndt Bröder, Andreas Glöckner, and Tilmann Betsch. It is a well-done overview of multi-attribute decision models (multi-attribute decision making deals with preferential choices, e.g., “Which dessert do you like better?”, and probabilistic inferences, e.g., “Which dessert contains more calories?”), along with clever experiments.
This post is based on a paper that does a good job of providing a general picture of some of the big questions in decision making. Transitivity is usually considered required for rationality. In this paper, they use it as a measure of both intuition and analysis. Of course, transitivity does not work in rock, paper, scissors, and humans seem to be able to be quite irrational in certain of their preferences.
This post examines a paper by Robin Hogarth and Natalia Karelaia using the lens model to study the way humans make decisions that are probabilistically related to cues. The simple beauty of Brunswik’s lens model lies in recognizing that the person’s judgment and the criterion being predicted can be thought of as two separate functions of cues available in the environment of the decision. The accuracy of judgment therefore depends, first, on how predictable the criterion is on the basis of the cues and, second, the extent to which the function describing the person’s judgment matches its environmental counterpart.
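The two-part account of accuracy above lends itself to a small simulation. This is my own sketch with invented cue weights, not Hogarth and Karelaia’s actual analysis: a criterion is a noisy weighted function of cues, and a judge whose weighting policy matches the environment achieves higher accuracy than one whose weights are mismatched.

```python
# Lens-model sketch: accuracy = correlation between judgment and criterion.
# It depends on (1) how predictable the criterion is from the cues and
# (2) how well the judge's cue weights match the environment's weights.
import random

random.seed(42)

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

env_weights = [0.7, 0.2, 0.1]       # how the criterion really depends on cues
matched_judge = [0.7, 0.2, 0.1]     # policy matches the environment
mismatched_judge = [0.1, 0.2, 0.7]  # policy weights the wrong cues

criterion, matched, mismatched = [], [], []
for _ in range(5000):
    cues = [random.gauss(0, 1) for _ in range(3)]
    noise = random.gauss(0, 0.5)    # the environment is only partly predictable
    criterion.append(sum(w * c for w, c in zip(env_weights, cues)) + noise)
    matched.append(sum(w * c for w, c in zip(matched_judge, cues)))
    mismatched.append(sum(w * c for w, c in zip(mismatched_judge, cues)))

# The matched judge is more accurate, but even that judge is capped by the
# noise term: no policy can beat the predictability of the environment.
print(correlation(criterion, matched), correlation(criterion, mismatched))
```

The ceiling imposed by the noise term is the first factor in the paragraph above (environmental predictability); the gap between the two judges is the second (policy matching).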
I used material from Ken Hammond in his book Human Judgment and Social Policy, Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice in my previous post. In the book he makes the point, previously lost on me, that a key risk is never mentioned in discussions of the 1986 Challenger disaster–the risk of a false negative.
Sometimes it is easy to figure out the subject of the next post. Other times, nothing seems interesting. In my opinion, no one has written on the subject of judgment and decision making in a more insightful and interesting way than Kenneth R. Hammond. I have looked at two of his books in previous posts: Judgments Under Stress and Beyond Rationality (three posts). For now I am going to cherry pick part of the epilogue of his book Human Judgment and Social Policy, Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice.
This is clearly an example of the blind trying to lead when sight is a real advantage. Glockner displays PCS1 and PCS2 in some figures in his January 2014 paper in the journal Judgment and Decision Making. Since I tend to look at the pictures, this got my interest. Was this some different model or some innovation? I have provided some narrative explanations of parallel constraint satisfaction in earlier posts, but here I am going to look at the difference between PCS1 and PCS2.
This post is based on a paper by several scientists at the Max Planck Institute for Research on Collective Goods. It is a merging of several things that I have been interested in over the years: social psychology, public good economics, city planning, and epidemiology (at least in a metaphoric sense). Politicians loved the simplicity of “broken windows,” and I was willing as a city planner to use it if it got more resources for what I wanted. Being tough on crime was an easier sell than normal city planning administration.
This is the second post based on Pohl’s paper, “On the Use of Recognition in Inferential Decision Making.” Pohl looks at what have come to be seen as weaknesses of the recognition heuristic.
“Intuition is nothing more or less than recognition.” Daniel Kahneman delivers this line, crediting Simon, in Thinking, Fast and Slow. Pohl’s article does not address this statement, but it helps me address it. Maybe the statement is not making intuition simpler, but making recognition much more complicated.
What does this post have to do with judgment and decision making? In the long run, it might have some connection. For now, I just think that it is a cool experiment. During education, the human brain learns the decimal system and, ultimately, it becomes very intuitive that the digit 4 in 41 stands for four decades, while the digit 4 in 14 stands for four units. But what is it exactly that we understand? In investigating these questions, Dotan and Dehaene aimed not only to describe the various cognitive representations of numbers in educated adults, but also to dissect the successive stages by which multi-digit Arabic numbers are converted into quantities.
This post is based on the paper presented at the 2013 Annual Conference of the of the Cognitive Science Society, “Justified True Belief Triggers False Recall of “Knowing”” by Derek Powell, Zachary Horne, Angel Pinillos, and Keith J. Holyoak. People’s beliefs are the primary drivers of their actions, yet these beliefs are often uncertain—the products of limited information about the world and interconnections between other (often uncertain) beliefs.
This post is based on the paper “Can We Trust Intuitive Jurors? Standards of Proof and the Probative Value of Evidence in Coherence-Based Reasoning,” by Glockner and Engel. They explain that jury members have a difficult task. They have to make decisions based on pieces of information that are usually contradictory, essentially always incomplete, presented in multiple formats (making them hard to compare and integrate), and introduced by parties clearly intending to bias the jury. How do jury members then make meaningful decisions? Their behavior is explained by sense making and constructing coherent stories from the evidence. Jurors attempt to create complete narratives from the pieces of evidence they hear.
This post is snarky, but I could not resist. Satel’s book is a masterpiece of wild assertions countered with broad caveats. Brooks took it and flew away.
One of the great issues in decision making is knowing which feedback is worthwhile. A player like Ryan Succop gets feedback from every kick. A kick that is made but does not split the uprights can set him thinking on his technique. The post might have been better titled regression to the mean lite.
I have found the AHRQ (Agency for Healthcare Research and Quality) to give generally good advice in its publications. This post is based on “Chapter 6. Clinical Reasoning, Decision Making, and Action: Thinking Critically and Clinically,” by Patricia Benner, Ronda G. Hughes, and Molly Sutphen in Patient Safety and Quality: An Evidence-Based Handbook for Nurses, 2008. I think of the AHRQ as the guideline or checklist people in the Gigerenzer tradition. The authors are important in nursing education, and it would not surprise me if nursing educators had contributions to make to the area of judgment and decision making as a whole. Having had occasion to see nurses in action, I have respect for their ability to know which rules to follow and which to ignore.
This post brings up the latest paper by Dan Kahan and his colleagues, Erica Dawson, Ellen Peters, and Paul Slovic: “Motivated Numeracy and Enlightened Self-Government.” The experiment was designed to test two opposing accounts of conflict over decision-relevant science. The first, the Science Comprehension Thesis (“SCT”), attributes such conflicts to the limited capacity of the public to understand the significance of valid empirical evidence. The second, the Identity-protective Cognition Thesis (“ICT”), sees a particular recurring form of group conflict as disabling the capacities that individuals have to make sense of decision-relevant science. When policy-relevant facts become identified as symbols of membership in and loyalty to affinity groups that figure in important ways in individuals’ lives, individuals will be motivated to engage empirical evidence and other information in a manner that more reliably connects their beliefs to the positions that predominate in their particular groups than to the positions that are best supported by the evidence.
Holyoak in his chapter explains that analogy is closely related to metaphor and related forms of symbolic expression that arise in everyday language (e.g., “the evening of life,” “the idea blossomed”), in literature, the arts, and cultural practices such as ceremonies. Metaphors are a special kind of analogy, in that the source and target domains are always semantically distant, and the two domains are often blended rather than simply mapped (e.g., in “the idea blossomed,” the target is directly described in terms of an action term derived from the source). Metaphors impact our decision making.
According to Keith Holyoak, the most important influence on analogy research in the cognitive-science tradition has been concerned with the representation of knowledge within computational systems. Holyoak credits philosopher Mary Hesse, who was in turn influenced by Aristotle’s discussions of analogy in scientific classification and Black’s interactionist view of metaphor. Hesse placed great stress on the purpose of analogy as a tool for scientific discovery and conceptual change, and on the close connections between causal relations and analogical mapping.
This post is long in coming. It is surprising that analogy has not come up before in over 70 posts. I can certainly recall times when someone convinced me with a clever analogy, and then it turned out to be dead wrong. Nevertheless, we need analogy and we are vulnerable to analogy. It seems to me that we try to turn our love of stories, anecdotal evidence, into analogies, sometimes with generally bad results. Keith Holyoak is one of the leaders in trying to understand how humans use analogy.
In my 60s, I can attest to my weakened ability to recall. It is ridiculous. This post looks at a paper whose most prominent authors are Brainerd and Reyna, the creators of fuzzy trace theory. “Dual-Retrieval Models and Neurocognitive Impairment” appeared online on August 26, 2013, in the Journal of Experimental Psychology. (The post also uses an online source, The Cornell Chronicle, in an article dated September 5, 2013, entitled “Breakthrough discerns normal memory loss from disease,” written by Karene Booker.) It comes up with some interesting conclusions.
Dan Kahan has an article in the October 2013 issue of Science, “A Risky Science Communication Environment for Vaccines,” with a specific example of the HPV vaccine issues. Kahan has written a good article, and one that may not have pleased several people. Kahan says quite well, although indirectly, that vaccination is not really a cultural cognition issue yet, but we could make it one if we are not careful.
This post is based on the 2011 paper by Julian Marewski and Lael Schooler published in the Psychological Review, “Cognitive Niches: An Ecological Model of Strategy Selection.” How do people select among different strategies to accomplish a given task? By using ACT-R along with heuristic decision strategies, the authors can create a more general bidirectional model that seems to be competitive with such models as parallel constraint satisfaction.
The idea of bidirectional reasoning seems to have really gotten going with a 1999 paper entitled “Bidirectional Reasoning in Decision Making by Constraint Satisfaction.” The researchers wrote that one of the most deep-rooted assumptions about human reasoning is that the flow of inference is inherently unidirectional, moving from premises accepted as given to inferred conclusions. Holyoak et al. posited an alternative conception of reasoning and decision making in which inferences are inherently bidirectional, so that the distinction between premises and conclusion is blurred.
I came across ACT-R and determined it to be worthy of a look when two arguing European psychologists spoke of it approvingly. What is it? The acronym stands for Adaptive Control of Thought – Rational. It was created by John Anderson of Carnegie Mellon. Carnegie Mellon is one of the preeminent computer science schools, if not the preeminent one, and ACT-R has roots in artificial intelligence.
A paper, “Unconscious influences on decision making: A critical review,” by Ben R. Newell and David R. Shanks, which has been “to be published in Behavioral and Brain Sciences” for over a year, concludes that there is no proof of unconscious influences on decision making. The paper seems to lump Dijksterhuis, Gladwell, Glockner, Kahneman, and Gigerenzer together as seeing unconscious influences where none have been proven.
I loved the book Why Men Rebel by Ted Robert Gurr. I read it over forty years ago. It gave me what seemed like a truth that I had not thought about before. That truth was: “Don’t expect anyone to be happy based on some threshold level of consumption and attainment of goals.” Our expectations are created by looking around and seeing what everyone else has and with media and communications, we all know what everyone else has. This is the concept of relative deprivation.
This post looks at the paper “Do people learn option or strategy routines in multi-attribute decisions? The answer depends on subtle factors.” In the study, a specific strategy that had been successful in several trials was still used after changes in the environment that rendered simpler solutions available. Routinization even prevented many participants from finding simple solutions to new problems in which the routinized strategy could not be used. Hence, routinization may be beneficial in a stable task environment, but it may become detrimental in a changing world.
This post looks at a little history and a little psychology that make clear that humans do not even try to maximize expected utility in certain circumstances. If this post were to fit in logically it should have been before prospect theory. Prospect theory manages to internalize these little foibles. The bottom line is that in certain circumstances we have plenty of brain power, but we still do not want to maximize expected utility (at least as normally measured).
This post examines two papers studying expertise and decision making. The first paper is “Expert intuitions: How to model the decision strategies of airport customs officers?” The researchers asked Swiss airport customs officers in Zurich and Bern and a novice control group to decide which passengers (described on several cue dimensions) they would submit to a search. Additionally, participants estimated the validities of the different cues. The second paper is “Deliberation Versus Intuition: Decomposing the Role of Expertise In Judgement and Decision Making.” This article has an interesting premise: basically, that real experts have both knowledge and experience.
The title of this post is probably stretching, but according to the deliberation without attention (DWA) hypothesis, people facing a difficult choice will make a better decision after a period of distraction than after an equally long period of conscious deliberation, an effect referred to as the unconscious thought advantage (UTA). This post looks at a paper, “The unconscious thought advantage: Further replication failures from a search for confirmatory evidence,” written by Mark Nieuwenstein and Hedderik van Rijn that appeared in Judgment and Decision Making, Vol. 7, No. 6, November 2012, pp. 779–798.
This post discusses a paper entitled: “Rational decision making: balancing RUN and JUMP modes of analysis.”
In his latest book, The World Until Yesterday, Jared Diamond looks at how several societies that have avoided technological advancements over long periods of recent time can teach us something. He indicates that you can look at these traditional societies as separate experiments of how a society develops and maybe by picking and choosing, you can find ways to enhance today’s first world societies. Dealing with risk and making decisions are part of that.
This post summarizes the second half of a paper entitled: “Theory-informed design of values clarification methods: A cognitive psychological perspective on patient health-related decision making.” It includes a summary of the general agreements of the four theories and seven recommendations based on these agreements to aid value clarification for patients. Again, I think it is almost amazing that these theories are being examined in one paper. I am impressed by the clarity and usefulness of the examination.
This post provides the gist of the first half of a paper entitled: “Theory-informed design of values clarification methods: A cognitive psychological perspective on patient health-related decision making,” that appeared in Social Science & Medicine 77 (2013), and was written by Arwen H. Pieterse, Marieke de Vries, Marleen Kunneman, Anne M. Stiggelbout, and Deb Feldman-Stewart.
The authors adopt the view of the multiple strategy approach, that people are equipped with a repertoire of different decision strategies, but when you do this the strategy selection problem arises: How does the decision maker determine which strategy to choose?
This post looks at a paper by Andreas Glockner and Thorsten Pachur entitled “Cognitive models of risky choice: Parameter stability and predictive accuracy of prospect theory,” which appeared in Cognition in 2012. The paper looks at the changeable parameters in prospect theory and tries to determine their explanatory value, and also the extent to which individuals have stable parameters.
Prospect theory is a descriptive model of decision making and considered by some the greatest psychological advance ever. A descriptive model tries to describe and predict actual behavior and not theoretically ideal behavior. It belongs to Daniel Kahneman and Amos Tversky, and earned Kahneman the Nobel Prize for economics.
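As a concrete illustration (mine, not from the post), here is prospect theory’s value function in Python, using the median parameter estimates from Tversky and Kahneman’s 1992 paper (α = 0.88 for diminishing sensitivity, λ = 2.25 for loss aversion). This is only a sketch of one piece of the theory, which also includes probability weighting.

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect theory value function: concave for gains,
    convex and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss looms larger than an equal-sized gain:
gain = pt_value(100)    # about 57.5
loss = pt_value(-100)   # about -129.5
```

The λ > 1 parameter is what makes the curve steeper on the loss side, producing the asymmetry between gains and losses that classical utility theory misses.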
The authors have organized the literature with respect to four key points that have emerged from research reported since 2007: (1) it is important to distinguish between temporal and nontemporal factors when conceptualizing processes involved in remembering the past and imagining the future; (2) despite impressive similarities between remembering the past and imagining the future, theoretically important differences have also emerged; (3) the component processes that comprise the default network supporting memory-based simulations are beginning to be identified; and (4) this network can couple flexibly with other networks to support complex goal-directed simulations.
Dan Gilbert, who was featured in the post on Regret, has done much work on affective forecasting. In many respects this post based on the paper co-written with T.D. Wilson, “Why the brain talks to itself: sources of error in emotional prediction,” is just a more generalized explanation of affective forecasting and its shortcomings.
This post will address the paper written by Kahneman and Gary Klein, his one-time archrival with respect to expertise. Kahneman is a proponent of heuristics and biases as explaining intuitive decision making, while Klein is a proponent of naturalistic decision making. Kahneman seems to look at what is wrong with decision making, while Klein seems to look at what is right.
This post is based on an interesting paper published in the November 2011 Information Systems Security Association Journal. “A Call to Arms: It’s Time to Learn Like Experts,” is authored by Jay Jacobs.
Ski guides who use helicopters or tracked vehicles to get skiers into the treasured deep powder must evaluate avalanche possibilities. Iain Stewart-Patterson of Thompson Rivers University, Kamloops, BC, examines avalanche expertise.
Robin Hogarth looks at expertise along with intuition.
Feedback is an important part of learning and accordingly decision making. Feedback can have varying levels of relevance. Relevance depends on which measure is selected and then measuring it well. Learning is further advanced if the consequences of error are greater.
Cass Sunstein is an accomplished member of the coherence school of decision making. This post looks at his two books referred to in the title of the post. The post will not be a good summary, but includes a few things I found interesting.
Bruce Hood is an experimental psychologist, and in Supersense he argues that beliefs in the supernatural are a consequence of reasoning processes about natural properties and events in our world. Robert Wright is an author and philosopher. He carries on with Atran and Axelrod’s “Who we are” with his idea of moral imagination, especially from the viewpoint of religion.
According to Glockner and Betsch, deliberate constructions (DCs) are the opportunity for the deliberate/analytical system to provide input into decision making. The Parallel Constraint Satisfaction rule holistically considers the information contained in a network.
Parallel Constraint Satisfaction Theory is a descriptive theory of decision making whose main proponents are Andreas Glockner and Tilman Betsch. They propose that decision making uses analytic processes for information search and production and intuitive (automatic) processes for combining information and making the decisions.
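To make the idea concrete, here is a minimal sketch of a parallel constraint satisfaction network in Python. The update rule, weights, and node labels are my illustrative assumptions, not Glockner and Betsch’s published model: activations spread in parallel over excitatory (positive) and inhibitory (negative) links until the network settles into a maximally coherent state.

```python
def pcs_settle(weights, external, steps=300, decay=0.05):
    """Minimal parallel-constraint-satisfaction sketch: node activations
    update in parallel until the network settles into a coherent state.
    weights[i][j] > 0 means nodes i and j support each other; < 0, they compete."""
    n = len(external)
    a = [0.0] * n
    for _ in range(steps):
        nxt = []
        for i in range(n):
            net = sum(weights[i][j] * a[j] for j in range(n)) + external[i]
            # interactive-activation style update, bounded to [-1, 1]
            delta = net * (1.0 - a[i]) if net > 0 else net * (a[i] + 1.0)
            v = a[i] + 0.1 * (delta - decay * a[i])
            nxt.append(max(-1.0, min(1.0, v)))
        a = nxt
    return a

# Two competing interpretations (nodes 0 and 1) and one evidence node (2).
weights = [[0.0, -0.2, 0.3],
           [-0.2, 0.0, 0.1],
           [0.3, 0.1, 0.0]]
external = [0.0, 0.0, 0.5]  # constant input to the evidence node
act = pcs_settle(weights, external)
```

After settling, the interpretation with the stronger link to the evidence node ends up with the higher activation; that winning activation pattern is the network’s holistic “decision.”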
Schiff indicates that, to an unacceptably large extent, clinical diagnosis is an open-loop system. Schiff says that physicians lack systematic methods for calibrating diagnostic decisions based on feedback from their outcomes. Worse yet, the organizations that employ these physicians have no way to learn from those thousands of diagnostic decisions.
Over the last few years, Bernanke’s gist has been that the Fed, among all possible actors, would do whatever it could to help the economy and reduce unemployment. His lengthy explanation last week indicated that he had been thinking a lot about being ready to change this. Maybe that changed the gist from full speed ahead to easing off. The slightly more complicated message changed the gist.
This post looks at a paper that was put together at a conference on risk perception and risk communication regarding vaccination decisions in the age of social media. The paper, “Opportunities and challenges of Web 2.0 for vaccination decisions,” was published in the journal Vaccine, 30 (2012) 3727–3733.
According to fuzzy-trace theory, experts differ developmentally from novices and thus should rely more on gist-based intuition. This view is not unlike Hogarth’s as presented in Educating Intuition. Reyna’s research has found that experts used less information and processed it less precisely than novices.
This is the second part of my look at fuzzy trace theory. Reyna first compares the predictions of prospect theory to fuzzy trace theory with respect to framing effects in adults. Framing effects describe shifts in risk preferences from risk aversion when prospects are described in terms of gains to risk seeking when the same prospects are described in terms of losses.
I am going to try to summarize the basic ideas in three separate posts based on Valerie Reyna’s paper: “A new intuitionism: Meaning, memory, and development in Fuzzy Trace Theory” found in the May 2012 issue of Judgment and Decision Making.
Robert Kurzban, evolutionary psychologist, suggests in this book that our brains are like iPhones with lots of apps. The apps or modules are just as likely to be unconnected as connected. That is why we can believe conflicting things all at once. Not only are some of the modules unconnected, but they may work better that way.
Gigerenzer says that we must teach risk literacy in medical school and statistical literacy to all in primary school. He and his colleagues go into considerable detail to say how this should be done.
This post tries to summarize some unique and interesting material with respect to the causes and consequences of our statistical illiteracy based on the Gigerenzer monograph.
Gerd Gigerenzer addresses the statistics of health decision making, which involves doctors, who have skills much like the public’s, as well as researchers, screening test providers, drug makers, and device makers, who tend to have excellent skills. They sometimes use those skills to take advantage of the doctors, patients, and the public (for my purposes, journalists are just part of the public). This monograph puts the pieces together and is discussed in three posts.
Decisions with consequences that are experienced over time are everywhere. They include spending, investments, diet, fertility, education, etc. The DU model, discounted utility, assumes that people evaluate the pleasures and pains resulting from a decision by exponentially discounting the value of outcomes according to how delayed in time they are.
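The DU model is easy to state concretely. As a sketch (the discount factor delta below is an arbitrary illustration), the value of a stream of outcomes is the sum of each period’s utility weighted by delta raised to the power of its delay:

```python
def discounted_utility(utilities, delta=0.9):
    """DU model: the utility arriving at delay t counts for utilities[t] * delta**t."""
    return sum(u * delta ** t for t, u in enumerate(utilities))

# The same reward is worth less the later it arrives:
discounted_utility([10, 0, 0])   # 10.0 (reward now)
discounted_utility([0, 0, 10])   # about 8.1 (= 10 * 0.9**2, reward delayed two periods)
```

Much of the behavioral work on intertemporal choice turns on the finding that real choices often deviate from this tidy exponential form, which is part of why DU is contested as a descriptive model.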
Dr Thomas Tape wrote an article “Coherence and correspondence in medicine.” As you might expect, Dr Tape is applying some of the ideas of Kenneth Hammond to medicine. Tape notes that the distinction between coherence (making logical sense) and correspondence (being empirically correct) seldom appears in the medical literature.
This backward-facing emotion is a combination of self-blame and disappointment. Interestingly, we all know that it influences our decision making. Daniel Kahneman in Thinking, Fast and Slow notes that neither prospect theory nor utility theory takes regret into account.
Kay’s Obliquity describes the process of achieving complex objectives indirectly. Matthew May’s In Pursuit of Elegance has the important idea of “what isn’t there can often trump what is.”
In Judgments Under Stress (2000), Ken Hammond wants to talk about constancy, while stress is a constancy disruptor. Hammond’s mentor, Egon Brunswik, saw constancy as the essence of life. Hammond asserts that the orientation of the organism is directed toward maintaining stable relations with the environment, and that disruption of those stable relations is the definition of stress.
This is a personal tale of trivial decision making. After considering my Dad’s weakening decision making skills, I was humbled into considering my own. And then it turned out that he had made some pretty good decisions after all. It just took a little time to get there.
Consciousness (2012) is both a science book and a personal book. Free will versus determinism is one of the subjects. This seems to be near the essence of judgment. Christof Koch says that classical determinism is out, but the strong version of free will is also out.
The book Dance with Chance is written by three scientists each with contributions in the field of judgment and decision making. They are Spyros Makridakis, Robin Hogarth, and Anil Gaba.
Cultural cognition has grown from the ideas of Mary Douglas and Aaron Wildavsky. The research, ideas, etc., of the Cultural Cognition Project at Yale Law School are at www.culturalcognition.net. Dan Kahan leads the project.
It is clear to even the layman that intuition is not just one thing. Glockner and Witteman look at the theorizing and research and conclude that there are four processes underlying intuition.
After finishing the post on The Art of Choosing, I felt the need to comment on all three books including The Paradox of Choice and The Myth of Choice.
Sheena Iyengar comes up with 4 quick tips in the afterword of her second edition:
- Cut your options to between 5 and 9.
- Gain confidence in your choices by using expert advice.
- Categorize the choices available.
- Condition yourself by starting out with fewer choices and building up to greater more complex choices.
The core premise of this post, based on Zeelenberg, Nelissen, and Peters’ paper “Emotion, Motivation, and Decision Making: A Feeling-Is-for-Doing Approach,” is that emotional processes form part of the intuitional component of decision making.
Hammond has his own theme, and that is basically to use both coherence and correspondence in the study of human judgment. He asserts that it is not possible to do a little of both coherence and correspondence in one judgment, but it is possible to oscillate between the tactics of judgment.
Part 2 looks at Hammond’s discussion of the correspondence and coherence researchers.
Rationality has been the tool that we use to combat uncertainty. After 5000 years, everyone still seems to have his own definition. Part of the confusion is the continuing struggle between intuition and analysis. As Hammond says, he will go beyond rationality in this book because we need to, and search for wisdom. The book is about judgment, the core cognitive process by which we are judged by others.
This post is based on Chapter 10 of Bounded Rationality and on Gut Feelings both authored primarily by Gerd Gigerenzer. It is largely a shopping list of heuristics or rules of thumb. Gigerenzer makes the important point that more information and more choice are not always better. Less is more under certain conditions.
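One heuristic from that shopping list, take-the-best, is simple enough to sketch in a few lines of Python. The cue names and their validity ordering below are invented for illustration; the heuristic itself is Gigerenzer and Goldstein’s: check cues in descending order of validity and decide on the first cue that discriminates, ignoring everything else.

```python
def take_the_best(option_a, option_b, cues):
    """Take-the-best sketch: cues is ordered from most to least valid;
    option_a and option_b map each cue name to 0 (absent) or 1 (present).
    Decide on the first cue where the options differ."""
    for cue in cues:
        a, b = option_a[cue], option_b[cue]
        if a != b:
            return 'a' if a > b else 'b'
    return 'tie'  # no cue discriminates, so guess

# Which city is larger? (hypothetical cues, ordered by validity)
cues = ['has_airport', 'has_university', 'has_team']
city_a = {'has_airport': 1, 'has_university': 1, 'has_team': 0}
city_b = {'has_airport': 1, 'has_university': 0, 'has_team': 1}
take_the_best(city_a, city_b, cues)  # 'a': the first discriminating cue wins
```

This is the “less is more” point in miniature: the decision rests on a single cue and ignores the rest, yet in many environments this one-reason strategy matches far more elaborate weighting schemes.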
Howard Rheingold, Jim Surowiecki, and Scott Page each look in their own ways at how diversity can help us make better decisions. They all build on a fact of cultural evolution: it is great for each of us to make better judgments, but for us to progress we need to make better decisions together.
This post examines Chapter 19 of Bounded Rationality.
That chapter looks at three ways that cultural processes produce boundedly rational heuristics/algorithms. They are:
- Simple imitation and learning heuristics
- Over a cultural evolutionary time scale, the aids in item one have given rise to complex motivations, rules, cues, etc.
- From these first two, group processes that distribute cognition, knowledge, skill, and labor have arisen.
Mosier traces the evolution of the aircraft cockpit as an example of the change of a probabilistic environment into an ecological hybrid–an environment with both probabilistic and deterministic features and elements. Mosier makes the case that judgment and decision making in a hybrid ecology requires coherence (rationality and consistency-justifiable decisions) as the primary strategy to achieve correspondence (objective accuracy).
Be wary of your intuition, because it is poorly adapted to solving problems in the modern world. The authors recommend thinking twice before you decide to trust intuition over rational analysis, especially in important matters. They suggest that knowing about the illusions can help you avoid their consequences.
Wilson suggests the 3 C’s of human evolution: cognition, culture, and cooperation and that we are evolution’s newest transition from groups of organisms to groups as organisms. He mentions “enforced equality” as possibly the key adaptation.
When between-group selection dominates within-group selection, a major evolutionary transition occurs and the group becomes a new higher level organism. Within groups, altruistic behavior is selectively disadvantageous, but it may be favored between groups and thus counteract the within group selection.
Simple and robust heuristics can match a specific optimizing strategy.
Our minds are not examples of what appears to be rational design–they are a kluge. Nevertheless, there are things we can do to make better choices.
Cultural Evolution: We are not that much smarter than other creatures, but we have a unique ability to learn from others, especially using grammatical language.
Development Stages of Intuition and Analysis: We tend to base our actions more and more on gist-based intuition as we age.
Intuition in J/DM: “Intuition is capable of dealing with complex tasks through extensive information processing without noticeable effort.”
Power of Memory: Sharp memory is often trumped by hazy memory.
Justifying our Decisions (Great for Plausible Deniability, not so Great for Medical Diagnosis): Humans are often justifying. If the subject is social, moral, or political, the justification typically does not impact the decision. If the subject requires rational justification that is consistent and coherent, humans need rigorous training or they will get it wrong. If the subject is not that important and not moral or political, the justification process may actually get us to change our minds to make the justification work.
Human Kinds Perception: Our brains benefit us by putting things, and maybe humans, into categories. We just do it. We cannot stop doing it, but we can counteract it when it is not appropriate.