“Human achievement is lower when there are nonlinearities in the ecology.” (What has Brunswik’s Lens Model Taught?).
Taming Uncertainty by Ralph Hertwig (See posts Dialectical Bootstrapping and Harnessing the Inner Crowd.), Timothy J. Pleskac (See post Risk Reward Heuristic.), Thorsten Pachur (See post Emotion and Risky Choice.), and the Center for Adaptive Rationality, MIT Press, 2019, is a new compendium that I found accidentally in a public library. There is plenty of interesting reading in the book. It takes the adaptive toolbox approach as opposed to the Swiss Army knife approach. The book gets back-cover raves from Cass Sunstein (See posts Going to Extremes, Confidence, Part 1.), Nick Chater, and Gerd Gigerenzer (See post Gigerenzer–Risk Savvy, and others.). I like the pieces, but not the whole.
This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by recent events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people you are arguing with is not a good idea. Dan Kahan (See my posts Cultural Cognition and Curiosity, or his blog Cultural Cognition.) has some great ideas on this, but Ken Hammond had real accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
This post is based on “Providing information for decision making: Contrasting description and simulation,” written by Robin M. Hogarth and Emre Soyer, which appeared in the Journal of Applied Research in Memory and Cognition, 4 (2015), 221–228. Hogarth and Soyer propose that providing information to help people make decisions can be likened to telling stories. First, the provider – or storyteller – needs to know what he or she wants to say. Second, it is important to understand the characteristics of the audience, as these affect how information is interpreted. And third, the provider must match what is said to the needs of the audience. Finally, when it comes to decision making, the provider should not tell the audience what to do. Although Hogarth and Soyer do not mention it, good storytelling draws us into the descriptions so that we can “experience” the story. (See post 2009 Review of Judgment and Decision Making Research.)
Hogarth and Soyer state that their interest in this issue was stimulated by a survey they conducted of how economists interpret the results of regression analysis. The economists were given the outcomes of the regression analysis in a typical tabular format, and the questions involved interpreting the probabilistic implications of specific actions given the estimation results. The participants had available all the information necessary to provide correct answers, but in general they failed to do so. They tended to ignore the uncertainty involved in predicting the dependent variable conditional on values of the independent variable, and as a result they vastly overestimated the predictive ability of the model. Another group of similar economists, who saw only a bivariate scatterplot of the data, answered the same questions accurately. These economists were not generally blinded by numbers, as some in the general population are, but they still needed the visually presented frequency information.
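A minimal sketch of the kind of question involved, with made-up data (the dataset, coefficients, and threshold are my own illustration, not from the survey): the regression table invites a confident point prediction, while the residual scatter says the outcome is far from certain.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical data: y = 1 + 2x plus substantial noise.
n = 1000
x = rng.uniform(0, 10, n)
y = 1 + 2 * x + rng.normal(0, 5, n)

# Fit a simple regression -- this is roughly the tabular information
# the economists in the survey were given.
slope, intercept = np.polyfit(x, y, 1)
resid_sd = float(np.std(y - (intercept + slope * x)))

# A question of the kind Hogarth and Soyer asked: given x = 5, what is
# the probability that y exceeds 15? The point prediction (about 11)
# falls below 15, but the residual scatter makes y > 15 quite possible.
point_prediction = intercept + slope * 5
z = (15 - point_prediction) / resid_sd
p_exceed = 0.5 * (1 - erf(z / sqrt(2)))  # roughly 0.2, far from zero
```

Reading only the coefficients tempts the answer "y will be about 11, so exceeding 15 is essentially impossible"; the scatterplot makes the sizable residual spread, and hence the real probability, visible at a glance.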
This post is based on the paper: “The role of cognitive abilities in decisions from experience: Age differences emerge as a function of choice set size,” by Renato Frey, Rui Mata, and Ralph Hertwig that appeared in Cognition 142 (2015) 60–80.
People seldom enjoy access to summarized information about risky options before making a decision, except for things like weather forecasts that explicitly state a probability. Instead, they may search for information and learn from the environment—thus making decisions from experience. Many consequential decisions—including health care choices, finances, and everyday risks (e.g., driving in bad weather; crossing a busy street)—are made without full knowledge of the possible outcomes and their probabilities. According to the authors, the mind’s most notable transformation across the life span is a substantial decline in processing speed, working memory, and short-term memory capacity—all components potentially involved in search and learning processes.
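A toy simulation of decisions from experience, with options and payoffs of my own invention (not from the paper): the decision maker samples outcomes from each option and picks the one with the better sampled average.

```python
import random

random.seed(1)

# Hypothetical options: a sure $3 versus a gamble paying $32 with
# probability 0.1, else $0 (true expected value 3.2).
def safe():
    return 3.0

def risky():
    return 32.0 if random.random() < 0.1 else 0.0

def sample_mean(option, n):
    return sum(option() for _ in range(n)) / n

def fraction_choosing_risky(n_samples, trials=2000):
    """How often does the sampled mean favor the risky option?"""
    wins = sum(
        sample_mean(risky, n_samples) > sample_mean(safe, n_samples)
        for _ in range(trials)
    )
    return wins / trials

# With tiny samples the rare $32 outcome often never shows up, so the
# risky option looks worse than its true expectation; larger samples
# bring experience closer to the described probabilities.
small = fraction_choosing_risky(5)
large = fraction_choosing_risky(100)
```

The search-and-learning burden is visible here: doing well requires drawing and holding many samples, exactly the processing components the authors note decline with age.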
This post is based on the paper “Ecologically Rational Choice and the Structure of the Environment,” by Timothy J. Pleskac and Ralph Hertwig, which appeared in the Journal of Experimental Psychology: General, 2014, Vol. 143, No. 5. The paper builds on an idea that decision making theory has largely ignored: risk and reward are tied together, with payoff magnitudes signaling their probabilities.
How people should and do deal with uncertainty is one of the most vexing problems in theorizing about choice. The researchers suggest a process that is inferential in nature and rests on the notion that probabilities can be approximated from the statistical regularities that govern real-world gambles. In the environment there are typically multiple fallible indicators to guide your way. When some cues become unreliable or unavailable, the organism can exploit this redundancy by substituting or alternating between different cues. This is possible because of what Brunswik called the mutual substitutability, or vicarious functioning, of cues. It is these properties of intercue relationships and substitutability that Pleskac and Hertwig suggest offer a new perspective on how people make decisions under uncertainty. Under uncertainty, some cues, such as the payoffs associated with different courses of action, may be accessible, whereas other cues—in this case, the probabilities with which those payoffs occur—are not. This missing probability information has been problematic for choice theories, as typically both payoffs and probabilities are used in determining the value of options and in choosing. However, if payoffs and probabilities are interrelated, then this ecological property can permit the decision maker to infer hidden or unknown probability distributions from the payoffs themselves, easing the problem of making decisions under uncertainty.
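The risk-reward inference can be sketched in a few lines; the function and numbers below are my own toy illustration, not the authors' model. If real-world gambles tend toward fair bets (payoff times probability roughly constant), an unstated probability can be read off the payoff alone.

```python
# Sketch of the risk-reward heuristic: assume the gamble is roughly
# fair, i.e., payoff * probability is approximately some fair value,
# and invert that relationship to recover the hidden probability.

def inferred_probability(payoff, fair_value=1.0):
    """Infer an unknown win probability from the payoff, assuming an
    approximately fair bet (payoff * p = fair_value), capped at 1."""
    return min(1.0, fair_value / payoff)

p_even = inferred_probability(2.0)        # a $2 payoff on $1 suggests even odds
p_longshot = inferred_probability(100.0)  # a $100 payoff suggests a long shot
```

Big payoffs thus signal long shots and small payoffs signal likely wins, which is precisely the ecological regularity the paper argues decision makers can exploit.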
This post looks at the medical/health component of decision making as addressed in Gerd Gigerenzer’s new book, Risk Savvy: How to Make Good Decisions. First, Gigerenzer has contributed greatly to improving health decision making. This blog includes three consecutive posts on the Statistics of Health Decision Making based on Gigerenzer’s work.
He points out both the weaknesses of screening tests and of our understanding of the results. We have to overcome our tendency to see relationships as linear when they are nonlinear, and doctors are no different. The classic problem is an imperfect screening test for a relatively rare disease. You cannot think in fractions or percentages; you must think in absolute frequencies. Breast cancer screening is one example. Generally, it can catch about 90% of breast cancers, while about 9% of women who do not have breast cancer still test positive. So if you have a positive test, that means chances are you have breast cancer. No! You cannot let your intuition get involved, especially when the disease is rarer than the test’s mistakes. If we assume that 10 out of 1000 women have breast cancer, then 90% of them, or 9, will be detected, but about 90 of the 1000 women who do not have the disease will also test positive. Thus only 9 of the 99 who test positive actually have breast cancer. I know this, but give me a new disease or a slightly different scenario, let a month pass, and I will still be tempted to shortcut the absolute frequencies and get it wrong.
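The natural-frequency bookkeeping from the example can be written out directly, using the numbers given in the text:

```python
# Gigerenzer-style natural frequencies for the screening example:
# 10 of 1,000 women have the disease, the test detects 90% of true
# cases, and about 9% of healthy women test positive anyway.

population = 1000
sick = 10
sensitivity = 0.90
false_positive_rate = 0.09

true_positives = sick * sensitivity                          # 9 women
false_positives = (population - sick) * false_positive_rate  # about 89 women

# Of all women who test positive, the share who actually have the disease:
p_disease_given_positive = true_positives / (true_positives + false_positives)
# fewer than 1 in 10 positives is a true case
```

Laying the four counts side by side is exactly what defeats the intuitive (and wrong) reading that a positive test means you probably have the disease.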
This post is based on the paper “The Affect Gap in Risky Choice: Affect-Rich Outcomes Attenuate Attention to Probability Information,” authored by Thorsten Pachur, Ralph Hertwig, and Roland Wolkewitz, which appeared in Decision, 2013, Vol. 1, No. 1, pp. 64–78. This is a continuation of the affect/emotion theme. It is more of a valence-based idea than Lerner’s Appraisal Tendency Framework, and it is more about thinking about emotion than actually experiencing it, although the two can come together.
Often risky decisions involve outcomes that can create considerable emotional reactions. Should we travel by plane and tolerate a minimal risk of a fatal terrorist attack, or take the car and run the risk of traffic jams and car accidents? How do people make such decisions? Theories of decision under risk typically rest on the principle of the maximization of expectation. The expectation expresses the average of an option’s outcomes, each weighted by its probability. This, of course, underlies expected utility theory and cumulative prospect theory, and these models do a good job of accounting for choices among relatively affect-poor options.
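The expectation principle itself is a one-liner; here is a minimal sketch with a made-up gamble of my own (not an example from the paper):

```python
# Expectation of a risky option: each outcome weighted by its
# probability, then summed.

def expectation(outcomes, probabilities):
    assert abs(sum(probabilities) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(x * p for x, p in zip(outcomes, probabilities))

# A hypothetical affect-poor gamble: win $10 with probability .3, else nothing.
ev = expectation([10.0, 0.0], [0.3, 0.7])  # expectation of about 3.0
```

The affect-gap finding is that for affect-rich outcomes (a painful side effect rather than a $10 loss), people pay much less attention to the probability term in this weighting.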
I have mentioned Michael Mauboussin’s book The Success Equation before, but this will be the closest I come to a review. The title makes it sound like a self-help book, but it is much more substantial. However, his notes and bibliography somehow miss both Ken Hammond and Robin Hogarth, which frankly surprises me. Hogarth’s books Educating Intuition (post Learning, Feedback and Intuition) and Dance with Chance (post Dancing with Chance) have much in common with it.
Mauboussin’s most distinctive contribution, in my view, is to bring Bill James and his successors from baseball to the world of skill, luck, and investment. And Mauboussin is amazingly honest about the luck involved in investment, which is his world. He pretty much says that you cannot be an expert in his field, only experienced. Using sports, especially baseball, makes the book’s ideas much more understandable. That brings us to the idea for this post. Mauboussin calls it reversion to the mean and Kahneman calls it regression to the mean. Either way, baseball makes it more understandable.
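A quick simulation makes the idea concrete; the league, the player numbers, and the equal skill/luck split below are my own illustration, not Mauboussin's data:

```python
import random

random.seed(42)

# Sketch of reversion to the mean: each season's result is persistent
# skill plus fresh luck, in equal measure.
n_players = 1000
skill = [random.gauss(0, 1) for _ in range(n_players)]
season1 = [s + random.gauss(0, 1) for s in skill]
season2 = [s + random.gauss(0, 1) for s in skill]

# Follow the top 100 performers of season 1 into season 2.
top = sorted(range(n_players), key=lambda i: season1[i], reverse=True)[:100]
avg_season1 = sum(season1[i] for i in top) / 100
avg_season2 = sum(season2[i] for i in top) / 100

# The stars' season-2 average falls back toward the league mean of 0,
# because part of their season-1 edge was luck that does not repeat.
```

The stars do not get worse; their skill carries over, but the lucky half of their standout season does not, which is why a .380 hitter rarely hits .380 again.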
Dan Kahan has an article in the October 2013 issue of Science, “A Risky Science Communication Environment for Vaccines,” with a specific example of the HPV vaccine issues. Kahan has written a good article, and one that may not have pleased several people. It fits together with my post Web 2.0 for Vaccination Decisions. Kahan makes the case for scientifically based science risk communication strategies, which is something that Betsch, Reyna, et al. try to do in “Opportunities and challenges of Web 2.0 for vaccination decisions.” Kahan may make a bigger case for simply not being stupid and not ignoring everything we already know about risk communication. He says quite well, although indirectly, that vaccination is really not a cultural cognition issue–yet, but we could make it one if we are not careful.