This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by recent events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people you are arguing with is not a good idea. Dan Kahan (see my posts Cultural Cognition and Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond actually had accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on "Kenneth R. Hammond's contributions to the study of judgment and decision making," written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
This post is based on the paper "Free-energy minimization and the dark-room problem," written by Karl Friston, Christopher Thornton and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (see the posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of model evidence). A puzzle raised by critics of these models is that if organisms simply minimized surprise, they should seek out a dark, unchanging chamber and stay there; yet biological systems plainly do not. This is the "Dark-Room Problem."
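For readers who want the formal core, here is a minimal sketch of the standard formulation (my paraphrase of the usual textbook presentation, not a quotation from the paper): surprise is the negative log probability of sensory data under the organism's model, and variational free energy is a tractable upper bound on it.

```latex
% Surprise (surprisal) of sensory data o under a generative model m:
S(o) = -\ln p(o \mid m)

% Free energy, for any recognition density q(s) over hidden states s:
F = S(o) + D_{\mathrm{KL}}\bigl[\, q(s) \,\|\, p(s \mid o, m) \,\bigr] \;\ge\; S(o)
```

Because the KL divergence is never negative, driving F down also drives down an upper bound on surprise, which is why the framework can treat surprise minimization as its single overarching principle.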
This post is based on the paper: “The role of cognitive abilities in decisions from experience: Age differences emerge as a function of choice set size,” by Renato Frey, Rui Mata, and Ralph Hertwig that appeared in Cognition 142 (2015) 60–80.
People seldom enjoy access to summarized information about risky options before making a decision, apart from things like weather forecasts that explicitly state a probability. Instead, they may search for information and learn from the environment, thus making decisions from experience. Many consequential decisions, including health care choices, finances, and everyday risks (e.g., driving in bad weather or crossing a busy street), are made without full knowledge of the possible outcomes and their probabilities. According to the authors, the mind's most notable transformation across the life span is a substantial decline in processing speed, working memory, and short-term memory capacity, all components potentially involved in search and learning processes.
This post is based on a paper, "Ecologically Rational Choice and the Structure of the Environment," that appeared in the Journal of Experimental Psychology: General, 2014, Vol. 143, No. 5. The authors are Timothy J. Pleskac and Ralph Hertwig. The paper is built on the idea that decision making theory has largely ignored the possibility that risk and reward are tied together, with payoff magnitudes signaling their probabilities.
How people should and do deal with uncertainty is one of the most vexing problems in theorizing about choice. The researchers suggest a process that is inferential in nature and rests on the notion that probabilities can be approximated from statistical regularities that govern real-world gambles. In the environment there are typically multiple fallible indicators to guide your way. When some cues become unreliable or unavailable, the organism can exploit this redundancy by substituting or alternating between different cues. This is possible because of what Brunswik called the mutual substitutability, or vicarious functioning, of cues. It is these properties of intercue relationships and substitutability that Pleskac and Hertwig suggest offer a new perspective on how people make decisions under uncertainty. Under uncertainty, some cues, such as the payoffs associated with different courses of action, may be accessible, whereas other cues, in this case the probabilities with which those payoffs occur, are not. This missing probability information has been problematic for choice theories, as typically both payoffs and probabilities are used in determining the value of options and in choosing. However, if payoffs and probabilities are interrelated, then this ecological property can permit the decision maker to infer hidden or unknown probability distributions from the payoffs themselves, thus easing the problem of making decisions under uncertainty.
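A toy sketch of that inference (my illustration, not the paper's fitted model; the constant expected value and the exact inverse relationship are assumptions): if real-world gambles tend to be roughly fair, so that probability times payoff hovers near some constant, then a missing probability can be read off the payoff alone.

```python
# Illustrative only: assume gambles are approximately "fair", i.e.
# p * payoff ≈ expected_value, so an unknown probability can be
# inferred from the payoff as p ≈ expected_value / payoff.

def inferred_probability(payoff, expected_value=1.0):
    """Infer the chance of winning `payoff`, assuming a fair gamble
    with the given (assumed) expected value."""
    p = expected_value / payoff
    return min(1.0, max(0.0, p))  # clamp to a valid probability

# Larger payoffs imply smaller inferred probabilities.
for payoff in (2, 10, 100):
    print(payoff, inferred_probability(payoff))
```

The design choice here is the ecological one the authors emphasize: the environment's risk-reward structure, not any stated probability, supplies the missing number.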
Having the good fortune to be lost in Venice, I was reminded of the nuances of the recognition heuristic. My wife found the perfect antique store, which was unsurprisingly closed for lunch. We went on, but a couple of hours later we tried to retrace our steps. For much of the journey, we did well, having only to recognize that we had seen a particular store or archway or bridge before. Unfortunately, that broke down when we realized that we had covered the same ground twice within five minutes. We still remembered the walk to the restaurant the first night, but it was unfortunately not differentiated in our minds. This was certainly an example of less knowledge being more. Eventually, using a combination of GPS and maps, we found our way back to our hotel, but we never did find that antique store. And I was trying.
This post is based on a paper by Benjamin Hilbig, Martha Michalkiewicz, Marta Castela, Rüdiger Pohl and Edgar Erdfelder, "Whatever the cost? Information integration in memory-based inferences depends on cognitive effort," that was scheduled to appear in Memory & Cognition in 2014. Fundamental uncertainty is a common situation for our decision making. There is an ongoing argument between the fast-and-frugal heuristics toolbox approach and the single-tool approaches of evidence accumulation and parallel constraint satisfaction. However, which side that argument favors depends on the particular task, the type of task, and so on. I am still waiting for a giant table that puts all those things together.
This post looks at the medical/health component of decision making as addressed in Gerd Gigerenzer's new book, Risk Savvy: How to Make Good Decisions. First, Gigerenzer has contributed greatly to improving health decision making. This blog includes three consecutive posts on the Statistics of Health Decision Making based on Gigerenzer's work.
He points out both the weaknesses of screening tests and of our understanding of the results. We have to overcome our tendency to see linear relationships when they are nonlinear. Doctors are no different. The classic problem is an imperfect screening test for a relatively rare disease. You cannot think in fractions or percentages. You must think in absolute frequencies. Breast cancer screening is one example. Generally, it can catch about 90% of breast cancers, and about 9% of women who do not have breast cancer nonetheless test positive. So if you have a positive test, that means chances are you have breast cancer. No! You cannot let your intuition get involved, especially when the disease is more rare than the test's mistakes. If we assume that 10 out of 1000 women have breast cancer, then 90% of them, or 9, will be detected, but about 90 of the 990 women without the disease will also test positive. Thus only 9 of the roughly 99 who test positive actually have breast cancer. I know this, but give me a new disease or a slightly different scenario and let a month pass, and I will still be tempted to shortcut the absolute frequencies and get it wrong.
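The absolute-frequency arithmetic from the example above can be sketched mechanically (the numbers are the post's illustrative assumptions: prevalence 10 in 1000, sensitivity 90%, false-positive rate 9%):

```python
# Natural-frequency (absolute-frequency) calculation of the chance that a
# positive screening test really indicates disease.

def positive_predictive_value(population, prevalence, sensitivity, false_positive_rate):
    """Return (true_positives, false_positives, ppv) using whole-person counts."""
    sick = population * prevalence                    # women who have the disease
    healthy = population - sick                       # women who do not
    true_positives = sick * sensitivity               # detected cases
    false_positives = healthy * false_positive_rate   # healthy women testing positive
    ppv = true_positives / (true_positives + false_positives)
    return true_positives, false_positives, ppv

tp, fp, ppv = positive_predictive_value(1000, 10 / 1000, 0.90, 0.09)
print(tp, fp, round(ppv, 2))  # about 9 true positives, about 89 false positives, PPV about 0.09
```

Working in counts of people rather than percentages makes it hard to miss that the false positives swamp the true ones.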
I first read Lindblom's paper, "The Science of Muddling Through," in a comparative systems political science class. It appealed to me then, and after more than forty years, it still does. He did a good job of exposing the logical extreme of the rational model as ridiculous, at least in government. At the same time, he used terminology for his incremental model that made it difficult to embrace publicly. As a city planner, I could decry "disjointed incrementalism" to try to get elected officials to look a bit further into the future; had he called it coherent accumulation, I would never have had a chance. After fifty-five years, many of his examples still seem quite relevant.
This post is based on a July 2009 paper, "Strategic Decision Making Paradigms: A Primer for Senior Leaders," written by Col. Charles D. Allen and Dr. Breena E. Coates, both of the Army War College. Although much in the paper has been touched upon in prior posts, it summarizes several models for strategic decision making, including some with which I am unfamiliar. It is public-sector oriented and includes some good examples related to the defense of the nation. It also sets the stage for another post on muddling through. Strategic decisions entail "ill-structured," "messy," or "wicked" problems that do not have quick, easy solutions. They often end in the so-called "error of the third kind," in which a complex problem is addressed with a correct solution to the wrong problem.
This post is a continuation of the theme of when decisions are made and how we delay or wait or decide not to decide. This post is based on a 2014 paper by Teichert, Ferrera, and Grinband, “Humans Optimize Decision-Making by Delaying Decision Onset,” in PLoS ONE. Again, this paper is beyond my understanding at least as to the details. It has some excellent figures and graphics that are pretty, but I do not think that I really understand them. These are my shortcomings. What interests me about this is the contrast with my previous post Deciding not to Decide. This paper examines decision onset and nondecision time while “Deciding not to Decide” suggested an explicit decision to inhibit the decision. I find making an inhibitory decision a more satisfying explanation than delaying decision onset, although they could be the same thing or the situations may be so different that there is no real comparison.
This post is an executive summary of a 2013 paper about deciding not to decide ("Deciding Not to Decide: Computational and Neural Evidence for Hidden Behavior in Sequential Choice," by Sebastian Gluth, Jörg Rieskamp, and Christian Büchel, which appeared in PLoS Computational Biology, 9(10)). Quite frankly, the detail of the paper is beyond me, but the general ideas are interesting.
Many decisions are not triggered by a single event but are based on multiple sources of information. When purchasing a new computer, for instance, we certainly look at the price, but not without accounting for further aspects like capabilities, quality, and appearance. According to Gluth et al., these multi-attribute decisions usually evolve sequentially; that is, as long as the collected evidence is insufficient to motivate a particular choice, we search for more information to resolve our uncertainty. Importantly, such "decisions not to decide" are not directly observable but can promote significant changes in behavior.
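The sequential logic can be sketched with a toy evidence-accumulation model (my illustration of the general idea, not Gluth et al.'s fitted model; the drift, noise, and threshold values are arbitrary assumptions): evidence accumulates sample by sample, and until it crosses a threshold the model keeps "deciding not to decide" and samples again.

```python
import random

def sequential_choice(threshold=3.0, drift=0.2, noise=1.0, max_samples=1000, rng=None):
    """Accumulate noisy evidence until |evidence| >= threshold.
    Returns (choice, samples_taken); choice is 'A', 'B', or None if
    no decision was reached within max_samples."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible demo
    evidence = 0.0
    for n in range(1, max_samples + 1):
        evidence += drift + rng.gauss(0.0, noise)  # one more look at the options
        if abs(evidence) >= threshold:             # enough evidence: commit
            return ("A" if evidence > 0 else "B"), n
    return None, max_samples                       # still deciding not to decide

choice, samples = sequential_choice()
print(choice, samples)
```

Every pass through the loop that does not cross the threshold is exactly the hidden behavior the paper is after: a decision not to decide, invisible in the final choice but visible in how long it took.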