Category Archives: Uncertainty

Confidence, Part III

In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision-making process. This lends support to models of the mind as a device that computes with probabilistic estimates and probability distributions.

 

The Embodied Mind

One such explanation is that of predictive processing and the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave together this concept. Our minds blend top-down and bottom-up processing, where error messages and the effort to fix those errors make it possible for us to engage the world. According to the embodied mind model, our minds do not reside only in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara is a robot without any brain imitating human walking nearly perfectly (video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. As Clark notes, estimating the certainty of our own prediction error signals, of our own mental states and processes, is “clearly a delicate and tricky business. For it is the prediction error signal that…gets to ‘carry the news’.”
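In the predictive processing story, that "delicate and tricky business" amounts to weighting prediction errors by how precise (reliable) we estimate them to be. Here is a minimal sketch of a single precision-weighted Gaussian belief update; the function name and the numbers are mine for illustration, not anything taken from Clark or Friston:

```python
def precision_weighted_update(prior_mean, prior_precision, observation, sensory_precision):
    """One Gaussian belief update in which the prediction error is weighted by the
    estimated precision (inverse variance) of the sensory signal. The more precise the
    signal is judged to be, the more the error gets to 'carry the news'."""
    prediction_error = observation - prior_mean
    gain = sensory_precision / (prior_precision + sensory_precision)  # how much of the error to trust
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + sensory_precision
    return posterior_mean, posterior_precision

# The same surprising observation moves a confident prior very little when the signal
# is judged unreliable, and a lot when it is judged precise.
print(precision_weighted_update(0.0, 10.0, 1.0, 1.0))    # small shift
print(precision_weighted_update(0.0, 10.0, 1.0, 50.0))   # large shift
```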

Continue reading

Confidence, Part II

Now the confidence heuristic is not the only thing Trump takes advantage of, but we will leave the others for another time. I will also avoid the question of whether or not Trump is actually confident. So what is the relationship between confidence and decision making? Daniel Kahneman, in Thinking, Fast and Slow on page 13, describes:

a puzzling limitation of our mind:  our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.

Continue reading

Confidence, Part I

Confidence is defined as our degree of belief that a certain thought or action is correct. There is confidence in your own individual decisions or perceptions, and then there is between-person confidence, where you defer your own decision making to someone else.

Why am I thinking of confidence? An article by Cass Sunstein explains it well: “Donald Trump is Amazing. Here’s the Science to Prove It,” which appeared in Bloomberg Opinion (Politics & Policy) on October 18, 2018.

Continue reading

A Nice Surprise

This post is based on a paper written by Andy Clark, author of Surfing Uncertainty (see the post Paper Predictive Processing for a fuller treatment), “A nice surprise? Predictive processing and the active pursuit of novelty,” which appeared in Phenomenology and the Cognitive Sciences, pp. 1-14, DOI: 10.1007/s11097-017-9525-z. For me this is a chance to learn how Andy Clark has polished up his arguments since his book. It also strikes me as connected to my recent posts on Curiosity and Creativity.

Clark and Friston (see the post The Prediction Machine) depict human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulations. But we know that we are attracted to the unexpected. We humans often seem to actively seek out surprising events, deliberately pursuing novel and exciting streams of sensory stimulation. So how does that square with the idea of minimizing prediction error?

Continue reading

Denver Bullet Study

This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by recent events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of people with whom you are arguing is not a good idea. Dan Kahan (see my posts Cultural Cognition or Curiosity or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond actually had accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.

Continue reading

Dark Room Problem- Minimizing Surprise

This post is based on the paper “Free-energy minimization and the dark-room problem,” written by Karl Friston, Christopher Thornton and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (see the posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of model evidence). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. People do not simply seek out a dark, unchanging chamber and stay there. This is the “Dark-Room Problem.”
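For reference, the standard formulation (my notation, not quoted from the paper) treats surprise as negative log model evidence, and free energy as a bound on it that the brain can actually evaluate:

```latex
% Surprise (surprisal) of sensory states s under a model m:
%   -\ln p(s \mid m)
% Free energy F, defined with a recognition density q(\vartheta) over hidden causes,
% is an upper bound on that surprise:
\begin{equation}
  F \;=\; D_{\mathrm{KL}}\!\big[\, q(\vartheta) \,\|\, p(\vartheta \mid s, m) \,\big]
          \;-\; \ln p(s \mid m)
     \;\ge\; -\ln p(s \mid m).
\end{equation}
% Minimizing F therefore minimizes (a bound on) surprise, which is the same thing as
% maximizing (a bound on) the log model evidence \ln p(s \mid m).
```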

Continue reading

Aging and Decisions from Experience

This post is based on the paper “The role of cognitive abilities in decisions from experience: Age differences emerge as a function of choice set size,” by Renato Frey, Rui Mata, and Ralph Hertwig, which appeared in Cognition 142 (2015) 60–80.

People seldom enjoy access to summarized information about risky options before making a decision, except for things like weather forecasts that explicitly state a probability. Instead, they may search for information and learn from the environment—thus making decisions from experience. Many consequential decisions—including health care choices, finances, and everyday risks (e.g., driving in bad weather; crossing a busy street)—are made without full knowledge of the possible outcomes and their probabilities, so we must make decisions from experience. According to the authors, the mind’s most notable transformation across the life span is a substantial decline in processing speed, working memory, and short-term memory capacity—all components potentially involved in search and learning processes.
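To make “decisions from experience” concrete, here is a toy sketch (not the authors’ model; the options and sample sizes are invented) of choosing between two risky options by sampling each a few times. With a small search budget, rare outcomes are often never experienced at all, which is one way limited memory and search can shape choice:

```python
import random

def decide_from_experience(option_a, option_b, samples_per_option, rng):
    """Sample each risky option a fixed number of times and choose the one with the
    higher experienced mean payoff."""
    mean_a = sum(option_a(rng) for _ in range(samples_per_option)) / samples_per_option
    mean_b = sum(option_b(rng) for _ in range(samples_per_option)) / samples_per_option
    return "A" if mean_a >= mean_b else "B"

# Option A: a sure 3. Option B: 32 with probability 0.1, else 0 (expected value 3.2).
sure_thing = lambda rng: 3
rare_big_win = lambda rng: 32 if rng.random() < 0.10 else 0

rng = random.Random(0)
# With only a handful of samples the rare big win is usually never seen, so A tends to
# be chosen even though B has the higher expected value.
print(decide_from_experience(sure_thing, rare_big_win, samples_per_option=5, rng=rng))
```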

Continue reading

Risk Reward Heuristic

This post is based on a paper, “Ecologically Rational Choice and the Structure of the Environment,” that appeared in the Journal of Experimental Psychology: General, 2014, Vol. 143, No. 5. The authors are Timothy J. Pleskac and Ralph Hertwig. The paper starts from the observation that decision making theory has largely ignored the possibility that risk and reward are tied together, with payoff magnitudes signaling their probabilities.

How people should and do deal with uncertainty is one of the most vexing problems in theorizing about choice. The researchers suggest a process that is inferential in nature and rests on the notion that probabilities can be approximated from statistical regularities that govern real-world gambles. In the environment there are typically multiple fallible indicators to guide your way. When some cues become unreliable or unavailable, the organism can exploit this redundancy by substituting or alternating between different cues. This is possible because of what Brunswik called the mutual substitutability, or vicarious functioning, of cues.

It is these properties of intercue relationships and substitutability that Pleskac and Hertwig suggest offer a new perspective on how people make decisions under uncertainty. Under uncertainty, some cues, such as the payoffs associated with different courses of action, may be accessible, whereas other cues—in this case, the probabilities with which those payoffs occur—are not. This missing probability information has been problematic for choice theories, since typically both payoffs and probabilities are used in determining the value of options and in choosing. However, if payoffs and probabilities are interrelated, then this ecological property can permit the decision maker to infer hidden or unknown probability distributions from the payoffs themselves, easing the problem of making decisions under uncertainty.
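A toy version of that inference (my own illustration, not a model from the paper) assumes that real-world gambles behave roughly like fair bets, so that payoff times probability is approximately a constant stake. Then a missing probability can be guessed from the payoff alone:

```python
def inferred_probability(payoff, assumed_fair_stake=1.0):
    """Guess a missing probability from a (positive) payoff under the assumption that
    the environment offers roughly fair bets: payoff * probability ~ assumed_fair_stake.
    The constant is an illustrative assumption, not a parameter from Pleskac and Hertwig."""
    assert payoff > 0, "this toy rule only applies to positive payoffs"
    return min(1.0, assumed_fair_stake / payoff)

# Larger advertised payoffs are read as signals of smaller probabilities.
for payoff in (2, 10, 100):
    print(payoff, round(inferred_probability(payoff), 3))   # 0.5, 0.1, 0.01
```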

Continue reading

Recognition heuristic and cognitive effort

Having the good fortune to be lost in Venice, I was reminded of the nuances of the recognition heuristic. My wife found the perfect antique store which was unsurprisingly closed for lunch. We went on, but a couple of hours later we tried to recreate our steps. For much of the journey, we did well having only to recognize that we had seen a particular store or archway or bridge before. Unfortunately, that broke down when we realized that we were retracing our steps in a five-minute period. We still remembered that walk to the restaurant the first night, but it was unfortunately not differentiated in our minds. This was certainly an example of less knowledge being more. Eventually, using a combination of GPS and maps we found our way back to our hotel, but we never did find that antique store. And I was trying.

This post is based on a paper by Benjamin Hilbig, Martha Michalkiewicz, Marta Castela, Rüdiger Pohl and Edgar Erdfelder, “Whatever the cost? Information integration in memory-based inferences depends on cognitive effort,” which was scheduled to appear in Memory and Cognition in 2014. Fundamental uncertainty is a common situation in our decision making. There is an ongoing argument between the fast-and-frugal heuristics toolbox approach and the single-tool approaches of evidence accumulation and parallel constraint satisfaction. However, that argument depends on the particular task, the type of task, and on and on. I am still waiting for a giant table that puts all those things together.
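The Venice story above is essentially the classic recognition heuristic at work. A minimal sketch of that rule for a two-alternative choice (the function names are mine) looks like this:

```python
def recognition_heuristic(recognizes_a, recognizes_b, knowledge_based_choice):
    """Classic recognition heuristic for a two-alternative inference: if exactly one
    option is recognized, infer that it has the higher criterion value; otherwise
    recognition alone cannot decide and we fall back on further knowledge (or a guess)."""
    if recognizes_a and not recognizes_b:
        return "A"
    if recognizes_b and not recognizes_a:
        return "B"
    return knowledge_based_choice()

# Example: we recognize landmark A but not landmark B, so we infer we passed A before.
print(recognition_heuristic(True, False, lambda: "guess"))
```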

Continue reading

Medical Decisions–Risk Savvy

This post looks at the medical/health component of decision making as addressed in Gerd Gigerenzer’s new book, Risk Savvy: How to Make Good Decisions. First, Gigerenzer has contributed greatly to improving health decision making. This blog includes three consecutive posts on the Statistics of Health Decision Making based on Gigerenzer’s work.

He points out both the weaknesses of screening tests and the weaknesses in our understanding of the results. We have to overcome our tendency to see linear relationships when they are nonlinear. Doctors are no different. The classic problem is an imperfect screening test for a relatively rare disease. You cannot think in fractions or percentages. You must think in absolute frequencies. Breast cancer screening is one example. Generally, it can catch about 90% of breast cancers, while about 9% of women without breast cancer nonetheless test positive. So if you have a positive test, that means chances are you have breast cancer. No! You cannot let your intuition get involved, especially when the disease is more rare than the test’s mistakes. If we assume that 10 out of 1000 women have breast cancer, then 90% of them, or 9, will be detected, but about 90 of the 990 women without the disease will also test positive. Thus only 9 of the roughly 99 who test positive actually have breast cancer. I know this, but give me a new disease or a slightly different scenario, let a month pass, and I will still be tempted to shortcut the absolute frequencies and get it wrong.
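The natural-frequency arithmetic is easy to mechanize. Here is a short sketch (the function and parameter names are mine) that reproduces the numbers in the example above:

```python
def positive_predictive_value(population, base_rate, sensitivity, false_positive_rate):
    """Re-express a screening problem in absolute frequencies and return the true
    positives, the false positives, and the chance that a positive test means disease."""
    sick = population * base_rate
    true_positives = sick * sensitivity
    false_positives = (population - sick) * false_positive_rate
    return true_positives, false_positives, true_positives / (true_positives + false_positives)

# Breast cancer example from the post: 10 in 1000 have the disease, 90% sensitivity,
# and about 9% of healthy women test positive anyway.
tp, fp, ppv = positive_predictive_value(population=1000, base_rate=0.01,
                                        sensitivity=0.90, false_positive_rate=0.09)
print(tp, fp, round(ppv, 2))   # ~9 true positives, ~89 false positives, PPV ~ 0.09
```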

Continue reading