Category Archives: Evaluation/Accuracy/Achievement

Elastic

This post is based on the book Elastic – Flexible Thinking in a Time of Change by Leonard Mlodinow, Pantheon Books, New York, 2018. Mlodinow is a physicist who worked with Stephen Hawking. His previous book, Subliminal, evidently gave him considerable access to interesting people like Seth MacFarlane. He mentions that Stephen Hawking's pace of communicating was at best six words a minute, so public presentations had to be prepared ahead of time. Mlodinow notes that this slowing of the pace of a conversation is actually quite helpful: it forces you to consider the words, as opposed to composing an instant response while the other person is still talking.

Continue reading

AI–Building a Mind is Hard

This post is inspired by the book Rebooting AI – Building Artificial Intelligence We Can Trust, written by Gary Marcus and Ernest Davis, New York, 2019. Gary Marcus (see post Kluge) is a well-known author and artificial intelligence entrepreneur, and Ernest Davis is a professor of computer science at New York University. To oversimplify, the authors emphasize that the successes of AI are narrow and tend to be greedy, opaque, and brittle. They recount a history of AI that has seemed on the verge of being ready for prime time decade after decade. Self-driving cars are said to be almost there, but they are not. Human frailties in driving result in a death about every 100,000,000 miles driven, but Marcus and Davis indicate that self-driving cars require human intervention about every 10,000 miles, which is 10,000 interventions in 100,000,000 miles. It may be a very long time before we are ready to sign off on self-driving cars, because the progress thus far has been the easy part.
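
To make the gap concrete, here is a rough back-of-the-envelope comparison; this is a minimal sketch using the rounded figures quoted above by Marcus and Davis, not precise statistics.

```python
# Rough comparison of the human fatality rate and the reported
# self-driving intervention rate, using the rounded figures cited above.
miles_per_human_fatality = 100_000_000  # roughly one death per 100 million miles driven
miles_per_intervention = 10_000         # roughly one human intervention per 10,000 miles

interventions = miles_per_human_fatality / miles_per_intervention
print(f"Interventions over 100,000,000 miles: {interventions:,.0f}")
# Output: Interventions over 100,000,000 miles: 10,000
```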

Continue reading

Taming Uncertainty

Taming Uncertainty by Ralph Hertwig (See posts Dialectical Bootstrapping and Harnessing the Inner Crowd.), Timothy J. Pleskac (See post Risk Reward Heuristic.), Thorsten Pachur (See post Emotion and Risky Choice.), and the Center for Adaptive Rationality, MIT Press, 2019, is a new compendium that I found accidentally in a public library. There is plenty of interesting reading in the book. It takes the adaptive toolbox approach as opposed to the Swiss Army Knife. The book gets back-cover raves from Cass Sunstein (See posts Going to Extremes, Confidence, Part 1.), Nick Chater, and Gerd Gigerenzer (See post Gigerenzer–Risk Savvy, and others.). I like the pieces, but not the whole.


Continue reading

Nervous States – Democracy and the Decline of Reason

This book, Nervous States – Democracy and the Decline of Reason, 2019, written by William Davies, tries to explain the state we are in. The end of truth, the domination of feelings, and the end of expertise all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies' book, argue with some of the points, and present what Davies says we can do about it. Devaluing reason is a big deal for decision making.

Continue reading

Confidence, Part III

In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision making process, thus lending support to models of the mind as a device that computes with probabilistic estimates and probability distributions.


The Embodied Mind

One such explanation is that of predictive processing and the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave together this concept. Our minds are blends of top-down and bottom-up processing, where error messages and the effort to fix those errors make it possible for us to engage the world. According to the embodied mind model, our minds do not just reside in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara Webb is a robot without any brain imitating human walking nearly perfectly (in the video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. Estimating the certainty of our own prediction error signals, of our own mental states and processes, is, as Clark notes, "clearly a delicate and tricky business. For it is the prediction error signal that…gets to 'carry the news'."
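
As a purely illustrative sketch (mine, not Clark's or Friston's formal model), the idea that the precision assigned to a prediction error signal governs how much it gets to "carry the news" can be shown with a toy precision-weighted update. The function name, the precision value, and the numbers are assumptions for illustration only.

```python
import random

def precision_weighted_update(prediction, observation, precision):
    """Nudge the prediction toward the observation, scaled by how much
    the error signal is trusted (its estimated precision, between 0 and 1)."""
    error = observation - prediction   # the prediction error: the 'news'
    return prediction + precision * error

random.seed(0)
true_value = 5.0
prediction = 0.0
for step in range(8):
    observation = true_value + random.gauss(0, 1.0)   # noisy sensory sample
    # High precision: trust the error signal and update strongly.
    # Low precision: treat the error as unreliable and barely update.
    prediction = precision_weighted_update(prediction, observation, precision=0.5)
    print(f"step {step}: prediction = {prediction:.2f}")
```

With precision near zero the error signal is effectively ignored; with precision near one it dominates, which is the sense in which estimating that precision is "a delicate and tricky business."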

Continue reading

Confidence, Part I

Confidence is defined as our degree of belief that a certain thought or action is correct. There is confidence in your own individual decisions or perceptions, and then there is between-person confidence, where you defer your own decision making to someone else.

Why am I thinking of confidence? An article by Cass Sunstein explains it well: "Donald Trump is Amazing. Here's the Science to Prove It," Bloomberg Opinion, Politics & Policy, October 18, 2018.

Continue reading

A Nice Surprise

This post is based on a paper written by Andy Clark, author of Surfing Uncertainty (See Paper Predictive Processing for a fuller treatment.), "A nice surprise? Predictive processing and the active pursuit of novelty," which appeared in Phenomenology and the Cognitive Sciences, pp. 1-14, DOI: 10.1007/s11097-017-9525-z. For me this is a chance to learn how Andy Clark has polished up his arguments since his book. It also strikes me as connected to my recent posts on Curiosity and Creativity.

Clark and Friston (See post The Prediction Machine) depict human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulation. But we know that we are attracted to the unexpected. We humans often seem to actively seek out surprising events, deliberately seeking novel and exciting streams of sensory stimulation. So how does that square with the idea of minimizing prediction error?

Continue reading

Interoception and Theory of Mind

This post is based on the paper: "The role of interoceptive inference in theory of mind," by Sasha Ondobaka, James Kilner, and Karl Friston, Brain and Cognition, March 2017, 112: 64–68.

Understanding or inferring the intentions, feelings, and beliefs of others is a hallmark of human social cognition, often referred to as having a Theory of Mind (ToM). ToM has been described as a cognitive ability to infer the intentions and beliefs of others through processing of their physical appearance, clothes, and bodily and facial expressions. Of course, the repertoire of hypotheses of our ToM is borrowed from the hypotheses that cause our own behavior.

But how can processing of internal visceral/autonomic information (interoception) contribute to the understanding of others’ intentions? The authors consider interoceptive inference as a special case of active inference. Friston (see post Prediction Error Minimization)  has theorized that the goal of the brain is to minimize prediction error and that this can be achieved both by changing predictions to match the observed data and, via action, changing the sensory input to match predictions.  When you drop the knife and then catch it with the other hand, you are using active inference.
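
A toy caricature of active inference (my own illustration, not the authors' formal model) makes the two routes in that description explicit: perception changes the prediction to match the input, while action changes the input to match the prediction. The function names and numbers are invented for illustration.

```python
def perceptual_update(prediction, sensed, rate=0.5):
    """Route 1: revise the prediction so it better matches what is sensed."""
    return prediction + rate * (sensed - prediction)

def act_on_world(world, prediction, effort=0.5):
    """Route 2: act so that what will be sensed moves toward the prediction."""
    return world + effort * (prediction - world)

# The hand 'predicts' the knife ending up at position 0.0 (caught),
# but the falling knife is currently at position 3.0.
prediction, world = 0.0, 3.0
for t in range(5):
    world = act_on_world(world, prediction)            # reach so the knife lands where predicted
    prediction = perceptual_update(prediction, world)  # refine the prediction as things change
    print(f"t={t}: knife at {world:.2f}, predicted at {prediction:.2f}")
```

Both loops reduce the same prediction error; one does it by changing the mind, the other by changing the world.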

Continue reading

Denver Bullet Study

This post is largely a continuation of the Kenneth R Hammond post, but one prompted by recent events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of people with whom you are arguing is not a good idea. Dan Kahan (See my posts Cultural Cognition or Curiosity or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond actually had accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on "Kenneth R. Hammond's contributions to the study of judgment and decision making," written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.

Continue reading

Kenneth R Hammond

This post is based on selections from "Kenneth R. Hammond's contributions to the study of judgment and decision making," written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors, since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I how he fit into the discipline of judgment and decision making (the links take you to past posts). I merely cherry-pick my opinion of his most significant contributions.

As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. Hammond introduced the lens model equation to the study of judgment processes, and used this to measure the utility of different forms of feedback in multiple-cue probability learning.
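
For readers new to it, the lens model equation decomposes achievement (the correlation between judgment and criterion) into the judge's knowledge (G), the judge's consistency (Rs), the environment's predictability (Re), and any unmodeled knowledge (C). Below is a small sketch with simulated data; the cue weights and noise levels are invented, but the decomposition itself, r_a = G·Re·Rs + C·sqrt(1−Re²)·sqrt(1−Rs²), is the standard form.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cases, n_cues = 1000, 3

# Simulated task: a criterion and a judge, each a noisy linear function of the cues.
cues = rng.normal(size=(n_cases, n_cues))
criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.7, size=n_cases)
judgment  = cues @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=0.5, size=n_cases)

def linear_fit(X, y):
    """Least-squares prediction of y from the cues (with an intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return X1 @ beta

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

crit_hat, judg_hat = linear_fit(cues, criterion), linear_fit(cues, judgment)

r_a = corr(judgment, criterion)                        # achievement
R_e = corr(criterion, crit_hat)                        # environmental predictability
R_s = corr(judgment, judg_hat)                         # consistency (cognitive control)
G   = corr(crit_hat, judg_hat)                         # knowledge: agreement of the two linear models
C   = corr(criterion - crit_hat, judgment - judg_hat)  # unmodeled (residual) knowledge

reconstruction = G * R_e * R_s + C * np.sqrt(1 - R_e**2) * np.sqrt(1 - R_s**2)
print(f"achievement = {r_a:.3f}, lens model reconstruction = {reconstruction:.3f}")
```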

Hammond proposed cognitive continuum theory, which states that quasirationality is an important middle ground between intuition and analysis and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, and associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process, with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, and rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding. For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end. Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, format, and order of presentation, as well as the decision maker's familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature, and degree of task properties present.

Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond's work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrative fashion, in concert with Hammond's idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative, lens-model-like process, but sends analysis out for more information when there is no clear winner.
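
As a rough sketch in that spirit (not Glöckner's published model; the support values, decay, and threshold below are invented for illustration): let two option activations settle iteratively, choose intuitively if one clearly dominates, and otherwise call for more information.

```python
def settle(net_input_a, net_input_b, steps=200, decay=0.1):
    """Iteratively let two option activations settle given constant net cue support."""
    a = b = 0.0
    for _ in range(steps):
        a = (1 - decay) * a + net_input_a
        b = (1 - decay) * b + net_input_b
    return a, b

a, b = settle(net_input_a=0.08, net_input_b=0.05)
if abs(a - b) < 0.1:   # no clear winner after the network settles
    print("No clear winner: send analysis out for more information.")
else:
    print(f"Intuitive choice: option {'A' if a > b else 'B'} (activations {a:.2f} vs {b:.2f})")
```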

Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book entitled Beyond Rationality: The Search for Wisdom in a Troubled Time, published at age 92 in 2007. This is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.

In 1996, Hammond published a book entitled Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, which attempted to understand the policy formation process. The book emphasized two key themes. The first theme was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether, in a policy context, it was more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged by coherence, while intuition is best judged by accuracy. To achieve balance (quasirationality and eventually wisdom), the key lies in how we think about error, which was the second theme. Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes were quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, or uncertainty that cannot be reduced at the moment action is required. Thus, whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applicants or claims for welfare benefits, two potential mistakes may arise: false positives (Type I errors) and false negatives (Type II errors). Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently unavoidable injustice, in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error: Signal Detection Theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (See post Technology and the Ecological Hybrid.).
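
To make the duality of error concrete (an illustration of mine, not an example from Hammond): with a noisy score and a dichotomous cut-off, moving the threshold only trades false positives for false negatives; irreducible uncertainty means neither can be driven to zero without inflating the other. The base rate, noise level, and cut-offs below are invented.

```python
import random

random.seed(1)

# Simulated applicants: a true qualified/unqualified status plus a noisy score,
# which is all the decision maker gets to see (the irreducible uncertainty).
applicants = []
for _ in range(10_000):
    qualified = random.random() < 0.5
    score = (1.0 if qualified else 0.0) + random.gauss(0, 1.0)
    applicants.append((qualified, score))

def dual_errors(cutoff):
    false_pos = sum(1 for q, s in applicants if not q and s >= cutoff)  # accepted but unqualified
    false_neg = sum(1 for q, s in applicants if q and s < cutoff)       # rejected but qualified
    return false_pos, false_neg

for cutoff in (0.0, 0.5, 1.0):
    fp, fn = dual_errors(cutoff)
    print(f"cut-off {cutoff:.1f}: false positives {fp:4d}, false negatives {fn:4d}")
# Raising the cut-off lowers false positives but raises false negatives, and vice versa.
```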