Confidence is defined as our degree of belief that a certain thought or action is correct. There is within-person confidence, your belief in your own decisions or perceptions, and between-person confidence, where you defer your decision making to someone else.
Why am I thinking of confidence? An article by Cass Sunstein explains it well: "Donald Trump is Amazing. Here's the Science to Prove It," Bloomberg Opinion, Politics & Policy, October 18, 2018.
I discovered that I was a celiac a few months ago, and accordingly I am on a gluten-free diet. Compared to most conditions discovered in one's late sixties, celiac disease seems almost inconsequential. However, it fits into the idea of prediction error minimization. In effect, the environment has changed and I need to change my predictions. Bread and beer are now bad. My automatic, intuitive prediction machine has not been getting it right. It is disorienting. I can no longer "See food, eat food." I can change the environment at home, but in the wider world I need to be aware. My brain needs to dedicate perpetual, and at least for now conscious, effort to this cause. It is almost as if I instantly became even dumber. It makes me more self-absorbed in social settings that involve food. Not known for my social skills, I have at least been a good listener, but now not so much. On my Dad's 94th birthday, I ate a big piece of German chocolate cake, enjoyed it thoroughly, and only then remembered that it was not allowed. In my particular case, I do not get sick or nauseated when I make such a mistake, so my commitment is always under threat. Staying compliant demands an even larger share of my brain. My main incentive to comply is those photos of my scalloped small intestine. I note that I was diagnosed after years of trying to figure out my low ferritin levels. (It will be extremely disappointing if I find that my ferritin is still low.)
This post is based on a paper by Andy Clark, author of Surfing Uncertainty (see the post Paper Predictive Processing for a fuller treatment): "A nice surprise? Predictive processing and the active pursuit of novelty," which appeared in Phenomenology and the Cognitive Sciences, pp. 1–14, DOI: 10.1007/s11097-017-9525-z. For me this is a chance to see how Andy Clark has polished his arguments since his book. It also strikes me as connected to my recent posts on Curiosity and Creativity.
Clark and Friston (see the post The Prediction Machine) depict human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulation. But we know that we are attracted to the unexpected. We humans often actively seek out surprising events, deliberately pursuing novel and exciting streams of sensory stimulation. So how does that square with the idea of minimizing prediction error?
This post is derived from a review article, "The Role of Intuition in the Generation and Evaluation Stages of Creativity," authored by Judit Pétervári, Magda Osman, and Joydeep Bhattacharya, which appeared in Frontiers in Psychology, September 2016, doi: 10.3389/fpsyg.2016.01420. It struck me that in all this blog's posts, creativity had almost never come up. Then I threw it together with Edward O. Wilson's 2017 book The Origins of Creativity, Liveright Publishing, New York. (See posts Evolution for Everyone and Cultural Evolution for more from Edward O. Wilson. He is the ant guy. He is interesting, understandable, and forthright.)
Creativity is notoriously difficult to capture in a single definition. Pétervári et al. suggest that creativity is a process broadly similar to problem solving: in both, information is coordinated toward reaching a specific goal, and the information is organized in a novel, unexpected way. Problems that require creative solutions are ill-defined, primarily because there are multiple hypothetical solutions that would satisfy the goals. Wilson sees creativity as going beyond typical problem solving.
This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by current events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of people with whom you are arguing is not a good idea. Dan Kahan (see my posts Cultural Cognition or Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond actually had accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on "Kenneth R. Hammond's contributions to the study of judgment and decision making," written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
This post is based on selections from "Kenneth R. Hammond's contributions to the study of judgment and decision making," written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors, since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I do how he fit into the discipline of judgment and decision making (the links take you to past posts). I merely cherry-pick what I see as his most significant contributions.
As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. Hammond introduced the lens model equation to the study of judgment processes, and used this to measure the utility of different forms of feedback in multiple-cue probability learning.
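The lens model equation itself is compact enough to show. Below is a minimal sketch in Python with made-up data (the variable names and numbers are mine, purely illustrative, not from Hammond's studies): it fits one linear model of the environment and one of the judge over the same cues, then recovers achievement (the judgment-criterion correlation) from knowledge (G), the two linear predictabilities (Re, Rs), and the unmodeled component (C).

```python
import numpy as np

# Hypothetical data for illustration: 200 cases, 3 cues. "criterion" is
# the true outcome; "judgment" is what the judge said.
rng = np.random.default_rng(0)
n = 200
cues = rng.normal(size=(n, 3))
criterion = cues @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=n)
judgment = cues @ np.array([0.5, 0.4, 0.1]) + rng.normal(scale=0.7, size=n)

X = np.column_stack([np.ones(n), cues])  # cue matrix with intercept

def fitted(y):
    """Least-squares linear predictions of y from the cues."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ coef

pred_env, pred_judge = fitted(criterion), fitted(judgment)
r = lambda a, b: np.corrcoef(a, b)[0, 1]

Re = r(criterion, pred_env)    # environmental predictability
Rs = r(judgment, pred_judge)   # judge's consistency (cognitive control)
G = r(pred_env, pred_judge)    # knowledge: agreement of the two models
C = r(criterion - pred_env, judgment - pred_judge)  # unmodeled agreement

# Tucker's lens model equation: achievement decomposes exactly when the
# two models are least-squares fits over the same cues.
achievement = G * Re * Rs + C * np.sqrt(1 - Re**2) * np.sqrt(1 - Rs**2)
```

The identity makes the point of the framework visible: a judge's accuracy is bounded by how predictable the environment is (Re), how consistently the judge executes his own policy (Rs), and how well that policy matches the environment (G).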
Hammond proposed cognitive continuum theory, which states that quasirationality is an important middle ground between intuition and analysis, and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, and associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process, with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, and rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding. For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end. Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, format, and order of presentation, as well as the decision maker's familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature, and degree of task properties present.
Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond's work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrative fashion, in concert with Hammond's idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative, lens-model-type process, but sends analysis out for more information when there is no clear winner.
Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book entitled Beyond Rationality: The Search for Wisdom in a Troubled Time, published at age 92 in 2007. This is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.
In 1996, Hammond published a book entitled Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, which attempted to understand the policy formation process. The book emphasized two key themes. The first theme was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether, in a policy context, it is more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged by coherence, while intuition is best judged by accuracy. The key to achieving balance (quasirationality, and eventually wisdom) lies in how we think about error, which was the second theme. Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes were quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, that is, uncertainty that cannot be reduced at the moment action is required. Thus, whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applicants or claims for welfare benefits, there are two potential mistakes: false positives (Type I errors) and false negatives (Type II errors). Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently for unavoidable injustice, in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error.
These were Signal Detection Theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (see post Technology and the Ecological Hybrid).
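The duality of error is easy to make concrete with a small Taylor-Russell-style simulation (the validity, cutoff, and base rate below are invented for illustration): a selection test correlates imperfectly with true success, a cutoff turns scores into accept/reject decisions, and both error types necessarily appear. Moving the cutoff only trades one error for the other, which is exactly Hammond's unavoidable injustice.

```python
import numpy as np

# Illustrative simulation; the validity, cutoff, and base rate are
# made-up numbers, not taken from Hammond or Taylor and Russell.
rng = np.random.default_rng(1)
n = 100_000
validity = 0.5  # imperfect test-criterion correlation: irreducible uncertainty
test = rng.normal(size=n)
success = validity * test + np.sqrt(1 - validity**2) * rng.normal(size=n)

def error_rates(cutoff, base_rate=0.0):
    """Accept if test > cutoff; 'successful' if criterion > base_rate."""
    accepted = test > cutoff
    successful = success > base_rate
    fp = np.mean(accepted & ~successful)   # admitted, then failed (Type I)
    fn = np.mean(~accepted & successful)   # rejected, would have succeeded (Type II)
    return fp, fn

fp_low, fn_low = error_rates(cutoff=0.0)
fp_high, fn_high = error_rates(cutoff=1.0)
# A stricter cutoff lowers false positives but raises false negatives;
# while validity < 1, neither error can be driven to zero.
```

This is the trade-off Signal Detection Theory formalizes: the decision criterion sets where you sit on the curve, but only reducing uncertainty (raising validity) shrinks both errors at once.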
Why do almost all people tell the truth in ordinary everyday life? [...] The reason is, firstly, because it is easier; for lying demands invention, dissimulation, and a good memory. (Friedrich Nietzsche, Human, All Too Human: A Book for Free Spirits, 1878, page 54)
“I just fired the head of the F.B.I. He was crazy, a real nut job,” Mr. Trump said, according to the document, which was read to The New York Times by an American official. “I faced great pressure because of Russia. That’s taken off.”
Mr. Trump added, “I’m not under investigation.” (Pres. Donald Trump, discussion with Russian diplomats, May 10, 2017).
This post is based on the paper "'I can see it in your eyes': Biased Processing and Increased Arousal in Dishonest Responses," authored by Guy Hochman, Andreas Glöckner, Susan Fiedler, and Shahar Ayal, which appeared in the Journal of Behavioral Decision Making, December 2015.
This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381: "How generalizable is good judgment? A multi-task, multi-benchmark study," authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on some of his past work, not because he was actively involved. Nevertheless, this paper provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! I was looking for an image to put on this post and found the one above. Mellers and Tetlock look married, and they are. I imagine that she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that fact. (http://www.nytimes.com/2013/03/22/opinion/brooks-forecasting-fox.html)
The authors note that Kenneth Hammond's correspondence and coherence (Beyond Rationality) are the gold standards by which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond often decried psychological experiments as poorly designed measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they did better when the forecasting environment provided regular, clear feedback and repeated opportunities to learn. According to the authors, Weiss and Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments when cases are alike, and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I would suggest that Ken Hammond would likely have had little use for these. Coherence is logical consistency, and correspondence is empirical discrimination.)
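To show roughly what consistency and discrimination look like in data, here is a sketch of a Weiss-Shanteau-style index. The ratings and the exact formula are my own simplification for illustration, not taken from their paper: the idea is simply that an expert's judgments should vary a lot across different cases (discrimination) and very little across repeats of the same case (consistency).

```python
import numpy as np

# Hypothetical ratings: four cases, each judged twice by the same expert.
judgments = np.array([
    [7.0, 7.5],   # case 1, two ratings of the same case
    [2.0, 2.5],   # case 2
    [9.0, 8.5],   # case 3
    [4.0, 4.5],   # case 4
])
case_means = judgments.mean(axis=1)

# Discrimination: spread of judgments across different cases.
discrimination = case_means.var(ddof=1)
# Inconsistency: noise across repeats of the same case.
inconsistency = ((judgments - case_means[:, None]) ** 2).mean()

# A simple performance ratio: higher means the judge separates cases
# far more than he contradicts himself.
cws = discrimination / inconsistency
```

For this made-up expert, between-case spread dwarfs within-case noise, so the ratio is large; a judge who rated at random would have the two quantities roughly comparable.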
Although I have much respect for Dan Kahan's work, I have had a little trouble with the Identity-protective Cognition Thesis (ICT). The portion in bold in the quote below, from "Motivated Numeracy and Enlightened Self-Government," has never rung true.
On matters like climate change, nuclear waste disposal, the financing of economic stimulus programs, and the like, an ordinary citizen pays no price for forming a perception of fact that is contrary to the best available empirical evidence: That individual's personal beliefs and related actions—as consumer, voter, or public discussant—are too inconsequential to affect the level of risk he or anyone else faces or the outcome of any public policy debate. However, if he gets the "wrong answer" in relation to the one that is expected of members of his affinity group, the impact could be devastating: the loss of trust among peers, stigmatization within his community, and even the loss of economic opportunities.
Why should Thanksgiving be so painful if this were true? I do not even know what my friends think about these things. At some point, issues like climate change become so politically tainted that you may avoid talking about them so as not to antagonize your friends, but that does not change my view. Now, though, Kahan has a better explanation.
This post is based on a comment paper, "Honest People Tend to Use Less—Not More—Profanity: Comment on Feldman et al.'s (2017) Study," written by R. E. de Vries, B. E. Hilbig, Ingo Zettler, P. D. Dunlop, D. Holtrop, K. Lee, and M. C. Ashton, which appeared in Social Psychological and Personality Science, pp. 1–5. Why would honesty suddenly be important with respect to decision making when I have largely ignored it in the past? You will have to figure that out for yourself. It reminded me that most of our decision-making machinery is based on relative differences. We compare, but we are not so good at absolutes. Thus, when you get a relentless, fearless liar, the relative differences are widened, and this is likely to stretch the range of what seems to be a reasonable decision.