Category Archives: Systems

Confidence, Part III

In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision making process, thus lending support to models of the mind as a device that computes with probabilistic estimates and probability distributions.


The Embodied Mind

One such explanation is that of predictive processing and the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave together this concept. Our minds are blends of top-down and bottom-up processing, where error messages and the effort to fix those errors make it possible for us to engage the world. According to the embodied mind model, our minds do not reside only in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara is a robot without any brain imitating human walking nearly perfectly (video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. Estimating the certainty of the prediction error signals of our own mental states and processes is, as Clark notes, “clearly a delicate and tricky business. For it is the prediction error signal that…gets to ‘carry the news’.”
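
Here is a toy sketch of what precision weighting means, my own illustration rather than anything from Clark, Hohwy, or Friston: a prediction is revised by the prediction error, but only in proportion to how much confidence we place in the error signal relative to the prior prediction. The function name and numbers are invented for the example.

```python
# Toy illustration of precision-weighted prediction error (my own sketch,
# not code from Clark, Hohwy, or Friston). A prediction is revised by the
# prediction error, scaled by how much the error signal is trusted.

def update_prediction(prior_mean, prior_precision, observation, sensory_precision):
    """Revise a prediction given an observation.

    precision = 1 / variance; higher precision = more confidence.
    The gain (weight on the prediction error) is the sensory precision
    relative to the total precision, so confident error signals move
    the prediction a lot and noisy ones barely move it.
    """
    prediction_error = observation - prior_mean
    gain = sensory_precision / (prior_precision + sensory_precision)
    posterior_mean = prior_mean + gain * prediction_error
    posterior_precision = prior_precision + sensory_precision
    return posterior_mean, posterior_precision

# Confident error signal: the prediction shifts substantially.
print(update_prediction(prior_mean=0.0, prior_precision=1.0,
                        observation=1.0, sensory_precision=4.0))   # ~0.8
# Unreliable error signal: the same error barely "carries the news".
print(update_prediction(prior_mean=0.0, prior_precision=1.0,
                        observation=1.0, sensory_precision=0.25))  # ~0.2
```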

Continue reading

Confidence, Part II

Now the confidence heuristic is not the only thing Trump takes advantage of, but we will leave those for another time. I will also avoid the question of whether or not Trump is actually confident. So what is the relationship between confidence and decision making? Daniel Kahneman, in Thinking, Fast and Slow (page 13), describes:

a puzzling limitation of our mind:  our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.

Continue reading

Celiac Disease and the Prediction Machine

I discovered that I was a celiac a few months ago and accordingly I am on a gluten free diet.  Compared to most conditions discovered in one’s late sixties, celiac disease seems almost inconsequential. However, it fits into the idea of prediction error minimization.  In effect, the environment has changed and I need  to change my predictions. Bread and beer are now bad. My automatic, intuitive prediction machine has not been getting it right. It is disorienting. I can no longer “See food, eat food.” I can change the environment at home, but in the wider world I need to be aware. My brain needs to dedicate perpetual, and at least for now, conscious effort to this cause. It is almost as if I became instantly even dumber.  It makes me more self absorbed in social settings that involve food. Not known for my social skills, I have been a good listener, but now not so much.  On my Dad’s 94th birthday, I ate a big piece of German chocolate cake, enjoyed it thoroughly, and then remembered that it was not allowed. In my particular case, I do not get sick or nauseated when I make such a mistake so my commitment is always under threat. This demands an even larger share of my brain to be compliant. My main incentive to comply is those photos of my scalloped small intestine. I note that I was diagnosed after years of trying to figure out my low ferritin levels. (It will be extremely disappointing if I find that my ferritin is still low.) Continue reading

Intuition and Creativity

This post is derived from a review article: “The Role of Intuition in the Generation and Evaluation Stages of Creativity,” authored by Judit Pétervári, Magda Osman and Joydeep Bhattacharya, which appeared in Frontiers in Psychology, September 2016, doi: 10.3389/fpsyg.2016.01420. It struck me that in all this blog’s posts, creativity had almost never come up. Then I threw it together with Edward O. Wilson’s 2017 book, The Origins of Creativity, Liveright Publishing, New York. (See posts Evolution for Everyone and Cultural Evolution for more from Edward O. Wilson. He is the ant guy. He is interesting, understandable, and forthright.)

Creativity is notoriously difficult to capture in a single definition. Pétervári et al. suggest that creativity is a process broadly similar to problem solving: in both, information is coordinated toward reaching a specific goal, and the information is organized in a novel, unexpected way. Problems that require creative solutions are ill-defined, primarily because there are multiple hypothetical solutions that would satisfy the goals. Wilson sees creativity as going beyond typical problem solving.

Continue reading

Kenneth R Hammond

This post is based on selections from: “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I do how he fit into the discipline of judgment and decision making (the links take you to past posts). I merely cherry-pick my opinion of his most significant contributions.

As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. Hammond introduced the lens model equation to the study of judgment processes, and used this to measure the utility of different forms of feedback in multiple-cue probability learning.
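
For readers who have not seen it, the lens model equation in its standard textbook form (this is not a quotation from Dhami and Mumpower) decomposes achievement, the correlation r_a between a person’s judgments and the criterion, into components of the task and of the judge:

\[
r_a = G \, R_e \, R_s + C \sqrt{1 - R_e^{2}} \, \sqrt{1 - R_s^{2}}
\]

Here R_e is how predictable the environment (the criterion) is from the cues, R_s is how consistently the judge uses the cues, G is the correlation between the linear model of the environment and the linear model of the judge (the judge’s knowledge), and C is the correlation between the two sets of residuals. Because achievement is broken into these pieces, the equation is a natural tool for assessing what different forms of feedback actually improve in multiple-cue probability learning.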

Hammond proposed cognitive continuum theory, which states that quasirationality is an important middle ground between intuition and analysis and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, and associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, and rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding.

For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end. Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, format, and order of presentation, as well as the decision maker’s familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature, and degree of the task properties present.

Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond’s work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrative fashion, in concert with Hammond’s idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative lens-model-type process, but sends analysis out for more information when there is no clear winner.
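
A rough sketch may make the iterative settling idea concrete. This is a simplified illustration of the general parallel constraint satisfaction scheme, not Glöckner’s published model or parameter values: cues push on the options they favor in proportion to their validity, options inhibit one another, activation is updated until it stabilizes, and if no option clearly dominates, that is the signal to send analysis out for more information.

```python
# Simplified sketch of a parallel-constraint-satisfaction style network
# (an illustration of the general idea, not Glöckner's published model).

def pcs_settle(cue_validities, cue_option_links, inhibition=0.2, decay=0.1, steps=200):
    """cue_validities: list of validities in [0, 1].
    cue_option_links: matrix [cue][option] of +1 / -1 / 0 (cue favors/opposes option).
    Returns the settled activation of each option."""
    n_options = len(cue_option_links[0])
    activation = [0.0] * n_options
    for _ in range(steps):
        new_activation = []
        for o in range(n_options):
            # evidence from cues, weighted by how much each cue is trusted
            cue_input = sum(v * links[o]
                            for v, links in zip(cue_validities, cue_option_links))
            # competing options inhibit each other
            rival_input = -inhibition * sum(a for i, a in enumerate(activation) if i != o)
            # leaky integration keeps activations bounded and lets them settle
            a = (1 - decay) * activation[o] + decay * (cue_input + rival_input)
            new_activation.append(max(-1.0, min(1.0, a)))
        activation = new_activation
    return activation

# Two options, three cues: one strong cue favors A, two weaker cues favor B.
acts = pcs_settle(cue_validities=[0.9, 0.6, 0.55],
                  cue_option_links=[[+1, -1],
                                    [-1, +1],
                                    [-1, +1]])
print(acts)  # if the settled activations end up close, search for more cues
```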

Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book entitled Beyond Rationality: The Search for Wisdom in a Troubled Time, published at age 92 in 2007. This is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.

In 1996, Hammond published a book entitled Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, which attempted to understand the policy formation process. The book emphasized two key themes. The first theme was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether in a policy context it is more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged by coherence, while intuition is best judged by accuracy. To achieve balance (quasirationality, and eventually wisdom), the key lies in how we think about error, which was the second theme.

Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes are quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, or uncertainty that cannot be reduced at the moment action is required. Thus, there are two potential mistakes, false positives (Type I errors) and false negatives (Type II errors), whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applications, claims for welfare benefits, and so on. Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently unavoidable injustice, in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error: Signal Detection Theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (see post Technology and the Ecological Hybrid).
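
A concrete, hypothetical illustration of the duality of error (my own example, not Hammond’s): imagine an admit/reject decision made by applying a cutoff to a noisy score. Wherever the cutoff is placed, some unqualified applicants are admitted (false positives) and some qualified ones are rejected (false negatives); under irreducible uncertainty, moving the cutoff only trades one kind of mistake, and one kind of injustice, for the other.

```python
# Hypothetical illustration of Hammond's "duality of error" under irreducible
# uncertainty: an admit/reject cutoff applied to a noisy predictor can reduce
# false positives only by producing more false negatives (and vice versa).
import random

random.seed(1)

# Simulate applicants: true ability plus a noisy test score (the irreducible part).
applicants = []
for _ in range(10_000):
    ability = random.gauss(0, 1)
    score = ability + random.gauss(0, 1)           # noisy cue of ability
    applicants.append((ability > 0, score))        # (truly qualified?, observed score)

def error_rates(cutoff):
    fp = sum(1 for qualified, s in applicants if not qualified and s >= cutoff)
    fn = sum(1 for qualified, s in applicants if qualified and s < cutoff)
    return fp, fn

for cutoff in (-0.5, 0.0, 0.5, 1.0):
    fp, fn = error_rates(cutoff)
    print(f"cutoff {cutoff:+.1f}: false positives {fp:5d}, false negatives {fn:5d}")
# A lenient cutoff admits many unqualified applicants; a strict one rejects
# many qualified ones. No cutoff eliminates both kinds of error.
```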


Curiosity


Although I have had much respect for Dan Kahan’s work, I have had a little trouble with the Identity-Protective Cognition Thesis (ICT). The portion in bold in the quote below, from “Motivated Numeracy and Enlightened Self-Government,” has never rung true.

On matters like climate change, nuclear waste disposal, the financing of economic stimulus programs, and the like, an ordinary citizen pays no price for forming a perception of fact that is contrary to the best available empirical evidence: That individual’s personal beliefs and related actions—as consumer, voter, or public discussant—are too inconsequential to affect the level of risk he or anyone else faces or the outcome of any public policy debate. However, if he gets the “wrong answer” in relation to the one that is expected of members of his affinity group, the impact could be devastating: the loss of trust among peers, stigmatization within his community, and even the loss of economic opportunities.

If that were true, why would Thanksgiving be so painful? I do not even know what my friends think about these things. Now, at some point, issues like climate change become so politically tainted that you may avoid talking about them so as not to antagonize your friends, but that does not change my view. But now Kahan has a better explanation.

Continue reading

Nonlinear

This post is based on a paper: “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73. It is not a new topic, but adds to the evidence of our nonlinear shortcomings.

In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.
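
A toy example of the kind of difficulty Soyer and Hogarth study (my own illustration, not one of their experimental tasks): even with plentiful, accurate feedback, a judge who aggregates cues with an intuitive linear rule keeps missing in an environment where the criterion grows nonlinearly, and misses worst when extrapolating beyond the feedback received.

```python
# Toy illustration (not one of Soyer & Hogarth's tasks): in a nonlinear
# environment, the best linear judgment rule fitted to honest feedback
# still misestimates badly at the extremes, where decisions often matter most.
xs = list(range(1, 11))
ys = [x ** 2 for x in xs]          # a nonlinear environment: criterion grows quadratically

# Fit the best straight line y = a + b*x by least squares (the "linear intuition").
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

for x in (1, 5, 10, 15):           # x = 15 extrapolates beyond the feedback received
    linear_guess = a + b * x
    print(f"x={x:2d}  true={x**2:4d}  linear judgment={linear_guess:7.1f}")
```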

Continue reading

Fuzzy-Trace Theory Explains Time Pressure Results

This post is based on a paper:  “Intuition and analytic processes in probabilistic reasoning: The role of time pressure,” authored by Sarah Furlan, Franca Agnoli, and Valerie F. Reyna. Valerie Reyna is, of course, the primary creator of fuzzy-trace theory. Reyna’s papers tend to do a good job of summing up the state of the decision making art and fitting in her ideas.

The authors note that although there are many points of disagreement, theorists generally agree that there are heuristic processes (Type 1) that are fast, automatic, unconscious, and require low effort. Many adult judgment biases are considered a consequence of these fast heuristic responses, also called default responses, because they are the first responses that come to mind. Type 1 processes are a central feature of intuitive thinking, requiring little cognitive effort or control. In contrast, analytic (Type 2) processes are considered slow, conscious, deliberate, and effortful, and they place demands on central working memory resources. Furlan, Agnoli, and Reyna assert that Type 2 processes are thought to be related to individual differences in cognitive capacity, while Type 1 processes are thought to be independent of cognitive ability, a position challenged by the research presented in their paper. I was surprised by the assumption, built into typical dual-process theories, that intuitive abilities are unrelated to overall intelligence and cognitive abilities.

Continue reading

David Brooks: Innocent of Cognitive Continuum

David Brooks seems to be a fascination of mine. The New York Times columnist surprises me both in positive and negative ways. I only mention it when the surprise is negative. Below is an excerpt from his November 25, 2016, column.

And this is my problem with the cognitive sciences and the advice world generally. It’s built on the premise that we are chess masters who make decisions, for good or ill. But when it comes to the really major things we mostly follow our noses. What seems interesting, beautiful, curious and addicting?

Have you ever known anybody to turn away from anything they found compulsively engaging?

We don’t decide about life; we’re captured by life. In the major spheres, decision-making, when it happens at all, is downstream from curiosity and engagement. If we really want to understand and shape behavior, maybe we should look less at decision-making and more at curiosity. Why are you interested in the things you are interested in? Why are some people zealously seized, manically attentive and compulsively engaged?

Now that we know a bit more about decision-making, maybe the next frontier is desire. Maybe the next Kahneman and Tversky will help us understand what explains, fires and orders our loves.

I can imagine his frustration with the advice world and maybe with Kahneman and Tversky (see post Prospect Theory), but it appears that Brooks is only looking at the advice world. Brooks would benefit from looking at the work of Ken Hammond. The post Cognitive Continuum examines some of Hammond’s 1980 work. Hammond places those chess masters to whom Brooks refers at one extreme of the cognitive continuum. The post Intuition in J-DM looks at the work of Tilmann Betsch and Andreas Glöckner on what is called Parallel Constraint Satisfaction theory.

Betsch and Glöckner believe that information integration and output formation (choice, preference) are intuitive. Analysis involves directed search (looking for valid cues or asking an expert for advice), making sense of information, anticipating future events, etc. Thus, they see a judgment as a collaboration of intuition and analysis. The depth of analysis varies, but intuition is always working, so preferences are formed even without intention. Limiting processing time and capacity constrains only input. Thus, once information is in the system, intuition will use that information irrespective of amount and capacity.

Curiosity might be considered the degree of dissonance we encounter in our automatic decision making that in effect tells us to analyze: to find more information and examine it. We do mostly follow our noses, because it is adaptive. But it is also adaptive to be able to recognize change that is persistent and must be responded to. A parameter of the parallel constraint satisfaction model is the individual's sensitivity to differences between cue validities. This implies that individuals respond differently to changing cue validities. Some change quickly when they perceive differences, and others change at a glacial pace.

The post Rationality Defined Again: RUN & JUMP looks at the work of Tilmann Betsch and Carsten Held. Brooks in his opinion piece seems to be suggesting that analytic processing is pretty worthless. Betsch and Held have seen this before. They note that research on non-analytic processing has led some authors to conclude that intuition is superior to analysis, or at least to promote it as such, the obvious example being Malcolm Gladwell in Blink. Such a notion, however, neglects the important role of decision context. The advantages and disadvantages of the different types of thought depend on the nature of the task. Moreover, the plea for a general superiority of intuition neglects the fact that analysis is capable of things that intuition is not. Consider, for example, the case of routine maintenance and deviation decisions. Routine decisions will lead to good results if prior experiences are representative of the task at hand. In a changing world, however, routines can become obsolete.

In the absence of analytic thought, adapting to changing contexts requires slow, repetitive learning. Only upon encountering repeated failure will the individual’s behavioral tendencies change. The virtue of deliberate analysis, Brooks’ chess mastering, lies in its power to adapt quickly to new situations without requiring slow reinforcement learning. Whereas intuition is fast and holistic thanks to parallel processing, it is a slave to the pre-formed structure of knowledge as well as to the representation of the decision problem. The relations among goals, situations, options, and outcomes that result from prior knowledge provide the structural constraints under which intuitive processes operate. They can work very efficiently but, nevertheless, cannot change those constraints. The potential of analytic thought dwells in its power to change the structure of the representation of a decision problem.

I believe that Brooks realizes that analytic thought is one thing that distinguishes us from other creatures, even though it does not seem to inform much of our decision making. The post Embodied (Grounded) Prediction (Cognition) might also open a window for Brooks.

Individual Tendencies in the Stroop Test Predict Use of Model-Based Learning

This post is based on the paper: “Cognitive Control Predicts Use of Model-Based Reinforcement-Learning,” authored by A. Ross Otto, Anya Skatova, Seth Madlon-Kay, and Nathaniel D. Daw, Journal of Cognitive Neuroscience, February 2015, 27(2): 319–333, doi: 10.1162/jocn_a_00709. The paper is difficult to understand, but covers some interesting subject matter. Andy Clark alerted me to these authors in his book Surfing Uncertainty.

This paper makes the obvious assertion that dual-process theories of decision making abound, and that a recurring theme is that the systems rely differentially upon automatic or habitual versus deliberative or goal-directed modes of processing. According to Otto et al., a popular refinement of this idea proposes that the two modes of choice arise from distinct strategies for learning the values of different actions, which operate in parallel. In this theory, habitual choices are produced by model-free reinforcement learning (RL), which learns which actions tend to be followed by rewards. In contrast, goal-directed choice is formalized by model-based RL, which reasons prospectively about the value of candidate actions using knowledge (a learned internal “model”) about the environment’s structure and the organism’s current goals. Whereas model-free choice requires merely retrieving the (directly learned) values of previous actions, model-based valuation requires a sort of mental simulation, carried out at decision time, of the likely consequences of candidate actions, using the learned internal model. Under this framework, at any given moment both the model-based and model-free systems can provide action values to guide choices, inviting a critical question: how does the brain determine which system’s preferences ultimately control behavior?
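
A minimal sketch of the distinction may help. This is my own simplified code, not the two-step task or the models used by Otto et al.: the model-free learner caches a value for each action and nudges it toward observed rewards, while the model-based learner combines a learned transition model with current goal values at decision time, so its preferences can flip immediately when the goals change.

```python
# Simplified sketch of model-free vs. model-based value computation
# (my own illustration, not the task or models from Otto et al.).

# Model-free: cache a value for each action, nudged toward observed rewards.
def model_free_update(q_values, action, reward, alpha=0.1):
    q_values[action] += alpha * (reward - q_values[action])   # TD-style update
    return q_values

# Model-based: at decision time, combine a learned transition model with
# current outcome values ("mental simulation" of likely consequences).
def model_based_values(transition_probs, outcome_values):
    # transition_probs[action][outcome] = learned P(outcome | action)
    return {action: sum(p * outcome_values[outcome]
                        for outcome, p in outcomes.items())
            for action, outcomes in transition_probs.items()}

# Cached habit values, learned slowly from past rewards:
q = {"left": 0.6, "right": 0.4}
q = model_free_update(q, "left", reward=0.0)   # habits adjust only gradually

# The goals change: the right-hand outcome is now the valuable one.
transitions = {"left":  {"A": 0.8, "B": 0.2},
               "right": {"A": 0.2, "B": 0.8}}
goals = {"A": 0.0, "B": 1.0}
print(model_based_values(transitions, goals))  # model-based values flip immediately
print(q)                                       # model-free values lag behind
```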

Continue reading