“Human achievement is lower when there are nonlinearities in the ecology.” (What has Brunswik’s Lens Model Taught?).
This post is inspired by the book Rebooting AI: Building Artificial Intelligence We Can Trust, written by Gary Marcus and Ernest Davis, New York, 2019. Gary Marcus (see post Kluge) is a well-known author and artificial intelligence entrepreneur, and Ernest Davis is a professor of computer science at New York University. To oversimplify, the authors emphasize that the successes of AI are narrow and tend to be greedy, opaque, and brittle. They provide a history of AI seemingly on the verge of being ready for prime time, decade after decade after decade. Self-driving cars are almost there, but they are not. Human frailties in driving result in a death about every 100,000,000 miles driven, but Marcus and Davis indicate that self-driving cars require human intervention about every 10,000 miles, which works out to 10,000 interventions over the distance in which human drivers produce a single fatality. It may be a very long time before we are ready to sign off on self-driving cars, because the progress thus far has been the easy part.
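The arithmetic behind that comparison is simple enough to check; the mileage figures below are the rough, order-of-magnitude numbers cited by Marcus and Davis, not precise statistics:

```python
# Figures as cited by Marcus and Davis (rough, order-of-magnitude numbers).
miles_per_human_fatality = 100_000_000   # about one death per 100M miles of human driving
miles_per_intervention = 10_000          # about one human intervention per 10K self-driven miles

# Interventions a self-driving car would need over the distance in which
# human driving produces roughly one fatality.
interventions = miles_per_human_fatality // miles_per_intervention
print(interventions)  # 10000
```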
Taming Uncertainty by Ralph Hertwig (See posts Dialectical Bootstrapping and Harnessing the Inner Crowd.), Timothy J. Pleskac (See post Risk Reward Heuristic.), Thorsten Pachur (See post Emotion and Risky Choice.), and the Center for Adaptive Rationality, MIT Press, 2019, is a new compendium that I found accidentally in a public library. There is plenty of interesting reading in the book. It takes the adaptive toolbox approach as opposed to the Swiss Army knife. The book gets back-cover raves from Cass Sunstein (See posts Going to Extremes and Confidence, Part 1.), Nick Chater, and Gerd Gigerenzer (See post Gigerenzer–Risk Savvy, and others.). I like the pieces, but not the whole.
This post is based on selections from “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors, since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I how he fit into the discipline of judgment and decision making (The links take you to past posts.). I merely cherry-pick what I consider his most significant contributions.
As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. Hammond introduced the lens model equation to the study of judgment processes, and used this to measure the utility of different forms of feedback in multiple-cue probability learning.
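For readers who have not seen it, the lens model equation decomposes achievement (the correlation r_a between judgments and the criterion) into matched linear models of the judge and the environment: r_a = G·R_s·R_e + C·√(1−R_s²)·√(1−R_e²), where R_e is environmental predictability, R_s is the judge's consistency, G is the match between the two linear models, and C is the correlation of their residuals. A minimal sketch with synthetic data (the cue weights and noise levels are invented for illustration) shows the identity holding exactly for OLS-fitted linear models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic lens-model data: hypothetical cues, a criterion, and a judge.
n, k = 500, 3
X = rng.normal(size=(n, k))                                  # cue values
criterion = X @ np.array([0.6, 0.3, 0.1]) + rng.normal(scale=0.5, size=n)
judgment  = X @ np.array([0.5, 0.4, 0.0]) + rng.normal(scale=0.7, size=n)

def fit(y, X):
    """OLS fit with intercept; return predictions and residuals."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    pred = A @ beta
    return pred, y - pred

pred_e, res_e = fit(criterion, X)    # linear model of the environment
pred_s, res_s = fit(judgment, X)     # linear model of the judge

r = lambda a, b: np.corrcoef(a, b)[0, 1]

r_a = r(judgment, criterion)         # achievement
R_e = r(criterion, pred_e)           # environmental predictability
R_s = r(judgment, pred_s)            # judge's consistency (cognitive control)
G   = r(pred_s, pred_e)              # knowledge: match of the two linear models
C   = r(res_s, res_e)                # agreement in the unmodeled (nonlinear) parts

lhs = r_a
rhs = G * R_e * R_s + C * np.sqrt(1 - R_e**2) * np.sqrt(1 - R_s**2)
# lhs and rhs agree to numerical precision: the decomposition is exact.
```

This also illustrates the epigraph above: when the ecology is nonlinear, R_e caps what any linear judge can achieve, so achievement falls.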
Hammond proposed cognitive continuum theory which states that quasirationality is an important middle-ground between intuition and analysis and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, and associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process, with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, and rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding. For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end. Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, format, and order of presentation, as well as the decision maker’s familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature and degree of task properties present.
Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond’s work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrative fashion, in concert with Hammond’s idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative lens-model-type process, but sends analysis out for more information when there is no clear winner.
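A toy network can illustrate how a parallel constraint satisfaction process settles into a coherent interpretation. This is a sketch only: the nodes, weights, and parameters are all invented, and the update rule is the standard interactive-activation scheme, not Glöckner's actual parameterization:

```python
import numpy as np

# Toy parallel constraint satisfaction network (hypothetical weights):
# three cue nodes support two competing options; the options inhibit
# each other; activations are iterated until the network settles.
W = np.zeros((5, 5))          # nodes: cue1, cue2, cue3, optA, optB
W[0, 3] = W[3, 0] = 0.4       # cue1 supports option A
W[1, 3] = W[3, 1] = 0.3       # cue2 supports option A
W[2, 4] = W[4, 2] = 0.5       # cue3 supports option B
W[3, 4] = W[4, 3] = -0.2      # options inhibit each other

a = np.zeros(5)                                # activations
source = np.array([0.1, 0.1, 0.1, 0.0, 0.0])   # constant input to cue nodes
decay, floor, ceiling = 0.05, -1.0, 1.0

for _ in range(200):
    net = W @ a + source
    # Positive net input pushes activation toward the ceiling,
    # negative net input toward the floor (interactive activation).
    grow = np.where(net > 0, net * (ceiling - a), net * (a - floor))
    a = a * (1 - decay) + grow

# After settling, option A (two supporting cues) dominates option B (one):
# the network has formed a coherent interpretation favoring A.
```

The "no clear winner" case Glöckner describes would correspond to the two option activations settling close together, which is the signal to send analysis out for more information.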
Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book entitled Beyond Rationality: The Search for Wisdom in a Troubled Time, published at age 92 in 2007. This is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.
In 1996, Hammond published a book entitled Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice which attempted to understand the policy formation process. The book emphasized two key themes. The first theme was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether in a policy context, it was more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged with coherence, while intuition is best judged by accuracy. To achieve balance–quasirationality and eventually wisdom, the key lies in how we think about error, which was the second theme. Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes were quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, or uncertainty that cannot be reduced at the moment action is required. Thus, there are two potential mistakes that may arise — false positives (Type I errors) and false negatives (Type II errors)—whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applications, claims for welfare benefits, and so on. Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently unavoidable injustice in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error. 
These were Signal Detection Theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (See post Technology and the Ecological Hybrid.).
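A small simulation makes the duality of error concrete in Taylor-Russell fashion. The correlation, cutoffs, and success threshold below are invented for illustration: the point is that under irreducible uncertainty, moving the selection cutoff can only trade false positives against false negatives, never eliminate both:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical selection problem: a test score correlated r = 0.5 with
# later success. Admission decisions are dichotomous, so both error
# types are possible on every policy.
n, r = 100_000, 0.5
score = rng.normal(size=n)
success = r * score + np.sqrt(1 - r**2) * rng.normal(size=n)

def error_rates(cutoff, success_threshold=0.0):
    admit = score >= cutoff
    good = success >= success_threshold
    false_pos = np.mean(admit & ~good)   # admitted but fail (Type I)
    false_neg = np.mean(~admit & good)   # rejected but would succeed (Type II)
    return false_pos, false_neg

fp_low, fn_low = error_rates(cutoff=-0.5)    # lenient admission policy
fp_high, fn_high = error_rates(cutoff=+0.5)  # strict admission policy
# Raising the cutoff lowers false positives but raises false negatives;
# neither policy drives both errors to zero.
```

This is Hammond's unavoidable injustice in miniature: whichever cutoff the policy maker picks, the mistakes fall on one group or the other.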
This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381, “How generalizable is good judgment? A multi-task, multi-benchmark study,” authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on some of his past work, not because he was actively involved. Nevertheless, this paper, at least, provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! I was looking for an image to put on this post and found the one above. Mellers and Tetlock looked married, and they are. I imagine that she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that fact. (http://www.nytimes.com/2013/03/22/opinion/brooks-forecasting-fox.html)
The authors note that Kenneth Hammond’s correspondence and coherence (Beyond Rationality) are the gold standards upon which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond always decried that psychological experiments were often poorly designed as measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they were better when the forecasting environment provided regular, clear feedback and there were repeated opportunities to learn. According to the authors, Weiss and Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments if cases are alike, and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I would suggest that Ken Hammond would likely have had little use for these. Coherence is logical consistency and correspondence is empirical discrimination.)
This post starts with the paper “Brains striving for coherence: Long-term cumulative plot formation in the default mode network,” authored by K. Tylén, P. Christensen, A. Roepstorff, T. Lund, S. Østergaard, and M. Donald. The paper appeared in NeuroImage 121 (2015) 106–114.
People are capable of navigating and keeping track of all the parallel social activities of everyday life, even when confronted with interruptions or changes in the environment. Tylén et al. suggest that even though these situations present themselves in series of interrupted segments, often scattered over long time periods, they tend to constitute perfectly well-formed and coherent experiences in conscious memory. However, the underlying mechanisms of such long-term integration are not well understood. While brain activity is generally traceable within the short time frame of working memory, these integrative processes last for minutes, hours, or even days.
I love Stanislas Dehaene’s experiments and his general ideas, and his book Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, Viking, New York, 2014, is a great synthesis; with respect to its title, it is a fine book. However, with respect to how it deals with decision making, I am mostly disappointed.
Consciousness: Informer or Informer/Decider? Although Dehaene’s Global Neuronal Workspace Theory describes what we feel as consciousness as the global sharing of information, in the book he seems to promote the idea of consciousness as the decider as well as the informer. Dehaene writes:
“My picture of consciousness implies a natural division of labor. In the basement, an army of unconscious workers does the exhausting work, sifting through piles of data. Meanwhile, at the top, a select board of executives, examining only a brief of the situation, slowly makes conscious decisions…No one can act on mere probabilities–at some point, a dictatorial process is needed to collapse all uncertainties and decide….Consciousness may be the brain’s scale tipping device—collapsing all unconscious probabilities into a single conscious sample so that we can move on to further decisions.” (p. 89)
I like the informer part, but I like the parallel constraint satisfaction (post Parallel Constraint Satisfaction Theory) idea that consciousness is asked to get more information (information search and production), which the unconscious system turns into a decision. In my scenario, the visual system seems to have priority to get to the conscious level, then the other sensory systems, and then the other unconscious systems push the most difficult or interesting decisions they have at any particular time through to the conscious system. Maybe there is some sort of priority ranking. Clearly, most rather mundane decisions seem to break through to consciousness only occasionally. As part of breaking through to consciousness, more of the modular systems are alerted to the issue, and maybe information can come from inside, or maybe we seek information from others or examine the environment. We get the new information, and the wheels of the parallel constraint system start whirring again to see if the decision can be made. Now, I do see a cognitive continuum, so that, yes, certain decisions may stay with the board of executives. Dehaene uses the example of multidigit arithmetic. For most of us, it seems to consist of a series of introspective steps that we can accurately report. For instance, to multiply 30 by 47, I might multiply 30 by 40 to get 1200, then multiply 30 by 7 to get 210, and add the two to get 1410. But for a numerical savant, that could be done in the unconscious. Nevertheless, there are certain things where consciousness does seem to be where the decisions are made. Complex multi-step questions where the emotions are more or less uninvolved might be examples.
Maybe the interesting part is the sort of phase change between the unconscious and the conscious. There is a lot happening there. Dehaene says that consciousness is doing the collapsing, but it seems to me it is already done once it reaches consciousness. Maybe that is not an important argument. One theory is that conscious perception occurs when the stimulus allows the accumulation of sufficient sensory evidence to reach a threshold, at which point the brain ‘decides’ whether it has seen anything, and what it is. The mechanisms of conscious access would then be comparable to those of other decisions, involving an accumulation toward a threshold — with the difference that conscious perception would correspond to a global high-level ‘decision to engage’ many of the brain’s internal resources. Dehaene mentions this in a paper that was discussed in the post A Theory of Consciousness.
Consciousness Gives Us the Power of a Sophisticated Serial Computer. Dehaene is a believer in the Bayesian unconscious. “A strict logic governs the brain’s unconscious circuits–they appear ideally organized to perform statistically accurate inferences concerning our sensory inputs.” Both the unconscious and conscious systems seem to work in a linear fashion (Brunswik’s Lens Model), but the conscious system can redirect.
“This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire cycle may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” p100
Dehaene and his colleagues have studied schizophrenics. They found a basic deficit of conscious perception in schizophrenia: words had to be presented for a longer time before schizophrenics reported consciously seeing them. “Schizophrenics’ main problem seems to lie in the global integration of incoming information into a coherent whole.” Dehaene suggests that schizophrenics have a “global loss of top-down connectivity” that impairs the capacity for conscious monitoring, top-down attention, working memory, and decision making. Apparently, in schizophrenics, the prediction machine is not making enough predictions. With reduced top-down messages, sensory inputs are never explained, and error messages remain, triggering multiple explanations. Schizophrenics thus see the need for complicated explanations, which can lead to the far-fetched interpretations of their surroundings that may express themselves as bizarre hallucinations and delusions.
Dehaene suggests that consciousness allows us to share information with others and that leads to better decisions. Dehaene’s most interesting idea is that our social abilities allow us to make decisions together and that these are better decisions. Although one can argue that language is imperfect and that much of it is used to transmit trivia and gossip, Dehaene provides evidence that our conversations are more than tabloids. This is a point that needed to be made to me. I was tending to believe that there was almost a direct tradeoff between cognitive skills and social skills and even though that tradeoff was adaptive, maybe it was close. Dehaene puts forth the argument that two heads are better than one and that consciousness makes this possible (This is also directly in line with Scott Page’s: The Difference — How the Power of Diversity Creates Better Groups, post Diversity or Systematic Error).
He cites the experiments of Iranian psychologist Bahador Bahrami. Bahrami had pairs of subjects examine two displays and decide on each trial whether the first or the second contained a near-threshold target image. The subjects initially made the decision independently, and if they differed, they were asked to resolve the conflict through a brief discussion. As long as the abilities of the individuals were similar, pairing them yielded a significant improvement in accuracy. No nuances were shared to gain this improvement, but simply a categorical answer (first or second display) and a judgment of confidence.
Dehaene suggests that Bayesian decision theory tells us that the very same decision rules should apply to our own thoughts and to those that we receive from others. In both cases, optimal decision making demands that each source of information, whether internal or external, be weighted as accurately as possible, by an estimate of its reliability, before all the information is brought together into a single decision space. This sounds much like cue validities in Brunswik’s lens model or parallel constraint satisfaction theory. According to Dehaene, once this workspace was opened to social inputs from other minds, we were able to reap the benefits of a collective decision-making algorithm: by comparing our knowledge with that of others, we achieve better decisions.
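The reliability weighting Dehaene describes can be sketched with the standard precision-weighted average from optimal cue combination; the numbers below are invented for illustration, not taken from Bahrami's data:

```python
# Reliability-weighted integration of two estimates of the same quantity:
# each source is weighted by its precision (1 / variance), as in optimal
# cue combination. Hypothetical numbers for illustration.
def combine(mu1, var1, mu2, var2):
    w1, w2 = 1 / var1, 1 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1 / (w1 + w2)
    return mu, var

# A noisy source says 10 (variance 4); a reliable source says 14 (variance 1).
mu, var = combine(10.0, 4.0, 14.0, 1.0)
# The pooled estimate (13.2) sits closer to the more reliable source, and
# its variance (0.8) is smaller than either input's: two heads beat one.
```

The same rule applies whether the two sources are two cues inside one head or two partners exchanging an answer plus a confidence, which is why Dehaene can move so freely between the individual and the social case.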
I occasionally like to go far afield from judgment and decision making, and here I go again. This post takes a look at Michio Kaku’s 2014 book, The Future of the Mind–The Scientific Quest To Understand, Enhance, And Empower The Mind, Doubleday, New York.
Decision models can sometimes seem very explanatory, but they seem simple-minded when I read in Kaku’s book that we have two separate centers of consciousness and that we may all have photographic memories.
Stemmler’s doctoral dissertation, “Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach” (post Simultaneous Feeling and Deciding), also made me aware of the concept of grounded cognition. Stemmler indicates that his findings are mainly in line with current approaches to grounded cognition. Models of grounded cognition assume that cognitive processing and conceptual knowledge are grounded in the perceptual and action systems. According to Stemmler, in these approaches the experience of an emotion is based on the way a situation is temporarily conceptualized or categorized. Conceptualizations of emotions represent abstract conceptual constructs that aggregate information from different perceptual and action systems. Since these approaches assume that conceptual knowledge is stored in relation to other information that was co-activated within a given situation, the situation itself can activate knowledge, similar to the assumption of network models of emotion. For instance, situations may be accompanied by memory retrieval of similar situations, which then have to be adapted to the current situation. Hence, present and past situational information is integrated in a coherent fashion. Situational information, memories from the past, and current affective feelings can be integrated to form a meaningful gestalt.
This post is the first after a few technical issues. Some of my decision making has been suboptimal, but we will keep trying. The post is based on a commentary, “Is Anything Sacred Anymore?”, that appeared in Psychological Inquiry, 23: 155–161, 2012. The authors are Peter H. Ditto, Brittany Liu, and Sean P. Wojcik. The commentary examines the paper “The Moral Dyad: A Fundamental Template Unifying Moral Judgment,” by Gray, Waytz, and Young, which appeared in Psychological Inquiry: An International Journal for the Advancement of Psychological Theory, 23:2, 206–215. I have found commentary articles easier to understand, since they have to examine two or more positions.
Ditto et al agree with Gray et al about the central role of mind perception in moral judgment and are intrigued by the idea that moral evaluation requires not just an intentional moral agent but also a suffering moral patient, and moreover that this dyadic structure of agent and patient, intention and suffering is the center of morality. They do not agree that interpersonal harm is the very meaning of morality, that no act can be morally offensive unless it is perceived to result in suffering.