Category Archives: Evaluation/Accuracy/Achievement

Denver Bullet Study

This post is largely a continuation of the Kenneth R Hammond post, but one prompted by recent current events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people with whom you are arguing is not a good idea. Dan Kahan (see my posts Cultural Cognition or Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond had real accomplishments in this area, and they could help guide all of us today. I should also note that I was unable to quickly find the original sources, so I am relying completely on: “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.

Continue reading

Kenneth R Hammond

This post is based on selections from: “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I do how he fit into the discipline of judgment and decision making (the links take you to past posts). I merely cherry-pick what I consider his most significant contributions.

As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. Hammond introduced the lens model equation to the study of judgment processes, and used this to measure the utility of different forms of feedback in multiple-cue probability learning.
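For readers who have not seen it, the lens model equation in its commonly cited form decomposes achievement, the correlation between a judge’s judgments and the criterion, roughly as follows (standard notation, mine rather than quoted from Dhami and Mumpower):

r_a = G \, R_e \, R_s + C \, \sqrt{1 - R_e^{2}} \, \sqrt{1 - R_s^{2}}

Here r_a is achievement, R_e is environmental predictability (how well the criterion can be modeled from the cues), R_s is cognitive control (how well the judgments can be modeled from the cues), G is knowledge (the correlation between the two models’ predictions), and C is the correlation between their residuals.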

Hammond proposed cognitive continuum theory, which states that quasirationality is an important middle ground between intuition and analysis, and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, and associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process, with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, and rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding. For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end. Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, format, and order of presentation, as well as the decision maker’s familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature, and degree of the task properties present.

Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond’s work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrated fashion, which fits Hammond’s idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative, lens-model-like process, but sends analysis out for more information when there is no clear winner.
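To give the flavor of that iterative settling process, here is a toy sketch (my own simplification with made-up weights, not Glöckner’s actual model; the function name pcs_settle and all parameters are hypothetical): option nodes accumulate support from cue nodes and inhibit each other until activations settle, and if no option clearly dominates, the decision maker goes looking for more information. The bidirectional updating of the real models is omitted.

```python
import numpy as np

def pcs_settle(validities, evidence, decay=0.1, steps=200):
    """Toy constraint-satisfaction sketch (not Gloeckner's actual model).

    validities: how strongly each cue supports option A (+) or option B (-).
    evidence:   activation of each cue (what was observed).
    Returns the settled activations of the two option nodes."""
    cues = np.array(evidence, dtype=float)
    w = np.array(validities, dtype=float)
    options = np.zeros(2)                          # activations of option A and B
    for _ in range(steps):
        # Each option accumulates support from the cues that favor it.
        input_a = np.sum(np.clip(w, 0, None) * cues)
        input_b = np.sum(np.clip(-w, 0, None) * cues)
        # Options decay and inhibit each other (a crude consistency pressure).
        new_a = options[0] - decay * options[0] + input_a - 0.5 * options[1]
        new_b = options[1] - decay * options[1] + input_b - 0.5 * options[0]
        options = np.clip(np.array([new_a, new_b]), -1.0, 1.0)
    return options

a, b = pcs_settle(validities=[0.8, 0.6, -0.4], evidence=[1.0, 1.0, 1.0])
if abs(a - b) < 0.1:   # no clear winner: "send analysis out" for more information
    print("difference too small; search for more cues")
else:
    print("choose A" if a > b else "choose B")
```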

Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book entitled Beyond Rationality: The Search for Wisdom in a Troubled Time, published at age 92 in 2007. This is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.

In 1996, Hammond published a book entitled Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, which attempted to understand the policy formation process. The book emphasized two key themes. The first theme was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether in a policy context it was more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged by coherence, while intuition is best judged by accuracy.

The second theme was that the key to achieving balance (quasirationality, and eventually wisdom) lies in how we think about error. Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes were quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, that is, uncertainty that cannot be reduced at the moment action is required. Thus, there are two potential mistakes, false positives (Type I errors) and false negatives (Type II errors), whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applications, claims for welfare benefits, and so on. Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently for unavoidable injustice in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error: Signal Detection Theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (see the post Technology and the Ecological Hybrid).
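To make the duality of error concrete, here is a small Monte Carlo sketch in the spirit of the Taylor-Russell setup (my own illustration with made-up numbers, not Hammond’s analysis): a fallible predictor, correlated with the true criterion, drives a dichotomous accept/reject decision, and moving the cutoff only trades false positives against false negatives; under irreducible uncertainty neither can be driven to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n, validity = 100_000, 0.5          # imperfect predictor: r ~ 0.5 with the criterion

# True criterion (e.g., actual success) and a noisy predictor of it.
criterion = rng.standard_normal(n)
predictor = validity * criterion + np.sqrt(1 - validity**2) * rng.standard_normal(n)

truly_qualified = criterion > 0.0   # base rate of "success" is about 50% in this toy case

for cutoff in (-0.5, 0.0, 0.5, 1.0):
    accepted = predictor > cutoff
    false_pos = np.mean(accepted & ~truly_qualified)    # accepted but would fail (Type I)
    false_neg = np.mean(~accepted & truly_qualified)    # rejected but would succeed (Type II)
    print(f"cutoff {cutoff:+.1f}: false positives {false_pos:.2%}, "
          f"false negatives {false_neg:.2%}")
```

Raising the cutoff shrinks the false positives and inflates the false negatives, and vice versa; which group bears the cost of the error is a policy choice, which is Hammond’s point about unavoidable injustice.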


Consistency and Discrimination as Measures of Good Judgment

This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381, “How generalizable is good judgment? A multi-task, multi-benchmark study,” authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on some of his past work rather than because he was actively involved. Nevertheless, this paper provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! I was looking for an image to put on this post and found the one above. Mellers and Tetlock looked married, and it turns out they are. I imagine that she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that observation.

The authors note that Kenneth Hammond’s correspondence and coherence (Beyond Rationality) are the gold standards upon which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond often complained that psychological experiments were poorly designed as measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they were better when the forecasting environment provided regular, clear feedback and there were repeated opportunities to learn. According to the authors, Weiss & Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments when cases are alike, and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I would suggest that Ken Hammond would likely have had little use for these; coherence is logical consistency and correspondence is empirical discrimination.)
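As a sketch of how consistency and discrimination might be scored from data (my own construction in the spirit of Weiss and Shanteau’s approach, not their exact index): present each case at least twice, then compare how much judgments vary across different cases (discrimination) with how much they vary across repetitions of the same case (inconsistency).

```python
import numpy as np

def consistency_discrimination(judgments):
    """judgments: 2-D array, rows = cases, columns = repeated ratings of that case.
    Returns (discrimination, inconsistency, ratio); a higher ratio means more
    expert-like judgment in this illustrative version (not the published index)."""
    judgments = np.asarray(judgments, dtype=float)
    case_means = judgments.mean(axis=1)
    discrimination = case_means.var(ddof=1)               # unalike cases judged differently
    inconsistency = judgments.var(axis=1, ddof=1).mean()  # same case judged differently
    return discrimination, inconsistency, discrimination / inconsistency

# Toy data: three cases, each judged twice by the same person.
ratings = [[7.0, 7.5],   # case 1
           [3.0, 2.5],   # case 2
           [9.0, 9.0]]   # case 3
print(consistency_discrimination(ratings))
```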

Continue reading


This post is based on a comment paper, “Honest People Tend to Use Less – Not More – Profanity: Comment on Feldman et al.’s (2017) Study,” which appeared in Social Psychological and Personality Science 1-5 and was written by R. E. de Vries, B. E. Hilbig, Ingo Zettler, P. D. Dunlop, D. Holtrop, K. Lee, and M. C. Ashton. Why would honesty suddenly be important with respect to decision making when I have largely ignored it in the past? You will have to figure that out for yourself. It reminded me that most of our decision making machinery is based on relative differences. We compare, but we are not so good at absolutes. Thus, when you get a relentless, fearless liar, the relative differences are widened, and this is likely to stretch the range of what seems to be a reasonable decision.

Continue reading


This post is based on a paper: “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73. It is not a new topic, but the paper adds to the evidence of our shortcomings with nonlinearity.

In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; in wicked learning environments they do not. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience, whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends both on identifying relevant information and on aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.
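A minimal simulation of the kind/wicked distinction (my own toy example, not taken from the paper): a learner estimates a single cue weight from outcome feedback; in the kind condition the feedback is plentiful and unbiased, in the wicked condition it is scarce and systematically distorted, and only the kind condition converges toward the true weight.

```python
import numpy as np

rng = np.random.default_rng(1)
true_weight, n_trials, lr = 2.0, 500, 0.05

def learn(feedback_bias=0.0, feedback_prob=1.0):
    """Learn y ~ w*x from feedback; bias and scarcity make the environment 'wicked'."""
    w = 0.0
    for _ in range(n_trials):
        x = rng.uniform(-1, 1)
        y = true_weight * x + rng.normal(0, 0.2)   # true outcome, with noise
        if rng.random() < feedback_prob:           # feedback may be scarce...
            observed = y + feedback_bias * x       # ...and systematically distorted
            w += lr * (observed - w * x) * x       # simple error-driven update
    return w

print("kind environment:  ", learn(feedback_bias=0.0, feedback_prob=1.0))   # near 2.0
print("wicked environment:", learn(feedback_bias=-1.5, feedback_prob=0.2))  # far from 2.0
```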

Continue reading

Hogarth on Simulation

This post is a continuation of the previous blog post Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: descriptions of the particulars of the situations involved, and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information, by building a simulation model that can be used to generate “outcomes” through a process that they call “simulated experience”.
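Here is a minimal sketch of what simulated experience could look like in practice (my own example, not Hogarth and Soyer’s materials): instead of describing a gamble as a 10% chance of losing 1000 and otherwise gaining 50, the decision maker draws outcomes from a simulator and experiences the frequencies directly.

```python
import random

def simulate_outcomes(n_draws=50, p_loss=0.10, loss=-1000, gain=50, seed=7):
    """Generate simulated 'experiences' of a described gamble so the decision maker
    can encode outcome frequencies rather than reason from the description alone."""
    rng = random.Random(seed)
    return [loss if rng.random() < p_loss else gain for _ in range(n_draws)]

draws = simulate_outcomes()
print(draws)                                            # what the decision maker "lives through"
print("average outcome per draw:", sum(draws) / len(draws))
```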

Simulated experience essentially allows a decision maker to live actively through a decision situation as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described as opposed to experienced is related to Brunswik’s distinction between the use of cognition and the use of perception. With cognition, people can be quite accurate in their responses, but they can also make large errors. I note that this parallels Hammond’s coherence and correspondence. With perception and correspondence, people are unlikely to be highly accurate, but errors are likely to be small. Simulation, perception, and correspondence tend to be robust.

Continue reading

Coherence from the default mode network

This post starts with the paper “Brains striving for coherence: Long-term cumulative plot formation in the default mode network,” authored by K. Tylén, P. Christensen, A. Roepstorff, T. Lund, S. Østergaard, and M. Donald. The paper appeared in NeuroImage 121 (2015) 106–114.

People are capable of navigating and keeping track of all the parallel social activities of everyday life even when confronted with interruptions or changes in the environment. Tylén et al. suggest that even though these situations present themselves in series of interrupted segments, often scattered over long time periods, they tend to constitute perfectly well-formed and coherent experiences in conscious memory. However, the underlying mechanisms of such long-term integration are not well understood. While brain activity is generally traceable within the short time frame of working memory, these integrative processes last for minutes, hours, or even days.

Continue reading


This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.

In Expert Political Judgment: How Good Is It? How Can We Know?, Tetlock found that the average expert did little better than guessing. He also found that some did better than others. In Superforecasting he discusses the study of those who did better and how they did it.

Continue reading

Dark Room Problem: Minimizing Surprise

This post is based on the paper “Free-energy minimization and the dark-room problem,” written by Karl Friston, Christopher Thornton, and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (see the posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. People do not simply seek out a dark, unchanging chamber and stay there. This is the “Dark-Room Problem.”
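For orientation, the quantity at stake can be sketched in standard notation (mine, not quoted from the paper): surprise is the negative log probability of sensory observations o under the organism’s generative model m, and variational free energy F is a tractable upper bound on it, so keeping free energy low keeps surprise low.

F(q, o) = \mathbb{E}_{q(s)}\left[\ln q(s) - \ln p(o, s \mid m)\right] \;\ge\; -\ln p(o \mid m)

Here s denotes hidden environmental states and q(s) is the organism’s approximate posterior over those states.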

Continue reading

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark: “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a philosopher very well educated in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. It also seems quite compatible with Parallel Constraint Satisfaction, but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but embodied prediction seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction from Grounded Cognition. There are likely to be posts that follow on the same general topic.

Continue reading