Category Archives: Correspondence

Kenneth R Hammond

This post is based on selections from “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22. I am going to become more familiar with the work of the authors, since they clearly share my admiration for Hammond and were his colleagues. They also understand better than I do how he fit into the discipline of judgment and decision making (the links take you to past posts). I merely cherry-pick what I see as his most significant contributions.

As a student of Egon Brunswik, Hammond advanced Brunswik’s theory of probabilistic functionalism and the idea of representative design. Hammond pioneered the use of Brunswik’s lens model as a framework for studying how individuals use information from the task environment to make judgments. He introduced the lens model equation to the study of judgment processes and used it to measure the utility of different forms of feedback in multiple-cue probability learning.
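The lens model equation is concrete enough to show in a few lines. The sketch below is my own illustration (the cue weights, sample size, and noise levels are invented): it simulates an environment and a judge who share the same cues, fits a linear model to each, and checks that achievement decomposes into knowledge (G), environmental predictability (Re), judge consistency (Rs), and a residual term (C).

```python
# A minimal sketch of the lens model equation, on simulated data:
#   r_a = G*Re*Rs + C*sqrt(1 - Re^2)*sqrt(1 - Rs^2)
# Re = environmental predictability, Rs = judge consistency,
# G = correlation of the two linear models' predictions,
# C = correlation of their residuals.
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3                       # 200 cases, 3 cues (illustrative numbers)
X = rng.normal(size=(n, k))         # cue values
criterion = X @ [0.6, 0.3, 0.1] + rng.normal(scale=0.5, size=n)
judgment = X @ [0.5, 0.4, 0.1] + rng.normal(scale=0.7, size=n)  # a noisier judge

def linear_fit(y, X):
    """Least-squares predictions and residuals of y from the cues."""
    D = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    pred = D @ beta
    return pred, y - pred

def r(a, b):
    return np.corrcoef(a, b)[0, 1]

pred_e, resid_e = linear_fit(criterion, X)   # model of the environment
pred_s, resid_s = linear_fit(judgment, X)    # model of the judge

Re, Rs = r(criterion, pred_e), r(judgment, pred_s)
G, C = r(pred_e, pred_s), r(resid_e, resid_s)
r_a = r(criterion, judgment)                 # achievement

print(f"achievement r_a     = {r_a:.3f}")
print(f"LME reconstruction  = {G*Re*Rs + C*np.sqrt(1-Re**2)*np.sqrt(1-Rs**2):.3f}")
```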

Hammond proposed cognitive continuum theory, which states that quasirationality is an important middle ground between intuition and analysis, and that cognitive performance is dictated by the match between task properties and mode of cognition. Intuition (often also referred to as System 1, experiential, heuristic, or associative thinking) is generally considered to be an unconscious, implicit, automatic, holistic, fast process with great capacity, requiring little cognitive effort. By contrast, analysis (often also referred to as System 2, rational, or rule-based thinking) is generally characterized as a conscious, explicit, controlled, deliberative, slow process that has limited capacity and is cognitively demanding. For Hammond, quasirationality is distinct from rationality. It comprises different combinations of intuition and analysis, and so may sometimes lie closer to the intuitive end of the cognitive continuum and at other times closer to the analytic end.

Brunswik pointed to the adaptive nature of perception (and cognition). Dhami and Mumpower suggest that for Hammond, modes of cognition are determined by properties of the task (and/or expertise with the task). Task properties include, for example, the amount of information, its degree of redundancy, its format, and its order of presentation, as well as the decision maker’s familiarity with the task, opportunity for feedback, and extent of time pressure. The cognitive mode induced will depend on the number, nature, and degree of the task properties present.

Movement along the cognitive continuum is characterized as oscillatory or alternating, thus allowing different forms of compromise between intuition and analysis. Success on a task inhibits movement along the cognitive continuum (or change in cognitive mode), while failure stimulates it. In my opinion, Glöckner and his colleagues have built upon Hammond’s work. Parallel constraint satisfaction theory suggests that intuition and analysis operate in an integrative fashion, in concert with Hammond’s idea of oscillation between the two. Glöckner suggests that intuition makes the decisions through an iterative, lens-model-type process, but sends analysis out for more information when there is no clear winner.
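To make the parallel constraint satisfaction idea tangible, here is a toy network in the general spirit of Glöckner’s model. This is a sketch, not his published implementation: the update rule is the standard interactive-activation scheme, and all weights and parameters below are invented for illustration.

```python
# A toy parallel constraint satisfaction (PCS) network. Activation spreads
# between cues and options until the network settles into a consistent
# interpretation; the winning option is the one with higher activation.
import numpy as np

# Nodes: 0 = general validity (source), 1-3 = cues, 4-5 = options A and B.
# Positive links mean "supports"; negative links mean "speaks against".
W = np.zeros((6, 6))
W[0, 1:4] = [0.9, 0.6, 0.3]       # source-to-cue links (cue validities)
W[1, 4], W[1, 5] = 0.1, -0.1      # cue 1 favors option A
W[2, 4], W[2, 5] = -0.1, 0.1      # cue 2 favors option B
W[3, 4], W[3, 5] = 0.1, -0.1      # cue 3 favors option A
W[4, 5] = -0.2                    # the options inhibit each other
W = W + W.T                       # all links are bidirectional

a = np.zeros(6)
a[0] = 1.0                        # the source node is clamped on
decay = 0.05
for step in range(1000):
    inp = W @ a
    # Standard interactive-activation update, keeping activations in [-1, 1].
    new = np.where(inp > 0,
                   a * (1 - decay) + inp * (1 - a),
                   a * (1 - decay) + inp * (a + 1))
    new[0] = 1.0                  # keep the source clamped
    if np.max(np.abs(new - a)) < 1e-6:  # the network has settled
        break
    a = new

print(f"settled after {step} iterations: A = {a[4]:.3f}, B = {a[5]:.3f}")
```

The point of the sketch is the mechanism: activation flows back and forth until the network reaches a consistent state, and the number of iterations needed to settle is a natural stand-in for decision time. When no clear winner emerges, that is the signal to search for more information.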

Hammond returned to the themes of analysis and intuition and the cognitive continuum in his last book, Beyond Rationality: The Search for Wisdom in a Troubled Time, published in 2007 when he was 92. It is a frank look at the world that pulls few punches. At the heart of his argument is the proposition that the key to wisdom lies in being able to match modes of cognition to properties of the task.

In 1996, Hammond published Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice, which attempted to understand the policy formation process. The book emphasized two key themes. The first was whether our decision making should be judged on coherence competence or on correspondence competence. The issue, according to Hammond, was whether, in a policy context, it was more important to be rational (internally and logically consistent) or to be empirically accurate. Analysis is best judged by coherence, while intuition is best judged by accuracy. To achieve balance (quasirationality and eventually wisdom), the key lies in how we think about error, which was the second theme.

Hammond emphasized the duality of error. Brunswik demonstrated that the error distributions for intuitive and analytical processes were quite different. Intuitive processes led to distributions in which there were few precisely correct responses but also few large errors, whereas with analysis there were often many precisely correct responses but occasional large errors. According to Hammond, duality of error inevitably occurs whenever decisions must be made in the face of irreducible uncertainty, that is, uncertainty that cannot be reduced at the moment action is required. Thus, there are two potential mistakes, false positives (Type I errors) and false negatives (Type II errors), whenever policy decisions involve dichotomous choices, such as whether to admit or reject college applicants, claims for welfare benefits, and so on. Hammond argued that any policy problem involving irreducible uncertainty has the potential for dual error, and consequently for unavoidable injustice, in which mistakes are made that favor one group over another. He identified two tools of particular value for analyzing policy making in the face of irreducible environmental uncertainty and duality of error: signal detection theory and the Taylor-Russell paradigm. These concepts are also applicable to the design of airplane instruments (see post Technology and the Ecological Hybrid).
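The duality of error is easy to see in a small simulation of a selection decision, done in the spirit of signal detection theory and the Taylor-Russell paradigm. All of the numbers below (predictor validity, base rate, cutoffs) are invented for illustration; the lesson is that with a fallible predictor, moving the cutoff only trades one error for the other.

```python
# Simulating the duality of error under irreducible uncertainty: an admission
# decision made on a fallible indicator of true qualification.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_merit = rng.normal(size=n)                     # what we wish we could see
predictor = 0.5 * true_merit + rng.normal(size=n)   # fallible indicator (validity ~ .45)
qualified = true_merit > np.quantile(true_merit, 0.6)   # 40% truly qualified

for cutoff in (-0.5, 0.0, 0.5):                     # three possible admit thresholds
    admit = predictor > cutoff
    false_pos = np.mean(admit & ~qualified)         # admitted but unqualified (Type I)
    false_neg = np.mean(~admit & qualified)         # rejected but qualified (Type II)
    print(f"cutoff {cutoff:+.1f}: false positives {false_pos:.1%}, "
          f"false negatives {false_neg:.1%}")
```

Raising the cutoff shrinks the false positives and inflates the false negatives, and vice versa; no cutoff eliminates both. The policy choice of where to place the cutoff is therefore also a choice about which group will bear the inevitable error.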


Consistency and Discrimination as Measures of Good Judgment

This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381: “How generalizable is good judgment? A multi-task, multi-benchmark study,” authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on some of his past work, not because he was actively involved. Nevertheless, this paper provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! I was looking for an image to put on this post and found the one above. Mellers and Tetlock look married, and they are. I imagine that she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that fact.

The authors note that Kenneth Hammond’s correspondence and coherence (Beyond Rationality) are the gold standards by which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond often decried psychological experiments as poorly designed measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they were better when the forecasting environment provided regular, clear feedback and there were repeated opportunities to learn. According to the authors, Weiss and Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments when cases are alike, and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I would suggest that Ken Hammond would likely have had little use for these. Coherence is logical consistency, and correspondence is empirical discrimination.)
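Weiss and Shanteau’s criterion lends itself to a simple computation: a good judge’s ratings should spread out across different cases (discrimination) while staying stable across repeats of the same case (consistency). Here is a minimal sketch with fabricated ratings; their actual CWS index uses ANOVA mean squares, but this variance ratio captures the same idea.

```python
# Score a judge by discrimination (variation across cases) relative to
# inconsistency (variation across repeats of the same case).
import numpy as np

# rows = cases, columns = repeated judgments of the same case (fabricated)
expert = np.array([[8, 8, 7], [3, 3, 4], [6, 6, 6], [1, 2, 1]], dtype=float)
guesser = np.array([[5, 8, 2], [4, 1, 7], [6, 3, 5], [2, 7, 4]], dtype=float)

def cws_ratio(ratings):
    """Discrimination / inconsistency, a CWS-style ratio."""
    case_means = ratings.mean(axis=1)
    discrimination = np.var(case_means)               # spread across cases
    inconsistency = np.mean(np.var(ratings, axis=1))  # spread within a case
    return discrimination / inconsistency

print(f"expert  ~ {cws_ratio(expert):.1f}")   # high: discriminates, consistent
print(f"guesser ~ {cws_ratio(guesser):.1f}")  # low: noisy, undiscriminating
```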

Continue reading

Hogarth on Simulation

This post is a continuation of the previous blog post, Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: descriptions of the particulars of the situations involved, and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions, and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate “outcomes” through a process that they call “simulated experience”.

Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described and problems that have been experienced is related to Brunswik’s distinction between the use of cognition and the use of perception. With cognition, people can be quite accurate in their responses, but they can also make large errors. I note that this is similar to Hammond’s coherence and correspondence. With perception and correspondence, people are unlikely to be highly accurate, but their errors are likely to be small. Simulation, perception, and correspondence tend to be robust.
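The idea is easy to demonstrate with the familiar base-rate problem (my example, with invented textbook numbers, not one taken from Hogarth and Soyer). Rather than being told that a test has 90% sensitivity and a 9% false-positive rate for a disease with a 1% base rate, the decision maker watches simulated cases and simply counts how often a positive test means disease.

```python
# "Simulated experience": replace a described probability problem with
# observed frequencies from a simulation. Disease/test numbers are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
sick = rng.random(n) < 0.01                                  # 1% base rate
positive = np.where(sick, rng.random(n) < 0.90,              # 90% sensitivity
                    rng.random(n) < 0.09)                    # 9% false positives

# The described version requires Bayes' rule (about 9%); the experienced
# version is just counting.
print(f"Of {positive.sum()} positive tests, "
      f"{np.mean(sick[positive]):.1%} were actually sick.")
```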

Continue reading

Heuristic and Linear Models

This post is based on the paper “Heuristic and Linear Models of Judgment: Matching Rules and Environments,” written by Robin M. Hogarth and Natalia Karelaia, Psychological Review, 2007, Vol. 114, No. 3, pp. 733–758, which predated Hogarth and Karelaia’s meta-analysis (What has Brunswik’s Lens Model Taught?). It includes the underpinnings for that study.

Two classes of models have dominated research on judgment and decision making over past decades. In one, explicit recognition is given to the limits of information processing, and people are modeled as using simplifying heuristics (Gigerenzer, Kahneman, Tversky school). In the other (Hammond school), it is assumed that people can integrate all the information at hand and that this is combined and weighted as if using an algebraic—typically linear—model.
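The contrast between the two schools is concrete enough to show in a few lines of code. In the sketch below (cue values and validities invented for illustration), a take-the-best style heuristic and a weighted-additive linear model reach opposite choices on the same pair of options: the heuristic stops at the first discriminating cue, while the linear model lets the two lesser cues outvote it.

```python
# A take-the-best style heuristic versus a weighted-additive (linear) model,
# deciding between two options described by binary cues.
cue_validities = [0.8, 0.7, 0.6]   # cues ordered from most to least valid
option_a = [1, 0, 0]               # cue values for option A (invented)
option_b = [0, 1, 1]               # cue values for option B (invented)

def take_the_best(a, b):
    """Decide on the first cue that discriminates; ignore all the rest."""
    for cue_a, cue_b in zip(a, b):
        if cue_a != cue_b:
            return "A" if cue_a > cue_b else "B"
    return "tie"

def weighted_additive(a, b):
    """Integrate every cue, weighted by its validity, and compare totals."""
    score_a = sum(w * v for w, v in zip(cue_validities, a))
    score_b = sum(w * v for w, v in zip(cue_validities, b))
    return "A" if score_a > score_b else "B" if score_b > score_a else "tie"

print(take_the_best(option_a, option_b))      # "A": the top cue settles it
print(weighted_additive(option_a, option_b))  # "B": 0.8 < 0.7 + 0.6
```

Which strategy does better depends on the environment, which is exactly the matching question Hogarth and Karelaia analyze.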

Continue reading

Mass Hysteria to the Rescue

This post is a reaction to the column by Bret Stephens that appeared in the October 21, 2014, Wall Street Journal, entitled “What the Ebola Experts Miss.” The column starts out:

Of course we should ban all nonessential travel from Liberia, Guinea, Sierra Leone and any other country badly hit by the Ebola virus.


Continue reading

Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice

Sometimes it is easy to figure out the subject of the next post. Other times, nothing seems interesting. In my opinion, no one has written on the subject of judgment and decision making in a more insightful and interesting way than Kenneth R. Hammond. I have looked at two of his books in previous posts: Judgments Under Stress and Beyond Rationality (three posts). For now I am going to cherry-pick part of the epilogue of his book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice.

Continue reading

Fuzzy-Trace Theory: Experts and Future Direction


Experts are important in helping to make decisions about health and safety, so their risk perception and decision processes are important to understand. Reyna gives as examples: the emergency room physician deciding about a patient’s risk of heart attack, the meteorologist deciding about tornado risk, and the lawyer evaluating potential damage awards to decide whether to settle or go to trial. Conventional wisdom is that experts apply precise analysis and numerical reasoning to achieve the best outcomes, while novices are more likely to reason without analysis or numerical reasoning.

Continue reading

Atrial Fibrillation

Dr. Thomas Tape wrote an article, “Coherence and correspondence in medicine,” that appeared in the March 2009 issue of Judgment and Decision Making. As you might expect, Dr. Tape is applying some of the ideas of Kenneth Hammond to medicine. Tape notes that the distinction between coherence (making logical sense) and correspondence (being empirically correct) seldom appears in the medical literature.

Tape suggests that the field of medicine began with coherence approaches and has only recently adopted correspondence approaches at all. The original rationale for bloodletting was based on the idea that disease comes from an imbalance of humors. This coherent argument was around for hundreds of years before being dispelled. It seems surprising that patient outcome was not the indicator of choice, but even today, with medical progress and sophisticated statistics, it is not so easy to tell what works.

Continue reading

Judgments Under Stress

Ken Hammond wrote a book, Judgments Under Stress, published in 2000. He was clearly frustrated with how the field of psychology dealt with stress and used the book as a vehicle to change the discussion. Hammond really wants to talk about constancy; stress is a constancy disruptor. Hammond’s mentor, Egon Brunswik, saw constancy as the essence of life. Hammond asserts that the orientation of the organism is directed toward maintaining stable relations with the environment, and that disruption of those stable relations is the definition of stress.

Continue reading

Beyond Rationality Part 2

This is the second post about Ken Hammond’s book Beyond Rationality.

Correspondence Researchers

Egon Brunswik abandoned the stimulus-organism-response psychology of his day for his environmental texture as far back as 1935. But he got little notice, especially with the advent of the computer in the 1960s. Texture could not compete with physics and the computer. By the end of the twentieth century, however, there was a new emphasis on ecology and environmental texture. As Hammond notes, the times have caught up with Brunswik. Of course, this emphasis goes back to Darwin’s “entangled bank”. Darwin made the point that living organisms evolve and survive in entangled relationships with other evolving organisms, all coping with the fallible indicators that are others’ behaviors.

Continue reading