Sometimes it is easy to figure out the subject of the next post. Other times, nothing seems interesting. In my opinion, no one has written on the subject of judgment and decision making in a more insightful and interesting way than Kenneth R. Hammond. I have looked at two of his books in previous posts: Judgments Under Stress and Beyond Rationality (three posts). For now I am going to cherry-pick part of the epilogue of his book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice.
In the introduction Ken Hammond asked questions, and in the epilogue he gave concise answers. I have excerpted the first two.
- How good is human judgment? The reader now knows that the answer to that question requires that we differentiate between coherence and correspondence judgments. Coherence judgments tend to be poor unless one is trained in the discipline of statistics. When these calculated judgments are made by experts, they are precisely correct or wildly incorrect. Correspondence judgments are very accurate in the realm of physical perception but become less so as the conceptual context of the task increases and as the uncertainty in the task system increases. They are robust and imprecise.
- How does one acquire competence in judgment? Coherence: Persons must be educated with respect to the principles of probability; they can be (more should be). Raw experience is a poor teacher with respect to these principles. Correspondence: Correspondence accuracy in relation to physical perception is part of our biological endowment; competence that depends on the use of multiple fallible indicators is generally slowly (and often painfully) acquired through experience with outcomes. It is doubtful that high competence and accuracy in social perception, in any absolute sense, can be achieved, because of the high uncertainty in the targets—the behavior of the people perceived. Modest success is the best to be hoped for.
Coherence competence requires logic, rationality, and what psychologists now call numeracy, while correspondence competence requires empirical correctness: matching the facts. Wisdom requires both.
Many of us have been educated to some extent about false positives and false negatives, Type I and Type II errors. We have seen the box:

|                       | Predicted positive      | Predicted negative       |
|-----------------------|-------------------------|--------------------------|
| **Actually positive** | correct                 | false negative (Type II) |
| **Actually negative** | false positive (Type I) | correct                  |
We generally focus on one error and make changes in the belief that only one error is being affected. Only by increasing the accuracy of prediction can both kinds of error be reduced simultaneously.
When the degree of accuracy (which will be modest in most cases) has reached its maximum, then we must choose which error to tolerate.
The Taylor-Russell diagram below is excerpted from Hammond's book. It applies the same concept to social policy decisions. It could be who gets accepted to an Ivy League college, who gets welfare benefits, or who gets a drug. The ellipse includes everyone, and its shape in the case below implies that most people are getting what they deserve. Now if the FDA is still examining ambiguous clinical trials on a drug that could save certain people's lives, it will be asked to quickly approve the drug and thus move the vertical line to the left to reduce the injustice to those individuals (reducing the size of area 2). Unfortunately, this will increase the injustice to society (area 4). In other words, releasing the drug may harm many people whom it will not help, and may muddy up predictions so that the ellipse cannot be shrunk in the future. Since uncertainty cannot be eliminated from decisions of any importance, injustice cannot be eliminated from social policy decisions.
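The Taylor-Russell trade-off can be simulated in a few lines. This is only an illustrative sketch, not anything from Hammond's book: I assume a predictor (a test score) correlated at strength `r` with an outcome, and a selection cutoff on the score. Moving the cutoff shifts errors from one kind to the other; only raising `r` (the accuracy of prediction) shrinks both at once.

```python
import math
import random

def error_rates(r, cutoff, n=100_000, seed=1):
    """Toy Taylor-Russell-style selection problem (hypothetical setup).
    r      : correlation between test score and true outcome (accuracy)
    cutoff : selection threshold on the score, in z-units
    Returns (false_positive_rate, false_negative_rate) over all n cases."""
    rng = random.Random(seed)
    fp = fn = 0
    for _ in range(n):
        score = rng.gauss(0, 1)
        # Outcome correlates with the score at strength r, plus noise.
        outcome = r * score + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        selected, succeeds = score >= cutoff, outcome >= 0
        if selected and not succeeds:
            fp += 1  # approved, but it does not work out (injustice to society)
        if not selected and succeeds:
            fn += 1  # rejected, but it would have worked (injustice to individuals)
    return fp / n, fn / n

# Moving the cutoff left (approving more) trades one error for the other...
fp_lo, fn_lo = error_rates(r=0.5, cutoff=-0.5)
fp_hi, fn_hi = error_rates(r=0.5, cutoff=+0.5)
# ...while higher predictive accuracy shrinks both errors at a fixed cutoff.
fp_r3, fn_r3 = error_rates(r=0.3, cutoff=0.0)
fp_r7, fn_r7 = error_rates(r=0.7, cutoff=0.0)
```

Running this confirms Hammond's point in miniature: at a fixed accuracy you can only choose which injustice to tolerate, while reducing both requires better prediction, and in social policy the uncertainty usually leaves `r` modest.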
From reading the answers in the excerpts above, you may think that Hammond sees understanding probability and statistics as the key. Actually, he sees them as necessary but not sufficient. In his answer to the first question above, Hammond notes: "When these calculated judgments are made by experts, they are precisely correct or wildly incorrect." Those triple-A subprime mortgage bond tranches were a classic example of wildly incorrect. Hammond discusses the Challenger disaster at length as another example of statistics not necessarily being helpful. I think Hammond might enjoy "Rational decision making: balancing RUN and JUMP modes of analysis," by Betsch and Held, which I discussed in Rationality Defined Again. He might also suggest that the laymen among us take Scott Page's course, Model Thinking, which just began last week.
Hammond, K.R. (1996). Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice. New York: Oxford University Press.