This post examines a paper by Robin Hogarth and Natalia Karelaia, a mountain of work: a meta-analysis of about 250 studies, conducted over five decades, that used the lens model to study how humans make judgments from cues that are probabilistically related to a criterion. Hogarth’s work has been the subject of the posts Robin Hogarth on Expertise and Learning, Feedback, and Intuition. Hogarth suggests the examples of an analyst examining financial indicators to predict corporate bankruptcy, a manager using interview behavior to assess job candidates, or a physician looking at symptoms that indicate the severity of a disease. In all of these cases, the simple beauty of Brunswik’s lens model lies in recognizing that the person’s judgment and the criterion being predicted can be thought of as two separate functions of the cues available in the environment of the decision. The accuracy of judgment therefore depends, first, on how predictable the criterion is on the basis of the cues and, second, on the extent to which the function describing the person’s judgment matches its environmental counterpart.
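The two-functions idea can be sketched numerically. Below is a minimal simulation, with made-up cue weights and noise levels (not taken from the paper), that fits a linear model to the environment side and to the judge side and compares achievement with the product of the lens-model components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup for illustration: 3 cues, 200 cases.
n, k = 200, 3
cues = rng.normal(size=(n, k))

# Environment side: the criterion is a noisy linear function of the cues.
env_weights = np.array([0.6, 0.3, 0.1])
criterion = cues @ env_weights + rng.normal(scale=0.5, size=n)

# Judge side: similar but imperfect weights, plus inconsistency (noise).
judge_weights = np.array([0.5, 0.4, 0.0])  # this judge ignores the third cue
judgment = cues @ judge_weights + rng.normal(scale=0.7, size=n)

def linear_model_stats(X, y):
    """Fit y = X b by least squares; return predictions and multiple R."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = X @ b
    return pred, np.corrcoef(pred, y)[0, 1]

env_pred, R_e = linear_model_stats(cues, criterion)   # environmental predictability
judge_pred, R_s = linear_model_stats(cues, judgment)  # judge's consistency
G = np.corrcoef(env_pred, judge_pred)[0, 1]           # matching of the two models
r_a = np.corrcoef(judgment, criterion)[0, 1]          # achievement

# Lens model equation, ignoring the usually small unmodeled-variance term:
# r_a is approximately G * R_e * R_s
print(round(r_a, 2), round(G * R_e * R_s, 2))
```

Achievement is capped both by how predictable the environment is (R_e) and by how consistently the judge applies a policy that matches it (G and R_s), which is exactly the paper's decomposition.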
This post examines a couple of applications of Signal Detection Theory. Both are technically beyond me, but the similarities in the applications seem instructive. In both articles, SDT is used to evaluate questionnaire-type screening tools. This is not a big surprise, since screening is where most of us first saw applications of statistical hypothesis testing, with its false positives and false negatives. One paper looks at BRCA genetic risk screening, the other at depression screening. In both cases, the screening instruments do not claim to be gold standards, but only initial screens of the sort an internal medicine doctor might use. In both cases, there is the idea that purely probability-based instruments are ineffective, given the biases that most people carry with them. One paper uses a fast-and-frugal decision tree (FFT); the other uses three risk categories to provide the gist, as in fuzzy-trace theory (FTT). This gives us two confusingly similar acronyms: FFT and FTT.
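A fast-and-frugal tree checks one cue at a time and exits as soon as a cue gives a decision. The sketch below is a toy illustration of the idea, scored with SDT's hit rate (sensitivity) and correct-rejection rate (specificity); the cue names, tree structure, and cases are invented, not taken from either paper:

```python
# Toy fast-and-frugal tree (FFT) for an initial screen. Cue names,
# ordering, and exits are invented for illustration only.
def fft_screen(low_mood, loss_of_interest, sleep_problems):
    if not low_mood:
        return False            # first cue can exit with "no follow-up"
    if loss_of_interest:
        return True             # second cue can exit with "refer"
    return sleep_problems       # last cue decides the remaining cases

# Tiny made-up validation set: (cues, condition actually present?)
cases = [
    ((True,  True,  False), True),
    ((True,  False, True),  True),
    ((False, False, False), False),
    ((True,  False, False), False),
    ((False, True,  True),  True),   # the screen misses this case
]

hits = sum(fft_screen(*c) and y for c, y in cases)
misses = sum((not fft_screen(*c)) and y for c, y in cases)
false_alarms = sum(fft_screen(*c) and not y for c, y in cases)
correct_rejections = sum((not fft_screen(*c)) and not y for c, y in cases)

sensitivity = hits / (hits + misses)                     # SDT hit rate
specificity = correct_rejections / (correct_rejections + false_alarms)
print(sensitivity, specificity)
```

The point of the structure is the early exits: most cases are decided after one or two cues, which is what makes such trees usable in a brief clinical encounter.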
I first mentioned the global neuronal workspace in the post Toward a Culture of Neurons. It is also discussed in Consciousness: Confessions of a Romantic Reductionist. Stanislas Dehaene and his colleagues have done much to enhance and improve the GNW model, but original credit goes to B.J. Baars, author of the 1997 book In the Theater of Consciousness: The Workspace of the Mind. The GNW model relies upon a few simple assumptions. Its main premise is that conscious access is global information availability: what we subjectively experience as conscious access is the selection, amplification, and global broadcasting, to many distant brain areas, of a single piece of information selected for its salience or relevance to current goals. Although today its relevance to judgment and decision making may be indirect, the model certainly shares with parallel constraint satisfaction theory the limited capacity of consciousness and the vast capacity of the unconscious.
I used material from Ken Hammond's book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice in my previous post. In the book he makes the point, previously lost on me, that a key risk is never mentioned in discussions of the 1986 Challenger disaster: the risk of a false negative.
Sometimes it is easy to figure out the subject of the next post. Other times, nothing seems interesting. In my opinion, no one has written on the subject of judgment and decision making in a more insightful and interesting way than Kenneth R. Hammond. I have looked at two of his books in previous posts: Judgments Under Stress and Beyond Rationality (three posts). For now I am going to cherry-pick part of the epilogue of his book Human Judgment and Social Policy: Irreducible Uncertainty, Inevitable Error, Unavoidable Injustice.
This is clearly an example of the blind trying to lead when sight is a real advantage. Glockner displays PCS1 and PCS2 in some figures in his January 2014 paper in the journal Judgment and Decision Making. Since I tend to look at the pictures, this caught my interest. Was this a different model or some innovation? I have provided narrative explanations of Parallel Constraint Satisfaction in earlier posts, but here I am going to look at the difference between PCS1 and PCS2. I am doing this by cobbling together explanations from a few of Glockner's papers, which is a little dangerous since the experiments are different.
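Before getting to the differences, it may help to see what any PCS network does mechanically: activations spread over weighted links, with consistent evidence exciting and competing options inhibiting each other, until the network settles. The sketch below is a generic interactive-activation style update with toy weights of my own invention, not Glockner's actual parameterization or networks:

```python
import numpy as np

# Generic parallel-constraint-satisfaction sketch. Weights are toy values
# invented for illustration: one clamped source node, two cues, two options.
# Positive links pass support; the negative link makes the options compete.
labels = ["source", "cue1", "cue2", "optionA", "optionB"]
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = 0.01   # source weakly validates cue1
W[0, 2] = W[2, 0] = 0.01   # source weakly validates cue2
W[1, 3] = W[3, 1] = 0.05   # cue1 speaks for option A (stronger cue)
W[2, 4] = W[4, 2] = 0.03   # cue2 speaks for option B (weaker cue)
W[3, 4] = W[4, 3] = -0.20  # the options inhibit each other

decay, floor, ceiling = 0.05, -1.0, 1.0
a = np.zeros(5)
a[0] = 1.0                  # the source node stays clamped on

for _ in range(200):        # iterate until the network has settled
    net = W @ a
    # Positive net input pushes activation toward the ceiling,
    # negative net input pushes it toward the floor.
    grow = np.where(net > 0, ceiling - a, a - floor)
    a = a * (1 - decay) + net * grow
    a[0] = 1.0              # re-clamp the source each step

# The option with stronger consistent support settles at higher activation.
print(round(a[3], 3), round(a[4], 3))
```

The "decision" is simply whichever option node ends up with the higher settled activation; the consistency of the whole pattern, not any single cue, drives the outcome.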
This post is based on a paper by several scientists at the Max Planck Institute for Research on Collective Goods. It merges several things that I have been interested in over the years: social psychology, public good economics, city planning, and epidemiology (at least in a metaphoric sense). Politicians loved the simplicity of “broken windows,” and as a city planner I was willing to use it if it got more resources for what I wanted. Being tough on crime was an easier sell than normal city planning administration.