This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381, “How generalizable is good judgment? A multi-task, multi-benchmark study,” authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on his past work, not because he was actively involved. Nevertheless, the paper provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! While looking for an image to put on this post, I found the one above. Mellers and Tetlock looked married, and it turns out they are. I imagine she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that fact. (http://www.nytimes.com/2013/03/22/opinion/brooks-forecasting-fox.html)
The authors note that Kenneth Hammond’s correspondence and coherence (see Beyond Rationality) are the gold standards by which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond often decried psychological experiments as poorly designed measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they did better when the forecasting environment provided regular, clear feedback and repeated opportunities to learn. According to the authors, Weiss and Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments when cases are alike and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I suspect Ken Hammond would have had little use for these: coherence is logical consistency, and correspondence is empirical discrimination.)
Although I have much respect for Dan Kahan’s work, I have had a little trouble with the Identity-protective Cognition Thesis (ICT). The portion in bold in the quote below, from “Motivated Numeracy and Enlightened Self-Government,” has never rung true.
On matters like climate change, nuclear waste disposal, the financing of economic stimulus programs, and the like, an ordinary citizen pays no price for forming a perception of fact that is contrary to the best available empirical evidence: That individual’s personal beliefs and related actions—as consumer, voter, or public discussant—are too inconsequential to affect the level of risk he or anyone else faces or the outcome of any public policy debate. However, if he gets the “wrong answer” in relation to the one that is expected of members of his affinity group, the impact could be devastating: the loss of trust among peers, stigmatization within his community, and even the loss of economic opportunities.
If ICT were true, why would Thanksgiving be so painful? I do not even know what my friends think about these issues. Granted, at some point issues like climate change become so politically tainted that you may avoid talking about them so as not to antagonize your friends, but that does not change my view. Now, however, Kahan has a better explanation.
This post is based on a comment paper: “Honest People Tend to Use Less–Not More—Profanity: Comment on Feldman et al.’s (2017) Study,” which appeared in Social Psychological and Personality Science, 1–5, and was written by R. E. de Vries, B. E. Hilbig, Ingo Zettler, P. D. Dunlop, D. Holtrop, K. Lee, and M. C. Ashton. Why would honesty suddenly be important with respect to decision making when I have largely ignored it in the past? You will have to figure that out for yourself. The paper reminded me that most of our decision-making machinery is based on relative differences. We compare well, but we are not so good at absolutes. Thus, a relentless, fearless liar widens the relative differences, and that is likely to stretch the range of what seems to be a reasonable decision.
This post is based on a paper: “The Two Settings of Kind and Wicked Learning Environments” written by Robin M. Hogarth, Tomás Lejarraga, and Emre Soyer that appeared in Current Directions in Psychological Science 2015, Vol. 24(5) 379–385. Hogarth created the idea of kind and wicked learning environments and it is discussed in his book Educating Intuition.
Hogarth et al. state that inference involves two settings: in the first, information is acquired (learning); in the second, it is applied (predictions or choices). Kind learning environments involve close matches between the informational elements in the two settings and are a necessary condition for accurate inferences. Wicked learning environments involve mismatches.
This post is based on a draft dated July 10, 2015, “Learning in Dynamic Probabilistic Environments: A Parallel-constraint Satisfaction Network-model Approach,” written by Marc Jekel, Andreas Glöckner, & Arndt Bröder. The paper includes experiments that contrast Parallel Constraint Satisfaction with the Adaptive Toolbox Approach. I have chosen to look only at the update of the PCS model with learning. The authors develop an integrative model for decision making and learning by extending previous work on parallel constraint satisfaction networks with algorithms of backward error-propagation learning. The Parallel Constraint Satisfaction Theory for Decision Making and Learning (PCS-DM-L) conceptualizes decision making as a process of coherence structuring in which learning is achieved by adjusting network weights from one decision to the next. PCS-DM-L predicts that individuals adapt to the environment by gradual changes in cue weighting.
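To make the coherence-structuring idea concrete, here is a toy sketch of my own (not the authors' implementation, and with made-up weights and validities): cue and option nodes iteratively spread activation over symmetric excitatory and inhibitory links until the network settles, and the option backed by the higher-validity cue ends up with the higher activation.

```python
# Toy parallel-constraint-satisfaction network: my own illustration,
# not the model from the paper. All weights and validities are invented.
DECAY, FLOOR, CEIL = 0.1, -1.0, 1.0

# Node 0 is a constantly active "source" whose links to the cues encode
# cue validity (0.8 for cue 1 vs. 0.3 for cue 2). Cue 1 supports option 1,
# cue 2 supports option 2, and the two options inhibit each other.
#          src   c1    c2    o1    o2
weights = [[0.0, 0.8,  0.3,  0.0,  0.0],   # source
           [0.8, 0.0,  0.0,  0.1,  0.0],   # cue 1
           [0.3, 0.0,  0.0,  0.0,  0.1],   # cue 2
           [0.0, 0.1,  0.0,  0.0, -0.2],   # option 1
           [0.0, 0.0,  0.1, -0.2,  0.0]]   # option 2

act = [1.0, 0.0, 0.0, 0.0, 0.0]            # source clamped at 1.0
for _ in range(200):                       # settle toward coherence
    new = act[:]
    for i in range(1, 5):                  # update every node but the source
        net = sum(weights[i][j] * act[j] for j in range(5))
        if net >= 0:
            new[i] = act[i] * (1 - DECAY) + net * (CEIL - act[i])
        else:
            new[i] = act[i] * (1 - DECAY) + net * (act[i] - FLOOR)
    act = new

o1, o2 = act[3], act[4]                    # option 1 should dominate
```

In the learning extension described in the paper, the source-to-cue weights would then be nudged after feedback on each decision, which this static sketch does not attempt.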
This post is based on a paper: “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015), 48–73. It is not a new topic, but the paper adds to the evidence of our nonlinear shortcomings.
In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.
This post is based on the paper: “Priors in perception: Top-down modulation, Bayesian perceptual learning rate, and prediction error minimization,” authored by Jakob Hohwy (see post Explaining Away), which appeared (or is scheduled to appear) in Consciousness and Cognition, 2017. Hohwy writes in an understandable manner and is so open that he posts papers even before they are complete; this is one example. Hohwy pursues the idea of cognitive penetration – the notion that beliefs can determine perception.
Can ‘high level’ or ‘cognitive’ beliefs modulate perception? Hohwy methodically examines this question by trying to create the conditions under which such modulation might work and not be trivial. Under standard Bayesian inference, the learning rate declines gradually as evidence accumulates and the prior is updated to be ever more accurate: the more you already know, the less you will learn from the world. In a changing world this is not optimal, since when things in the environment change we should vary the learning rate. Hohwy provides this example. As ambient light conditions improve, the learning rate for detecting a visible target should increase (since the samples, and therefore the prediction errors, have better precision in better light). This means Bayesian perceptual inference needs a tool for regulating the learning rate. The inferential system should build expectations for the variability in lighting conditions throughout the day, so that the learning rate in visual detection tasks can be regulated up and down accordingly.
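A minimal sketch of the declining learning rate, using my own toy numbers rather than anything from Hohwy's paper: under conjugate Gaussian updating of a belief about a fixed quantity, the gain applied to each new sample (the learning rate) necessarily shrinks as the prior sharpens.

```python
# Gaussian Bayesian updating of a belief about a constant quantity.
# The "learning rate" is the gain on each new sample; it shrinks as the
# prior becomes more precise, so later samples move the belief less.
def update(mean, var, sample, obs_var):
    gain = var / (var + obs_var)            # learning rate for this sample
    new_mean = mean + gain * (sample - mean)
    new_var = (1 - gain) * var              # posterior always gets sharper
    return new_mean, new_var, gain

mean, var = 0.0, 1.0                        # vague prior (made-up numbers)
gains = []
for sample in [1.2, 0.8, 1.1, 0.9, 1.0]:
    mean, var, gain = update(mean, var, sample, obs_var=1.0)
    gains.append(gain)

# gains decline monotonically: 0.5, 0.333..., 0.25, 0.2, 0.166...
```

Hohwy's point is that this monotone decline is wrong for a changing world: the system also needs higher-level expectations (e.g., about lighting volatility) that can push the gain back up.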
The human brain is thus hypothesized to build up a vast hierarchy of expectations that overall help regulate the learning rate and thereby optimize perceptual inference for a world that delivers changeable sensory input. Hohwy suggests that this makes the brain a hierarchical filter that takes the non-linear time series of sensory input and seeks to filter out regularities at different time scales. Taking the distributions in question to be normal (Gaussian), the brain is modeled as a hierarchical Gaussian filter, or HGF.
This post is based on a paper: “Intuition and analytic processes in probabilistic reasoning: The role of time pressure,” authored by Sarah Furlan, Franca Agnoli, and Valerie F. Reyna. Valerie Reyna is, of course, the primary creator of fuzzy-trace theory. Reyna’s papers tend to do a good job of summing up the state of the decision-making art and fitting her ideas into it.
The authors note that although there are many points of disagreement, theorists generally agree that there are heuristic processes (Type 1) that are fast, automatic, unconscious, and low in effort. Many adult judgment biases are considered a consequence of these fast heuristic responses, also called default responses, because they are the first responses that come to mind. Type 1 processes are a central feature of intuitive thinking, requiring little cognitive effort or control. In contrast, analytic (Type 2) processes are considered slow, conscious, deliberate, and effortful, and they place demands on central working memory resources. Furlan, Agnoli, and Reyna assert that Type 2 processes are thought to be related to individual differences in cognitive capacity, while Type 1 processes are thought to be independent of cognitive ability, a position challenged by the research presented in their paper. I was surprised by the assumption, built into typical dual-process theories, that intuitive abilities are unrelated to overall intelligence and cognitive abilities.
This post is from Judgment and Decision Making, Vol. 11, No. 6, November 2016, pp. 601–610, and is based on the paper: “The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated,” written by Andreas Glöckner. Danziger, Levav, and Avnaim-Pesso (DLA) analyzed 1,112 legal rulings of Israeli parole boards, covering about 40% of the parole requests in the country. They assessed the effect of the serial order in which cases are presented within a ruling session, taking advantage of the fact that the boards work on cases in three sessions per day, separated by a late-morning snack and a lunch break. They found that the probability of a favorable decision drops from about 65% at the first ruling to about 5% at the last ruling within each session, which is equivalent to an odds ratio of 35. DLA argue that these findings show extraneous factors influencing judicial decisions and speculate that the effect might be driven by mental depletion. Glöckner notes that the article has attracted much attention and that the supposed order effect is widely cited in psychology.
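The odds ratio of 35 follows directly from the two quoted probabilities; a line or two of arithmetic reproduces it:

```python
# Reproducing the reported odds ratio from the quoted favorable-decision
# rates at the start and end of a session.
p_first, p_last = 0.65, 0.05

odds_first = p_first / (1 - p_first)   # ≈ 1.86
odds_last = p_last / (1 - p_last)      # ≈ 0.053
odds_ratio = odds_first / odds_last    # ≈ 35.3, i.e., "an odds ratio of 35"
```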
David Brooks seems to be a fascination of mine. The New York Times columnist surprises me both in positive and negative ways. I only mention it when the surprise is negative. Below is an excerpt from his November 25, 2016, column.
And this is my problem with the cognitive sciences and the advice world generally. It’s built on the premise that we are chess masters who make decisions, for good or ill. But when it comes to the really major things we mostly follow our noses. What seems interesting, beautiful, curious and addicting?
Have you ever known anybody to turn away from anything they found compulsively engaging?
We don’t decide about life; we’re captured by life. In the major spheres, decision-making, when it happens at all, is downstream from curiosity and engagement. If we really want to understand and shape behavior, maybe we should look less at decision-making and more at curiosity. Why are you interested in the things you are interested in? Why are some people zealously seized, manically attentive and compulsively engaged?
Now that we know a bit more about decision-making, maybe the next frontier is desire. Maybe the next Kahneman and Tversky will help us understand what explains, fires and orders our loves.
I can imagine his frustration with the advice world and maybe with Kahneman and Tversky (see post Prospect Theory), but it appears that Brooks is looking only at the advice world. Brooks would benefit from looking at the work of Ken Hammond. The post Cognitive Continuum examines some of Hammond’s 1980 work; Hammond places the chess masters to whom Brooks refers at one extreme of the cognitive continuum. The post Intuition in J-DM looks at the work of Tilmann Betsch and Andreas Glöckner on what is called Parallel Constraint Satisfaction theory.
Betsch and Glöckner believe that information integration and output formation (choice, preference) are intuitive. Analysis involves directed search (looking for valid cues or asking an expert for advice), making sense of information, anticipating future events, and so on. Thus, they see a judgment as a collaboration of intuition and analysis. The depth of analysis varies, but intuition is always working, so preferences are formed even without intention. Limiting processing time and capacity constrains only input. Thus, once information is in the system, intuition will use it irrespective of amount and capacity.
Curiosity might be considered the degree of dissonance we encounter in our automatic decision making, dissonance that in effect tells us to analyze: find more information and examine it. We do mostly follow our noses, because doing so is adaptive. But it is also adaptive to recognize change that is persistent and must be responded to. One parameter of the parallel constraint satisfaction model is an individual’s sensitivity to differences between cue validities. This implies that individuals respond differently to changing cue validities: some change quickly when they perceive differences, while others change at a glacial pace.
The post Rationality Defined Again: RUN & JUMP looks at the work of Tilmann Betsch and Carsten Held. Brooks in his opinion piece seems to be suggesting that analytic processing is pretty worthless. Betsch and Held have seen this before. They note that research on non-analytic processing has led some authors to conclude that intuition is superior to analysis or to at least promote it as such with the obvious example being Malcolm Gladwell in Blink. Such a notion, however, neglects the important role of decision context. The advantages and disadvantages of the different types of thought depend on the nature of the task. Moreover, the plea for a general superiority of intuition neglects the fact that analysis is capable of things that intuition is not. Consider, for example, the case of routine maintenance and deviation decisions. Routine decisions will lead to good results if prior experiences are representative for the task at hand. In a changing world, however, routines can become obsolete.
In the absence of analytic thought, adapting to changing contexts requires slow, repetitive learning. Upon encountering repeated failure, the individual’s behavioral tendencies will change. The virtue of deliberate analysis, Brooks’ chess mastering, lies in its power to adapt quickly to new situations without requiring slow reinforcement learning. Whereas intuition is fast and holistic due to parallel processing, it is a slave to the pre-formed structure of knowledge as well as to the representation of the decision problem. The relations among goals, situations, options, and outcomes that result from prior knowledge provide the structural constraints under which intuitive processes operate. They can work very efficiently but, nevertheless, cannot change those constraints. The potential of analytic thought dwells in its power to change the structure of the representation of a decision problem.
I believe that Brooks realizes that analytic thought is one thing that distinguishes us from other creatures, even though it does not seem to inform much of our decision making. The post Embodied (Grounded) Prediction (Cognition) might also open a window for Brooks.