Now the confidence heuristic is not the only thing Trump takes advantage of, but we will leave the others for another time. I will also avoid the question of whether Trump is actually confident. So what is the relationship between confidence and decision making? On page 13 of Thinking, Fast and Slow, Daniel Kahneman describes:
a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.
This post is based on a draft dated July 10, 2015, “Learning in Dynamic Probabilistic Environments: A Parallel-constraint Satisfaction Network-model Approach,” written by Marc Jekel, Andreas Glöckner, and Arndt Bröder. The paper includes experiments that contrast Parallel Constraint Satisfaction with the Adaptive Toolbox Approach; I have chosen to look only at the update of the PCS model with learning. The authors develop an integrative model for decision making and learning by extending previous work on parallel constraint satisfaction networks with backward error-propagation learning algorithms. The Parallel Constraint Satisfaction Theory for Decision Making and Learning (PCS-DM-L) conceptualizes decision making as a process of coherence structuring in which learning is achieved by adjusting network weights from one decision to the next. PCS-DM-L predicts that individuals adapt to the environment through gradual changes in cue weighting.
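The mechanics of learning by weight adjustment can be sketched in a few lines. The toy below is not the authors' PCS-DM-L network; it substitutes a simple delta-rule update for their backward error-propagation scheme, and every name and number in it is illustrative. It shows the paper's core prediction in miniature: when one cue is valid and the others are noise, the valid cue's weight gradually grows.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def update_weights(weights, cues, outcome, lr=0.1):
    """One delta-rule step: nudge cue weights toward the observed outcome."""
    pred = sigmoid(sum(w * c for w, c in zip(weights, cues)))
    error = outcome - pred            # feedback signal
    grad = pred * (1.0 - pred)        # derivative of the sigmoid
    return [w + lr * error * grad * c for w, c in zip(weights, cues)]

random.seed(0)
weights = [0.0, 0.0, 0.0]
for _ in range(2000):
    cues = [random.choice([-1, 1]) for _ in range(3)]
    outcome = 1 if cues[0] == 1 else 0   # only cue 0 predicts the outcome
    weights = update_weights(weights, cues, outcome)
# After training, the weight on cue 0 dominates the uninformative cues.
```

No single trial moves a weight much, but across decisions the valid cue's weight steadily separates from the rest, which is what "adapting to the environment by gradual changes in cue weighting" looks like mechanically.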
This post is based on a paper: “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48–73. The topic is not new, but the paper adds to the evidence of our shortcomings with nonlinearity.
In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.
This post is a continuation of the previous blog post, Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: description of the particulars of the situations involved and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate “outcomes” through a process that they call “simulated experience.”
Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described rather than experienced is related to Brunswik’s distinction between cognition and perception. With cognition, people can be quite accurate in their responses, but they can also make large errors. I note that this is similar to Hammond’s distinction between coherence and correspondence. With perception and correspondence, people are unlikely to be highly accurate, but their errors are likely to be small. Simulation, perception, and correspondence tend to be robust.
This post is based on “Providing information for decision making: Contrasting description and simulation,” written by Robin M. Hogarth and Emre Soyer, Journal of Applied Research in Memory and Cognition 4 (2015) 221–228. Hogarth and Soyer propose that providing information to help people make decisions can be likened to telling stories. First, the provider – or storyteller – needs to know what he or she wants to say. Second, it is important to understand the characteristics of the audience, as this affects how information is interpreted. Third, the provider must match what is said to the needs of the audience. Finally, when it comes to decision making, the provider should not tell the audience what to do. Although Hogarth and Soyer do not mention it, good storytelling draws us into the descriptions so that we can “experience” the story. (see post 2009 Review of Judgment and Decision Making Research)
Hogarth and Soyer state that their interest in this issue was stimulated by a survey they conducted of how economists interpret the results of regression analysis. The economists were given the outcomes of the regression analysis in a typical tabular format, and the questions involved interpreting the probabilistic implications of specific actions given the estimation results. The participants had available all the information necessary to provide correct answers, but in general they failed to do so. They tended to ignore the uncertainty involved in predicting the dependent variable conditional on values of the independent variable, and as a result they vastly overestimated the predictive ability of the model. Another group of similar economists who saw only a bivariate scatterplot of the data answered the same questions accurately. These economists were hardly blinded by numbers, yet they still needed the visually presented frequency information.
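The survey result is easy to reproduce in spirit. The sketch below uses made-up data (my own illustrative numbers, not the survey's): a least-squares fit gives a confident-looking point prediction, but the residual standard deviation, the quantity the economists tended to ignore, implies that individual outcomes scatter widely around it.

```python
import math
import random

random.seed(1)

# Hypothetical data: y = 2*x + normal noise with sd 5.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [2 * x + random.gauss(0, 5) for x in xs]

# Ordinary least squares slope and intercept (closed form).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Residual standard deviation: the spread of outcomes around the fitted line.
resid_sd = math.sqrt(sum((y - (intercept + slope * x)) ** 2
                         for x, y in zip(xs, ys)) / (n - 2))

# Point prediction at x = 5, and the probability that an individual
# outcome there lands more than 5 units above it (normal residuals).
pred = intercept + slope * 5
p_exceed = 0.5 * math.erfc((5 / resid_sd) / math.sqrt(2))
```

The point prediction alone invites overconfidence: with a residual sd near 5, roughly one outcome in six falls more than 5 units above the line. That is the conditional uncertainty the tabular format hid and the scatterplot made visible.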
This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.
In Expert Political Judgment: How Good Is It? How Can We Know? Tetlock found that the average expert did little better than guessing. He also found that some did better. In Superforecasting he discusses the study of those who did better and how they did it.
This post is based on a paper: “Heuristic and Linear Models of Judgment: Matching Rules and Environments,” written by Robin M. Hogarth and Natalia Karelaia, Psychological Review 2007, Vol. 114, No. 3, 733–758 that predated Hogarth and Karelaia’s (What has Brunswik’s Lens Model Taught?) meta-analysis. It includes the underpinnings for that study.
Two classes of models have dominated research on judgment and decision making over past decades. In one, explicit recognition is given to the limits of information processing, and people are modeled as using simplifying heuristics (Gigerenzer, Kahneman, Tversky school). In the other (Hammond school), it is assumed that people can integrate all the information at hand and that this is combined and weighted as if using an algebraic—typically linear—model.
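The contrast can be made concrete. Take-the-best is an actual heuristic from the Gigerenzer school's adaptive toolbox; the weighted-additive rule below stands in for the Hammond-style linear model. The cue values and weights are hypothetical, chosen so the two rules disagree.

```python
def take_the_best(cues_a, cues_b, validity_order):
    """Heuristic: decide on the first cue (in validity order) that
    discriminates between the options; call it a tie if none does."""
    for i in validity_order:
        if cues_a[i] != cues_b[i]:
            return 'A' if cues_a[i] > cues_b[i] else 'B'
    return 'tie'

def weighted_additive(cues_a, cues_b, weights):
    """Linear model: integrate all cues, weighted, and compare the totals."""
    sa = sum(w * c for w, c in zip(weights, cues_a))
    sb = sum(w * c for w, c in zip(weights, cues_b))
    return 'tie' if sa == sb else ('A' if sa > sb else 'B')

# The single best cue favors A, but the sum of the rest favors B.
a = [1, 0, 0, 0]
b = [0, 1, 1, 1]
order = [0, 1, 2, 3]             # cues ranked by assumed validity
weights = [0.4, 0.3, 0.3, 0.3]   # hypothetical cue weights
# take_the_best stops at cue 0 and picks A; weighted_additive picks B.
```

The heuristic consults cues one at a time and ignores everything after the first discriminating cue; the linear model integrates all the information. Whether that frugality helps or hurts depends on the environment, which is exactly the matching question the paper addresses.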
The last post, Persistence, looked at persistence as being partially determined by the distribution of the waiting times for the reward; a fat-tailed distribution might rationally steer one toward giving up after a short waiting period. Robin Hogarth (Educating Intuition) has recently published a paper with Marie Claire Villeval, “Ambiguous Incentives and the Persistence of Effort: Experimental Evidence,” Journal of Economic Behavior & Organization, Volume 100, April 2014, pages 1–19, that looks at economic activities where the reward is mundane: money. It is aimed more at what determines our persistence from the employer’s point of view, but I believe it could be more broadly applicable.
Hogarth and Villeval explore ambiguous situations where economic agents reap the benefits of engaging in an activity across time until – unknown to them – there is a shift (the regime change) in the underlying process and pursuing the activity is no longer profitable. The term regime shift was new to me in this context. For an old city planner, regime shift meant a new mayor or a change in the form of government. Apparently the ecological term more or less runs with the old definition and means abrupt, long-lasting, nonlinear change. Hogarth has helped me understand that humans have made an evolutionary career out of understanding linear change, or functions that are linear over the relevant range, while we tend to be weak with nonlinear functions. How long will the investor continue to place new orders, and does this depend on the regularity of his previous outcomes? How long will an employee keep working in the same firm if she no longer receives a bonus? How is the decision affected by preferences regarding risk and ambiguity and/or the regularity with which bonuses have been paid in the past?
This is a mild revolution for me. I was always irritated when someone suggested that someone should pull himself up by his bootstraps; it seemed quite impossible to me. But apparently even my computer is bootstrapping when it is booting. According to Wikipedia, bootstrapping refers to the starting of a self-sustaining process that is supposed to proceed without external input. In computing, the term (usually shortened to booting) refers to the process of loading basic software into the memory of a computer after power-on or general reset, especially the operating system, which then takes care of loading other software as needed. “Bootstrapping” alludes to Baron Munchhausen, who claimed to have escaped from a swamp by pulling himself up by, depending on who tells the story, his own hair or his bootstraps.
Standing in the shower preparing to dry off, I consider myself at my most lucid. But as I dry myself with a big fluffy towel, I tend to move on to another place on the towel whenever my hands feel any moisture. Thus, although I believe that the dryness (or lack thereof) of the side of the towel my hands can feel is unrelated to that of the side drying my body, I am usually thinking about something else, so I still move the towel. I won’t even try to figure out the function for this, but it is clearly a fallible indicator. Hogarth reminded me of this when he noted our poorer performance when dealing with nonlinear relationships: “Human achievement is lower when there are nonlinearities in the ecology.” (What has Brunswik’s Lens Model Taught?). This reminds me of derivative financial instruments. My intuition cannot handle a straddled put (I think I made that up). I can learn and feed it in, but give me 15 seconds to figure it out and my performance will be worse than chance. This is kind of a big deal if John von Neumann’s analogy is correct: that studying nonlinear relationships is similar to studying non-elephants.