Category Archives: Feedback/Learning

Confidence, Part III

In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision making process, thus lending support to models of the mind as a device that computes with probabilistic estimates and probability distributions.
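As a minimal sketch of what "computed continuously, online" could look like (my own toy illustration, not the authors' model; the function name, drift, and noise values are invented for the example), a decision maker can accumulate noisy evidence for one of two hypotheses and read out its momentary confidence as a posterior probability after every sample:

```python
import random
import math

def online_confidence(n_samples=20, drift=0.3, noise=1.0, seed=1):
    """Toy sequential decision: after each noisy sample, update the
    posterior that hypothesis H1 (positive drift) rather than H2
    (negative drift) generated the data. The posterior at each step
    is the momentary confidence."""
    rng = random.Random(seed)
    log_odds = 0.0  # log P(H1)/P(H2), flat prior
    trace = []
    for _ in range(n_samples):
        x = rng.gauss(drift, noise)           # evidence sample drawn under H1
        log_odds += (2 * drift * x) / (noise ** 2)  # Gaussian log-likelihood ratio
        confidence = 1.0 / (1.0 + math.exp(-log_odds))
        trace.append(confidence)
    return trace

if __name__ == "__main__":
    for t, c in enumerate(online_confidence(), 1):
        print(f"sample {t:2d}: confidence in H1 = {c:.3f}")
```

Confidence here is not computed once at the end; it is available after every sample, which is the sense in which it is an online quantity.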


The Embodied Mind

One such explanation is that of predictive processing and the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave this concept together. Our minds are blends of top-down and bottom-up processing, where error signals and the effort to correct those errors make it possible for us to engage the world. According to the embodied mind model, our minds do not just reside in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara is a robot without any brain that imitates human walking nearly perfectly (video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. Estimating the certainty of our own prediction error signals about our own mental states and processes is, as Clark notes, "clearly a delicate and tricky business. For it is the prediction error signal that…gets to 'carry the news'."
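The business of weighting prediction errors by their estimated certainty is usually discussed as precision weighting. Here is a minimal, generic sketch of that idea (a standard Gaussian update, not Clark's or Friston's actual formulation; the function name and numbers are my own):

```python
def precision_weighted_update(prior_mean, prior_precision, obs, obs_precision):
    """One step of precision-weighted prediction-error correction:
    the prediction error (obs - prior_mean) moves the estimate only
    in proportion to how reliable (precise) the error signal is."""
    error = obs - prior_mean
    gain = obs_precision / (obs_precision + prior_precision)
    posterior_mean = prior_mean + gain * error
    posterior_precision = prior_precision + obs_precision
    return posterior_mean, posterior_precision

# A high-precision error signal carries a lot of news...
print(precision_weighted_update(0.0, 1.0, 2.0, 10.0))   # estimate jumps toward 2.0
# ...while an unreliable one is largely discounted.
print(precision_weighted_update(0.0, 1.0, 2.0, 0.1))    # estimate barely moves
```

Misjudging those precisions, either for the world or for our own mental states, is exactly what makes the business delicate.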

Continue reading

Confidence, Part I

Confidence is defined as our degree of belief that a certain thought or action is correct. There is confidence in your own individual decisions or perceptions, and then there is between-person confidence, where you defer your own decision making to someone else.

Why am I thinking of confidence? An article by Cass Sunstein explains it well: "Donald Trump is Amazing. Here's the Science to Prove It," which appeared in Bloomberg Opinion (Politics & Policy) on October 18, 2018.

Continue reading

A Nice Surprise

This post is based on a paper written by Andy Clark, author of Surfing Uncertainty (see the post Paper Predictive Processing for a fuller treatment), "A nice surprise? Predictive processing and the active pursuit of novelty," which appeared in Phenomenology and the Cognitive Sciences, pp. 1-14, DOI: 10.1007/s11097-017-9525-z. For me this is a chance to learn how Andy Clark has polished up his arguments since his book. It also strikes me as connected to my recent posts on Curiosity and Creativity.

Clark and Friston (see post The Prediction Machine) depict human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulation. But we know that we are attracted to the unexpected. We humans often seem to actively seek out surprising events, deliberately pursuing novel and exciting streams of sensory stimulation. So how does that square with the idea of minimizing prediction error?

Continue reading

Interoception and Theory of Mind

This post is based on the paper "The role of interoceptive inference in theory of mind," by Sasha Ondobaka, James Kilner, and Karl Friston, Brain and Cognition, March 2017, 112: 64–68.

Understanding or inferring the intentions, feelings, and beliefs of others is a hallmark of human social cognition, often referred to as having a Theory of Mind (ToM). ToM has been described as a cognitive ability to infer the intentions and beliefs of others through processing of their physical appearance, clothes, and bodily and facial expressions. Of course, the repertoire of hypotheses in our ToM is borrowed from the hypotheses that cause our own behavior.

But how can processing of internal visceral/autonomic information (interoception) contribute to the understanding of others’ intentions? The authors consider interoceptive inference as a special case of active inference. Friston (see post Prediction Error Minimization)  has theorized that the goal of the brain is to minimize prediction error and that this can be achieved both by changing predictions to match the observed data and, via action, changing the sensory input to match predictions.  When you drop the knife and then catch it with the other hand, you are using active inference.
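A toy sketch of the two routes this paragraph describes (my own illustration, not Friston's generative-model formalism; names and numbers are invented): shrink a prediction error either by revising the prediction (perception) or by acting so that the sensation moves toward the prediction (active inference).

```python
def minimize_prediction_error(prediction, sensation, route, rate=0.5):
    """Two ways to shrink the same prediction error.

    route="perception": revise the prediction toward the sensation.
    route="action":     act on the world so the sensation moves
                        toward the prediction.
    """
    error = sensation - prediction
    if route == "perception":
        prediction += rate * error      # update beliefs
    elif route == "action":
        sensation -= rate * error       # change the input via action
    return prediction, sensation, sensation - prediction

print(minimize_prediction_error(0.0, 1.0, "perception"))  # beliefs move
print(minimize_prediction_error(0.0, 1.0, "action"))      # the world moves
```

Catching the dropped knife is the second route: rather than updating the prediction that the knife is in your hand, you act to make it true again.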

Continue reading

Honesty

This post is based on a comment paper, "Honest People Tend to Use Less–Not More–Profanity: Comment on Feldman et al.'s (2017) Study," that appeared in Social Psychological and Personality Science, 1-5, and was written by R. E. de Vries, B. E. Hilbig, Ingo Zettler, P. D. Dunlop, D. Holtrop, K. Lee, and M. C. Ashton. Why would honesty suddenly be important with respect to decision making when I have largely ignored it in the past? You will have to figure that out for yourself. It reminded me that most of our decision making machinery is based on relative differences. We compare, but we are not so good at absolutes. Thus, when you get a relentless, fearless liar, the relative differences are widened, and this is likely to shift what seems like a reasonable decision.

Continue reading

Nonlinear

This post is based on a paper, "Learning from experience in nonlinear environments: Evidence from a competition scenario," authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73. It is not a new topic, but it adds to the evidence of our shortcomings in nonlinear environments.

In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.

Continue reading

Hogarth on Simulation

This post is a continuation of the previous blog post Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: description of the particulars of the situations involved and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate "outcomes" through a process that they call "simulated experience".

Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described as opposed to experienced is related to Brunswik's distinction between the use of cognition and perception. With the former (cognition), people can be quite accurate in their responses, but they can also make large errors. I note that this is similar to Hammond's coherence and correspondence. With perception and correspondence, people are unlikely to be highly accurate, but their errors are likely to be small. Simulation, perception, and correspondence tend to be robust.
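As a rough illustration of the kind of tool Hogarth and Soyer have in mind (the gamble, probabilities, and payoffs below are invented for the example, not taken from their paper), a simulator can turn a described probability into a stream of experienced outcomes:

```python
import random

def simulated_experience(prob_success=0.15, payoff=100, cost=20,
                         n_trials=1000, seed=7):
    """Instead of describing a gamble ("15% chance of winning 100,
    costs 20 to play"), let the decision maker watch sampled outcomes,
    in the spirit of simulated experience."""
    rng = random.Random(seed)
    return [(payoff if rng.random() < prob_success else 0) - cost
            for _ in range(n_trials)]

outcomes = simulated_experience()
print("first ten experienced outcomes:", outcomes[:10])
print("average outcome over 1000 simulated plays:",
      sum(outcomes) / len(outcomes))
```

Watching the run of mostly small losses punctuated by occasional wins conveys the frequency information directly, rather than asking the decision maker to reason it out from the description.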

Continue reading

Superforecasting

This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.

In Expert Political Judgment: How Good Is It? How Can We Know? Tetlock found that the average expert did little better than guessing.  He also found that some did better. In Superforecasting he discusses the study of those who did better and how they did it.

Continue reading

Dark Room Problem: Minimizing Surprise

This post is based on the paper "Free-energy minimization and the dark-room problem," written by Karl Friston, Christopher Thornton, and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (see posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of the evidence for the agent's model of the world). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises: people do not simply seek out a dark, unchanging chamber and stay there. This is the "Dark-Room Problem."
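For readers who want the formal gloss (as I understand Friston's formulation; the notation below is mine), surprise is the negative log evidence for the agent's model, and free energy is a computable quantity that bounds it from above, so minimizing free energy implicitly minimizes surprise:

```latex
% Surprise (surprisal) of sensory data o under the agent's model m
S(o) = -\ln p(o \mid m)

% Variational free energy F is an upper bound on surprise: the KL term
% is non-negative, so minimizing F minimizes surprise and, equivalently,
% maximizes the model evidence p(o | m).
F(o, q) = D_{KL}\!\left[\, q(s) \,\|\, p(s \mid o, m) \,\right]
          - \ln p(o \mid m) \;\ge\; -\ln p(o \mid m)
```

The dark-room worry is that a trivial way to keep S(o) low is to sit where nothing ever happens; the paper's answer turns on what the agent's model m expects in the first place.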

Continue reading

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark: "Embodied Prediction," in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a very well educated philosopher in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond's idea of the Cognitive Continuum is well accommodated. It also seems quite compatible with Parallel Constraint Satisfaction (PCS), but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but embodied prediction seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction from Grounded Cognition. There are likely to be posts that follow on the same general topic.
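To make the PCS connection less abstract, here is a generic, textbook-style relaxation network (not any specific published PCS model; the weights and cues are invented): mutually supportive cues excite each other, conflicting cues inhibit each other, and the network settles into a coherent interpretation. Adding hierarchy, as speculated above, would amount to letting higher levels introduce or remove such nodes and constraints.

```python
def pcs_settle(weights, activations, steps=50, decay=0.1, rate=0.2):
    """Minimal parallel-constraint-satisfaction relaxation: each node's
    activation is nudged by the weighted activations of its neighbours
    (clamped to [-1, 1]) until the network settles."""
    act = list(activations)
    for _ in range(steps):
        net = [sum(weights[i][j] * act[j] for j in range(len(act)))
               for i in range(len(act))]
        act = [max(-1.0, min(1.0, a * (1 - decay) + rate * n))
               for a, n in zip(act, net)]
    return act

# Two mutually supportive cues (positive link) and one conflicting cue
# (negative links): the network settles on the coherent interpretation,
# driving the first two toward +1 and the conflicting cue toward -1.
w = [[0.0,  0.5, -0.5],
     [0.5,  0.0, -0.5],
     [-0.5, -0.5, 0.0]]
print(pcs_settle(w, [0.2, 0.1, 0.1]))
```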

Continue reading