Category Archives: Feedback/Learning


This post is based on a comment paper, “Honest People Tend to Use Less–Not More—Profanity: Comment on Feldman et al.’s (2017) Study,” written by R. E. de Vries, B. E. Hilbig, I. Zettler, P. D. Dunlop, D. Holtrop, K. Lee, and M. C. Ashton, which appeared in Social Psychological and Personality Science, 1-5. Why would honesty suddenly be important with respect to decision making when I have largely ignored it in the past? You will have to figure that out for yourself. The paper reminded me that most of our decision making machinery is based on relative differences. We compare, but we are not so good at absolutes. Thus, when you get a relentless, fearless liar, the relative differences are widened, and this is likely to spread out the range of what seems to be a reasonable decision.

Continue reading


This post is based on the paper “Learning from experience in nonlinear environments: Evidence from a competition scenario,” authored by Emre Soyer and Robin M. Hogarth, Cognitive Psychology 81 (2015) 48-73. It is not a new topic, but it adds to the evidence of our nonlinear shortcomings.

In 1980, Brehmer questioned whether people can learn from experience – more specifically, whether they can learn to make appropriate inferential judgments in probabilistic environments outside the psychological laboratory. His assessment was quite pessimistic. Other scholars have also highlighted difficulties in learning from experience. Klayman, for example, pointed out that in naturally occurring environments, feedback can be scarce, subject to distortion, and biased by lack of appropriate comparative data. Hogarth asked when experience-based judgments are accurate and introduced the concepts of kind and wicked learning environments (see post Learning, Feedback, and Intuition). In kind learning environments, people receive plentiful, accurate feedback on their judgments; but in wicked learning environments they don’t. Thus, Hogarth argued, a kind learning environment is a necessary condition for learning from experience whereas wicked learning environments lead to error. This paper explores the boundary conditions of learning to make inferential judgments from experience in kind environments. Such learning depends on both identifying relevant information and aggregating information appropriately. Moreover, for many tasks in the naturally occurring environment, people have prior beliefs about cues and how they should be aggregated.

Continue reading

Hogarth on Simulation

This post is a continuation of the previous blog post Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: descriptions of the particulars of the situations involved and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions and, as has been shown, this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate “outcomes” through a process they call “simulated experience.”

Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described, as opposed to experienced, is related to Brunswik’s distinction between the use of cognition and perception. I note that this is similar to Hammond’s coherence and correspondence. With cognition and coherence, people can be quite accurate in their responses, but they can also make large errors. With perception and correspondence, they are unlikely to be highly accurate, but errors are likely to be small. Simulation, perception, and correspondence tend to be robust.

Continue reading


This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.

In Expert Political Judgment: How Good Is It? How Can We Know? Tetlock found that the average expert did little better than guessing. He also found that some did better. In Superforecasting he discusses the study of those who did better and how they did it.

Continue reading

Dark Room Problem: Minimizing Surprise

This post is based on the paper “Free-energy minimization and the dark-room problem,” written by Karl Friston, Christopher Thornton, and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of model evidence). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. People do not simply seek out a dark, unchanging chamber and stay there. This is the “Dark-Room Problem.”

Continue reading

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark, “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a very well educated philosopher in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. The paper also seems quite compatible with Parallel Constraint Satisfaction, but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but the framework seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction and Grounded Cognition. Posts that follow are likely to take up the same general topic.

Continue reading

Does interaction matter in collective decision-making?

This post is based on the paper “Does interaction matter? Testing whether a confidence heuristic can replace interaction in collective decision-making,” by Dan Bang, Riccardo Fusaroli, Kristian Tylén, Karsten Olsen, Peter E. Latham, Jennifer Y. F. Lau, Andreas Roepstorff, Geraint Rees, Chris D. Frith, and Bahador Bahrami. The paper appeared in Consciousness and Cognition 26 (2014) 13–23.

The paper indicates that there is growing interest in the mechanisms underlying the “two-heads-better-than-one” (2HBT1) effect, which refers to the ability of dyads to make more accurate decisions than either of their members. Bahrami’s 2010 study, using a perceptual task in which two observers had to detect a visual target, showed that two heads become better than one by sharing their “confidence” (i.e., an internal estimate of the probability of being correct), thus allowing the pair to identify who is more likely to be correct in a given situation. This tendency to evaluate the reliability of information by the confidence with which it is expressed has been termed the “confidence heuristic.” I do not recall having seen the acronym 2HBT1 before, but it does recall the post Dialectical Bootstrapping, in which one forms one’s own dyad; Bootstrapping, where one uses expert judgment; and Scott Page’s work in the post Diversity or Systematic Error? However, this is the first discussion of a confidence heuristic.

Continue reading

When to Quit

Hiking toward the snow

This post is based on the paper “Multi-attribute utility models as cognitive search engines,” by Pantelis P. Analytis, Amit Kothiyal, and Konstantinos Katsikopoulos, which appeared in Judgment and Decision Making, Vol. 9, No. 5, September 2014, pp. 403–419. This post does not look at persistence (post Persistence) or delay (post Decision Delay) when you believe that you need more alternatives, but at when to quit your search and stop within the available alternatives.

In optimal stopping problems, decision makers are assumed to search randomly to learn the utility of alternatives; in contrast, in one-shot multi-attribute utility optimization, decision makers are assumed to have perfect knowledge of utilities. The authors point out that these two contexts represent the boundaries of a continuum, of which the middle remains uncharted: How should people search intelligently when they possess imperfect information about the alternatives? They pose the example of trying to hire a new employee faced with several dozen applications listing their skills and credentials. You need interviews to determine each candidate’s potential. What is the best way to organize the interview process? First, you need to decide the order in which you will be inviting candidates. Then, after each interview you need to decide whether to make an offer to one of the interviewed candidates, thus stopping your search. The first problem is an ordering problem and the second a stopping problem. If credentials were adequate, you would not need an interview, and if credentials were worthless, you would invite people for interviews randomly.
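The ordering-then-stopping structure can be sketched in a few lines of Python. This is my own toy illustration with made-up numbers, not the authors’ model: credentials give a noisy estimate of each candidate’s utility, an interview reveals the true utility, and a hypothetical stopping rule quits when the best candidate seen so far beats what the next interview could plausibly offer, net of interview cost.

```python
import random

random.seed(1)

# Hypothetical candidates: a true utility plus a noisy credential-based estimate.
candidates = [{"true": random.gauss(0, 1)} for _ in range(30)]
for c in candidates:
    c["estimate"] = c["true"] + random.gauss(0, 0.5)  # imperfect credentials

# Ordering problem: interview in descending order of estimated utility.
ordered = sorted(candidates, key=lambda c: c["estimate"], reverse=True)

# Stopping problem: interview (observe true utility) until the best candidate
# seen so far beats the next candidate's estimate, minus the interview cost.
interview_cost = 0.1
best = None
for i, c in enumerate(ordered):
    utility = c["true"]  # an interview reveals the true utility
    if best is None or utility > best:
        best = utility
    next_estimate = ordered[i + 1]["estimate"] if i + 1 < len(ordered) else float("-inf")
    if best >= next_estimate - interview_cost:
        break  # stop searching and hire the best interviewed candidate

print(f"interviewed {i + 1} of {len(ordered)}; hired utility {best:.2f}")
```

With perfectly informative credentials the rule stops after one interview; with worthless credentials the ordering is random and search runs long, which are the two boundary cases the paper describes.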

Continue reading

Dehaene: Consciousness and Decision Making

I love Stanislas Dehaene’s experiments, his general ideas, and his book Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts (Viking, New York, 2014). It is a great synthesis, and with respect to its title, it is a fine book. However, with respect to how it deals with decision making, I am mostly disappointed.

Consciousness: Informer or Informer/Decider? Although Dehaene’s Global Neuronal Workspace Theory describes what we feel as consciousness as the global sharing of information, in the book he seems to promote the idea of consciousness as the decider as well as the informer. Dehaene writes:

“My picture of consciousness implies a natural division of labor. In the basement, an army of unconscious workers does the exhausting work, sifting through piles of data. Meanwhile, at the top, a select board of executives, examining only a brief of the situation, slowly makes conscious decisions…No one can act on mere probabilities–at some point, a dictatorial process is needed to collapse all uncertainties and decide….Consciousness may be the brain’s scale tipping device—collapsing all unconscious probabilities into a single conscious sample so that we can move on to further decisions.” (p. 89)

I like the informer part, but I prefer the parallel constraint satisfaction (post Parallel Constraint Satisfaction Theory) idea that consciousness is asked to get more information (information search and production), which the unconscious system turns into a decision. In my scenario the visual system seems to have priority in reaching the conscious level, then the other sensory systems, and then the other unconscious systems push the most difficult or interesting decisions they have at any particular time through to the conscious system. Maybe there is some sort of priority ranking. Clearly, most rather mundane decisions seem to break through to consciousness only occasionally. As part of breaking through to consciousness, more of the modular systems are alerted to the issue, and maybe information can come from inside, or maybe we seek information from others or examine the environment. We get the new information, and the wheels of the parallel constraint system start whirring again to see if the decision can be made. Now, I do see a cognitive continuum, so yes, certain decisions may stay with the board of executives. Dehaene uses the example of multidigit arithmetic. For most of us, it seems to consist of a series of introspective steps that we can accurately report. For instance, to multiply 30 by 47, I might multiply 30 by 40 to get 1200 and then add 30 times 7, or 210, to get 1410. But for a numerical savant that could be done in the unconscious. Nevertheless, there are certain things where consciousness does seem to be where the decisions are made. Complex multi-step questions where the emotions are more or less uninvolved might be examples.

Maybe the interesting part is the sort of phase change between the unconscious and the conscious. There is a lot happening there. Dehaene says that consciousness does the collapsing, but it seems to me it is already done once it reaches consciousness. Maybe that is not an important argument. One theory is that conscious perception occurs when the stimulus allows the accumulation of sufficient sensory evidence to reach a threshold, at which point the brain ‘decides’ whether it has seen anything, and what it is. The mechanisms of conscious access would then be comparable to those of other decisions, involving an accumulation toward a threshold — with the difference that conscious perception would correspond to a global high-level ‘decision to engage’ many of the brain’s internal resources. Dehaene mentions this in a paper that was discussed in the post A Theory of Consciousness.
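The threshold idea can be shown with a toy accumulator. This is a generic accumulate-to-bound sketch of my own, not Dehaene’s implementation; the drift, noise, and threshold numbers are made up:

```python
import random

random.seed(2)

def accumulate(drift, threshold=5.0, noise=1.0, max_steps=10000):
    """Accumulate noisy sensory evidence until a decision threshold is crossed."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + random.gauss(0, noise)
        if abs(evidence) >= threshold:
            # The sign says which interpretation won; the step count stands in
            # for the time taken to 'decide to engage' and reach consciousness.
            return ("seen" if evidence > 0 else "not seen"), step
    return "undecided", max_steps

# A strong stimulus (large drift) tends to cross the threshold quickly;
# a weak, near-threshold stimulus takes much longer or never crosses.
strong = accumulate(drift=0.5)
weak = accumulate(drift=0.05)
print("strong stimulus:", strong)
print("weak stimulus:", weak)
```

The same machinery models ordinary decisions; the ‘global decision to engage’ reading just makes the bound a gate on conscious access.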

Consciousness Gives Us the Power of a Sophisticated Serial Computer. Dehaene is a believer in the Bayesian unconscious. “A strict logic governs the brain’s unconscious circuits–they appear ideally organized to perform statistically accurate inferences concerning our sensory inputs.” Both the unconscious and conscious systems seem to work in a linear fashion (Brunswik’s Lens Model), but the conscious system can redirect.

Dehaene states:

“This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire cycle may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” (p. 100)

Dehaene and his colleagues have studied schizophrenics. They found a basic deficit of conscious perception in schizophrenia. Words had to be presented for a longer time before schizophrenics reported consciously seeing them. “Schizophrenics’ main problem seems to lie in the global integration of incoming information into a coherent whole.” Dehaene suggests that schizophrenics have a “global loss of top-down connectivity.” This loss impairs capacity for conscious monitoring, top-down attention, working memory, and decision making. Apparently in schizophrenics, the prediction machine is not making enough predictions. With reduced top-down messages, sensory inputs are never explained, and error messages persist, triggering multiple explanations. Schizophrenics thus see the need for complicated explanations, which can lead to the far-fetched interpretations of their surroundings that may express themselves as bizarre hallucinations and delusions.

Dehaene suggests that consciousness allows us to share information with others and that this leads to better decisions. Dehaene’s most interesting idea is that our social abilities allow us to make decisions together and that these are better decisions. Although one can argue that language is imperfect and that much of it is used to transmit trivia and gossip, Dehaene provides evidence that our conversations are more than tabloids. This is a point that needed to be made to me. I was tending to believe that there was almost a direct tradeoff between cognitive skills and social skills, and even though that tradeoff was adaptive, maybe it was close. Dehaene puts forth the argument that two heads are better than one and that consciousness makes this possible. (This is also directly in line with Scott Page’s The Difference: How the Power of Diversity Creates Better Groups; post Diversity or Systematic Error.)

He cites the experiments of Iranian psychologist Bahador Bahrami. Bahrami had pairs of subjects examine two displays and decide on each trial whether the first or the second contained a near-threshold target image. The subjects initially made the decision independently and, if they differed, were asked to resolve the conflict through a brief discussion. As long as the abilities of the individuals were similar, pairing them yielded a significant improvement in accuracy. Nuances were not shared to gain this improvement, but simply a categorical answer (first or second display) and a judgment of confidence.
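A simple simulation shows why sharing only an answer and a confidence can be enough. The signal-detection numbers here are my own assumptions, not Bahrami’s data: each observer’s internal evidence is noisy, the answer is its sign, and confidence is its strength, with the dyad adopting the more confident member’s answer.

```python
import random

random.seed(0)

def observe(sensitivity, truth):
    """One observer's noisy internal evidence, signed toward the true answer."""
    evidence = (1 if truth else -1) * sensitivity + random.gauss(0, 1)
    answer = evidence > 0          # True = "first display", False = "second"
    confidence = abs(evidence)     # confidence tracks distance from the boundary
    return answer, confidence

trials = 10000
solo_correct = dyad_correct = 0
for _ in range(trials):
    truth = random.random() < 0.5
    a1, c1 = observe(1.0, truth)   # two similarly able observers
    a2, c2 = observe(1.0, truth)
    # Confidence heuristic: the dyad adopts the more confident member's answer.
    dyad = a1 if c1 > c2 else a2
    solo_correct += (a1 == truth)
    dyad_correct += (dyad == truth)

print(f"solo accuracy {solo_correct / trials:.3f}, dyad accuracy {dyad_correct / trials:.3f}")
```

When the observers’ sensitivities are made unequal, the heuristic starts to fail, matching the finding that the gain requires similarly able partners.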

Dehaene suggests that Bayesian decision theory tells us that the very same decision rules should apply to our own thoughts and to those that we receive from others. In both cases, optimal decision making demands that each source of information, whether internal or external, be weighted as accurately as possible, by an estimate of its reliability, before all the information is brought together into a single decision space. This sounds much like cue validities in Brunswik’s lens model or Parallel Constraint Satisfaction theory. According to Dehaene, once this workspace was opened to social inputs from other minds, we were able to reap the benefits of a collective decision-making algorithm: by comparing our knowledge with that of others, we achieve better decisions.
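For independent Gaussian sources, that reliability weighting is just inverse-variance weighting. Here is a tiny worked example with made-up numbers:

```python
# Combining two estimates by reliability (inverse variance), the Bayes-optimal
# rule for independent Gaussian sources; the numbers are purely illustrative.
my_estimate, my_variance = 10.0, 4.0        # my own noisy judgment
your_estimate, your_variance = 14.0, 1.0    # a partner's more reliable judgment

w_me = 1.0 / my_variance
w_you = 1.0 / your_variance
combined = (w_me * my_estimate + w_you * your_estimate) / (w_me + w_you)
combined_variance = 1.0 / (w_me + w_you)

print(combined)           # 13.2: pulled toward the more reliable source
print(combined_variance)  # 0.8: more certain than either source alone
```

The combined variance is smaller than either input’s, which is the formal sense in which two heads are better than one.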


Explaining Away

This is the second of three posts about the brain having a singular purpose of prediction error minimization. The PEM literature has many contributors. Karl Friston is probably the strongest idea man, but Andy Clark and Jakob Hohwy are more understandable. Hohwy’s papers include: Hohwy, J. (2015). “The Neural Organ Explains the Mind.” In T. Metzinger & J. M. Windt (Eds.), Open MIND: 19(T). Frankfurt am Main: MIND Group. Hohwy, J., Roepstorff, A., & Friston, K. (2008). “Predictive coding explains binocular rivalry: an epistemological review.” Cognition 108, 687-701. Hohwy, J. (2012). “Attention and conscious perception in the hypothesis testing brain.” Frontiers in Psychology/Consciousness Research, April 2012, Volume 3, Article 96. Paton, B., Skewes, J., Frith, C., & Hohwy, J. (2013). “Skull-bound perception and precision optimization through culture.” Commentary in Behavioral and Brain Sciences 36:3, p. 42.

Both Clark and Hohwy use “explaining away” to illustrate the concept of cancelling out sensory prediction error. Perception thus involves “explaining away” the driving (incoming) sensory signal by matching it with a cascade of predictions pitched at a variety of spatial and temporal scales. These predictions reflect what the system already knows about the world (including the body) and the uncertainties associated with its own processing. What we perceive depends heavily upon the set of priors that the brain brings to bear in its best attempt to predict the current sensory signal.
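The cancelling can be caricatured in a few lines. This is a toy gradient-style update of my own, not the models in the cited papers: the prediction is revised to cancel (“explain away”) the error between it and the incoming signal.

```python
# A toy predictive-coding loop: the prediction is repeatedly updated to cancel
# the prediction error between it and the incoming sensory signal.
# The learning rate and values are illustrative, not from the cited papers.
sensory_input = 0.8      # the driving (incoming) signal
prediction = 0.0         # the brain's current best guess (its prior)
learning_rate = 0.3

for step in range(20):
    error = sensory_input - prediction   # what the prior fails to predict
    prediction += learning_rate * error  # revise the guess to cancel the error

print(round(prediction, 4))  # converges toward 0.8 as the error is explained away
```

In the full story the update runs at every level of a hierarchy at once, with each level predicting the activity of the one below, but the cancel-the-residual logic is the same.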

Continue reading