Tag Archives: Clark

Parallel Constraint Satisfaction Model updated with Learning

This post is based on a draft dated July 10, 2015, “Learning in Dynamic Probabilistic Environments: A Parallel-constraint Satisfaction Network-model Approach,” written by Marc Jekel, Andreas Glöckner, & Arndt Bröder. The paper includes experiments that contrast Parallel Constraint Satisfaction with the Adaptive Toolbox Approach; I have chosen to look only at the update of the PCS model with learning. The authors develop an integrative model for decision making and learning by extending previous work on parallel constraint satisfaction networks with backward error-propagation learning algorithms. The Parallel Constraint Satisfaction Theory for Decision Making and Learning (PCS-DM-L) conceptualizes decision making as a process of coherence structuring in which learning is achieved by adjusting network weights from one decision to the next. PCS-DM-L predicts that individuals adapt to the environment through gradual changes in cue weighting.
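The learning step can be illustrated with a toy sketch. This is my own simplification, not the authors' model: a single-layer delta rule (the simplest case of backward error propagation) that gradually shifts weight onto the more valid cue over repeated decisions with feedback. The two-cue environment, the 90% validity, and the learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_choose_a(w, cues):
    """Probability of choosing option A, given the cue pattern
    (+1 = cue points to A, -1 = cue points to B) and cue weights w."""
    return sigmoid(cues @ w)

def learn(w, cues, a_was_correct, lr=0.1):
    """Delta-rule update (the single-layer case of backward error
    propagation): each weight moves in proportion to the prediction
    error and the cue's activation."""
    error = a_was_correct - p_choose_a(w, cues)
    return w + lr * error * cues

# Invented environment: cue 0 identifies the correct option 90% of the
# time, cue 1 is pure noise.
w = np.zeros(2)
for _ in range(5000):
    cues = rng.choice([-1.0, 1.0], size=2)
    cue0_valid = rng.random() < 0.9
    a_was_correct = 1.0 if (cues[0] == 1.0) == cue0_valid else 0.0
    w = learn(w, cues, a_was_correct)

print(w)  # the weight on the valid cue should dominate the noise cue
```

This is the gradual cue reweighting the theory predicts: no strategy switch, just weights drifting toward the environment's validities from one decision to the next.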


Predictive Processing and Anxiety and other Maladies

This post is based on a paper written by Fabienne Picard and Karl Friston, entitled: “Predictions, perceptions, and a sense of self,” that appeared in Neurology® 2014;83:1112–1118. Karl Friston is one of the prime authors of predictive processing and Fabienne Picard is a doctor known for studying epilepsy. The ideas here are not new or even new to this blog, but the paper and specifically the figure below provide a good summary of the ideas of predictive processing. Andy Clark’s Surfing Uncertainty is the place to go if the subject interests you.


Single Strategy Framework and the Process of Changing Weights


This post starts from the conclusion of the previous post that the evidence supports a single strategy framework, looks at Julian Marewski’s criticism, and then piles on with ideas on how weights can be changed in a single strategy framework.

Marewski provided a paper for the special issue of the Journal of Applied Research in Memory and Cognition (2015) on “Modeling and Aiding Intuition in Organizational Decision Making”: “Unveiling the Lady in Black: Modeling and Aiding Intuition,” authored by Ulrich Hoffrage and Julian N. Marewski. The paper gives the parallel constraint satisfaction model a not-so-subtle knock:

By exaggerating and simplifying features or traits, caricatures can aid perceiving the real thing. In reality, both magic costumes and chastity belts are degrees on a continuum. In fact, many theories are neither solely formal or verbal. Glöckner and Betsch’s connectionist model of intuitive decision making, for instance, explicitly rests on both math and verbal assumptions. Indeed, on its own, theorizing at formal or informal levels is neither “good” nor “bad”. Clearly, both levels of description have their own merits and, actually, also their own problems. Both can be interesting, informative, and insightful – like the work presented in the first three papers of this special issue, which we hope you enjoy as much as we do. And both can border re-description and tautology. This can happen when a theory does not attempt to model processes. Examples are mathematical equations with free parameters that carry no explanatory value, but that are given quasi-psychological, marketable labels (e.g., “risk aversion”).


Big Models

After the three years that I have pushed out other people’s ideas on judgement and decision making, at this moment, I can recall three huge ideas.

I continually look for comment on and expansion of these ideas, and I often do this in the laziest of ways: I google them. Recently I seemed to find the last two mentioned on the same page of a philosophy book. That was not actually true, but it did remind me of similarities that I could point out. The idea of a compensatory process, where one changes one's beliefs a little to match the current set of “facts,” tracks well with the idea that we can get predictions correct by moving our hand to catch the ball so that it does not have to be thrown perfectly. Both clearly try to match up the environment and ourselves. The Parallel Constraint Satisfaction model minimizes dissonance while the Free Energy model minimizes surprise. Both dissonance and surprise can create instability. The Free Energy model is more universal than the Parallel Constraint Satisfaction model, while for decision making PCS is more precise. The Free Energy model also gives us the idea that heuristic models could fit within process models. All this points out what is obvious to us all: we need the right model for the right job.


Dark-Room Problem: Minimizing Surprise

This post is based on the paper “Free-energy minimization and the dark-room problem,” written by Karl Friston, Christopher Thornton and Andy Clark, which appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (see the posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of model evidence). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises: people do not simply seek out a dark, unchanging chamber and stay there. This is the “Dark-Room Problem.”


The Mixed Instrumental Controller

This is more or less a continuation of the previous post based on Andy Clark’s “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). It further weighs in on the issue of changing strategies or changing weights (see post Revisiting Swiss Army Knife or Adaptive Tool Box). Clark has brought to my attention the terms model-free and model-based, which seem to roughly equate to intuition/System 1 and analysis/System 2, respectively. This translation helps me tie this into ideas like cognitive niches and parallel constraint satisfaction. Clark writes in a footnote:

Current thinking about switching between model-free and model-based strategies places them squarely in the context of hierarchical inference, through the use of “Bayesian parameter averaging”. This essentially associates model-free schemes with simpler (less complex) lower levels of the hierarchy that may, at times, need to be contextualized by (more complex) higher levels.
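The footnote's “Bayesian parameter averaging” can be loosely illustrated (this is my gloss, not Clark's formalism) as Bayesian model averaging: each strategy's prediction is weighted by the evidence it has accumulated, so a cheap model-free habit dominates until the data favor the costlier model-based scheme. All the numbers below are invented.

```python
import math

def posterior_weights(log_ev_free, log_ev_based):
    """Posterior probability of each strategy from its accumulated
    log evidence (a softmax over the two log evidences)."""
    m = max(log_ev_free, log_ev_based)
    u_free = math.exp(log_ev_free - m)
    u_based = math.exp(log_ev_based - m)
    total = u_free + u_based
    return u_free / total, u_based / total

def averaged_prediction(pred_free, pred_based, weights):
    """Evidence-weighted mixture of the two strategies' predictions."""
    w_free, w_based = weights
    return w_free * pred_free + w_based * pred_based

# Invented numbers: the model-based scheme has explained recent
# outcomes better, so it dominates the mixture.
weights = posterior_weights(-4.0, -2.0)
print(weights)                                 # about (0.12, 0.88)
print(averaged_prediction(1.0, 3.0, weights))  # about 2.76
```

Note how nothing ever "switches": the model-free answer is always in the mix, merely contextualized (down-weighted) when the higher level carries more evidence.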

As humans, we have been able to use language, our social skills, and our understanding of hierarchy to extend our cognition. Multiplication of large numbers is an example: we cannot remember enough numbers in our heads, so we created a way to do any multiplication on paper (or its equivalent), provided we learn our multiplication tables. Clark cites the example of the way that learning to perform mental arithmetic has been scaffolded, in some cultures, by the deliberate use of an abacus. Experience with patterns thus made available helps to install appreciation of many complex arithmetical operations and relations. We structure (and repeatedly re-structure) our physical and social environments in ways that make available new knowledge and skills. Prediction-hungry brains, exposed in the course of embodied action to novel patterns of sensory stimulation, may thus acquire forms of knowledge that were genuinely out of reach prior to such physical-manipulation-based re-tuning of the generative model. Action and perception thus work together to reduce prediction error against the more slowly evolving backdrop of a culturally distributed process that spawns a succession of designed environments whose impact on the development and unfolding of human thought and reason can hardly be overestimated.

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark: “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a philosopher very well educated in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. It also seems quite compatible with Parallel Constraint Satisfaction, but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but it seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction and Grounded Cognition. There are likely to be posts that follow on the same general topic.


Neurons and decision making

This blog has included about a dozen posts that mention neurons so I have decided to mine them for the most noteworthy ideas. This was tougher than I thought it would be.

Fun facts about neurons that impact decisions

Relative judgments

Since neurons encode changes in stimulation (rather than absolute levels), absolute judgments on any dimension are much more difficult than relative judgments. This lies at the root of Ernst Weber’s 1834 observation that detectable increases in visual or auditory signal intensity are proportional to the starting value, i.e., need to be larger for larger starting values. (from post First Half of 2009 JDM Research Summary)
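A quick numerical gloss on Weber's observation (the 2% Weber fraction here is an invented illustration; real fractions vary by modality):

```python
def jnd(intensity, weber_fraction=0.02):
    """Weber's law: the just-noticeable difference (JND) in a stimulus
    grows in proportion to the starting intensity."""
    return weber_fraction * intensity

# The same relative change needs a far larger absolute increment
# against a strong signal than against a weak one.
print(jnd(100))   # 2.0 units of change needed
print(jnd(1000))  # 20.0 units of change needed
```

This is why relative judgments come cheap while absolute ones are hard: the encoding itself is built around proportional change.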


There is a hierarchy of neurons and there are a lot of them. So it is quite likely that I have a neuron dedicated to Salma Hayek, etc.


Neural responses are noisy.  As an example, a radiologist may have tumor detecting neurons. These hypothetical tumor detectors will give noisy and variable responses. After one glance at a scan of a healthy lung, our hypothetical tumor detectors might fire 10 spikes per second. After a different glance at the same scan and under the same conditions, these neurons might fire 40 spikes per second. (from post Signal Detection Theory)
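The post's hypothetical tumor detectors can be dropped into a quick signal-detection simulation. The Gaussian firing-rate model, the means, the 10 spikes/s spread, and the criterion are all my assumptions, chosen only to match the flavor of the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed firing-rate model: repeated glances at the same scan yield
# different rates, so detection is a statistical decision.
healthy = rng.normal(loc=25.0, scale=10.0, size=10_000)  # no tumor
tumor = rng.normal(loc=45.0, scale=10.0, size=10_000)    # tumor present

criterion = 35.0  # read "tumor" whenever the rate exceeds this
hit_rate = np.mean(tumor > criterion)
false_alarm_rate = np.mean(healthy > criterion)
# Separation of the two distributions in noise units (d-prime)
d_prime = (tumor.mean() - healthy.mean()) / 10.0

print(hit_rate, false_alarm_rate, d_prime)  # roughly 0.84, 0.16, 2.0
```

Because the two rate distributions overlap, no criterion eliminates errors; moving it only trades misses against false alarms, which is the core lesson of signal detection theory.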

Neuronal Recycling

In Reading in the Brain, Dehaene introduces the idea of “neuronal recycling” whereby portions of our ventral visual system are turned over to reading and writing.  He says that after centuries of trial and error, writing systems evolved to a form adapted to our brain circuits. (from post Toward a Culture of Neurons)


Explaining Away

This is the second of three posts about the brain having the singular purpose of prediction error minimization. The PEM literature has many contributors. Karl Friston is probably the strongest idea man, but Andy Clark and Jakob Hohwy are more understandable. Hohwy’s papers include: Hohwy, J. (2015). “The Neural Organ Explains the Mind.” In T. Metzinger & J. M. Windt (Eds.), Open MIND: 19(T). Frankfurt am Main: MIND Group. Hohwy, J., Roepstorff, A., & Friston, K. (2008). “Predictive coding explains binocular rivalry: An epistemological review.” Cognition 108, 687-701. Hohwy, J. (2012). “Attention and conscious perception in the hypothesis testing brain.” Frontiers in Psychology/Consciousness Research, April 2012, Volume 3, Article 96. Paton, B., Skewes, J., Frith, C., & Hohwy, J. (2013). “Skull-bound perception and precision optimization through culture.” Commentary in Behavioral and Brain Sciences 36:3, p. 42.

Both Clark and Hohwy use “explaining away” to illustrate the concept of cancelling out sensory prediction error. Perception thus involves “explaining away” the driving (incoming) sensory signal by matching it with a cascade of predictions pitched at a variety of spatial and temporal scales. These predictions reflect what the system already knows about the world (including the body) and the uncertainties associated with its own processing. What we perceive depends heavily upon the set of priors that the brain brings to bear in its best attempt to predict the current sensory signal.
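A one-unit sketch of this cancelling-out (my toy illustration, not the authors' full hierarchical scheme): the percept settles where precision-weighted bottom-up error against the signal balances top-down error against the prior, so the incoming signal is explained away exactly to the extent the priors permit.

```python
def perceive(signal, prior, pi_sens=1.0, pi_prior=1.0, lr=0.1, steps=500):
    """Settle the estimate mu by gradient descent on precision-weighted
    prediction error. At equilibrium the bottom-up sensory error is
    'explained away' as far as the top-down prior allows."""
    mu = prior
    for _ in range(steps):
        err_sens = signal - mu   # bottom-up: input minus prediction
        err_prior = mu - prior   # top-down: estimate minus expectation
        mu += lr * (pi_sens * err_sens - pi_prior * err_prior)
    return mu

# Equal precisions: the percept settles halfway between prior and signal.
print(perceive(signal=10.0, prior=0.0))               # 5.0
# Weight the senses more heavily: the percept tracks the signal.
print(perceive(signal=10.0, prior=0.0, pi_sens=9.0))  # 9.0
```

The equilibrium is just the precision-weighted average of prior and signal, which is why what we perceive "depends heavily upon the set of priors": changing their precision moves the percept without changing the input.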


The Prediction Machine

This post is derived from the paper, “Whatever next? Predictive brains, situated agents, and the future of cognitive science,” Behavioral and Brain Sciences (2013) 36:3, written by Andy Clark. I stumbled upon this paper and its commentary several weeks ago and have tried to figure out what to do with it. That has led me to other papers. In the next three posts, I will try to give the high points of this idea of PEM, prediction error minimization. It provides an overall background that is compatible with Parallel Constraint Satisfaction.

Clark suggests that the brain’s jobs are minimizing prediction error, selective sampling of sensory data, optimizing expected precisions, and minimizing complexity of internal models. To accomplish these tasks, the brain has evolved into a bundle of cells that support perception and action by attempting to match incoming sensory inputs with top-down expectations (predictions). This is done by using a hierarchical model that minimizes prediction error within a bidirectional cascade of cortical processing. This model maps onto perception, action, attention, and model selection, respectively (and dare I say judgment and decision making).
