Category Archives: neurons/brain

Cognitive Penetration

This post is based on the paper: “Priors in perception: Top-down modulation, Bayesian perceptual learning rate, and prediction error minimization,” authored by Jakob Hohwy (see post Explaining Away), that appeared (or is scheduled to appear) in Consciousness and Cognition, 2017. Hohwy writes in an understandable manner and is open enough to post papers even before they are complete, of which this is an example. Hohwy pursues the idea of cognitive penetration – the notion that beliefs can influence perception.

Can ‘high level’ or ‘cognitive’ beliefs modulate perception? Hohwy methodically examines this question by trying to create the conditions under which it might work and not be trivial. Under standard Bayesian inference, the learning rate declines gradually as evidence is accumulated and the prior is updated to be ever more accurate. The more you already know, the less you will learn from the world. In a changing world this is not optimal: when things in the environment change, the learning rate should change with them. Hohwy provides this example. As the ambient light conditions improve, the learning rate for detecting a visible target should increase (since the samples, and therefore the prediction errors, have better precision in better light). This means Bayesian perceptual inference needs a tool for regulating the learning rate. The inferential system should build expectations for the variability in lighting conditions throughout the day, so that the learning rate in visual detection tasks can be regulated up and down accordingly.
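A minimal sketch of this basic point, using a toy Gaussian (Kalman-style) update rather than anything from Hohwy's paper: the learning rate is the ratio of prior uncertainty to total uncertainty, so it shrinks as evidence accumulates and grows when the incoming samples are more precise (better light).

```python
# A toy Gaussian belief update (not Hohwy's model): the "learning rate" is a
# Kalman-style gain that falls as the prior sharpens and rises when the
# observations are more precise.

def update(prior_mean, prior_var, sample, obs_var, process_var=0.0):
    prior_var += process_var                      # allow for a changing world
    gain = prior_var / (prior_var + obs_var)      # learning rate
    post_mean = prior_mean + gain * (sample - prior_mean)   # precision-weighted prediction error
    post_var = (1.0 - gain) * prior_var
    return post_mean, post_var, gain

mean, var = 0.0, 1.0
# Dim light (noisy samples), then bright light (precise samples).
for sample, obs_var in [(1.2, 1.0), (0.9, 1.0), (1.1, 1.0), (1.0, 0.1)]:
    mean, var, gain = update(mean, var, sample, obs_var, process_var=0.05)
    print(f"obs_var={obs_var:.1f}  learning rate={gain:.2f}  belief={mean:.2f}")
```

With constant observation noise the learning rate steadily falls; when the samples become more precise on the last trial, it jumps back up.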

The human brain is thus hypothesized to build up a vast hierarchy of expectations that overall help regulate the learning rate and thereby optimize perceptual inference for a world that delivers changeable sensory input. Hohwy suggests that this makes the brain a hierarchical filter that takes the non-linear time series of sensory input and seeks to filter out regularities at different time scales. Taking the distributions in question to be normal (Gaussian), the brain can be treated as a hierarchical Gaussian filter, or HGF.
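To make the filtering idea concrete, here is a deliberately simplified two-level sketch (my own toy, not the HGF update equations): a higher level estimates the volatility of the environment from recent prediction errors, and that estimate regulates the learning rate of the level below, so the filter adapts quickly after a change and settles down when the world is stable.

```python
# A toy two-level filter (not the HGF equations): the upper level tracks
# volatility from squared prediction errors; the lower level's learning
# rate rises with that volatility and falls with observation noise.

def hierarchical_filter(samples, obs_var=1.0, vol_lr=0.1):
    mean, volatility = 0.0, 0.1
    for sample in samples:
        error = sample - mean
        volatility += vol_lr * (error ** 2 - volatility)    # upper level update
        gain = volatility / (volatility + obs_var)          # regulated learning rate
        mean += gain * error                                # lower level update
        yield mean, gain

stable = [1.0, 1.1, 0.9, 1.0]
changed = [3.0, 3.1, 2.9]      # the environment shifts
for mean, gain in hierarchical_filter(stable + changed):
    print(f"belief={mean:.2f}  learning rate={gain:.2f}")
```

After the shift, the large prediction errors drive the volatility estimate up, the learning rate increases, and the belief catches up to the new regime.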

Continue reading

Predictive Processing and Anxiety and Other Maladies

This post is based on a paper written by Fabienne Picard and Karl Friston, entitled “Predictions, perceptions, and a sense of self,” that appeared in Neurology® 2014;83:1112–1118. Karl Friston is one of the principal authors of predictive processing, and Fabienne Picard is a physician known for studying epilepsy. The ideas here are not new, nor even new to this blog, but the paper, and specifically the figure below, provides a good summary of the ideas of predictive processing. Andy Clark’s Surfing Uncertainty is the place to go if the subject interests you.

Continue reading

Are There Levels of Consciousness?

This post examines the paper: “Are There Levels of Consciousness?” written by Tim Bayne, Jakob Hohwy, and Adrian M. Owen, that appeared in Trends in Cognitive Sciences, June 2016, Vol. 20, No. 6. The paper is described as opinion and, for me, bridges ideas of predictive processing with some of the ideas of Stanislas Dehaene. Jakob Hohwy is an important expositor of predictive processing. The paper argues that the levels-based or continuum-based framework for conceptualizing global states of consciousness is untenable and develops in its place a multidimensional account of global states.

Consciousness is typically taken to have two aspects: local states and global states. Local states of consciousness include perceptual experiences of various kinds, imagery experiences, bodily sensations, affective experiences, and occurrent thoughts. In the science of consciousness, local states are usually referred to as ‘conscious contents’. By contrast, global states of consciousness are not typically distinguished from each other on the basis of the objects or features that are represented in experience. Instead, they are typically distinguished from each other on cognitive, behavioral, and physiological grounds. For example, the global state associated with alert wakefulness is distinguished from the global states that are associated with post-comatose conditions.

The authors suggest that to describe global states as levels of consciousness is to imply that consciousness comes in degrees, and that changes in a creature’s global state of consciousness can be represented as changes along a single dimension of analysis. Bayne, Hohwy, and Owen see two problems with this. The first is that one person can be conscious of more objects and properties than another person, but to be conscious of more is not to be more conscious. A sighted person might be conscious of more than someone who is blind, but they are not more conscious than the blind person is. The second problem that they see with the level-based analysis of global states is that there is good reason to doubt whether all global states can be assigned a determinate ordering relative to each other. The authors provide the example of the relationship between the global conscious state associated with rapid eye movement (REM) sleep and that which is associated with light levels of sedation. They do not believe that one of these states must be absolutely ‘higher’ than the other. Perhaps states can be compared with each other only relative to certain dimensions of analysis: the global state associated with REM sleep might be higher than that associated with sedation on some dimensions of analysis, whereas the opposite might be the case on other dimensions of analysis (Figure 1A).


The authors recognize two clear dimensions, but suggest there are likely several more. The first is gating. In some global states the contents of consciousness appear to be gated in various ways, with the result that individuals are able to experience only a restricted range of contents. Patients in the minimally conscious state (MCS), patients undergoing absence seizures, and mildly sedated individuals can consciously represent the low-level features of objects, but they are typically unable to represent the categories to which perceptual objects belong. Thus, the gating of conscious contents is likely to provide one dimension along which certain global states can be hierarchically organized. The second dimension is often captured by saying that the contents of consciousness are globally available for the control of thought and action. However, there is good reason to think that this availability is compromised in a number of pathologies of consciousness. For example, patients undergoing absence seizures can engage in perceptually driven motor responses even though their capacities for reasoning, executive processing, and memory consolidation are typically limited. With respect to this dimension, the global state of consciousness associated with emergence from the minimally conscious state (EMCS) is ‘higher’ than that which is associated with the MCS, for EMCS patients have access to a wider range of cognitive and behavioral consuming systems than MCS patients do.

Beyond the dimensions of the gating of contents and the availability associated with consciousness, the authors suggest there might be a role for attention in structuring global states. There is also the question of possible interactions between some of the dimensions that structure consciousness. Although some dimensions may be completely independent of each other, others are likely to modulate each other. For example, there might be interactions between the gating of contents and functionality, such that consciousness cannot be high on the gating dimension but low on certain dimensions of functionality (Figure 1C).

This idea that global states of consciousness are best understood as regions in a multidimensional space seems to me a natural progression as we learn more about consciousness and its underpinnings. An example is the time when you are completely immersed in some task and you do not notice time passing or who walked by. Your attention is completely focused and gated, so that you are missing other things. It is not a higher level of consciousness, but a different one. The spotlight is focused on a smaller area; the light itself is not any brighter. At the same time, the argument that Bayne, Hohwy, and Owen are making seems to be focused on states of very limited consciousness. Most of us just see a sleeping person as unconscious, without an active global neuronal workspace. We do not see a person as conscious until some threshold or phase change occurs, so that the light is brighter and the availability is greater. There must be some level of error coming back from our predictions. Several previous posts, including Consciousness: Confessions of a Romantic Reductionist, The Global Neuronal Workspace, and Dehaene: Consciousness and Decision Making, have looked at consciousness. This paper did not address the consciousness of other animals. It also did not address intuition, which is often considered unconscious in some ways since it is typically effortless as we perceive it. Global availability seems important to the idea. Of course, as you develop expertise, global availability is not so necessary for certain subjects. Auto-pilot can handle normal situations once you have expertise, so maybe we all have different conscious realms since we have different expertise.

Frankly, I doubt that many would argue that consciousness has only a single dimension. Dehaene may ignore multiple dimensions, but I would suggest that he does this to make the idea more understandable to laymen.




Perception, Action and Utility: The Tangled Skein

This is the first post in quite a while. I have been trying to consolidate and integrate my past inventory of posts into what I am calling papers. This has turned out to be time consuming and difficult since I really have to write something. As part of that effort, I have been reading Surfing Uncertainty: Prediction, Action, and the Embodied Mind by Andy Clark. (I note that this book is not designed for the popular press; it is quite challenging.) Clark refers to the extensive literature on decision making and points out a part of that literature unknown to me. With that recommendation, I sought out “Perception, Action and Utility: The Tangled Skein” by Samuel J. Gershman and Nathaniel D. Daw, which appeared in M. Rabinovich, K. Friston & P. Varona (Eds.), Principles of Brain Dynamics: Global State Interactions (Cambridge, MA: MIT Press, 2012).

Gershman and Daw focus on two aspects of decision theory that have important implications for its implementation in the brain:
1. Decision theory implies a strong form of separation between probabilities and utilities. In particular, the posterior must be computed before (and hence independently of) the expected utility. This assumption is sometimes known as probabilistic sophistication. It means that I can state how much enjoyment I would derive from having a picnic in sunny weather, independently of my belief that it will be sunny tomorrow. This framework supports a sequentially staged view of the problem: perception guiding evaluation (a toy version is sketched in code after this list).
2. The mathematics that formalizes decision making under uncertainty is Bayesian, and exact Bayesian inference is generally tractable only under restrictive assumptions, such as Gaussian or multinomial distributions. Gershman and Daw note that these assumptions are not generally applicable to real-world decision-making tasks, where distributions may not take any convenient parametric form. This means that if the brain is to perform the necessary calculations, it must employ some form of approximation.
Statistical decision theory, to be plausibly implemented in the brain, thus requires segregated representations of probability and utility, and a mechanism for performing approximate inference.
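Here is a minimal sketch of that staged separation, with made-up numbers for the picnic example (nothing here is from the chapter): the beliefs about the hidden state are fixed first, the utilities are stated independently, and expected utility combines them only at the final step.

```python
# Probabilistic sophistication as a two-stage computation (toy numbers):
# beliefs about the state are formed first, utilities are stated separately,
# and expected utility combines them only at the end.

p_state = {"sunny": 0.7, "rainy": 0.3}        # posterior belief about tomorrow

utility = {                                    # stated independently of the belief
    ("picnic", "sunny"): 10.0, ("picnic", "rainy"): -5.0,
    ("stay_home", "sunny"): 2.0, ("stay_home", "rainy"): 2.0,
}

def expected_utility(action):
    return sum(p_state[s] * utility[(action, s)] for s in p_state)

actions = ["picnic", "stay_home"]
print({a: round(expected_utility(a), 1) for a in actions})
print("choose:", max(actions, key=expected_utility))
```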

The full story, however, is not so simple. First, abundant evidence from vision indicates that reward modulation occurs at all levels of the visual hierarchy, including V1 and even earlier in the lateral geniculate nucleus. Gershman and Daw suggest that the idea of far-downstream LIP (lateral intraparietal area) as a pure representation of posterior state probability is dubious. Indeed, other work varying the rewarding outcomes for actions shows that neurons in LIP are modulated by the probability and amount of reward expected for an action, which is probably better thought of as related to expected utility rather than state probability per se. Then recall that area LIP is only one synapse downstream from the instantaneous motion energy representation in MT. If it already represents expected utility, there seems to be no candidate for an intermediate stage of pure probability representation.

A different source of contrary evidence comes from behavioral economics. The classic Ellsberg paradox revealed preferences in human choice behavior that are not probabilistically sophisticated. The example given by Ellsberg involves drawing a ball from an urn containing 30 red balls and 60 black or yellow balls in an unknown proportion. Subjects are asked to choose between pairs of gambles (A vs. B or C vs. D) drawn from the following set:
A: win $100 if the ball is red;
B: win $100 if the ball is black;
C: win $100 if the ball is red or yellow;
D: win $100 if the ball is black or yellow.
Experimentally, subjects prefer A over B and D over C. The intuitive reasoning is that in gambles A and D, the probability of winning $100 is known (unambiguous), whereas in B and C it is unknown (ambiguous). There is no subjective probability distribution that can produce this pattern of preferences. This is widely regarded as violating the assumption of probability-utility segregation in statistical decision theory. (See post Allais and Ellsberg Paradoxes).
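To see why, here is the standard argument (not reproduced from the chapter), assuming some subjective probability distribution over the ball colors and a preference for winning $100 over nothing:

```latex
\begin{align*}
A \succ B &\;\Rightarrow\; p(\mathrm{red})\,u(\$100) > p(\mathrm{black})\,u(\$100)
           \;\Rightarrow\; p(\mathrm{red}) > p(\mathrm{black}) \\
D \succ C &\;\Rightarrow\; \bigl[p(\mathrm{black}) + p(\mathrm{yellow})\bigr]\,u(\$100)
           > \bigl[p(\mathrm{red}) + p(\mathrm{yellow})\bigr]\,u(\$100)
           \;\Rightarrow\; p(\mathrm{black}) > p(\mathrm{red})
\end{align*}
```

The two conclusions contradict each other, so no single subjective probability distribution can rationalize the observed preferences.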

Gershman and Daw suggest two ways that the separation between probabilities and utilities might be weakened or abandoned:

A. Decision-making as probabilistic inference
The idea here is that by transforming the utility function appropriately, one can treat it as a probability density function parameterized by the action and hidden state. Consequently, maximizing the “probability” of utility with respect to action, while marginalizing the hidden state, is formally equivalent to maximizing the expected utility. Although this is more or less an algebraic maneuver, it has profound implications for the organization of decision-making circuitry in the brain. The insight is that what appear to be dedicated motivational and valuation circuits may instead be regarded as parallel applications of the same underlying computational mechanisms over effectively different likelihood functions.
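A toy illustration of the maneuver (my own example, not the chapter's): rescale the utilities into [0, 1] and read them as the likelihood of an imaginary “success” observation; marginalizing the hidden state then gives a quantity that is a monotone transform of expected utility, so inference picks the same action.

```python
# Decision-making as probabilistic inference (toy example): utilities rescaled
# to [0, 1] act as p(success=1 | state, action); the action maximizing
# p(success=1 | action) also maximizes expected utility.

p_state = {"sunny": 0.7, "rainy": 0.3}
utility = {("picnic", "sunny"): 10.0, ("picnic", "rainy"): -5.0,
           ("stay_home", "sunny"): 2.0, ("stay_home", "rainy"): 2.0}

u_min, u_max = min(utility.values()), max(utility.values())

def p_success(action):
    # Marginalize the hidden state; the result is an affine transform of
    # expected utility, so the argmax over actions is unchanged.
    return sum(p_state[s] * (utility[(action, s)] - u_min) / (u_max - u_min)
               for s in p_state)

print("inference picks:", max(["picnic", "stay_home"], key=p_success))
```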

Karl Friston builds on this foundation to assert a much more provocative concept: that for biologically evolved organisms, the desired equilibrium is by definition just the species’ evolved equilibrium state distribution. The mathematical equivalence rests on the evolutionary argument that hidden states with high prior probability also tend to have high utility. This situation arises through a combination of evolution and ontogenetic development, whereby the brain is immersed in a “statistical bath” that prescribes the landscape of its prior distribution. Because agents who find themselves more often in congenial states are more likely to survive, they inherit (or develop) priors with modes located at the states of highest congeniality. Conversely, states that are surprising given your evolutionary niche, like being out of water for a fish, are maladaptive and should be avoided. (See post Neuromodulation.)

B. The costs of representation and computation
Probabilistic computations make exorbitant demands on a limited resource, and in a real physiological and psychological sense, these demands incur a cost that debits the utility of action. According to Gershman and Daw, humans are “cognitive misers” who seek to avoid effortful thought at every opportunity, and this effort diminishes the same neural signals that are excited by reward. For instance, one can study whether a rat that has learned to lever press for food while hungry will continue to do so when full; a full probabilistic representation over outcomes will adjust its expected utilities to the changed outcome value, whereas representing utilities only in expectation can preclude this and so predicts hapless working for unwanted food. The upshot of many such experiments is that the brain adopts both approaches, depending on circumstances. Which approach the circumstances elicit can be explained by a sort of meta-optimization over the costs (e.g., extra computation) of maintaining the full representation relative to its benefits (better statistical accuracy).
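A toy rendering of the devaluation example (illustrative numbers, not the chapter's model): a full outcome representation recomputes value with the outcome's current worth, while a cached expected utility does not, and so keeps the rat working for food it no longer wants.

```python
# Full probabilistic representation vs. cached expected utility under
# outcome devaluation (toy numbers).

p_food_given_press = 0.9          # outcome probability learned while hungry
cached_press_value = 0.9 * 1.0    # expected utility cached while hungry

def full_model_value(current_food_utility):
    # Recombines the outcome distribution with the outcome's current utility.
    return p_food_given_press * current_food_utility

print("hungry:", full_model_value(1.0), "vs cached", cached_press_value)  # both 0.9
print("sated: ", full_model_value(0.0), "vs cached", cached_press_value)  # 0.0 vs 0.9
```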


Single Strategy Framework and the Process of Changing Weights


This post starts from the conclusion of the previous post that the evidence supports a single strategy framework, looks at Julian Marewski’s criticism, and then piles on with ideas on how weights can be changed in a single strategy framework.

Marewski provided a paper for the special issue of the Journal of Applied Research in Memory and Cognition (2015) on “Modeling and Aiding Intuition in Organizational Decision Making”: “Unveiling the Lady in Black: Modeling and Aiding Intuition,” authored by Ulrich Hoffrage and Julian N. Marewski. The paper gives the parallel constraint satisfaction model a not-so-subtle knock:

By exaggerating and simplifying features or traits, caricatures can aid perceiving the real thing. In reality, both magic costumes and chastity belts are degrees on a continuum. In fact, many theories are neither solely formal or verbal. Glöckner and Betsch’s connectionist model of intuitive decision making, for instance, explicitly rests on both math and verbal assumptions. Indeed, on its own, theorizing at formal or informal levels is neither “good” nor “bad”. Clearly, both levels of description have their own merits and, actually, also their own problems. Both can be interesting, informative, and insightful – like the work presented in the first three papers of this special issue, which we hope you enjoy as much as we do. And both can border re-description and tautology. This can happen when a theory does not attempt to model processes. Examples are mathematical equations with free parameters that carry no explanatory value, but that are given quasi-psychological, marketable labels (e.g., “risk aversion”).

Continue reading


This is the second post looking at Karl Friston’s review (“The Fantastic Organ,” Brain 2013; 136: 1328–1332) of Kandel’s The Age of Insight: the Quest to Understand the Unconscious in Art, Mind, and Brain, from Vienna 1900 to the Present. Kandel looks at how we make inferences about other people, ourselves, and our emotional states. He combines the mirror neuron system with reflections in a mirror. Friston suggests that this captures the essence of ‘perspective taking’, which is unpacked in terms of second-order representations (representations of representations) as they relate to theory of mind and how artists use reflections. Friston states:

It is self evident that if our brains entail generative models of our world, then much of the brain must be devoted to modelling entities that populate our world; namely, other people. In other words, we spend much of our time generating hypotheses and predictions about the behavior of people—including ourselves. As noted by Kandel ‘the brain also needs a model of itself’ (p. 406).

Continue reading

Art and Embodied Cognition

This post is the first of two that look at a book review written by Karl Friston. Friston is the primary idea man behind embodied cognition (see post Embodied (Grounded) Prediction (Cognition)), so far as I can tell. A book review is a chance to read his ideas in a less formal and easier to understand setting. He reviews The Age of Insight: the Quest to Understand the Unconscious in Art, Mind, and Brain, from Vienna 1900 to the Present by Eric R. Kandel (2012).

Continue reading

Dark Room Problem: Minimizing Surprise

This post is based on the paper: “Free-energy minimization and the dark-room problem,” written by Karl Friston, Christopher Thornton and Andy Clark, that appeared in Frontiers in Psychology in May 2012. Recent years have seen the emergence of an important new fundamental theory of brain function (Posts Embodied Prediction and Prediction Error Minimization). This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). A puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. People do not simply seek a dark, unchanging chamber and stay there. This is the “Dark-Room Problem.”
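For reference, in the usual free-energy notation (a standard statement of the quantities involved, not quoted from this paper), surprise is the negative log evidence for the agent's model m, and the variational free energy the brain is assumed to minimize is an upper bound on it:

```latex
% Surprise and its free-energy bound, in standard notation.
-\ln p(o \mid m) \;\le\; F \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o, s \mid m)\bigr]
```

Acting so as to keep F low therefore keeps surprise low, which is what invites the dark-room worry in the first place.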

Continue reading

Embodied (Grounded) Prediction (Cognition)


This post is based on a paper by Andy Clark: “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds). Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is a philosopher very well educated in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making even though that is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. It also seems quite compatible with Parallel Constraint Satisfaction, but leaves room for Fast and Frugal Heuristics. It seems to provide a way to merge Parallel Constraint Satisfaction and Cognitive Niches. I do not really understand PCS well enough, but the framework seems potentially to add hierarchy to PCS and make it into a generative model that can introduce fresh constraint satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much background. It is also difficult to distinguish Embodied Prediction from Grounded Cognition. There are likely to be posts that follow on the same general topic.

Continue reading

Perceptual Presence

This post joins several others in being only tangentially related to JDM. It is based on the paper: “The felt presence of other minds: predictive processing, counterfactual predictions, and mentalizing in autism,” that appeared in Consciousness and Cognition in 2015. The authors are Colin J. Palmer, Anil K. Seth and Jakob Hohwy. (Post Prediction error minimization)

A central ingredient of social experience is that we represent the mental states of other people. This sense of others’ mental states is a part of our understanding and anticipation of their behavior, and molds our own behavior correspondingly. If our friend shows up to the restaurant with a grim face, we have a sense of her mood and adjust our greeting accordingly. If she glances at our empty glass while pouring herself some wine, we have a sense of her intentions and might move our glass closer. This is the concept of mentalizing.

Continue reading