This post is based on a paper by Andy Clark: “Embodied Prediction,” in T. Metzinger & J. M. Windt (Eds.), Open MIND: 7(T). Frankfurt am Main: MIND Group (2015). Andy Clark is a philosopher at the University of Edinburgh whose tastes trend toward the wild shirt. He is well educated in the brain sciences and a good teacher. The paper seems to put forward some major ideas for decision making, even though decision making is not its focus. Hammond’s idea of the Cognitive Continuum is well accommodated. The account also seems quite compatible with Parallel Constraint Satisfaction (PCS), while leaving room for Fast and Frugal Heuristics, and it seems to provide a way to merge PCS and Cognitive Niches. I do not understand PCS well enough to be sure, but Clark’s framework seems potentially to add hierarchy to PCS and turn it into a generative model that can introduce fresh constraint-satisfaction variables and constraints as new components. If you have not read the post Prediction Machine, you should, because the current post skips much of that background. It is also difficult to distinguish Embodied Prediction from Grounded Cognition. Posts on the same general topic are likely to follow.
According to Clark, Predictive Processing-style solutions avoid the need to solve difficult optimality equations by relying on learned expectations, using the complex generative model to fluidly accommodate signaling delays, sensory noise, and the many-one mapping between goals and motor programs. To accomplish all this, Predictive Processing must shift much of the burden onto the acquisition of those prior “beliefs”—the multi-level, multimodal webs of probabilistic expectation that together drive perception and action. Fortunately, Clark notes, PP describes a biologically plausible architecture that is just about maximally well-suited to installing the requisite suites of predictions through embodied interactions with the training environments that we encounter, perturb, and—at several slower timescales—actively construct.
An important feature of the full Predictive Processing account is that the impact of specific prediction error signals can be systematically varied according to their estimated certainty, or “precision.” The precision of a specific prediction error is the inverse of its variance—roughly, the narrower its error bars, the higher its precision. Precision estimation thus has a kind of meta-representational feel, since we are estimating the uncertainty of our own representations of the world. These ongoing task- and context-varying estimates alter the weighting on select prediction error units, so as to increase the impact of task-relevant, reliable information. One key effect of this is to allow the brain to vary the balance between sensory inputs and prior expectations at different levels in ways sensitive to task and context. High-precision prediction errors carry greater weight, and thus play a larger role in driving processing and response. Applications of this strategy allow Predictive Processing to nest simple fast-and-frugal solutions and rich, knowledge-based strategies as merely different expressions of a unified underlying web of processing.
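As a toy illustration of what precision weighting amounts to (my own Python sketch with made-up numbers, not anything from Clark’s paper), consider the standard Bayesian combination of a prior expectation with a sensory observation, each weighted by its precision (inverse variance):

```python
# Precision-weighted fusion of a prior expectation and a sensory signal.
# Precision = 1 / variance: the more reliable source gets the larger say.
# All numbers below are hypothetical, chosen only for illustration.

def precision_weighted_estimate(prior_mean, prior_var, obs_mean, obs_var):
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    total = prior_precision + obs_precision
    # Posterior mean is the precision-weighted average of the two sources.
    mean = (prior_precision * prior_mean + obs_precision * obs_mean) / total
    return mean, 1.0 / total

# A reliable (low-variance) sensory signal dominates a vague prior:
m, v = precision_weighted_estimate(0.0, 4.0, 10.0, 0.25)
# A noisy sensory signal leaves the estimate close to a confident prior:
m2, v2 = precision_weighted_estimate(0.0, 0.25, 10.0, 4.0)
```

The same arithmetic, applied at every level of the hierarchy, is what lets context shift the balance between sensory input and prior expectation.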
Clark then brings up Gerd Gigerenzer’s gift to the popular press, the “outfielder’s problem”: running to catch a fly ball in baseball. On the standard view of perception, the job of the visual system is to get information about the current position of the ball so as to allow a distinct “reasoning system” to project its future trajectory. Nature, however, seems to have found a more elegant and efficient solution. Clark credits Chapman with the solution, which involves running in a way that keeps the ball moving at a constant speed through the visual field. As long as the fielder’s own movements cancel any apparent changes in the ball’s optical acceleration, he will end up in the location where the ball hits the ground. This solution, OAC (Optical Acceleration Cancellation), explains why fielders, when asked to stand still and simply predict where the ball will land, typically do rather badly. They are unable to predict the landing spot because OAC is a strategy that works by means of moment-by-moment self-corrections that, crucially, involve the agent’s own movements. According to Clark, OAC is a case of fast, economical problem-solving. The use of data available in the optic flow enables the outfielder to sidestep the need to deploy a rich inner model to calculate the forward trajectory of the ball.
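The geometry behind OAC can be checked numerically. The sketch below is my own construction (not from the paper), assuming ideal drag-free projectile motion: the tangent of the ball’s elevation angle rises at a constant rate only as seen from the landing spot, while any other vantage point sees optical acceleration—exactly the error signal the running fielder cancels.

```python
# Toy check of the OAC geometry (illustrative; parameters are invented).
# Under drag-free projectile motion, tan(elevation angle) of the ball grows
# linearly in time only for an observer standing at the landing point.

g = 9.8                      # gravity, m/s^2
vx, vz = 10.0, 20.0          # launch velocity components, m/s
T = 2 * vz / g               # time of flight
D_land = vx * T              # landing distance from launch point

def tan_elevation(D, t):
    """Tangent of the ball's elevation angle seen from distance D."""
    x = vx * t
    z = vz * t - 0.5 * g * t * t
    return z / (D - x)

def max_optical_accel(D):
    """Largest second difference of tan(elevation) over the flight."""
    ts = [T * k / 100 for k in range(1, 100)]
    vals = [tan_elevation(D, t) for t in ts]
    return max(abs(vals[i + 1] - 2 * vals[i] + vals[i - 1])
               for i in range(1, len(vals) - 1))

# From the landing spot the optical acceleration is (numerically) zero;
# from a spot 25% too deep it is clearly nonzero, giving a moving
# fielder an error signal to cancel.
flat = max_optical_accel(D_land)
curved = max_optical_accel(1.25 * D_land)
```

Note that nothing here computes the landing point in advance; the fielder never needs it, since nulling the acceleration signal moment by moment steers him there.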
Clark is eloquent here:
Instead of using sensing to get enough information inside, past the visual bottleneck, so as to allow the reasoning system to “throw away the world” and solve the problem wholly internally, such strategies use the sensor as an open conduit allowing environmental magnitudes to exert a constant influence on behavior. Sensing is here depicted as the opening of a channel, with successful whole-system behavior emerging when activity in this channel is kept within a certain range. In such cases: The focus shifts from accurately representing an environment to continuously engaging that environment with a body so as to stabilize appropriate co-ordinated patterns of behavior.
Precision weighting provides a means of systematically varying the relative influence of different neural populations. The most familiar role of such manipulations is to vary the balance of influence between bottom-up sensory information and top-down, model-based expectation. But another important role is the implementation of fluid and flexible forms of large-scale “gating” among neural populations. This works because very low-precision prediction errors will have little or no influence upon ongoing processing, and will fail to recruit higher-level representations. Thus, according to Clark, altering the distribution of precision weightings amounts to altering the “simplest circuit diagram” for current processing. This suggests a new angle on the outfielder’s problem. Already-active neural predictions and simple, rapidly processed perceptual cues work together to determine a pattern of precision weightings for different prediction-error signals. This creates a pattern of effective connectivity (a temporary distributed circuit) and, within that circuit, it sets the balance between top-down and bottom-up modes of influence. In the case at hand, efficiency demands selecting a circuit in which visual sensing is used to cancel the optical acceleration of the fly ball. This means giving high weighting to the prediction errors associated with cancelling the vertical acceleration of the ball’s optical projection, and not caring very much about anything else. Apt precision weightings here function to select what to predict at any given moment. They may thus select a pre-learned, fast, low-cost strategy for solving a problem, as task and context dictate. Contextually recruited patterns of precision weighting thus accomplish a form of set-selection or strategy switching.
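Precision weighting as gating can be caricatured in a few lines. This is my own toy sketch, not Clark’s model: if each prediction-error channel’s influence on an update is scaled by its precision weight, then a near-zero weight effectively deletes that channel from the current circuit, which is one way a context can “select what to predict.”

```python
# Toy sketch of precision-weighted gating (hypothetical illustration).
# Each error channel's pull on the update is scaled by its precision
# weight; a near-zero weight drops that channel out of the circuit.

def weighted_update(state, errors, precisions, lr=0.1):
    return [s + lr * p * e for s, e, p in zip(state, errors, precisions)]

state = [0.0, 0.0]
errors = [1.0, 1.0]          # both channels signal the same mismatch
# Context A: attend only to channel 0 (say, optical-acceleration error).
a = weighted_update(state, errors, [1.0, 0.001])
# Context B: attend only to channel 1; same machinery, different circuit.
b = weighted_update(state, errors, [0.001, 1.0])
```

The point of the caricature is that nothing structural changes between contexts A and B; only the weighting does, which is the sense in which precision settings redraw the “simplest circuit diagram.”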
Clark makes the point that the conceptualizations of System 1, with its fast, automatic responses, and System 2, with its slow, effortful, deliberative reasoning, look increasingly shallow. These are now just convenient labels for different admixtures of resources and influence, each of which is recruited in the same general way as circumstances dictate.