Single Strategy Framework and the Process of Changing Weights

 

This post starts from the conclusion of the previous post that the evidence supports a single strategy framework, looks at Julian Marewski's criticism, and then piles on with ideas about how weights can be changed within a single strategy framework.

Marewski co-authored a paper for the special issue of the Journal of Applied Research in Memory and Cognition (2015) on "Modeling and Aiding Intuition in Organizational Decision Making": "Unveiling the Lady in Black: Modeling and Aiding Intuition," by Ulrich Hoffrage and Julian N. Marewski. The paper gives the parallel constraint satisfaction model a not-so-subtle knock:

By exaggerating and simplifying features or traits, caricatures can aid perceiving the real thing. In reality, both magic costumes and chastity belts are degrees on a continuum. In fact, many theories are neither solely formal or verbal. Glöckner and Betsch’s connectionist model of intuitive decision making, for instance, explicitly rests on both math and verbal assumptions. Indeed, on its own, theorizing at formal or informal levels is neither “good” nor “bad”. Clearly, both levels of description have their own merits and, actually, also their own problems. Both can be interesting, informative, and insightful – like the work presented in the first three papers of this special issue, which we hope you enjoy as much as we do. And both can border re-description and tautology. This can happen when a theory does not attempt to model processes. Examples are mathematical equations with free parameters that carry no explanatory value, but that are given quasi-psychological, marketable labels (e.g., “risk aversion”).

Glöckner and Betsch might not appreciate limiting their model to "intuitive decision making." The reference to "risk aversion" is clearly knocking Glöckner and Pachur's paper ("Cognitive models of risky choice: Parameter stability and predictive accuracy of prospect theory"; see the post Cumulative Prospect Theory-Changing Parameters), where they created two parameters to reflect risk aversion: one reflecting sensitivity to probability differences and the other the attractiveness of gambling. I would assert that these parameters could have explanatory value. (I note that Marewski does not list that paper in the references, so it is only my guess that it was the source of the commentary.)
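To make the idea concrete, here is a minimal sketch of the kind of two-parameter probability weighting function commonly fitted in cumulative prospect theory work of this sort. The Goldstein-Einhorn form and the parameter names below (gamma for sensitivity to probability differences, delta for the attractiveness of gambling) are my illustrative assumptions, not a claim about the exact equations in Glöckner and Pachur's paper.

```python
# Hedged sketch: a two-parameter probability weighting function of the
# Goldstein-Einhorn form often used in cumulative prospect theory fits.
# gamma -> curvature: sensitivity to differences between probabilities
# delta -> elevation: overall attractiveness of gambling
# (The functional form and parameter names are assumptions for illustration.)

def weight(p, gamma=0.6, delta=0.8):
    """Transform an objective probability p into a decision weight."""
    num = delta * p ** gamma
    return num / (num + (1.0 - p) ** gamma)

if __name__ == "__main__":
    for p in (0.01, 0.10, 0.50, 0.90, 0.99):
        # Low gamma flattens the curve (poor discrimination among mid-range
        # probabilities); low delta depresses it (less attraction to gambles).
        print(f"p = {p:4.2f}  ->  w(p) = {weight(p):.3f}")
```

The point of the sketch is that each parameter does separable psychological work, which is why I think such parameters can carry explanatory value rather than being mere marketable labels.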

Marewski and Glöckner do not seem that far apart, but they are looking at things from opposite directions. Marewski wants to start with all the little rules and put them together to make the model. Glöckner, having seen the little rules, is not so sure that they are rules at all rather than more or less artifacts, so he tries big rules to see whether they fit. They need each other. Einstein did not come up with the Theory of Special Relativity by putting together the pieces, but he needed the pieces. Marewski also more or less calls parallel constraint satisfaction decision making a tautology. On that point, I found a comment on "The Brains" blog, written by Sergio Graziosi on January 8, 2016, concerning Andy Clark's December 17, 2015, post:

However, serious people won’t fling tautological mud against evolutionary theory (ET). Why not? Because ET starts from natural selection and builds a theory on top of it, the end result is mechanistic theory of extraordinary explanatory powers. In this context, the tautological core provides a solid, unassailable foundation: as Andy points out, the debate rightly focuses at the level of the mechanistic explanations produced, the tautological core is not and should not be questioned.

I believe that the comment is spot on, both with respect to the Parallel Constraint Satisfaction Model and with respect to Andy Clark's Predictive Processing. Andy Clark is a philosopher and is not in the same circle of researchers as Betsch, Glöckner, Marewski, and the others, but he seems to weigh in on the side of the single strategy framework and also suggests how parameter weights are set. In posts on "The Brains" blog on December 14, 15, 16, and 17, 2015, Clark states:

Is the human brain just a rag-bag of different tricks and stratagems, slowly accumulated over evolutionary time? For many years, I thought the answer to this question was most probably ‘yes’. Sure, brains were fantastic organs for adaptive success. But the idea that there might be just a few core principles whose operation lay at the heart of much neural processing was not one that had made it on to my personal hit-list.

Clark looks at the determination of weights through one of his great examples:

To get the flavor, consider the familiar (but actually rather surprising) ability of most humans to, on demand, see faces in the clouds. To replicate this, a multi-level prediction machine reduces the weighting on a select subset of ‘bottom-up’ sensory prediction errors. This is equivalent to increasing the weighting on your own top-down predictions (here, predictions of seeing face-like forms). By thus varying the balance between top-down prediction and the incoming sensory signal, you are enabled to ‘see’ face-forms despite the lack of precise face-information in the input stream.
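A toy way to see the arithmetic behind Clark's example is to treat the percept as a precision-weighted average of a top-down prediction and the bottom-up sensory evidence. The numbers and the "face-likeness" scale below are purely illustrative assumptions; the point is only that lowering the precision on the sensory prediction error lets the top-down prediction win.

```python
# Hedged sketch of precision-weighted belief updating, the mechanism Clark
# describes: the balance between a top-down prediction and bottom-up sensory
# evidence is set by their relative precisions (inverse variances).
# The numbers and the "face-likeness" framing are illustrative assumptions.

def fuse(prediction, evidence, precision_pred, precision_sens):
    """Precision-weighted average of a top-down prediction and sensory evidence."""
    total = precision_pred + precision_sens
    return (precision_pred * prediction + precision_sens * evidence) / total

# Suppose 1.0 = "clearly a face" and 0.0 = "just a cloud".
top_down_prediction = 1.0   # you are set to see faces
sensory_evidence = 0.2      # the cloud only weakly resembles a face

# Normal weighting: the senses dominate, so you mostly see a cloud.
print(fuse(top_down_prediction, sensory_evidence, 1.0, 4.0))   # ~0.36

# Down-weight the sensory prediction error (lower its precision):
# the top-down prediction dominates, and the face "appears".
print(fuse(top_down_prediction, sensory_evidence, 1.0, 0.25))  # ~0.84
```

Dropping the sensory precision from 4.0 to 0.25 moves the estimate from "mostly cloud" to "mostly face," which is the sense in which seeing faces on demand is just a re-weighting.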

This is variable weighting of the prediction error signal. Thus, according to Clark, altering the distribution of precision weightings amounts to altering the "simplest circuit diagram" for current processing. This suggests a new angle upon the outfielder's problem. Already-active neural predictions and simple, rapidly-processed perceptual cues must work together to determine a pattern of precision weightings for different prediction-error signals. This creates a pattern of effective connectivity (a temporary distributed circuit) and, within that circuit, it sets the balance between top-down and bottom-up modes of influence. In the case at hand, however, efficiency demands selecting a circuit in which visual sensing is used to cancel the optical acceleration of the fly ball. This means giving high weighting to the prediction errors associated with cancelling the vertical acceleration of the ball's optical projection, and not caring very much about anything else. Apt precision weightings here function to select what to predict at any given moment.
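For the outfielder's problem, the strategy Clark alludes to is optical acceleration cancellation: weight the prediction error on the ball's vertical optical acceleration heavily and adjust running speed to drive it toward zero. The sketch below is a toy version of that control rule; the discrete derivative, the gain, and the sign convention are my assumptions.

```python
# Hedged sketch of optical acceleration cancellation (OAC) for the
# outfielder's problem: treat the vertical optical acceleration of the ball
# as the one prediction error that matters and run so as to cancel it.
# The discrete derivative, the gain, and the sign convention are assumptions.

def speed_adjustment(optical_samples, dt, gain=5.0):
    """Return a change in running speed that drives optical acceleration to zero.

    optical_samples: the last three samples of the ball's optical elevation
    (e.g., the tangent of the gaze angle to the ball), taken dt seconds apart.
    """
    e0, e1, e2 = optical_samples
    optical_accel = (e2 - 2.0 * e1 + e0) / dt ** 2  # discrete second derivative
    # Positive optical acceleration means the ball will carry past you,
    # so back up (negative adjustment); negative means run in.
    return -gain * optical_accel

# Example: three elevation samples taken 0.1 s apart, rising faster and faster.
print(speed_adjustment((0.50, 0.56, 0.63), dt=0.1))  # negative -> back up
```

Everything other than this one error signal gets a low weight, which is what "not caring very much about anything else" amounts to in practice.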

Clark states that this mechanism (the so-called 'precision-weighting' of prediction error) provides a flexible means to vary the balance between top-down prediction and incoming sensory evidence, at every level of processing. Implemented by multiple means in the brain (such as slow dopaminergic modulation and faster time-locked synchronies between neuronal populations), flexible precision-weighting makes these architectures fluid and content responsive (see the posts Embodied (Grounded) Prediction (Cognition) and the Mixed Instrumental Controller).

 
