Big Models

After the three years that I have spent pushing out other people's ideas on judgment and decision making, I can recall three huge ideas.

I continually look for comment on and expansion of these ideas, and I often do this in the laziest of ways: I google them. Recently I thought I had found the last two mentioned on the same page of a philosophy book. That turned out not to be true, but it did remind me of similarities that I could point out. The idea of a compensatory process, where we change our beliefs a little to match the current set of "facts", tracks well with the idea that we can get predictions correct by moving our hand to catch the ball so that it does not have to be thrown perfectly. Both clearly try to match up the environment and ourselves. The Parallel Constraint Satisfaction model minimizes dissonance while the Free Energy model minimizes surprise. Both dissonance and surprise can create instability. The Free Energy model is more universal than the Parallel Constraint Satisfaction model, while for decision making PCS is more precise. The Free Energy model also gives us the idea that heuristic models could fit within process models. All this points to what is obvious to us all: we need the right model for the right job.
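To make the contrast concrete, here is a toy sketch of the constraint satisfaction idea, with invented weights and node labels rather than any published parameterization: activations are updated until the network settles into a state of minimal dissonance, much as the free energy account has the system settle into a state of minimal surprise.

```python
import numpy as np

# Toy parallel constraint satisfaction network (weights and node roles
# are my own invented illustration, not a published parameterization).
# Nodes 0-1 are cues; nodes 2-3 are options A and B. Positive weights
# mark consistent links, negative weights mark inconsistent ones.
W = np.array([
    [ 0.0,  0.0,  0.6, -0.2],   # cue 1: supports A, speaks against B
    [ 0.0,  0.0, -0.3,  0.4],   # cue 2: speaks against A, supports B
    [ 0.6, -0.3,  0.0, -1.0],   # options A and B inhibit each other
    [-0.2,  0.4, -1.0,  0.0],
])

def dissonance(a, W):
    """Hopfield-style energy; settling lowers this value."""
    return -0.5 * a @ W @ a

def settle(W, steps=200, decay=0.1):
    a = np.zeros(W.shape[0])
    a[:2] = 1.0                       # clamp the cue nodes "on"
    for _ in range(steps):
        net = W @ a
        a[2:] = np.clip((1 - decay) * a[2:] + net[2:], -1.0, 1.0)
        a[:2] = 1.0                   # cues stay clamped
    return a

a = settle(W)
print("activations:", a.round(2))    # option A settles on, B settles off
print("dissonance:", round(float(dissonance(a, W)), 3))
```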

This finally brings me all the way around to this paper: "The empirical content of theories in judgment and decision making: Shortcomings and remedies," by Andreas Glöckner and Tilmann Betsch, which appeared in Judgment and Decision Making, Vol. 6, No. 8, December 2011, pp. 711–721. I originally thought it was less than balanced and maybe too direct a shot at the adaptive toolbox model (see post Bounded Rationality the Adaptive Toolbox). After seeing people more knowledgeable than I am cite the paper, I believe that I was wrong.

Glöckner and Betsch begin by noting that the rise of the modern sciences began with the falsification of a theory that had been considered indisputable truth for centuries: geocentrism. Copernicus, Kepler, and Galileo provided empirical evidence that falsified the notion that the earth is the center of the universe. Their work gave rise to the empirical sciences, governed by the notion that a proposition has to stand its ground against reality in order to be accepted. Glöckner and Betsch suggest that this may seem to be a truism, but that in the field of judgment and decision making they observe a trend in the opposite direction. Many theories are weakly formulated. They do not come up with strong rules and are, at least to some extent, immune to critical testing.

Glöckner and Betsch then suggest some standards for theory formulation:

Level of Universality. A theory has a high level of universality if the antecedent conditions include as many situations as possible. An expected value (EV) model, for example, which predicts "when selecting between gambles people choose the gamble with the highest expected value", has a higher universality than the priority heuristic (PH). PH has multiple antecedent conditions that all have to be fulfilled and that therefore reduce universality. Specifically, PH predicts "if people choose between pairs of gambles, and if the ratio of the expected values of the two gambles is below 1:2, and if neither gamble dominates the other, then people choose the gamble with the higher minimum payoff".
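A small sketch may help fix the contrast. Reducing both models to just the conditions quoted above (the gambles and the crude dominance check are my own invented illustration), the EV rule returns a prediction for any pair of gambles, while the priority heuristic returns one only when all of its antecedents hold:

```python
# Gambles are invented examples, written as (payoff, probability) lists.

def ev(gamble):
    return sum(x * p for x, p in gamble)

def ev_choice(g1, g2):
    """EV model: makes a prediction for any pair of gambles."""
    return g1 if ev(g1) >= ev(g2) else g2

def dominates(g1, g2):
    # Crude paired-outcome dominance check (an assumption of this sketch).
    return all(x1 >= x2 for (x1, _), (x2, _) in zip(g1, g2))

def ph_choice(g1, g2):
    """Priority heuristic, reduced to the antecedents quoted above."""
    if min(ev(g1), ev(g2)) / max(ev(g1), ev(g2)) >= 0.5:
        return None                  # EV ratio not below 1:2: no prediction
    if dominates(g1, g2) or dominates(g2, g1):
        return None                  # one gamble dominates: no prediction
    return max(g1, g2, key=lambda g: min(x for x, _ in g))

a = [(100, 0.5), (0, 0.5)]           # EV = 50, minimum payoff 0
b = [(20, 0.5), (22, 0.5)]           # EV = 21, minimum payoff 20
print(ev_choice(a, b))               # EV predicts a
print(ph_choice(a, b))               # antecedents hold here: PH predicts b
```

Run on these gambles, EV predicts a and PH predicts b; but shrink the EV gap and PH simply falls silent, while EV keeps predicting.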

Degree of Precision. A theory's degree of precision increases with the specificity of the predicted phenomenon. A theory EV+C that predicts that "when selecting between gambles, people choose the gamble with the higher expected value and their confidence will increase with the difference between the expected values of the gambles" is more precise than the EV model mentioned above. It describes the behavior more specifically, and all findings that falsify EV also falsify EV+C, but not vice versa. For one dependent measure, a theory is more precise than another if it allows fewer different outcomes.
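Continuing the sketch, a hypothetical EV+C adds an ordinal confidence prediction on top of the same choice rule, so confidence data can now falsify it even where choice data cannot:

```python
def ev(gamble):
    return sum(x * p for x, p in gamble)

def ev_plus_c(g1, g2):
    """EV+C sketch: same choice rule as EV, plus an ordinal
    confidence prediction tied to the EV difference."""
    chosen = g1 if ev(g1) >= ev(g2) else g2
    return chosen, abs(ev(g1) - ev(g2))   # larger gap -> higher confidence

a = [(100, 0.5), (0, 0.5)]                # EV = 50
b = [(20, 1.0)]                           # EV = 20
choice, confidence_index = ev_plus_c(a, b)
print(choice, confidence_index)           # picks a; confidence index 30
```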

Glöckner and Betsch state that the scientific advantage of a new theory that is dominated on both universality and precision would be zero, and that such a theory should be disregarded even prior to any empirical testing. On the other hand, a theory that dominates another theory on at least one of the two dimensions has "unique" empirical content, which could constitute a scientific advantage if it holds up in empirical testing. According to the authors: "The gist of the concept, however, is that the more a statement prohibits, the more it says about our world."

Simplicity and parameters of a theory. It has been argued that—everything else being equal—simple theories should be preferred over complex ones. However, “simplicity” is a vague concept which can be understood in many different ways. The simplicity of a theory is always relative and it can be evaluated only in comparison to another competing theory.

Process models vs. outcome models. Many models in JDM predict choices or judgments only. These outcome models (also called "as-if" models, or paramorphic models) predict people's choices or judgments, but they are silent concerning the cognitive operations used to reach them. Expected utility theory and weighted-linear theories of judgment are examples. Both are models with a high degree of universality, and they are precise concerning their predictions for choices or judgments. Process models could, however, have a higher precision by making additional predictions on further dependent variables such as decision time, confidence, and information search. Everything else being equal, the empirical content of a theory increases with the number of (non-equivalent) dependent variables on which it makes falsifiable predictions. Process theories therefore potentially yield higher empirical content than outcome theories.
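A bare-bones random-walk accumulator (my own minimal illustration, not a model from the paper) shows how one process mechanism yields predictions on two dependent variables at once, choice and decision time:

```python
import random

def accumulate(drift=0.1, threshold=5.0, noise=1.0, seed=1):
    """Random-walk sketch: noisy evidence steps toward one of two bounds.
    Returns both the predicted choice and the predicted decision time,
    i.e., two falsifiable dependent variables from one mechanism."""
    random.seed(seed)
    evidence, t = 0.0, 0
    while abs(evidence) < threshold:
        evidence += drift + random.gauss(0, noise)
        t += 1
    return ("A" if evidence > 0 else "B"), t

choice, time_steps = accumulate()
print(choice, time_steps)   # easier trials (larger drift) finish faster
```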

Universal outcome theories. Universal outcome theories are defined for a wide range of tasks; they make predictions for choices or judgments but not for other outcome variables. Their degree of precision is intermediate to high. For example, expected utility theories, cumulative prospect theory, and the transfer-of-attention-exchange model for risky choices belong to this class. The theory that claims the maximum of universality in this category is a generalized expected utility theory by Gary Becker. Its precision is, however, relatively low in that neither the set of preferences nor the transformation function for utility is defined or easily measurable.
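A tiny sketch, assuming a power-utility form that the theory itself leaves unspecified, shows why an undefined transformation costs precision: with the curvature parameter left free, either choice is consistent with the theory.

```python
def eu(gamble, alpha):
    """Expected utility with a free power-utility exponent (an assumed
    form; the theory leaves the transformation unspecified)."""
    return sum((x ** alpha) * p for x, p in gamble)

a = [(100, 0.5), (0, 0.5)]   # risky gamble
b = [(45, 1.0)]              # sure thing
for alpha in (1.0, 0.5):
    pick = "a" if eu(a, alpha) > eu(b, alpha) else "b"
    print(f"alpha={alpha}: predicts {pick}")
# alpha=1.0 predicts a; alpha=0.5 predicts b. With alpha free, both
# choices fit the theory, so precision is low until the parameter is
# pinned down independently.
```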

Single heuristic theories. Single heuristic theories consist of a single heuristic for which a specific application area is defined. The recognition heuristic and the take-the-best heuristic, among others, fall into this class. The application area defined for these heuristics is sometimes rather limited, and the universality of these models can therefore be low. The theories can become tautological if their application area is defined by their empirically observable application or by non-measurable variables. Single heuristic theories often make predictions concerning multiple dependent variables such as choices, decision time, confidence, and information search. Hence, their precision is potentially high. To achieve this degree of precision, however, existential statements and ambiguous quantifiers such as "some people use heuristic X" or "people may use heuristic X" must be avoided, because they reduce precision and empirical content to zero.
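For illustration, here is a minimal take-the-best sketch with an invented cue hierarchy (the classic application is judging which of two cities is larger): because the first discriminating cue stops search, the heuristic predicts not only the choice but also the order and amount of information inspected.

```python
# Minimal take-the-best sketch (cue values and validity order invented).
# Cues are checked in order of validity; the first one that
# discriminates decides, and search stops.

CUE_ORDER = ["capital", "soccer_team", "airport"]   # by assumed validity

def take_the_best(obj_a, obj_b):
    inspected = []
    for cue in CUE_ORDER:
        inspected.append(cue)
        if obj_a[cue] != obj_b[cue]:
            choice = "A" if obj_a[cue] else "B"
            return choice, inspected    # choice plus search prediction
    return "guess", inspected           # no cue discriminates

a = {"capital": False, "soccer_team": True,  "airport": True}
b = {"capital": False, "soccer_team": False, "airport": True}
print(take_the_best(a, b))   # ('A', ['capital', 'soccer_team'])
```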

Multiple heuristic theories. These kinds of theories define multiple strategies or heuristics that are applied adaptively. Models in this class are the contingency model, the adaptive decision maker approach, and the adaptive toolbox. Their (potential) level of universality is higher than for single heuristics because the theory’s scope is not limited to a certain decision domain. However, to have empirical content, a theory has to be defined as a fixed (or at least somehow limited) set of heuristics. The degree of precision of such a fixed-set theory can still be low if many heuristics are included and no clear selection criteria among the heuristics are defined.
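The point about a fixed set plus a selection rule can be sketched directly; the selection criterion below (time pressure picks the cheaper strategy) is an invented placeholder, not the toolbox's actual rule, but without some such explicit rule any observed choice could be credited to some heuristic in the set.

```python
# Sketch of a fixed-set multiple-heuristic theory. The selection
# criterion is an invented placeholder; without SOME explicit rule
# like it, the set as a whole prohibits almost nothing and its
# empirical content collapses.

def take_first(options):
    return options[0]                                  # cheap heuristic

def weigh_everything(options):
    return max(options, key=lambda o: sum(o["cues"]))  # costly strategy

TOOLBOX = {"fast": take_first, "thorough": weigh_everything}

def select_strategy(time_pressure):
    return TOOLBOX["fast" if time_pressure else "thorough"]

options = [{"name": "x", "cues": [1, 0, 0]},
           {"name": "y", "cues": [1, 1, 1]}]
print(select_strategy(time_pressure=True)(options)["name"])   # x
print(select_strategy(time_pressure=False)(options)["name"])  # y
```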

Universal process theories. Universal process theories describe cognitive processes. They often rely on one general mechanism that is applied very broadly to explain many seemingly unrelated phenomena. Considering such basic mechanisms often allows multiple theories that explain phenomena at a more abstract level to be replaced. Universal process theories have a high level of universality because they are applied not only to decision making but also to phenomena of perception and memory. They allow us to make predictions on many dependent variables and therefore can be very precise. One of the problems that has to be solved, however, is that universal process models sometimes have many free parameters which are hard to measure and might therefore decrease precision. Examples are sampling approaches, evidence accumulation models (see post Evidence Accumulation model), parallel constraint satisfaction models, cognitive architectures assuming production rules, and multi-trace memory models.
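As one example of the free-parameter worry, here is a minimal sampling-style sketch (my own illustration): each gamble is judged from k random draws rather than a computed expectation, and unless k can be measured independently, the model's predictions stay loose.

```python
import random

def sample_based_choice(g1, g2, k=5, seed=2):
    """Sampling sketch: estimate each gamble from k random draws
    instead of computing expectations. k is a free parameter of the
    kind the paragraph warns is hard to measure."""
    random.seed(seed)
    def draw(g):
        xs, ps = zip(*g)
        return sum(random.choices(xs, weights=ps, k=k)) / k
    return "g1" if draw(g1) >= draw(g2) else "g2"

a = [(100, 0.5), (0, 0.5)]   # risky gamble
b = [(45, 1.0)]              # sure thing
print(sample_based_choice(a, b, k=3))    # small k: prediction rides on noise
print(sample_based_choice(a, b, k=50))   # large k: approaches the EV choice
```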

2 thoughts on “Big Models”

cam

After years of following JDM research, what would your suggestions be with respect to making better decisions? It's often said that one should be discerning as to when to follow intuition and when to apply analysis, but are there any lessons you'd take away, either from the research or from personal experience, that go beyond that advice? A post on your advice for decision making, or on the salient points you'd take away after following the field for some time, would be really helpful.

admin (Post author)

I must say that it depends. Robin Hogarth provides what seems to me to be some of the most practical advice. You might check out the posts Heuristic and Linear Models and Nonlinear Ecology. Thanks for the interest.
