Category Archives: Rational Choice Theories

Denver Bullet Study

This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by recent current events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people with whom you are arguing is not a good idea. Dan Kahan (see my posts Cultural Cognition or Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond actually had accomplishments in this area, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.

Continue reading

Not that Irrational

This post is from Judgment and Decision Making, Vol. 11, No. 6, November 2016, pp. 601–610, and is based on the paper “The irrational hungry judge effect revisited: Simulations reveal that the magnitude of the effect is overestimated,” written by Andreas Glöckner. Danziger, Levav, and Avnaim-Pesso (DLA) analyzed 1,112 legal rulings of Israeli parole boards, covering about 40% of the country’s parole requests, to assess the effect of the serial order in which cases are presented within a ruling session. They took advantage of the fact that the boards work through their cases in three sessions per day, separated by a late-morning snack and a lunch break. They found that the probability of a favorable decision drops from about 65% for the first ruling of a session to about 5% for the last, which is equivalent to an odds ratio of 35. The authors argue that these findings show extraneous factors influencing judicial decisions and speculate that the effect might be driven by mental depletion. Glöckner notes that the article has attracted considerable attention and that the supposed order effect is widely cited in psychology.
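Just to check the arithmetic on that odds ratio myself (this little calculation is mine, not Glöckner’s or DLA’s):

```python
# Odds ratio implied by a favorable-decision rate falling from 65% to 5%.
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

p_first, p_last = 0.65, 0.05
print(round(odds(p_first) / odds(p_last), 1))  # ~35.3, matching the reported ratio of about 35
```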

Continue reading

Big Models

After the three years that I have pushed out other people’s ideas on judgment and decision making, at this moment I can recall three huge ideas.

I continually look for comment on and expansion of these ideas, and I often do this in the laziest of ways: I google them. Recently I seemed to find the last two mentioned on the same page of a philosophy book. That was not actually true, but it did remind me of similarities that I could point out. The idea of a compensatory process, where one changes one’s beliefs a little to match the current set of “facts,” tracks well with the idea that we can get predictions correct by moving our hand to catch the ball so that it does not have to be thrown perfectly. Both clearly try to match up the environment and ourselves. The Parallel Constraint Satisfaction model minimizes dissonance, while the Free Energy model minimizes surprise. Both dissonance and surprise can create instability. The Free Energy model is more universal than the Parallel Constraint Satisfaction model, while for decision making PCS is more precise. The Free Energy model also gives us the idea that heuristic models could fit within process models. All this points out what is obvious to us all: we need the right model for the right job.
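To make the “adjust beliefs a little to fit the facts” idea concrete, here is a toy sketch of a parallel constraint satisfaction style update; the nodes, weights, and numbers are all invented for illustration and are not taken from any particular model in the literature:

```python
import numpy as np

# Toy parallel constraint satisfaction network: nodes are beliefs/cues, and
# weights encode consistency (+) or conflict (-). Activations are nudged a
# little at a time until the network settles, reducing overall dissonance.
weights = np.array([
    [0.0,  0.5, -0.4],
    [0.5,  0.0,  0.3],
    [-0.4, 0.3,  0.0],
])
activation = np.array([1.0, 0.0, 0.0])  # one "fact" starts with strong support

for _ in range(50):
    activation = np.clip(activation + 0.1 * (weights @ activation), -1.0, 1.0)

print(np.round(activation, 2))  # a settled, mutually consistent pattern of beliefs
```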

Continue reading

Emotion and Risky Choice

This post is based on the paper “The Neural Basis of Risky Choice with Affective Outcomes,” written by Renata S. Suter, Thorsten Pachur, Ralph Hertwig, Tor Endestad, and Guido Biele, which appeared in PLOS ONE, April 1, 2015 (journal.pone.0122475). The paper is similar to one discussed in the post Affect Gap, which also included Pachur and Hertwig, although that paper did not use fMRI. Suter et al note that both normative and many descriptive theories of decision making under risk have typically investigated choices involving relatively affect-poor, monetary outcomes. This paper compared choice in relatively affect-poor monetary lottery problems with choice in relatively affect-rich medical decision problems.

The paper is notable in that it not only examined behavioral differences between affect-rich and affect-poor risky choice, but also watched, with fMRI, the brains of the people making the decisions. The researchers assert that the traditional notion of a mechanism that assumes sensitivity to outcome and probability information and expectation maximization may not hold when options elicit relatively high levels of affect. Instead, qualitatively different strategies may be used in affect-rich versus affect-poor decisions. This is not much of a leap.

In order to examine the neural underpinnings of cognitive processing in affect-rich and affect-poor decisions, the researchers asked participants to make choices between two options with relatively affect-rich outcomes (drugs that cause a side effect with some probability) as well as between two options with relatively affect-poor outcomes (lotteries that incur monetary losses with some probability). The monetary losses were matched to each individual’s subjective monetary evaluation of the side effects, permitting a within-subject comparison between affect-rich and affect-poor choices in otherwise monetarily equivalent problems. This was cleverly done. Specifically, participants were first asked to indicate the amount of money they considered equivalent to specific nonmonetary outcomes (here: side effects; Fig 1A). The monetary amounts indicated (willingness-to-pay; WTP) were then used to construct individualized lotteries in which either a side effect (affect-rich problem) or a monetary loss (affect-poor problem) occurred with some probability.

For example, consider a participant who specified a WTP of $18 to avoid insomnia and $50 to avoid depression. In the affect-rich problem, she would be presented with a choice between drug A, leading to insomnia with a probability of 15% (no side effects otherwise), and drug B, leading to depression with a probability of 5% (no side effects otherwise). In the corresponding affect-poor problem, she would be presented with a choice between lottery A, leading to a loss of $18 with a probability of 15% (nothing otherwise), and lottery B, leading to a loss of $50 with a probability of 5% (nothing otherwise). This paradigm allowed the authors to compare the decision mechanisms underlying affect-rich versus affect-poor risky choice on the basis of lottery problems that were equivalent in monetary terms (Fig 1A and 1B).
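Here is a small sketch of how such matched problem pairs could be assembled from a participant’s WTP amounts; the code and the helper name are my own illustration, not the authors’ materials:

```python
# Build monetarily equivalent affect-rich / affect-poor problems from a
# participant's willingness-to-pay (WTP) to avoid each side effect.
# The amounts mirror the worked example in the text above.
wtp = {"insomnia": 18, "depression": 50}

def matched_problems(effect_a, p_a, effect_b, p_b, wtp):
    affect_rich = [(effect_a, p_a), (effect_b, p_b)]              # drug A vs. drug B
    affect_poor = [(-wtp[effect_a], p_a), (-wtp[effect_b], p_b)]  # lottery A vs. lottery B
    return affect_rich, affect_poor

rich, poor = matched_problems("insomnia", 0.15, "depression", 0.05, wtp)
print(rich)  # [('insomnia', 0.15), ('depression', 0.05)]
print(poor)  # [(-18, 0.15), (-50, 0.05)]
```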

[Figure 1 from Suter et al]

 

To assess whether experiencing a side effect was rated as evoking stronger negative affect than losing the equivalent monetary amount, Suter et al analyzed the affect ratings; Fig 2 presents them.

[Figure 2 from Suter et al]
Were the differences in affect associated with different choices? Despite the monetary equivalence between affect-rich and affect-poor problems, people reversed their preferences between the corresponding problems in 46.07% of cases, on average. To examine the cognitive mechanisms underlying affect-rich and affect-poor choices, the researchers modeled them using Cumulative Prospect Theory (CPT). On average, CPT based on individually fitted parameters correctly described participants’ choices in 82.45% of affect-rich choices and in 90.42% of affect-poor choices. Modeling individuals’ choices with CPT, they found that affect-rich choice was best described by a substantially more strongly curved probability weighting function than affect-poor choice, signaling that the psychological impact of probability information is diminished in the context of emotionally laden outcomes. Participants seemed to avoid the option associated with the worse side effect, irrespective of the probabilities, and therefore often ended up choosing the option with the lower expected value.
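To see what a “more strongly curved” weighting function does to probability information, here is a sketch using the standard one-parameter CPT weighting form; the gamma values are made up for illustration and are not the fitted parameters from the paper:

```python
# Tversky-Kahneman one-parameter probability weighting function.
# Smaller gamma means a more strongly curved function, so differences
# between probabilities carry less weight in the decision.
def w(p, gamma):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for gamma in (0.9, 0.5):  # illustrative values only
    print(gamma, round(w(0.05, gamma), 3), round(w(0.15, gamma), 3))
# gamma=0.9 -> weights ~0.066 and ~0.173; gamma=0.5 -> ~0.156 and ~0.226.
# With stronger curvature, the 5% and 15% options are weighted more alike,
# which is what "diminished impact of probability information" means here.
```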

The neural testing was complicated and used extensive computational modeling analysis. Neuroimaging analyses further supported the hypothesis that choices between affect-rich options are based on qualitatively different cognitive processes than choices between affect-poor options; the two triggered qualitatively different brain circuits. Affect-rich problems engage more affective processing, as indicated by stronger activation in the amygdala. The results suggested that affect-poor choice is based on calculative processes, whereas affect-rich choice involves emotional processing and autobiographical memories. When a choice elicits strong emotions, decision makers seem to shift their focus away from the probabilities and toward the potential outcomes and the memories attached to them.

According to Suter et al, on a theoretical level, models assuming expectation maximization (and implementing the weighting of some function of outcome by some function of probability) may fail to accurately predict people’s choices in the context of emotionally laden outcomes. Instead, alternative modeling frameworks (e.g., simplifying, lexicographic cognitive strategies) may be more appropriate. On a practical level, the researchers suggest that to the extent that people show strongly attenuated sensitivity to probability information (or even neglect it altogether) in decisions with affect-rich outcomes, different decision aids may be required to help them make good choices. For instance, professionals who communicate risks, such as doctors or policy makers, may need to pay special attention to refocusing people’s attention on the probabilities of (health) risks by illustrating those risks visually.
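As a toy contrast between expectation maximization and the kind of simplifying lexicographic rule the authors have in mind (my own illustration, reusing the example amounts from above, not a model taken from the paper):

```python
# Two choice rules applied to the affect-poor version of the worked example.
# Options are (outcome, probability) pairs; outcomes are losses in dollars.
option_a = (-18, 0.15)  # smaller loss, higher probability
option_b = (-50, 0.05)  # larger loss, lower probability

def expected_value(option):
    outcome, p = option
    return p * outcome

def avoid_worst_outcome(options):
    # Lexicographic-style rule: ignore probabilities and pick the option
    # whose worst outcome is least bad.
    return max(options, key=lambda o: o[0])

print(max([option_a, option_b], key=expected_value))  # (-50, 0.05): higher expected value (-2.5 vs. -2.7)
print(avoid_worst_outcome([option_a, option_b]))      # (-18, 0.15): avoids the worse outcome, lower EV
```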

This paper does not present things in ways that I have seen often. It focuses on the most compensatory analytic strategies, like prospect theory, and says that these strategies do not reflect how we make decisions that are emotionally laden. It suggests that simplifying lexicographic strategies may be more appropriate. Other studies that have used decision times and eye tracking instead of fMRI have also made it clear, although not as definitively, that compensatory analytic strategies do not reflect actual decision making. We also know it from our own experiences. However, from my understanding, this does not necessarily push us to lexicographic strategies. There are compensatory strategies like parallel constraint satisfaction that might also be the explanation. It may be that this is just part of the debate between cognitive niches and parallel constraint satisfaction or evidence accumulation decision models. Fuzzy trace theory is another candidate that is not a lexicographic strategy.

 

 

Signal Detection for Categorical Decisions

 

This post looks at signal detection theory (SDT) once again. Ken Hammond helped me see the power of signal detection as a descriptive theory (post Irreducible Uncertainty..). The last year of news with respect to fatal encounters between the police and the public has made me think of signal detection again as quite relevant. I should note that Ken Hammond died in May 2015, and I am looking for his last paper, “Concepts from Aeronautical Engineering Can Lead to Advances in Social Psychology.” This post is based on the paper “Signal Detection by Human Observers: A Cutoff Reinforcement Learning Model of Categorization Decisions Under Uncertainty,” written by Ido Erev, which appeared in Psychological Review (an American Psychological Association journal), 1998, Vol. 105, No. 2, pp. 280–298. This paper is important, but dated.

Many common activities involve binary categorization decisions under uncertainty. The police must try to distinguish the individuals who can and want to harm the public and/or the police from everyone else. A doctor has to decide whether or not to order more tests to see if you may have cancer. According to Erev, the frequent performance of categorization decisions and the observation that they can have high survival value suggest that the cognitive processes that determine these decisions should be simple and adaptive. Thus, it could be hypothesized that one basic (simple and adaptive) model can be used to describe these processes within a wide set of situations.
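For readers who have not seen SDT laid out this way, here is a minimal sketch of a cutoff-based categorization decision; it is a generic illustration with invented parameters, not Erev’s cutoff reinforcement learning model itself:

```python
import random

# Generic signal detection setup: each case yields a noisy "evidence" value,
# and the observer categorizes it as a signal whenever evidence exceeds a
# cutoff. The means, spread, and cutoff below are invented for illustration.
random.seed(1)
NOISE_MEAN, SIGNAL_MEAN, SD, CUTOFF = 0.0, 1.0, 1.0, 0.5

hits = false_alarms = signals = noises = 0
for _ in range(10_000):
    is_signal = random.random() < 0.5
    evidence = random.gauss(SIGNAL_MEAN if is_signal else NOISE_MEAN, SD)
    respond_signal = evidence > CUTOFF
    signals += is_signal
    noises += not is_signal
    hits += is_signal and respond_signal
    false_alarms += (not is_signal) and respond_signal

print("hit rate:", round(hits / signals, 2))                  # ~0.69 with these parameters
print("false alarm rate:", round(false_alarms / noises, 2))   # ~0.31
# Moving the cutoff trades hits against false alarms; in Erev's model the
# observer adjusts the cutoff from trial-to-trial reinforcement.
```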

Continue reading

Slippery slope hypocrites

This post looks at a paper, “Rational Hypocrisy: A Bayesian Analysis Based on Informal Argumentation and Slippery Slopes,” Cognitive Science 38 (2014) 1456–1467, written by Tage S. Rai and Keith J. Holyoak (posts Metaphor, Bidirectional Reasoning), that draws a connection between what may look like moral hypocrisy and the categories we select for cases with weak arguments, using the slippery slope argument as the lens. Moral hypocrisy is typically viewed as an ethical accusation: someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. The authors provide the following example:

“I respect the jury’s verdict. But I have concluded that the prison sentence given to Mr. Libby is excessive.” With these words, former President George W. Bush commuted the sentence of I. Lewis “Scooter” Libby, Jr., for obstruction of justice and leaking the identity of CIA operative Valerie Plame. Critics of the decision noted that Libby had actually received the minimum sentence allowable for his offense under the law and that many of Libby’s supporters, including the Bush administration, were actively pressing for mandatory minimum sentencing laws at a national level. Accordingly, critics of the decision saw it as a textbook case of moral hypocrisy: different rules were being applied to Bush’s underling, Libby, than to everyone else in the United States.

The implicit assumption is that the hypocrite is being dishonest, or at least self-deceptive, because the hypocrite must be aware (or should be aware) of the logical inconsistency and is therefore committing a falsehood. Rai and Holyoak have extended the analysis of Corner et al concerning slippery slope arguments (post Slippery Slope) to moral hypocrisy and suggest that the alleged hypocrite may be both honest and rational.

Continue reading

Slippery slope

This post is based on “The Slippery Slope Argument – Probability, Utility & Category Reappraisal,” written by Adam Corner, Ulrike Hahn, and Mike Oaksford and included in the 2006 Cognitive Science Conference Proceedings in the Cognitive Science Journal archive. The authors say that the slippery slope argument is usually classified as a fallacy of reason, yet it is frequently used and widely accepted in applied domains such as politics, law, and bioethics. They note that it remains a controversial topic in the field of argumentation and possesses the somewhat undignified status of “wrong but persuasive.” Having been a part of political and legal decisions, the slippery slope is ever present in my experience, although I never thought of it as a fallacy. Thinking about it, though, it is what you tend to use when you think that you are going to lose the argument. I find it interesting that everyday people not only use the technique but usually label it as a slippery slope themselves, which I guess shows what a powerful metaphor it is. I was also unaware that there is a “field of argumentation.”

Continue reading

Evidence Accumulation Model

My first notice of this model was in the Sollner et al paper. My quick search finds that this 2004 paper, “Evidence accumulation in decision making: Unifying the ‘take the best’ and the ‘rational’ models,” by Michael D. Lee and Tarrant Cummins, is the sole exposition of the model.

A simple but common type of decision requires choosing which of two alternatives has the greater (or the lesser) value on some variable of interest. Examples of these forced-choice decisions range from the everyday (e.g., deciding whether a red or a green curry will taste better for lunch), to the moderately important (e.g., deciding whether Madrid or Rome will provide the more enjoyable holiday), to the very important (e.g., deciding whether cutting the red or the black wire is more likely to lead to the destruction of the world).
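Here is a minimal sketch of the accumulation idea as I understand it; the cue validities and thresholds are invented, and the point is only that a low threshold stops after the first discriminating cue (behaving like take the best) while a very high threshold ends up using every cue (behaving like the “rational” model):

```python
import math

# Each cue favors alternative A (+1), alternative B (-1), or neither (0).
# Cues are examined in descending validity order; log-odds evidence
# accumulates until it crosses a threshold or the cues run out.
cues = [(0.9, +1), (0.8, -1), (0.7, -1), (0.6, -1)]  # (validity, direction), illustrative

def accumulate(cues, threshold):
    evidence = 0.0
    for validity, direction in cues:
        evidence += direction * math.log(validity / (1 - validity))
        if abs(evidence) >= threshold:
            break
    return "A" if evidence > 0 else "B"

print(accumulate(cues, threshold=1.0))   # "A": stops at the first discriminating cue, like take the best
print(accumulate(cues, threshold=99.0))  # "B": accumulates over all cues, like the "rational" model
```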

Continue reading

Qualifying Broken Windows Theory in the Lab

This post is based on a paper by several scientists at the Max Planck Institute for Research on Collective Goods. It is a merging of several things that I have been interested in over the years: social psychology, public good economics, city planning, and epidemiology (at least in a metaphoric sense). Politicians loved the simplicity of “broken windows,” and as a city planner I was willing to use it if it got more resources for what I wanted. Being tough on crime was an easier sell than normal city planning administration.

Continue reading

Relative Deprivation and Why Men Rebel

I loved the book Why Men Rebel by Ted Robert Gurr. I read it over forty years ago. It gave me what seemed like a truth that I had not thought about before. That truth was: “Don’t expect anyone to be happy based on some threshold level of consumption and attainment of goals.” Our expectations are created by looking around and seeing what everyone else has, and with modern media and communications, we all know what everyone else has.

This is the concept of relative deprivation. Relative deprivation is defined as our perception of the discrepancy between our value expectations and our value capabilities. Value expectations are the goods and conditions of life to which people believe they are rightfully entitled. Value capabilities are the goods and conditions they think they are capable of getting and keeping. This idea finally got through to me the weakness of the “rational” model of maximizing expected utility, and that there might be other concepts of rationality. Posts like Everyone else is a Hypocrite, Feeling is for Doing, Cultural Evolution, and Human Kinds Perception emphasize this. Gurr was interested in what spurred men to violence. Recently, on the fortieth anniversary of the book, Gurr discussed it again.

Continue reading