Signal Detection for Categorical Decisions


This post looks at signal detection theory (SDT) once again. Ken Hammond helped me see the power of signal detection as a descriptive theory (see the post Irreducible Uncertainty). The last year of news about fatal encounters between the police and the public has made signal detection seem quite relevant again. I should note that Ken Hammond died in May 2015, and I am looking for his last paper, “Concepts from Aeronautical Engineering Can Lead to Advances in Social Psychology.” This post is based on the paper “Signal Detection by Human Observers: A Cutoff Reinforcement Learning Model of Categorization Decisions Under Uncertainty,” by Ido Erev, which appeared in Psychological Review (an American Psychological Association journal), 1998, Vol. 105, No. 2, 280-298. The paper is important, but dated.

Many common activities involve binary categorization decisions under uncertainty. The police must try to distinguish individuals who can and want to harm the public or the police from everyone else. A doctor has to decide whether or not to order more tests to see if you may have cancer. According to Erev, the frequent performance of categorization decisions and the observation that they can have high survival value suggest that the cognitive processes that determine these decisions should be simple and adaptive. Thus, it could be hypothesized that one basic (simple and adaptive) model can describe these processes across a wide range of situations.

In its basic form, SDT addresses a binary categorization task under uncertainty in which a DM (observer/decision maker) is asked to decide how to label a stimulus (x) that may have come from one of two different sources, S1 (the noise distribution) or S2 (the signal distribution), with different probabilities. According to Erev, the theory is naturally decomposed into three cognitive game theoretic submodels, as follows.

1. Incentive and information structure. SDT assumes that the information perceived by the observer can be summarized by the likelihood ratio of the stimulus given the two sources, that is, P(x|S2)/P(x|S1). Because the two distributions overlap, the observer cannot be 100% accurate. Rather, four contingencies are possible: the observer can correctly label the stimulus or make one of two possible errors. Table 1 illustrates the common notations for the four contingencies. The exact incentive structure is determined by the utilities of Table 1's outcomes, the prior probabilities, P(S1) = 1 - P(S2), and the observed likelihood ratio. That is, six values are needed to calculate the expected utility of each response given the observed stimulus: the utilities of the four outcomes, referred to as U(hit), U(miss), U(false alarm), and U(correct rejection); the prior probability P(S1); and the observed likelihood ratio P(x|S2)/P(x|S1).
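
The bookkeeping behind this submodel can be sketched in a few lines. The payoff values below are my illustrative choices, not figures from the paper:

```python
# Expected utility of each response, given the six values the text lists.

def expected_utilities(lr, p_s1, u_hit, u_miss, u_fa, u_cr):
    """Expected utility of responding R1 ("noise") and R2 ("signal"),
    given the observed likelihood ratio lr = P(x|S2)/P(x|S1)."""
    p_s2 = 1.0 - p_s1
    # Posterior probability of S2 after observing x (Bayes' rule).
    post_s2 = lr * p_s2 / (lr * p_s2 + p_s1)
    post_s1 = 1.0 - post_s2
    eu_r1 = post_s1 * u_cr + post_s2 * u_miss  # respond "noise"
    eu_r2 = post_s1 * u_fa + post_s2 * u_hit   # respond "signal"
    return eu_r1, eu_r2

# With symmetric payoffs, equal priors, and lr = 2, responding "signal" wins.
eu1, eu2 = expected_utilities(lr=2.0, p_s1=0.5, u_hit=1, u_miss=-1, u_fa=-1, u_cr=1)
```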

2. Strategy space. The available cognitive strategies are assumed to be cutoff strategies. In the binary case, in which only two responses (R1 [noise] and R2 [signal]) are possible, each strategy can be summarized by the rule "respond R2 if and only if the likelihood ratio P(x|S2)/P(x|S1) exceeds a certain cutoff."
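
As an illustration (the two Gaussian sources and their means are my example setup, not the paper's), note that for equal-variance Gaussian sources the likelihood ratio is increasing in x, so a likelihood-ratio cutoff is equivalent to a simple cutoff on the stimulus value itself:

```python
import math

def likelihood_ratio(x, m1=0.0, m2=1.0, sd=1.0):
    # Ratio of the Normal(m2, sd) density to the Normal(m1, sd) density at x;
    # the sd terms cancel, leaving this closed form.
    return math.exp((x - (m1 + m2) / 2) * (m2 - m1) / sd**2)

def cutoff_strategy(x, x_cutoff, m1=0.0, m2=1.0, sd=1.0):
    """Respond R2 ("signal") iff the likelihood ratio at x exceeds the
    likelihood ratio at the cutoff stimulus value."""
    lr = likelihood_ratio(x, m1, m2, sd)
    return "R2" if lr > likelihood_ratio(x_cutoff, m1, m2, sd) else "R1"
```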

3. Decision rule. According to SDT's ideal observer assumption, the observer is expected to select the cutoff that maximizes his or her expected utility. The optimal cutoff (expressed as a likelihood ratio) can be calculated from the utilities of the four contingencies and the prior probabilities.
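
Setting the expected utilities of the two responses equal gives the standard ideal-observer cutoff. A minimal sketch (the symmetric payoff values are my example, not the paper's):

```python
def optimal_cutoff(p_s1, u_hit, u_miss, u_fa, u_cr):
    """Ideal-observer likelihood-ratio cutoff: respond R2 whenever the
    observed likelihood ratio P(x|S2)/P(x|S1) exceeds this value."""
    p_s2 = 1.0 - p_s1
    return (p_s1 / p_s2) * (u_cr - u_fa) / (u_hit - u_miss)

# Equal priors and symmetric payoffs give the familiar neutral cutoff of 1.
beta = optimal_cutoff(p_s1=0.5, u_hit=1, u_miss=-1, u_fa=-1, u_cr=1)  # 1.0
```

Raising P(S1) or the cost of a false alarm raises the cutoff, making the observer more conservative about responding "signal."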

Erev examined the value of replacing this third submodel with a reinforcement learning rule, since basic SDT has been found to have weaknesses in describing actual probabilistic categorization. Erev refers to this revised signal detection model as the cutoff reinforcement learning (CRL) signal detection model. The basic idea of the learning model is a cognitive interpretation of the law of effect: the assumption that the probability that a certain strategy will be adopted increases when this strategy is positively reinforced.

The CRL model makes four assumptions, which I will try to set out in non-mathematical terms:

1. The decision maker considers a finite number of cutoff points, equally spaced between the extremes.

2. The decision maker starts with a specific tendency (response strength, or propensity) to choose each of the possible cutoffs.

3. The propensities are updated over time by a recency parameter, a generalization function, and a reinforcement function.

4. The final assumption is a choice rule: each cutoff is selected with probability proportional to its relative propensity.
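
The four assumptions above can be sketched as a toy simulation. The class name, parameter values, and functional forms below are my illustrative choices, not Erev's exact specification:

```python
import random

class CRLObserver:
    def __init__(self, n_cutoffs=11, lo=-2.0, hi=3.0, recency=0.9, spread=0.5):
        # Assumption 1: a finite set of equally spaced candidate cutoffs.
        step = (hi - lo) / (n_cutoffs - 1)
        self.cutoffs = [lo + i * step for i in range(n_cutoffs)]
        # Assumption 2: an initial propensity (response strength) per cutoff.
        self.propensity = [1.0] * n_cutoffs
        self.recency = recency  # assumption 3: older reinforcements decay
        self.spread = spread    # assumption 3: neighbors share reinforcement

    def choose(self):
        # Assumption 4: select a cutoff with probability proportional to its
        # propensity relative to the sum of all propensities.
        r = random.random() * sum(self.propensity)
        acc = 0.0
        for i, p in enumerate(self.propensity):
            acc += p
            if r <= acc:
                return i
        return len(self.propensity) - 1

    def update(self, chosen, payoff):
        # Assumption 3: decay every propensity (recency), then reinforce the
        # chosen cutoff and, more weakly, nearby cutoffs (generalization).
        for i in range(len(self.propensity)):
            self.propensity[i] *= self.recency
            self.propensity[i] += max(payoff, 0.0) * self.spread ** abs(i - chosen)
```

On each trial the observer picks a cutoff with choose(), labels the stimulus by comparing it with that cutoff, and feeds the resulting payoff back through update(); over many trials the propensity mass drifts toward cutoffs that earn higher payoffs.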

Erev’s research found the CRL signal detection model to be much more accurate than basic SDT in describing behavior. Although the current research was built on rather simple cognitive strategies (cutoff strategies), analysis of more complex choice situations would have to rely on the abstraction of more complex heuristics. According to Erev, people tend to follow more than one heuristic in relatively similar situations. To address this difficulty, Payne, Bettman, and Johnson (1993) proposed the adaptive DM framework. Under this framework, DMs tend to follow adaptive rules that maximize accuracy and minimize effort. The cognitive game theoretic approach can be used to model the process by which DMs become adaptive and can address situations in which other incentives (in addition to accuracy and cognitive effort) may be important.

Finally, the current results shed light on the apparent contradiction between the “heuristics and biases” and ecological approaches to the study of human judgment and decision making. (See the post Cognitive Niches.) Whereas the heuristics and biases research demonstrates that human judgment can be approximated by a limited set of (typically) adaptive cognitive strategies that can lead to biases, the ecological research demonstrates that, in certain “ecological” settings, people behave as if they are “frequentist statisticians.” Clearly, both types of observations can be consistent with the current view. As the present results demonstrate, the fact that people learn among adaptive strategies does not imply that they will not be biased. Yet, given a certain incentive structure, bias-free behavior is possible.