Kahneman and Klein on Expertise

Daniel Kahneman has been practically ignored in this blog. His 2011 book, Thinking, Fast and Slow, is well written and an excellent resource. I certainly do not hold winning the Nobel Prize for Economics against him. I do wish he were more like Ken Hammond and gave me more background and perspective: what questions are likely to be answered in the future, what research he believes is interesting, or especially why Gigerenzer is wrong. System 1 and System 2 seem outdated, and Kahneman seems simply to ignore research like that of Glockner and Betsch (Intuition in J/DM) that sees the systems as more holistic.

I definitely will be devoting future posts to prospect theory and possibly some of his other ideas. This post addresses the paper Kahneman wrote with Gary Klein, his onetime archrival on the subject of expertise. “Conditions for Intuitive Expertise: A Failure to Disagree” appeared in the September 2009 issue of the American Psychologist. I have to respect both men for spending so much time and effort and coming up with something excellent. Kahneman is a proponent of heuristics and biases as an explanation of intuitive decision making, while Klein is the proponent of naturalistic decision making. Kahneman seems to look at what is wrong with decision making, while Klein looks at what is right.

In an effort that spanned several years, they attempted to answer one basic question: Under what conditions are the intuitions of professionals worthy of trust? They noted that their conclusions add up to a coherent view of expert intuition, but that they are not original; they give credit to Shanteau, Hogarth, and Myers.

  • Their starting point is that intuitive judgments can arise from genuine skill—the focus of the Naturalistic Decision Making approach—but that they can also arise from inappropriate application of the heuristic processes on which students of the Heuristics and Bias tradition have focused.
  • Skilled judges are often unaware of the cues that guide them, and individuals whose intuitions are not skilled are even less likely to know where their judgments come from.
  • Subjective confidence is an unreliable indication of the validity of intuitive judgments and decisions.
  • The determination of whether intuitive judgments can be trusted requires an examination of the environment in which the judgment is made and of the opportunity that the judge has had to learn the regularities of that environment.
  • We describe task environments as “high-validity” if there are stable relationships between objectively identifiable cues and subsequent events or between cues and the outcomes of possible actions. Medicine and firefighting are practiced in environments of fairly high validity. In contrast, outcomes are effectively unpredictable in zero-validity environments. To a good approximation, predictions of the future value of individual stocks and long-term forecasts of political events are made in a zero-validity environment.
  • Validity and uncertainty are not incompatible. Some environments are both highly valid and substantially uncertain. Poker and warfare are examples. The best moves in such situations reliably increase the potential for success.
  • An environment of high validity is a necessary condition for the development of skilled intuitions. Other necessary conditions include adequate opportunities for learning the environment (prolonged practice and feedback that is both rapid and unequivocal). If an environment provides valid cues and good feedback, skill and expert intuition will eventually develop in individuals of sufficient talent.
  • Although true skill cannot develop in irregular or unpredictable environments, individuals will sometimes make judgments and decisions that are successful by chance. These “lucky” individuals will be susceptible to an illusion of skill and to overconfidence. The financial industry is a rich source of examples.
  • The situation that we have labeled fractionation of skill is another source of overconfidence. Professionals who have expertise in some tasks are sometimes called upon to make judgments in areas in which they have no real skill. (For example, financial analysts may be skilled at evaluating the likely commercial success of a firm, but this skill does not extend to the judgment of whether the stock of that firm is underpriced.) It is difficult both for the professionals and for those who observe them to determine the boundaries of their true expertise.
  • We agree that the weak regularities available in low-validity situations can sometimes support the development of algorithms that do better than chance. These algorithms only achieve limited accuracy, but they outperform humans because of their advantage of consistency. However, the introduction of algorithms to replace human judgment is likely to evoke substantial resistance and sometimes has undesirable side effects.

Another conclusion that they both accept is that their approaches have built-in limitations. For historical and methodological reasons, Heuristics and Bias researchers generally find errors more interesting and instructive than correct performance; but a psychology of judgment and decision making that ignores intuitive skill is seriously limited. Because their intellectual attitudes developed in reaction to the Heuristics and Bias research, members of the Naturalistic Decision Making community have an aversion to the idea of bias; but a psychology of professional judgment that neglects predictable errors cannot be adequate.