I have found the AHRQ (Agency for Healthcare Research and Quality) to give generally good advice in its publications. This post is based on “Chapter 6. Clinical Reasoning, Decision Making, and Action: Thinking Critically and Clinically,” by Patricia Benner, Ronda G. Hughes, and Molly Sutphen, in Patient Safety and Quality: An Evidence-Based Handbook for Nurses, 2008. I think of AHRQ as the guideline and checklist people in the Gigerenzer tradition. The authors are important figures in nursing education, and it would not surprise me if nursing educators had contributions to make to judgment and decision making as a whole. Having had occasion to see nurses in action, I respect their ability to know which rules to follow and which to ignore. However, the chapter was rather a disappointment, and it seems more than a little padded. My hope that nurses had been studied in a generalizable way, so that we could learn when to follow guidelines and directives and when to try something else, was naive. Nevertheless, the chapter is not a total loss.
This post brings up the latest paper by Dan Kahan and his colleagues, Erica Dawson, Ellen Peters, and Paul Slovic: “Motivated Numeracy and Enlightened Self-Government,” Cultural Cognition Project, Working Paper No. 116. This paper strengthens the already strong arguments.
The experiment that was the subject of this paper was designed to test two opposing accounts of conflict over decision-relevant science. The first—the Science Comprehension Thesis (“SCT”)—attributes such conflicts to the limited capacity of the public to understand the significance of valid empirical evidence. The second—the Identity-protective Cognition Thesis (“ICT”)—sees a particular recurring form of group conflict as disabling the capacities that individuals have to make sense of decision-relevant science. When policy-relevant facts become identified as symbols of membership in and loyalty to affinity groups that figure in important ways in individuals’ lives, individuals will be motivated to engage empirical evidence and other information in a manner that more reliably connects their beliefs to the positions that predominate in their particular groups than to the positions that are best supported by the evidence.
Keith Holyoak, in his chapter in The Oxford Handbook of Thinking and Reasoning, explains that analogy is closely related to metaphor and related forms of symbolic expression that arise in everyday language (e.g., “the evening of life,” “the idea blossomed”), in literature, the arts, and cultural practices such as ceremonies. Like analogy in general, metaphors are characterized by an asymmetry between target (conventionally termed “tenor”) and source (“vehicle”) domains (e.g., the target/tenor in “the evening of life” is life, which is understood in terms of the source/vehicle of time of day). In addition, a mapping (the “grounds” for the metaphor) connects the source and target, allowing the domains to interact to generate a new conceptualization. Metaphors are a special kind of analogy, in that the source and target domains are always semantically distant, and the two domains are often blended rather than simply mapped (e.g., in “the idea blossomed,” the target is directly described in terms of an action term derived from the source). In addition, metaphors are often combined with other symbolic “figures,” especially metonymy (substitution of an associated concept). For example, “sword” is a metonymic expression for weaponry, derived from its ancient association as the prototypical weapon; “Raising interest rates is the Federal Reserve Board’s sword in the battle against inflation” extends the metonymy into metaphor.
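The tenor/vehicle/grounds structure can be made concrete with a minimal sketch. This is purely illustrative and not Holyoak’s notation; the dictionary keys, and the specific correspondence “evening maps to old age,” are my own reading of the example.

```python
# A toy representation of the metaphor "the evening of life".
# The tenor (target domain) is understood via the vehicle (source
# domain), through a grounds mapping that aligns their elements.
metaphor = {
    "tenor": "life",           # target domain: what is being described
    "vehicle": "time of day",  # source domain: what it is described as
    "grounds": {               # mapping connecting source to target
        "evening": "old age",  # late part of the day -> late part of life
    },
}

# Reading the metaphor means following the grounds mapping:
print(metaphor["grounds"]["evening"])  # old age
```

The point of the sketch is only that a metaphor is not a bare pair of domains: the grounds mapping is what lets the source domain generate a new conceptualization of the target.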
According to Keith Holyoak, the most important influence on analogy research in the cognitive-science tradition has been concerned with the representation of knowledge within computational systems. Holyoak credits philosopher Mary Hesse, who was in turn influenced by Aristotle’s discussions of analogy in scientific classification and by Max Black’s interactionist view of metaphor. Hesse placed great stress on the purpose of analogy as a tool for scientific discovery and conceptual change, and on the close connections between causal relations and analogical mapping. In the 1970s, work in artificial intelligence and psychology focused on the representation of complex knowledge of the sort used in scientific reasoning, problem solving, story comprehension, and other tasks that require structured knowledge. A key aspect of structured knowledge is that elements can be flexibly bound into the roles of relations. For example, “dog bit man” and “man bit dog” have the same elements and the same relation, but the role bindings have been reversed, radically altering the overall meaning. How the mind and brain accomplish role binding is thus a central problem to be solved by any psychological theory that involves structured knowledge, including any theory of analogy.
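The role-binding point can be sketched in code. This is a minimal illustration, not anything from Holyoak; the class name `Proposition` and the role names `agent` and `patient` are my own choices.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    """A relation whose arguments are bound to explicit roles."""
    relation: str
    agent: str    # who does the acting
    patient: str  # who is acted upon

dog_bit_man = Proposition(relation="bit", agent="dog", patient="man")
man_bit_dog = Proposition(relation="bit", agent="man", patient="dog")

# An unstructured "bag of symbols" cannot distinguish the two sentences:
# same relation, same elements.
bag1 = {dog_bit_man.relation, dog_bit_man.agent, dog_bit_man.patient}
bag2 = {man_bit_dog.relation, man_bit_dog.agent, man_bit_dog.patient}
assert bag1 == bag2

# A structured representation with role bindings does distinguish them.
assert dog_bit_man != man_bit_dog
```

The contrast between the two assertions is the whole point: any representation rich enough for analogy has to track which element fills which role, not merely which elements are present.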
This post is long in coming. It is surprising that analogy has not come up before in over 70 posts. I can certainly recall times when someone convinced me with a clever analogy, and then it turned out to be dead wrong. Nevertheless, we need analogy, and we are vulnerable to it. It seems to me that we sometimes try to turn our love of stories, anecdotal evidence, into analogies, generally with bad results. Keith Holyoak is one of the leaders in trying to understand how humans use analogy. This post and the next two will draw on his work in an article, “Thinking, Broad and Deep,” a May 2013 review of Surfaces and Essences: Analogy as the Fuel and Fire of Thinking by Douglas Hofstadter and Emmanuel Sander, and on his chapter in The Oxford Handbook of Thinking and Reasoning, 2012.