This post is derived from a review article: “The Role of Intuition in the Generation and Evaluation Stages of Creativity,” by Judit Pétervári, Magda Osman, and Joydeep Bhattacharya, which appeared in Frontiers in Psychology, September 2016, doi: 10.3389/fpsyg.2016.01420. It struck me that in all this blog’s posts, creativity had almost never come up. So I paired it with Edward O. Wilson’s 2017 book The Origins of Creativity, Liveright Publishing, New York. (See the posts Evolution for Everyone and Cultural Evolution for more from Edward O. Wilson. He is the ant guy. He is interesting, understandable, and forthright.)
Creativity is notoriously difficult to capture in a single definition. Pétervári et al. suggest that creativity is a process broadly similar to problem solving: in both, information is coordinated toward reaching a specific goal, and the information is organized in a novel, unexpected way. Problems that require creative solutions are ill-defined, primarily because multiple hypothetical solutions would satisfy the goals. Wilson sees creativity as extending beyond typical problem solving.
This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by current events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people you are arguing with is not a good idea. Dan Kahan (see my posts Cultural Cognition and Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond had actual accomplishments, and they could help guide all of us today. I should note also that I was unable to quickly find the original sources, so I am relying completely on: “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
This post looks at a paper, “Rational Hypocrisy: A Bayesian Analysis Based on Informal Argumentation and Slippery Slopes,” Cognitive Science 38 (2014) 1456–1467, written by Tage S. Rai and Keith J. Holyoak (posts Metaphor, Bidirectional Reasoning), that draws a connection between what may look like moral hypocrisy and the categories we select for cases with weak arguments, using the slippery slope argument as the lens. Moral hypocrisy is typically viewed as an ethical accusation: someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. The authors provide the following example:
“I respect the jury’s verdict. But I have concluded that the prison sentence given to Mr. Libby is excessive.” With these words, former President George W. Bush commuted the sentence of I. Lewis “Scooter” Libby, Jr., for obstruction of justice and leaking the identity of CIA operative Valerie Plame. Critics of the decision noted that Libby had actually received the minimum sentence allowable for his offense under the law and that many of Libby’s supporters, including the Bush administration, were actively pressing for mandatory minimum sentencing laws at a national level. Accordingly, critics of the decision saw it as a textbook case of moral hypocrisy: Different rules were being applied to Bush’s underling, Libby, than to everyone else in the United States.
The implicit assumption is that the hypocrite is being dishonest, or at least self-deceptive: the hypocrite must be aware (or should be aware) of the logical inconsistency and is therefore committing a falsehood. Rai and Holyoak have extended the analysis of Corner et al. concerning slippery slope arguments (post Slippery Slope) to moral hypocrisy, and suggest that the alleged hypocrite may be both honest and rational.
This post is based on the paper “The discovery and comparison of symbolic magnitudes,” which appeared in Cognitive Psychology 71 (2014) 27–54. This is a little different from one of Brunswik’s ideas: how good we are at determining sizes in the environment. Those might be called perceptual magnitudes. Symbolic magnitudes seem to be ones taken from memory and the immediate context.
We have sophisticated abilities to learn and make judgments based on relative magnitude. Magnitude comparisons are critical in making choices (e.g., which of two products is more desirable?), making social evaluations (e.g., which person is friendlier?), and in many other forms of appraisal (e.g., who can run faster, this bear or me?). In the paper, the authors seek to explain where subjective magnitudes come from.
For a few types of symbolic comparisons, such as the numerical magnitudes of digits, it may indeed be the case that each object has a pre-stored magnitude in long-term memory. But pre-stored magnitudes are implausible for the wide range of dimensions on which people can make symbolic comparisons, especially in the interpersonal and social realm (e.g., intelligence, friendliness, religiosity, conservatism). Magnitudes are more likely derived, context-dependent features that are computed as needed in response to a query.
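The idea of derived, query-dependent magnitudes can be sketched in a few lines of Python. Everything here (the items, the feature values, the query weightings) is my own made-up illustration of the general idea, not the authors’ actual model:

```python
# Sketch: magnitudes computed on demand from stored features, rather
# than pre-stored. All names and numbers below are hypothetical.

ANIMALS = {
    "bear":     {"mass": 0.9, "speed": 0.6, "ferocity": 0.9},
    "squirrel": {"mass": 0.1, "speed": 0.5, "ferocity": 0.1},
}

# Each query ("which is bigger?", "which is more dangerous?") brings
# its own weighting of the stored features; no single magnitude is
# kept in memory ahead of time.
QUERY_WEIGHTS = {
    "size":   {"mass": 1.0, "speed": 0.0, "ferocity": 0.0},
    "danger": {"mass": 0.4, "speed": 0.2, "ferocity": 0.4},
}

def derived_magnitude(item, query):
    """Compute a magnitude for `item` on demand for the given query."""
    weights = QUERY_WEIGHTS[query]
    return sum(weights[f] * v for f, v in ANIMALS[item].items())

def compare(a, b, query):
    """Return whichever item has the greater derived magnitude."""
    return a if derived_magnitude(a, query) > derived_magnitude(b, query) else b

print(compare("bear", "squirrel", "danger"))  # bear
```

The point of the sketch is only that the same stored features can yield different magnitudes depending on what is asked, which is what “computed as needed in response to a query” amounts to.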
This post is based on a doctoral dissertation: “Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach,” written by Thomas Stemmler at the University of Würzburg. Stemmler does a good job of fitting together some ideas that I have been unable to fit together. Ideas of Haidt, Glöckner, Lerner, and Holyoak are notably connected. He conducted five experiments examining guilt and cooperation to test, in the simplest terms, the hypothesis that making moral judgments is closer to making an aesthetic judgment than to reasoning about the moral justifications of an action, and that moral intuitions come from moral emotions. The hypothesis is based on Jonathan Haidt’s idea that the role of reasoning is literally to provide reasons (or arguments) for the intuitively made judgment if there is a need to communicate it. Part of the hypothesis is also that emotional intuitions in moral decision-making are the result of compensatory information processing that follows principles of parallel constraint satisfaction (PCS). I am going to largely skip over the results of the experiments, but note that Stemmler believes that they support his hypothesis. He notes that guilt is only one emotion, but points out similarly confirming results for disgust.
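For readers who like to see the mechanism, here is a toy parallel constraint satisfaction network in Python. The nodes, weights, and update rule are my own hypothetical illustration of how a PCS network settles on a coherent judgment; this is not Stemmler’s actual model:

```python
# Toy PCS network: nodes with activations, symmetric excitatory and
# inhibitory links, and iterative updating until the network settles.
# All node names and weights are hypothetical illustrations.

NODES = ["cooperate", "defect", "anticipated_guilt", "self_interest"]

# Symmetric weights: positive = mutual support, negative = inhibition.
WEIGHTS = {
    ("cooperate", "defect"): -1.0,            # the two options compete
    ("anticipated_guilt", "cooperate"): 0.7,  # guilt supports cooperating
    ("anticipated_guilt", "defect"): -0.7,    # and inhibits defecting
    ("self_interest", "defect"): 0.4,
}

def weight(a, b):
    return WEIGHTS.get((a, b), WEIGHTS.get((b, a), 0.0))

def settle(evidence, steps=100, decay=0.1):
    """Iteratively update activations until the network settles."""
    act = {n: 0.0 for n in NODES}
    for _ in range(steps):
        new = {}
        for n in NODES:
            net = evidence.get(n, 0.0) + sum(weight(n, m) * act[m] for m in NODES)
            # Interactive-activation style update, clamped to [-1, 1].
            if net > 0:
                value = act[n] * (1 - decay) + net * (1 - act[n])
            else:
                value = act[n] * (1 - decay) + net * (act[n] + 1)
            new[n] = max(-1.0, min(1.0, value))
        act = new
    return act

# With strong anticipated guilt as input, the network settles on
# cooperation even though self-interest favors defection.
act = settle({"anticipated_guilt": 0.5, "self_interest": 0.3})
print(act["cooperate"] > act["defect"])  # True
```

The “compensatory” flavor is visible here: no single piece of evidence decides the outcome; all constraints push and pull in parallel until the most coherent overall pattern of activation wins.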
This post is based on a paper presented at the 2013 Annual Conference of the Cognitive Science Society, “Justified True Belief Triggers False Recall of ‘Knowing,’” by Derek Powell, Zachary Horne, Angel Pinillos, and Keith J. Holyoak. People’s beliefs are the primary drivers of their actions, yet these beliefs are often uncertain—the products of limited information about the world and of interconnections with other (often uncertain) beliefs.
Keith Holyoak, in his chapter in The Oxford Handbook of Thinking and Reasoning, explains that analogy is closely related to metaphor and related forms of symbolic expression that arise in everyday language (e.g., “the evening of life,” “the idea blossomed”), in literature, the arts, and cultural practices such as ceremonies. Like analogy in general, metaphors are characterized by an asymmetry between target (conventionally termed “tenor”) and source (“vehicle”) domains (e.g., the target/tenor in “the evening of life” is life, which is understood in terms of the source/vehicle of time of day). In addition, a mapping (the “grounds” for the metaphor) connects the source and target, allowing the domains to interact to generate a new conceptualization. Metaphors are a special kind of analogy, in that the source and target domains are always semantically distant, and the two domains are often blended rather than simply mapped (e.g., in “the idea blossomed,” the target is directly described in terms of an action term derived from the source). In addition, metaphors are often combined with other symbolic “figures,” especially metonymy (substitution of an associated concept). For example, “sword” is a metonymic expression for weaponry, derived from its ancient association as the prototypical weapon; “Raising interest rates is the Federal Reserve Board’s sword in the battle against inflation” extends the metonymy into metaphor.
According to Keith Holyoak, the most important influence on analogy research in the cognitive-science tradition has been work on the representation of knowledge within computational systems. Holyoak credits philosopher Mary Hesse, who was in turn influenced by Aristotle’s discussions of analogy in scientific classification and by Black’s interactionist view of metaphor. Hesse placed great stress on the purpose of analogy as a tool for scientific discovery and conceptual change, and on the close connections between causal relations and analogical mapping. In the 1970s, work in artificial intelligence and psychology focused on the representation of complex knowledge of the sort used in scientific reasoning, problem solving, story comprehension, and other tasks that require structured knowledge. A key aspect of structured knowledge is that elements can be flexibly bound into the roles of relations. For example, “dog bit man” and “man bit dog” have the same elements and the same relation, but the role bindings have been reversed, radically altering the overall meaning. How the mind and brain accomplish role binding is thus a central problem to be solved by any psychological theory that involves structured knowledge, including any theory of analogy.
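The “dog bit man” example can be made concrete in a few lines of Python, showing why a flat set of elements loses the meaning that explicit role bindings preserve. The representation scheme here is my own simplification, not any particular theory’s formalism:

```python
# Role binding: the same relation and the same elements, but with the
# fillers bound to different roles, mean different things. A flat
# "bag of elements" representation cannot tell them apart.

dog_bit_man = ("bite", {"agent": "dog", "patient": "man"})
man_bit_dog = ("bite", {"agent": "man", "patient": "dog"})

def elements(prop):
    """Flat representation: the relation plus its unordered fillers."""
    relation, roles = prop
    return {relation} | set(roles.values())

# Same relation, same elements...
print(elements(dog_bit_man) == elements(man_bit_dog))  # True
# ...but reversed role bindings, hence a different proposition.
print(dog_bit_man == man_bit_dog)                      # False
```

Any mechanism for analogy has to operate on something like the role-bound form, since mapping “dog bit man” onto a new case means aligning agents with agents and patients with patients, not just matching up loose elements.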
This post is long in coming. It is surprising that analogy has not come up before in over 70 posts. I can certainly recall times when someone convinced me with a clever analogy, and then it turned out to be dead wrong. Nevertheless, we need analogy and we are vulnerable to analogy. It seems to me that we try to turn our love of stories, anecdotal evidence, into analogies, generally to bad results. Keith Holyoak is one of the leaders in trying to understand how humans use analogy. This post and the next two posts will use his work in an article, “Thinking, Broad and Deep,” a May 2013 review of Surfaces and Essences: Analogy as the Fuel and Fire of Thinking by Douglas Hofstadter and Emmanuel Sander, and his chapter in The Oxford Handbook of Thinking and Reasoning, 2012.
The idea of bidirectional reasoning seems to have really gotten going with a 1999 paper entitled “Bidirectional Reasoning in Decision Making by Constraint Satisfaction,” written by Keith J. Holyoak and Dan Simon and published in the Journal of Experimental Psychology, 1999, Vol. 128, No. 1, pp. 3–31.