This book, Nervous States: Democracy and the Decline of Reason (2019) by William Davies, tries to explain the state we are in. The end of truth, the domination of feelings, the end of expertise all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies’ book, argue with some of them, and discuss what Davies says we can do about it. Devaluing reason is a big deal for decision making.
This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by recent events. My opinion on gun control is probably readily apparent. But if it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy or questioning the sincerity of the people you are arguing with is not a good idea. Dan Kahan (see my posts Cultural Cognition and Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond had concrete accomplishments that could help guide all of us today. I should also note that I was unable to quickly find the original sources, so I am relying completely on “Kenneth R. Hammond’s contributions to the study of judgment and decision making,” written by Mandeep K. Dhami and Jeryl L. Mumpower, which appeared in Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
Why do almost all people tell the truth in ordinary everyday life? […] The reason is, firstly, because it is easier; for lying demands invention, dissimulation, and a good memory.
(Friedrich Nietzsche, Human, All Too Human: A Book for Free Spirits, 1878, p. 54)
This post is based on the paper “‘I Can See It in Your Eyes’: Biased Processing and Increased Arousal in Dishonest Responses,” authored by Guy Hochman, Andreas Glöckner, Susan Fiedler, and Shahar Ayal, which appeared in the Journal of Behavioral Decision Making, December 2015.
This post looks at the paper “Rational Hypocrisy: A Bayesian Analysis Based on Informal Argumentation and Slippery Slopes,” Cognitive Science 38 (2014) 1456–1467, written by Tage S. Rai and Keith J. Holyoak (posts Metaphor, Bidirectional Reasoning). Using the slippery slope argument, it draws a connection between what may look like moral hypocrisy and the categories we select for cases with weak arguments. Moral hypocrisy is typically viewed as an ethical accusation: someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. The authors provide the following example:
“I respect the jury’s verdict. But I have concluded that the prison sentence given to
Mr. Libby is excessive.” With these words, former President George W. Bush commuted
the sentence of I. Lewis “Scooter” Libby, Jr., for obstruction of justice and leaking the
identity of CIA operative Valerie Plame. Critics of the decision noted that Libby had actually received the minimum sentence allowable for his offense under the law and that many of Libby’s supporters, including the Bush administration, were actively pressing for mandatory minimum sentencing laws at a national level. Accordingly, critics of the decision saw it as a textbook case of moral hypocrisy: Different rules were being applied to Bush’s underling, Libby, than to everyone else in the United States.
The implicit assumption is that the hypocrite is being dishonest, or at least self-deceptive, because the hypocrite must be aware (or should be aware) of the logical inconsistency and is therefore committing a falsehood. Rai and Holyoak have extended the analysis of Corner et al. concerning slippery slope arguments (post Slippery Slope) to moral hypocrisy, and they suggest that the alleged hypocrite may be both honest and rational.
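Rai and Holyoak’s point that inconsistent-looking verdicts need not be irrational has a simple Bayesian flavor. The sketch below is my own toy illustration, not the authors’ model, and every probability in it is a hypothetical assumption: when two cases present identical surface evidence, an evaluator who assigns them different priors of belonging to the relevant moral category (say, “deserves the mandatory minimum”) can coherently reach different verdicts.

```python
# Toy Bayesian sketch (my illustration, not Rai and Holyoak's model).
# All numbers are hypothetical.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, where H = 'case belongs to the punishable category'."""
    joint_h = p_e_given_h * prior
    joint_not_h = p_e_given_not_h * (1.0 - prior)
    return joint_h / (joint_h + joint_not_h)

# Identical surface evidence for both cases:
p_e_given_h = 0.8      # P(evidence | case belongs to the category)
p_e_given_not_h = 0.3  # P(evidence | case does not belong)

# Different priors of category membership yield different verdicts
# from the same evidence -- with no logical inconsistency required.
p_typical = posterior(0.7, p_e_given_h, p_e_given_not_h)    # ~0.86
p_contested = posterior(0.2, p_e_given_h, p_e_given_not_h)  # ~0.40
```

On this reading, the disagreement over the Libby commutation is a disagreement about priors, about which category the case falls into, rather than an inconsistent application of a shared rule.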
This post is the first after a few technical issues. Some of my decision making has been suboptimal, but we will keep trying. The post is based on a commentary, “Is Anything Sacred Anymore?,” that appeared in Psychological Inquiry, 23: 155–161, 2012. The authors are Peter H. Ditto, Brittany Liu, and Sean P. Wojcik. The commentary examines the paper “The Moral Dyad: A Fundamental Template Unifying Moral Judgment,” by Gray, Waytz, and Young, which appeared in Psychological Inquiry: An International Journal for the Advancement of Psychological Theory, 23:2, 206–215. I find commentary articles easier to understand since they have to examine two or more positions.
Ditto et al. agree with Gray et al. about the central role of mind perception in moral judgment and are intrigued by the idea that moral evaluation requires not just an intentional moral agent but also a suffering moral patient, and moreover that this dyadic structure of agent and patient, intention and suffering, is the center of morality. They do not agree that interpersonal harm is the very meaning of morality, that is, that no act can be morally offensive unless it is perceived to result in suffering.
This post is based on a doctoral dissertation, “Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach,” written by Thomas Stemmler at the University of Würzburg. Stemmler does a good job of fitting together some ideas that I have been unable to fit together; ideas of Haidt, Glöckner, Lerner, and Holyoak are notably connected. He conducted five experiments examining guilt and cooperation to test, in the simplest terms, the hypothesis that making a moral judgment is closer to making an aesthetic judgment than to reasoning about the moral justifications of an action, and that moral intuitions come from moral emotions. The hypothesis is based on Jonathan Haidt’s idea that the role of reasoning is literally to provide reasons (or arguments) for the intuitively made judgment when there is a need to communicate it. Part of the hypothesis is also that emotional intuitions in moral decision making are the result of compensatory information processing that follows principles of parallel constraint satisfaction (PCS). I am going to largely skip over the results of the experiments, but note that Stemmler believes they support his hypothesis. He notes that guilt is only one emotion, but points to similarly confirming results for disgust.
This post is based on a paper by several scientists at the Max Planck Institute for Research on Collective Goods. It is a merging of several things that I have been interested in over the years: social psychology, public good economics, city planning, and epidemiology (at least in a metaphoric sense). Politicians loved the simplicity of “broken windows,” and I was willing as a city planner to use it if it got more resources for what I wanted. Being tough on crime was an easier sell than normal city planning administration.
This post is based on the paper: “Can We Trust Intuitive Jurors? Standards of Proof and the Probative Value of Evidence in Coherence-Based Reasoning,” written by Andreas Glöckner and Christoph Engel, Journal of Empirical Legal Studies, Volume 10, Issue 2, 230–252, June 2013. Standards of proof discussed in the article are not included in this post.
Glöckner and Engel explain that jury members have a difficult task. They have to make decisions based on multiple pieces of probabilistic evidence. These pieces of information are usually contradictory, essentially always incomplete, presented in multiple formats (making them hard to compare and integrate), and introduced by parties clearly intending to bias the jury. How do jury members then make meaningful decisions? Glöckner and Engel suggest that there is mounting evidence that most people do not mathematically integrate evidence. Their behavior is better explained by sense making and constructing coherent stories from the evidence. Jurors attempt to create complete narratives from the pieces of evidence they hear.
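Coherence-based reasoning of this kind is often modeled as a parallel constraint satisfaction (PCS) network. The sketch below is a generic, minimal version of that idea, my own construction rather than Glöckner and Engel’s exact model, and all weights and inputs are hypothetical: competing verdict nodes inhibit each other, evidence nodes support the verdict they favor, and activations are updated iteratively until the network settles into a coherent interpretation.

```python
import numpy as np

def settle(W, external, steps=300, decay=0.1, floor=-1.0, ceiling=1.0):
    """Iterate a standard interactive-activation update until the network settles."""
    a = np.zeros(len(W))
    for _ in range(steps):
        net = W @ a + external
        # positive net input pushes activation toward the ceiling,
        # negative net input pushes it toward the floor
        growth = np.where(net > 0, net * (ceiling - a), net * (a - floor))
        a = np.clip(a * (1.0 - decay) + growth, floor, ceiling)
    return a

# Node 0 = "guilty", node 1 = "not guilty", nodes 2-4 = evidence items
# (two incriminating, one exculpatory). Weights are hypothetical.
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = -1.0                      # competing verdicts inhibit each other
for evid, verdict, w in [(2, 0, 0.4), (3, 0, 0.4), (4, 1, 0.4)]:
    W[evid, verdict] = W[verdict, evid] = w   # evidence supports its verdict
external = np.array([0.0, 0.0, 0.1, 0.1, 0.1])  # only evidence gets outside input

a = settle(W, external)
# With two incriminating items against one exculpatory one, the network settles
# with "guilty" (node 0) far more active than "not guilty" (node 1).
```

Note the bidirectional links: once the “guilty” interpretation takes the lead, it feeds activation back to the evidence that supports it and suppresses the exculpatory item. That feedback is the coherence shift in the probative value of evidence that Glöckner and Engel study.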
Bruce Hood is an experimental psychologist, and in Supersense he argues that beliefs in the supernatural are a consequence of reasoning processes about natural properties and events in our world. This includes a mind designed for detecting patterns and inferring structure where there may be none. Our naive theories form the basis of our supernatural beliefs, and religion, culture, and experience simply work to reinforce what we intuitively hold to be correct. One example is the common belief that we can tell when someone is staring at us. Hood says that supernatural thinking is simply the natural consequence of failing to match our intuitions with the true reality of the world.
Kluge is a readable book written by psychologist Gary Marcus and designed for a popular audience. The way it is written makes it a good choice for outlining the biases and other shortcomings of our minds. Of course, a kluge is old computer slang for a patch, or for something that is not elegant but still works. Possibly, I should contrast Kluge with In Pursuit of Elegance by Matthew May. Later. Marcus starts with memory.