This is the third and final post looking at William Davies' book Nervous States: Democracy and the Decline of Reason. At the end of the book, Davies provides some well-thought-out ideas for getting out of this mess. First, Davies notes that there is one problem confronting humanity that may never go away, and which computers do nothing to alleviate: how to make promises. A promise made to a child or a public audience has a binding power. It can be broken, but breaking it is a breach that can leave deep emotional and cultural wounds. Davies states:
“Whether we like it or not, the starting point for this venture will be the same as it was for Hobbes: the modern state, issuing laws backed by sovereign power. It is difficult to conceive how promises can be made at scale, in a complex modern society, without the use of contracts, rights and statutes underpinned by sovereign law. Only law really has the ability to push back against the rapidly rising tide of digital algorithmic power. It remains possible to make legal demands on the owners and controllers of machines, regardless of how sophisticated those machines are.”
This is the second of three posts discussing William Davies' book Nervous States: Democracy and the Decline of Reason. In it, I argue with a couple of the scenarios Davies presents.
Markets and Evolution
Davies presents Hayek as the man who believes in free markets above all else, and who has helped bring us to this point of not agreeing on reality. When I read Hayek (The Road to Serfdom), my takeaway was that free markets with the right stable rules in place are the best system for everyone; unfortunately, determining the right stable rules is difficult, and it is the job of government. Hayek seems to have taken Adam Smith's invisible hand and run with it. David Sloan Wilson, in This View of Life: Completing the Darwinian Revolution, makes clear that the invisible hand only works at one scale of a market (see the posts Evolution for Everyone and Multilevel Selection Theory).
William Davies' book Nervous States: Democracy and the Decline of Reason (2019) tries to explain the state we are in. The end of truth, the domination of feelings, and the end of expertise all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies' book, argue with some of those points, and present what Davies says we can do about it. Devaluing reason is a big deal for decision making.
This post is largely a continuation of the Kenneth R. Hammond post, but one prompted by current events. My opinion on gun control is probably readily apparent. If it is not, let me say that I go crazy when mental health is bandied about as the reason for our school shootings, or when we hear that arming teachers is a solution to anything. However, going crazy, or questioning the sincerity of the people you are arguing with, is not a good idea. Dan Kahan (see my posts Cultural Cognition and Curiosity, or his blog Cultural Cognition) has some great ideas on this, but Ken Hammond had concrete accomplishments that could help guide all of us today. I should also note that I was unable to quickly find the original sources, so I am relying completely on "Kenneth R. Hammond's contributions to the study of judgment and decision making," by Mandeep K. Dhami and Jeryl L. Mumpower, Judgment and Decision Making, Vol. 13, No. 1, January 2018, pp. 1–22.
Why do almost all people tell the truth in ordinary everyday life? […] The reason is, firstly because it is easier; for lying demands invention, dissimulation, and a good memory. (Friedrich Nietzsche, Human, All Too Human: A Book for Free Spirits, 1878, p. 54)
“I just fired the head of the F.B.I. He was crazy, a real nut job,” Mr. Trump said, according to the document, which was read to The New York Times by an American official. “I faced great pressure because of Russia. That’s taken off.”
Mr. Trump added, “I’m not under investigation.” (Pres. Donald Trump, discussion with Russian diplomats, May 10, 2017).
This post is based on the paper "'I can see it in your eyes': Biased Processing and Increased Arousal in Dishonest Responses," by Guy Hochman, Andreas Glöckner, Susan Fiedler, and Shahar Ayal, which appeared in the Journal of Behavioral Decision Making, December 2015.
This post looks at a paper, "Rational Hypocrisy: A Bayesian Analysis Based on Informal Argumentation and Slippery Slopes," Cognitive Science 38 (2014) 1456–1467, written by Tage S. Rai and Keith J. Holyoak (posts Metaphor, Bidirectional Reasoning), that uses the slippery slope argument to draw a connection between what may look like moral hypocrisy and the categories we select for cases with weak arguments. Moral hypocrisy is typically viewed as an ethical accusation: someone is applying different moral standards to essentially identical cases, dishonestly claiming that one action is acceptable while otherwise equivalent actions are not. The authors provide the following example:
“I respect the jury’s verdict. But I have concluded that the prison sentence given to Mr. Libby is excessive.” With these words, former President George W. Bush commuted the sentence of I. Lewis “Scooter” Libby, Jr., for obstruction of justice and leaking the identity of CIA operative Valerie Plame. Critics of the decision noted that Libby had actually received the minimum sentence allowable for his offense under the law, and that many of Libby’s supporters, including the Bush administration, were actively pressing for mandatory minimum sentencing laws at a national level. Accordingly, critics of the decision saw it as a textbook case of moral hypocrisy: different rules were being applied to Bush’s underling, Libby, than to everyone else in the United States.
The implicit assumption is that the hypocrite is being dishonest, or at least self-deceptive, because the hypocrite must be aware (or should be aware) of the logical inconsistency and is therefore committing a falsehood. Rai and Holyoak extend the analysis of Corner et al. concerning slippery slope arguments (post Slippery Slope) to moral hypocrisy and suggest that the alleged hypocrite may be both honest and rational.
This post is the first after a few technical issues. Some of my decision making has been suboptimal, but we will keep trying. The post is based on a commentary, "Is Anything Sacred Anymore?," by Peter H. Ditto, Brittany Liu, and Sean P. Wojcik, that appeared in Psychological Inquiry, 23: 155–161, 2012. The commentary examines the paper "The Moral Dyad: A Fundamental Template Unifying Moral Judgment," by Gray, Waytz, and Young, which appeared in Psychological Inquiry: An International Journal for the Advancement of Psychological Theory, 23:2, 206–215. I have found commentary articles easier to understand since they have to examine two or more positions.
Ditto et al. agree with Gray et al. about the central role of mind perception in moral judgment, and they are intrigued by the idea that moral evaluation requires not just an intentional moral agent but also a suffering moral patient, and moreover that this dyadic structure of agent and patient, intention and suffering, is the center of morality. They do not agree that interpersonal harm is the very meaning of morality, that no act can be morally offensive unless it is perceived to result in suffering.
This post is based on a doctoral dissertation, "Just do it! Guilt as a moral intuition to cooperate–A parallel constraint satisfaction approach," written by Thomas Stemmler at the University of Würzburg. Stemmler does a good job of fitting together some ideas that I have been unable to fit together; the ideas of Haidt, Glöckner, Lerner, and Holyoak are notably connected. He conducted five experiments on guilt and cooperation to test, in the simplest terms, the hypothesis that making a moral judgment is closer to making an aesthetic judgment than to reasoning about the moral justifications of an action, and that moral intuitions come from moral emotions. The hypothesis is based on Jonathan Haidt's idea that the role of reasoning is literally to provide reasons (or arguments) for the intuitively made judgment if there is a need to communicate it. Part of the hypothesis is also that emotional intuitions in moral decision making are the result of compensatory information processing that follows the principles of parallel constraint satisfaction (PCS). I am going to largely skip over the results of the experiments, but note that Stemmler believes they support his hypothesis. He notes that guilt is only one emotion, but points out similarly confirming results for disgust.
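To give a flavor of how a PCS model works, here is a minimal sketch in Python. The update rule follows the general form used in connectionist PCS models; the node labels, weights, and inputs are my own illustrative assumptions, not values from Stemmler's dissertation.

```python
# Minimal parallel constraint satisfaction (PCS) sketch.
# Nodes hold activations in [-1, 1]; symmetric links encode
# support (+) or competition (-); the network is updated until
# it settles into a coherent state.
import numpy as np

def settle(weights, external, floor=-1.0, ceil=1.0,
           decay=0.05, steps=200):
    """Iteratively update node activations until the network settles."""
    a = np.zeros(len(weights))
    for _ in range(steps):
        net = weights @ a + external          # summed input to each node
        # Positive input grows activation toward the ceiling,
        # negative input shrinks it toward the floor.
        grow = np.where(net >= 0, net * (ceil - a), net * (a - floor))
        a = np.clip(a * (1 - decay) + grow, floor, ceil)
    return a

# Illustrative nodes: 0 = "cooperate", 1 = "defect",
#                     2 = guilt feeling, 3 = selfish payoff cue.
w = np.zeros((4, 4))
w[0, 1] = w[1, 0] = -0.2    # the two options inhibit each other
w[0, 2] = w[2, 0] = 0.1     # guilt supports cooperating
w[1, 3] = w[3, 1] = 0.1     # payoff cue supports defecting
external = np.array([0.0, 0.0, 0.3, 0.1])  # guilt cue is stronger here

a = settle(w, external)
# With the stronger guilt input, "cooperate" should settle above "defect".
print(a[0] > a[1])
```

The point of the sketch is the mechanism, not the numbers: the intuition emerges from all constraints being satisfied in parallel, and reasoning about it comes afterward.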
This post is based on a paper by several scientists at the Max Planck Institute for Research on Collective Goods. It merges several things that have interested me over the years: social psychology, public goods economics, city planning, and epidemiology (at least in a metaphoric sense). Politicians loved the simplicity of "broken windows," and as a city planner I was willing to use it if it got more resources for what I wanted. Being tough on crime was an easier sell than normal city planning administration.
This post is based on the paper "Can We Trust Intuitive Jurors? Standards of Proof and the Probative Value of Evidence in Coherence-Based Reasoning," written by Andreas Glöckner and Christoph Engel, Journal of Empirical Legal Studies, Volume 10, Issue 2, 230–252, June 2013. The standards of proof discussed in the article are not covered in this post.
Glöckner and Engel explain that jury members have a difficult task. They have to make decisions based on multiple pieces of probabilistic evidence. These pieces of information are usually contradictory, essentially always incomplete, presented in multiple formats (making them hard to compare and integrate), and introduced by parties clearly intending to bias the jury. How, then, do jury members make meaningful decisions? Glöckner and Engel suggest there is mounting evidence that most people do not mathematically integrate evidence; their behavior is better explained by sense making and by constructing coherent stories from the evidence. Jurors attempt to create complete narratives from the pieces of evidence they hear.
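The coherence-based account can be illustrated with a toy network. This is my own sketch, not the authors' simulation, and the labels and weights are invented for illustration: once a "guilty" interpretation starts to win, even evidence that received no support of its own gets pulled into line with the emerging verdict.

```python
# Toy illustration of a coherence shift in juror reasoning:
# evidence nodes and verdict nodes mutually constrain each other,
# and the ambiguous evidence is re-evaluated to fit the winning verdict.
import numpy as np

def settle(weights, external, decay=0.05, steps=300):
    """Update activations in [-1, 1] until the network settles."""
    a = np.zeros(len(weights))
    for _ in range(steps):
        net = weights @ a + external
        grow = np.where(net >= 0, net * (1.0 - a), net * (a + 1.0))
        a = np.clip(a * (1 - decay) + grow, -1.0, 1.0)
    return a

# Illustrative nodes: 0 = "guilty", 1 = "not guilty",
#                     2 = strong incriminating evidence,
#                     3 = ambiguous evidence.
w = np.zeros((4, 4))
w[0, 1] = w[1, 0] = -0.3     # the two verdicts compete
w[0, 2] = w[2, 0] = 0.15     # strong evidence supports "guilty"
w[0, 3] = w[3, 0] = 0.05     # ambiguous evidence is weakly linked
ext = np.array([0.0, 0.0, 0.4, 0.0])  # only the strong evidence is asserted

a = settle(w, ext)
# The ambiguous evidence node received no external input, yet it is
# pulled positive by the winning "guilty" interpretation:
print(a[3] > 0)
```

This is the troubling property the paper's title asks about: the perceived probative value of a piece of evidence is not fixed, but shifts to cohere with the story the juror is already building.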