This is the third and final post looking at William Davies' book Nervous States: Democracy and the Decline of Reason. At the end of the book, Davies provides some ideas for getting out of this mess, and I believe they are well thought out. First, Davies notes that there is one problem confronting humanity that may never go away, and which computers do nothing to alleviate: how to make promises. A promise made to a child or to a public audience has a binding power. It can be broken, but breaking it is a breach that can leave deep emotional and cultural wounds. Davies states:
“Whether we like it or not, the starting point for this venture will be the same as it was for Hobbes: the modern state, issuing laws backed by sovereign power. It is difficult to conceive how promises can be made at scale, in a complex modern society, without the use of contracts, rights and statutes underpinned by sovereign law. Only law really has the ability to push back against the rapidly rising tide of digital algorithmic power. It remains possible to make legal demands on the owners and controllers of machines, regardless of how sophisticated those machines are.”
This is the second of three posts discussing William Davies' book Nervous States: Democracy and the Decline of Reason. In it, I argue with a couple of the scenarios Davies presents.
Markets and Evolution
Davies presents Hayek as the man who believed in free markets above all else, and whose ideas have helped us reach this point of not agreeing on reality. When I read Hayek (The Road to Serfdom), my takeaway was that free markets with the right stable rules in place are the best system for everyone. Unfortunately, determining the right stable rules is difficult, and it is the job of government. Hayek seems to have taken Adam Smith's invisible hand and run with it. David Sloan Wilson, in This View of Life: Completing the Darwinian Revolution, makes clear that the invisible hand only works at one scale of a market (see posts Evolution for Everyone and Multilevel Selection Theory).
This book, Nervous States: Democracy and the Decline of Reason (2019) by William Davies, tries to explain the state we are in. The end of truth, the domination of feelings, the end of expertise: all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies' book, argue with some of those points, and present what Davies says we can do about it. Devaluing reason is a big deal for decision making.
This post is inspired by The Attention Merchants, Tim Wu, Vintage Books, 2017, New York. Decision making is not a front-line issue in the book, but it is clear that we cannot control our decision making if we cannot control our attention. The book begins as a history of what has grabbed our attention: from newspapers and posters, to radio, to television, to computers and video games, to the internet and its vehicles, including our present attention grabber, the cell phone. Of course, each attention platform has ultimately had to make money, and advertising has been the dominant path chosen. Advertising is the villain only to the extent that it puts considerable resources into effectively capturing our attention. But we do not check our email so often or play video games so long because of advertising. There is definitely some behavioral conditioning going on. Wu mentions that video games can even: “induce a ‘flow state’, that form of contentment, of optimal experience, described by the cognitive scientist Mihaly Csikszentmihalyi, in which people feel ‘strong, alert, in effortless control, unselfconscious, and at the peak of their abilities.’”
In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision making process, thus lending support to models of the mind as a device that computes with probabilistic estimates and probability distributions.
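As a rough illustration of what "computed continuously, online" could mean, here is a minimal sketch of my own construction (not from the paper): confidence treated as a posterior probability that is updated after every piece of incoming evidence, rather than computed once at the end. The two hypotheses and the log-likelihood-ratio samples are assumptions for the sake of the example.

```python
import math

def online_confidence(samples, prior=0.5):
    """Return the running confidence in hypothesis H1 after each evidence
    sample, where each sample is the log-likelihood ratio
    log P(e|H1)/P(e|H2). Updating in log-odds space keeps each step a
    simple addition."""
    log_odds = math.log(prior / (1 - prior))
    trace = []
    for llr in samples:
        log_odds += llr                               # Bayesian update
        trace.append(1 / (1 + math.exp(-log_odds)))   # back to probability
    return trace

# Mostly supporting evidence, with one conflicting sample in the middle:
trace = online_confidence([0.4, 0.4, -0.1, 0.4, 0.4])
# Confidence rises with supporting evidence and dips at the conflicting
# sample; a readout is available at every moment of the process.
```

The point of the sketch is that confidence is not an afterthought: a graded estimate exists at every step, which is the kind of picture the authors' conclusion supports.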
The Embodied Mind
One such explanation is that of predictive processing and the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave together this concept. Our minds blend top-down and bottom-up processing, where error messages and the effort to fix those errors make it possible for us to engage the world. According to the embodied mind model, our minds do not just reside in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara is a robot without any brain imitating human walking nearly perfectly (video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. Estimating the certainty of our own prediction error signals is, as Clark notes: “clearly a delicate and tricky business. For it is the prediction error signal that…gets to ‘carry the news’.”
Now the confidence heuristic is not the only thing Trump takes advantage of, but we will leave the others for another time. I will also avoid the question of whether or not Trump is actually confident. So what is the relationship between confidence and decision making? Daniel Kahneman in Thinking, Fast and Slow on page 13 describes:
a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.
Confidence is defined as our degree of belief that a certain thought or action is correct. There is confidence in your own individual decisions or perceptions, and then there is between-person confidence, where you defer your own decision making to someone else.
Why am I thinking of confidence? An article by Cass Sunstein, “Donald Trump is Amazing. Here’s the Science to Prove It.”, explains it well. It appeared in Bloomberg Opinion (Politics & Policy), October 18, 2018.
I discovered a few months ago that I have celiac disease, and accordingly I am on a gluten-free diet. Compared to most conditions discovered in one's late sixties, celiac disease seems almost inconsequential. However, it fits into the idea of prediction error minimization. In effect, the environment has changed and I need to change my predictions. Bread and beer are now bad. My automatic, intuitive prediction machine has not been getting it right. It is disorienting. I can no longer "See food, eat food." I can change the environment at home, but in the wider world I need to be aware. My brain needs to dedicate perpetual, and at least for now conscious, effort to this cause. It is almost as if I became instantly even dumber. It makes me more self-absorbed in social settings that involve food. Not known for my social skills, I have been a good listener, but now not so much. On my Dad's 94th birthday, I ate a big piece of German chocolate cake, enjoyed it thoroughly, and then remembered that it was not allowed. In my particular case, I do not get sick or nauseated when I make such a mistake, so my commitment is always under threat. This demands an even larger share of my brain to stay compliant. My main incentive to comply is those photos of my scalloped small intestine. I note that I was diagnosed after years of trying to figure out my low ferritin levels. (It will be extremely disappointing if I find that my ferritin is still low.)
This post is based on a paper written by Andy Clark, author of Surfing Uncertainty (See Paper Predictive Processing for a fuller treatment.), “A nice surprise? Predictive processing and the active pursuit of novelty,” that appeared in Phenomenology and the Cognitive Sciences, pp. 1-14. DOI: 10.1007/s11097-017-9525-z. For me this is a chance to learn how Andy Clark has polished up his arguments since his book. It also strikes me as connected to my recent posts on Curiosity and Creativity.
Clark and Friston (see post The Prediction Machine) depict human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulation. But we know that we are attracted to the unexpected. We humans often actively seek out surprising events, deliberately pursuing novel and exciting streams of sensory stimulation. So how does that square with the idea of minimizing prediction error?
This post is based on the paper “The role of interoceptive inference in theory of mind,” by Sasha Ondobaka, James Kilner, and Karl Friston, Brain and Cognition, March 2017; 112: 64–68.
Understanding or inferring the intentions, feelings, and beliefs of others is a hallmark of human social cognition, often referred to as having a Theory of Mind (ToM). ToM has been described as a cognitive ability to infer the intentions and beliefs of others through processing of their physical appearance, clothes, and bodily and facial expressions. Of course, the repertoire of hypotheses of our ToM is borrowed from the hypotheses that cause our own behavior.
But how can processing of internal visceral/autonomic information (interoception) contribute to the understanding of others' intentions? The authors consider interoceptive inference a special case of active inference. Friston (see post Prediction Error Minimization) has theorized that the goal of the brain is to minimize prediction error, and that this can be achieved both by changing predictions to match the observed data and, via action, by changing the sensory input to match predictions. When you drop a knife and then catch it with the other hand, you are using active inference.
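The two routes to minimizing prediction error can be sketched in a toy loop. This is my own illustrative construction, not Friston's free-energy formulation: the agent holds a prediction `mu` of a sensory signal `s`, and the error shrinks either by updating the prediction (perception) or by acting on the world to change the signal (action).

```python
def minimize_error(mu, s, act=False, rate=0.5, steps=20):
    """Reduce the prediction error (s - mu) step by step.
    Perceptual inference: nudge the prediction mu toward the signal s.
    Active inference: nudge the signal s (via action) toward mu."""
    for _ in range(steps):
        error = s - mu
        if act:
            s -= rate * error   # action changes the input to match the prediction
        else:
            mu += rate * error  # perception changes the prediction to match the input
    return mu, s

# Perceptual route: the prediction converges on the signal (mu -> 10).
mu, s = minimize_error(mu=0.0, s=10.0, act=False)

# Active route: the signal is driven toward the prediction (s -> 0),
# like the hand moving to where the falling knife is predicted to be caught.
mu2, s2 = minimize_error(mu=0.0, s=10.0, act=True)
```

Either way the same quantity, prediction error, goes to zero; which route is taken is the difference between perceiving and acting in this picture.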