This post is based on the book Elastic–Flexible Thinking in a Time of Change by Leonard Mlodinow, Pantheon Books, New York, 2018. Mlodinow is a physicist and worked with Stephen Hawking. His previous book Subliminal evidently gave him considerable access to interesting people like Seth MacFarlane. He mentions that Stephen Hawking’s pace of communicating was at best six words a minute, with public presentations prepared ahead of time. Mlodinow notes that this slowing of the pace of a conversation is actually quite helpful: it forces you to consider the other person’s words, as opposed to composing your instant response while the other person is still talking.
This is the third and final post looking at William Davies’ book Nervous States–Democracy and the Decline of Reason. Davies provides some ideas for getting out of this mess at the end of the book. I believe that they are well thought out. First, Davies notes that there is one problem confronting humanity that may never go away, and which computers do nothing to alleviate: how to make promises. A promise made to a child or a public audience has binding power. It can be broken, but the breaking of it is a breach that can leave deep emotional and cultural wounds. Davies states:
“Whether we like it or not, the starting point for this venture will be the same as it was for Hobbes: the modern state, issuing laws backed by sovereign power. It is difficult to conceive how promises can be made at scale, in a complex modern society, without the use of contracts, rights and statutes underpinned by sovereign law. Only law really has the ability to push back against the rapidly rising tide of digital algorithmic power. It remains possible to make legal demands on the owners and controllers of machines, regardless of how sophisticated those machines are.”
This is the second of three posts discussing William Davies’ book Nervous States–Democracy and the Decline of Reason. Here I pick a couple of areas where I argue with some of the scenarios Davies presents.
Markets and Evolution
Davies discusses Hayek as the guy who believes in free markets above all else, and who has helped us reach this point of not agreeing on reality. When I read Hayek (The Road to Serfdom), what he said to me was that free markets, with the right stable rules in place, are the best system for everyone. Unfortunately, determining the right stable rules is difficult, and it is the job of government. Hayek seems to have taken Adam Smith’s invisible hand and run with it. David Sloan Wilson, in This View of Life–Completing the Darwinian Revolution, makes clear that the invisible hand only works at one scale of a market (see posts Evolution for Everyone and Multilevel Selection Theory).
This book, Nervous States–Democracy and the Decline of Reason (2019) by William Davies, tries to explain the state we are in. The end of truth, the domination of feelings, and the end of expertise all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies’ book, argue with some of the points, and present what Davies says we can do about it. Devaluing reason is a big deal for decision making.
In Confidence, Part II, the authors conclude that confidence is computed continuously, online, throughout the decision making process, thus lending support to models of the mind as a device that computes with probabilistic estimates and probability distributions.
The Embodied Mind
One such explanation is that of predictive processing/the embodied mind. Andy Clark, Jakob Hohwy, and Karl Friston have all helped to weave together this concept. Our minds are blends of top-down and bottom-up processing, where error messages, and the effort to fix those errors, make it possible for us to engage the world. According to the embodied mind model, our minds do not just reside in our heads. Our bodies determine how we interact with the world and how we shape our world so that we can predict better. Our evolutionary limitations have much to do with how our minds work. One example provided by Andy Clark and Barbara Webb is a robot without any brain imitating human walking nearly perfectly (video, go to 2:40). Now how does this tie into confidence? Confidence at a conscious level is the extent of our belief that our decisions are correct. But the same thing is going on as a fundamental part of perception and action. Estimating the certainty of our own prediction error signals, of our own mental states and processes, is, as Clark notes, “clearly a delicate and tricky business. For it is the prediction error signal that…gets to ‘carry the news’.”
This post is derived from a review article, “The Role of Intuition in the Generation and Evaluation Stages of Creativity,” authored by Judit Pétervári, Magda Osman, and Joydeep Bhattacharya, which appeared in Frontiers in Psychology, September 2016, doi: 10.3389/fpsyg.2016.01420. It struck me that in all this blog’s posts, creativity had almost never come up. Then I threw it together with Edward O. Wilson’s 2017 book, The Origins of Creativity, Liveright Publishing, New York. (See posts Evolution for Everyone and Cultural Evolution for more from Edward O. Wilson. He is the ant guy. He is interesting, understandable, and forthright.)
Creativity is notoriously difficult to capture in a single definition. Pétervári et al. suggest that creativity is a process broadly similar to problem solving: in both, information is coordinated toward reaching a specific goal, and the information is organized in a novel, unexpected way. Problems that require creative solutions are ill-defined, primarily because there are multiple hypothetical solutions that would satisfy the goals. Wilson sees creativity as going beyond typical problem solving.
I love Stanislas Dehaene’s experiments and his general ideas, and his book Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, Viking, New York, 2014, is a great synthesis; with respect to its title, it is a fine book. However, with respect to how it deals with decision making, I am mostly disappointed.
Consciousness: Informer or Informer/Decider? Although Dehaene’s Global Neuronal Workspace Theory describes what we feel as consciousness as the global sharing of information, in the book he seems to promote the idea of consciousness as the decider as well as the informer. Dehaene writes:
“My picture of consciousness implies a natural division of labor. In the basement, an army of unconscious workers does the exhausting work, sifting through piles of data. Meanwhile, at the top, a select board of executives, examining only a brief of the situation, slowly makes conscious decisions…No one can act on mere probabilities–at some point, a dictatorial process is needed to collapse all uncertainties and decide….Consciousness may be the brain’s scale tipping device—collapsing all unconscious probabilities into a single conscious sample so that we can move on to further decisions.” p. 89
I like the informer part, but I prefer the parallel constraint satisfaction (post Parallel Constraint Satisfaction Theory) idea that consciousness is asked to get more information (information search and production), which the unconscious system turns into a decision. In my scenario the visual system seems to have priority to get to the conscious level, then the other sensory systems, and then the other unconscious systems push the most difficult or interesting decisions they have at any particular time through to the conscious system. Maybe there is some sort of priority ranking. Clearly, most rather mundane decisions seem to break through to consciousness only occasionally. As part of breaking through to consciousness, more of the modular systems are alerted to the issue, and information may come from inside, or we may seek information from others or examine the environment. We get the new information and the wheels of the parallel constraint system start whirring again to see if the decision can be made. Now, I do see a cognitive continuum, so yes, certain decisions may stay with the board of executives. Dehaene uses the example of multidigit arithmetic. For most of us, it seems to consist of a series of introspective steps that we can accurately report. For instance, to multiply 30 by 47, I might multiply 30 by 40 to get 1200 and then add 7 times 30 (210) to get 1410. But for a numerical savant that could be done in the unconscious. Nevertheless, there are certain things where consciousness does seem to be where the decisions are made. Complex multi-step questions where the emotions are more or less uninvolved might be examples.
Maybe the interesting part is the sort of phase change between the unconscious and the conscious. There is a lot happening there. Dehaene says that consciousness is doing the collapsing, but it seems to me it is already done once it reaches consciousness. Maybe that is not an important argument. One theory is that conscious perception occurs when the stimulus allows the accumulation of sufficient sensory evidence to reach a threshold, at which point the brain ‘decides’ whether it has seen anything, and what it is. The mechanisms of conscious access would then be comparable to those of other decisions, involving an accumulation toward a threshold — with the difference that conscious perception would correspond to a global high-level ‘decision to engage’ many of the brain’s internal resources. Dehaene mentions this in a paper that was discussed in the post A Theory of Consciousness.
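The accumulation-to-threshold idea can be illustrated with a toy simulation. This is my own sketch, not anything from Dehaene’s book, and the drift, noise, and threshold values are arbitrary placeholders:

```python
import random

def accumulate_to_threshold(drift=0.1, noise=0.5, threshold=5.0,
                            max_steps=10_000, seed=1):
    """Toy evidence-accumulation model: noisy evidence is summed until it
    crosses a positive or negative bound, at which point a 'decision'
    (conscious access, in Dehaene's framing) is triggered."""
    rng = random.Random(seed)
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0, noise)  # signal plus sensory noise
        if evidence >= threshold:
            return "seen", step
        if evidence <= -threshold:
            return "not seen", step
    return "no decision", max_steps  # threshold never reached

print(accumulate_to_threshold())
```

With a positive drift (a real stimulus), the “seen” bound is usually reached; a weaker drift or higher noise lengthens the time to threshold, which fits the idea that conscious access is itself a decision with a variable latency.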
Consciousness Gives Us the Power of a Sophisticated Serial Computer. Dehaene is a believer in the Bayesian unconscious. “A strict logic governs the brain’s unconscious circuits–they appear ideally organized to perform statistically accurate inferences concerning our sensory inputs.” Both the unconscious and conscious systems seem to work in a linear fashion (Brunswik’s Lens Model), but the conscious system can redirect.
“This seems to be a major function of consciousness: to collect the information from various processors, synthesize it, and then broadcast the result–a conscious symbol–to other, arbitrarily selected processors. These processors, in turn, apply their unconscious skills to this symbol, and the entire cycle may repeat a number of times. The outcome is a hybrid serial-parallel machine, in which stages of massively parallel computation are interleaved with a serial stage of conscious decision making and information routing.” p100
Dehaene and his colleagues have studied schizophrenics. They found a basic deficit of conscious perception in schizophrenia: words had to be presented for a longer time before schizophrenics reported consciously seeing them. “Schizophrenics’ main problem seems to lie in the global integration of incoming information into a coherent whole.” Dehaene suggests that schizophrenics have a “global loss of top-down connectivity.” This loss impairs capacity for conscious monitoring, top-down attention, working memory, and decision making. Apparently in schizophrenics, the prediction machine is not making enough predictions. With reduced top-down messages, sensory inputs are never explained, and error messages remain, triggering multiple explanations. Schizophrenics thus see the need for complicated explanations, which can lead to the far-fetched interpretations of their surroundings that may express themselves as bizarre hallucinations and delusions.
Dehaene suggests that consciousness allows us to share information with others and that this leads to better decisions. Dehaene’s most interesting idea is that our social abilities allow us to make decisions together, and that these are better decisions. Although one can argue that language is imperfect and that much of it is used to transmit trivia and gossip, Dehaene provides evidence that our conversations are more than tabloids. This is a point that needed to be made to me. I was tending to believe that there was almost a direct tradeoff between cognitive skills and social skills, and that even though the tradeoff was adaptive, it was a close call. Dehaene puts forth the argument that two heads are better than one and that consciousness makes this possible. (This is also directly in line with Scott Page’s The Difference–How the Power of Diversity Creates Better Groups; post Diversity or Systematic Error.)
He cites the experiments of the Iranian psychologist Bahador Bahrami. Bahrami had pairs of subjects examine two displays and decide on each trial whether the first or the second contained a near-threshold target image. The subjects initially made the decision independently, and if they differed, they were asked to resolve the conflict through a brief discussion. As long as the abilities of the individuals were similar, pairing them yielded a significant improvement in accuracy. No nuances were shared to gain this improvement, but simply a categorical answer (first or second display) and a judgment of confidence.
Dehaene suggests that Bayesian decision theory tells us that the very same decision rules should apply to our own thoughts and to those that we receive from others. In both cases, optimal decision making demands that each source of information, whether internal or external, should be weighted as accurately as possible, by an estimate of its reliability, before all the information is brought together into a single decision space. This sounds much like cue validities in Brunswik’s lens model or Parallel Constraint Satisfaction theory. According to Dehaene, once this workspace was opened to social inputs from other minds, we were able to reap the benefits of a collective decision making algorithm: by comparing our knowledge with that of others, we achieve better decisions.
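The reliability-weighted combination described above can be sketched with inverse-variance (precision) weighting, the standard Bayesian-optimal rule for combining independent Gaussian estimates. This is my own illustration of the general idea, not code from the book, and the example numbers are made up:

```python
def combine_estimates(estimates):
    """Combine (value, variance) pairs by inverse-variance (precision)
    weighting: each source counts in proportion to its reliability,
    analogous to weighting cues by validity in Brunswik's lens model."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return value, 1.0 / total  # the combined variance is always smaller

# Two observers of similar reliability, as in the Bahrami pairs:
mine = (10.0, 4.0)   # my estimate, with variance 4
yours = (14.0, 4.0)  # your estimate, with variance 4
print(combine_estimates([mine, yours]))  # (12.0, 2.0)
```

The pair’s combined variance (2.0) is half either individual’s (4.0), which is the formal sense in which two similarly able heads are better than one; if one source were far noisier, it would simply be down-weighted.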
This post is derived from the paper, “Whatever next? Predictive brains, situated agents, and the future of cognitive science,” Behavioral and Brain Sciences (2013) 36:3, written by Andy Clark. I stumbled upon this paper and its commentary several weeks ago and have tried to figure out what to do with it. That has led me to other papers. In the next three posts, I will try to give the high points of this idea of PEM, prediction error minimization. It provides an overall background that is compatible with Parallel Constraint Satisfaction.
Clark suggests that the brain’s jobs are minimizing prediction error, selective sampling of sensory data, optimizing expected precisions, and minimizing complexity of internal models. To accomplish these tasks, the brain has evolved into a bundle of cells that support perception and action by attempting to match incoming sensory inputs with top-down expectations–predictions. This is done by using a hierarchical model that minimizes prediction error within a bidirectional cascade of cortical processing. This model maps on to perception, action, attention, and model selection, respectively (and dare I say judgment and decision making).
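A crude one-level sketch can show what “minimizing prediction error” means mechanically. This is my toy illustration, not Clark’s or Friston’s model; the single level, fixed learning rate, and constant input are simplifying assumptions:

```python
def minimize_prediction_error(inputs, prediction=0.0, learning_rate=0.1):
    """Toy one-level predictive-coding loop: predict the next sensory
    input, measure the prediction error, and nudge the prediction to
    reduce that error (a stand-in for one rung of the bidirectional
    cortical cascade)."""
    errors = []
    for x in inputs:
        error = x - prediction               # bottom-up error signal
        prediction += learning_rate * error  # top-down model update
        errors.append(abs(error))
    return prediction, errors

# With a constant input, errors shrink as the prediction converges on it.
pred, errs = minimize_prediction_error([5.0] * 50)
print(round(pred, 3), errs[0] > errs[-1])
```

In the full hierarchical story, each level plays this game with the level below it, and precision weighting (attention) adjusts the learning rate, but the core move is the same: only the error, the news, needs to be passed upward.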
I occasionally like to go far afield from judgment and decision making, and here I go again. This post takes a look at Michio Kaku’s 2014 book, The Future of the Mind–The Scientific Quest To Understand, Enhance, And Empower The Mind, Doubleday, New York.
Decision models can sometimes seem very explanatory, but they seem simple-minded when I read in Kaku’s book that we have two separate centers of consciousness and that we may all have photographic memories.
The categories for this blog were taken from the table of contents of the 1988 version of Rational Choice in an Uncertain World: The Psychology of Judgment and Decision Making edited by Reid Hastie and Robin Dawes. At this point, they need to be reorganized. For instance, in the theory and models category, do I really know what a dual process model is? Is cognitive continuum theory single process or dual process? Frankly, Kahneman’s System 1 and System 2 seem to constitute a weak dual process model concept. But does the difference between a dual process and a single process matter? Or is it a little like a multiple-strategy versus single-strategy framework, where even a unifying model can account for differences only by assuming different parameter values? And different parameter values constitute a problem structurally similar to strategy selection in a multiple-strategies framework. (See post Automatic Decision Making.) Regardless, I am going to look at Ken Hammond’s cognitive continuum model from 1980. In follow-up posts, I anticipate working on dual process theories and maybe a nifty combination.
Hammond’s cognitive continuum theory proposes that different forms of cognition (intuitive, analytical, common sense) are situated in relation to one another along a continuum that places intuitive processing at one end and analytical processing at the other. The properties of reasoning (e.g., cognitive control, awareness of cognitive ability, speed of cognitive activity) vary in degree, and the structural features of the tasks that invoke reasoning processes also vary along the continuum, according to the degree of cognitive activity they are predicted to induce.