Category Archives: Expertise

Nervous States – Promises


This is the third and final post looking at William Davies' book Nervous States: Democracy and the Decline of Reason. Davies provides some ideas for getting out of this mess at the end of the book, and I believe that they are well thought out. First, Davies notes that there is one problem confronting humanity that may never go away, and which computers do nothing to alleviate: how to make promises. A promise made to a child or a public audience has binding power. It can be broken, but the breaking of it is a breach that can leave deep emotional and cultural wounds. Davies states:

“Whether we like it or not, the starting point for this venture will be the same as it was for Hobbes:  the modern state, issuing laws backed by sovereign power.  It is difficult to conceive how promises can be made at scale, in a complex modern society, without the use of contracts, rights and statutes underpinned by sovereign law. Only law really has the ability to push back against the rapidly rising tide of digital algorithmic power. It remains possible to make legal demands on the owners and controllers of machines, regardless of how sophisticated those machines are.”



Nervous States: Counterpoint

This is the second of three posts discussing William Davies' book Nervous States: Democracy and the Decline of Reason. I pick a couple of areas in which to argue with some of the scenarios Davies presents.

Markets and Evolution

Davies discusses Hayek as the guy who believes in free markets above all else, and who has helped us reach this point of not agreeing on reality. When I read Hayek (The Road to Serfdom), what he said to me was that free markets, with the right stable rules in place, are the best system for everyone. Unfortunately, determining the right stable rules is difficult and is the job of government. Hayek seems to have taken Adam Smith's invisible hand and run with it. David Sloan Wilson, in This View of Life: Completing the Darwinian Revolution, makes clear that the invisible hand only works at one scale of a market (see the posts Evolution for Everyone and Multilevel Selection Theory).


Nervous States: Democracy and the Decline of Reason

This book, Nervous States: Democracy and the Decline of Reason (2019) by William Davies, tries to explain the state we are in. The end of truth, the domination of feelings, the end of expertise all come to mind. People perceive that change is so fast that the slow knowledge developed by reason and learning is devalued, while instant knowledge that will be worthless tomorrow, like that used by commodity, bond, or stock trading networks, is highly valued. Davies builds on Hayek and says many things that ring true. In three posts, I will present the main points of Davies' book, argue with some of the points, and present what Davies says we can do about it. Devaluing reason is a big deal for decision making.


Consistency and Discrimination as Measures of Good Judgment

This post is based on a paper that appeared in Judgment and Decision Making, Vol. 12, No. 4, July 2017, pp. 369–381, "How generalizable is good judgment? A multi-task, multi-benchmark study," authored by Barbara A. Mellers, Joshua D. Baker, Eva Chen, David R. Mandel, and Philip E. Tetlock. Tetlock is a legend in decision making, and it is likely that he is an author because the paper builds on some of his past work, not because he was actively involved. Nevertheless, this paper provides an opportunity to go over some of the ideas in Superforecasting and expand upon them. Whoops! I was looking for an image to put on this post and found the one above. Mellers and Tetlock looked married in it, and they are. I imagine that she deserved more credit in Superforecasting: The Art and Science of Prediction. Even columnist David Brooks, whom I have derided in the past, beat me to that fact. (http://www.nytimes.com/2013/03/22/opinion/brooks-forecasting-fox.html)

The authors note that Kenneth Hammond's correspondence and coherence (Beyond Rationality) are the gold standards upon which to evaluate judgment. Correspondence is being empirically correct, while coherence is being logically correct. Human judgment tends to fall short on both, but it has gotten us this far. Hammond always decried that psychological experiments were often poorly designed as measures, but he complimented Tetlock on his use of correspondence to judge political forecasting expertise. Experts were found wanting, although they were better when the forecasting environment provided regular, clear feedback and there were repeated opportunities to learn. According to the authors, Weiss & Shanteau suggested that, at a minimum, good judges (i.e., domain experts) should demonstrate consistency and discrimination in their judgments. In other words, experts should make similar judgments when cases are alike, and dissimilar judgments when cases are unalike. Mellers et al. suggest that consistency and discrimination are silver standards that could be useful. (As an aside, I would suggest that Ken Hammond would likely have had little use for these. Coherence is logical consistency, and correspondence is empirical discrimination.)
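As a concrete illustration of the two silver standards, here is a small sketch in Python. These are not the measures used by Mellers et al.; they are one plausible way to operationalize the ideas, with made-up judgment data: consistency as agreement between repeated judgments of the same cases, and discrimination as the spread of judgments across genuinely different kinds of cases.

```python
# Toy illustration (not the paper's actual measures) of consistency and
# discrimination for a set of judged cases.
import numpy as np

def consistency(judgments_round1, judgments_round2):
    """Agreement between repeated judgments of the same cases (higher is better)."""
    return np.corrcoef(judgments_round1, judgments_round2)[0, 1]

def discrimination(judgments, case_labels):
    """Spread of mean judgments across genuinely different kinds of cases.

    A judge who gives every case roughly the same number discriminates poorly,
    even if perfectly consistent.
    """
    group_means = [np.mean([j for j, c in zip(judgments, case_labels) if c == label])
                   for label in set(case_labels)]
    return np.var(group_means)

# A hypothetical judge rates the same six cases twice on a 0-100 scale.
round1 = np.array([80, 78, 35, 30, 55, 60])
round2 = np.array([82, 75, 33, 34, 50, 62])
labels = ["A", "A", "B", "B", "C", "C"]   # three distinct kinds of case

print("consistency:", round(consistency(round1, round2), 2))
print("discrimination:", round(discrimination(round1, labels), 2))
```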


Hogarth on Simulation

This post is a continuation of the previous blog post Hogarth on Description. Hogarth and Soyer suggest that the information humans use for probabilistic decision making has two distinct sources: descriptions of the particulars of the situations involved, and experience of past instances. Most decision aiding has focused on exploring the effects of different problem descriptions; this is important because human judgments and decisions are so sensitive to different aspects of descriptions. However, this very sensitivity is problematic in that different types of judgments and decisions seem to need different solutions. To find methods with more general application, Hogarth and Soyer suggest exploiting the well-recognized human ability to encode frequency information by building a simulation model that can be used to generate "outcomes" through a process that they call "simulated experience".

Simulated experience essentially allows a decision maker to live actively through a decision situation, as opposed to being presented with a passive description. The authors note that the difference between resolving problems that have been described versus experienced is related to Brunswik's distinction between the use of cognition and perception. With cognition, people can be quite accurate in their responses, but they can also make large errors. I note that this is similar to Hammond's coherence and correspondence. With perception and correspondence, responses are unlikely to be highly accurate, but errors are likely to be small. Simulation, perception, and correspondence tend to be robust.
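To make the idea concrete, here is a minimal sketch of what simulated experience could look like in code. This is my own toy illustration, not Hogarth and Soyer's actual simulator, and the probabilities and payoffs are made up: instead of being told that a gamble pays off 60% of the time, the decision maker watches many simulated outcomes and tallies the frequencies.

```python
# A toy "simulated experience" generator: turn a described probability into a
# stream of experienced outcomes that the decision maker can tally.
import random

def simulated_experience(p_success=0.6, payoff_win=100, payoff_loss=-50, trials=1000):
    outcomes = []
    for _ in range(trials):
        if random.random() < p_success:
            outcomes.append(payoff_win)
        else:
            outcomes.append(payoff_loss)
    return outcomes

outcomes = simulated_experience()
wins = sum(1 for o in outcomes if o > 0)
print(f"Experienced {wins} wins in {len(outcomes)} trials; "
      f"average payoff {sum(outcomes) / len(outcomes):.1f}")
```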


Superforecasting

This post is a look at the book by Philip E. Tetlock and Dan Gardner, Superforecasting: The Art and Science of Prediction. Phil Tetlock is also the author of Expert Political Judgment: How Good Is It? How Can We Know? In Superforecasting, Tetlock blends discussion of the largely popular literature on decision making with his long-running scientific work on the ability of experts and others to predict future events.

In Expert Political Judgment: How Good Is It? How Can We Know? Tetlock found that the average expert did little better than guessing.  He also found that some did better. In Superforecasting he discusses the study of those who did better and how they did it.


Medical Decisions – Risk Savvy

This post looks at the medical/health component of decision making as addressed in Gerd Gigerenzer's new book, Risk Savvy: How to Make Good Decisions. First, Gigerenzer has contributed greatly to improving health decision making. This blog includes three consecutive posts on the Statistics of Health Decision Making based on Gigerenzer's work.

He points out both the weaknesses of screening tests and the weaknesses in our understanding of their results. We have to overcome our tendency to see linear relationships where they are nonlinear, and doctors are no different. The classic problem is an imperfect screening test for a relatively rare disease. You cannot think in fractions or percentages; you must think in absolute frequencies. Breast cancer screening is one example. Generally, it can catch about 90% of breast cancers, while about 9% of women who do not have breast cancer still test positive. So if you have a positive test, that means chances are you have breast cancer. No! You cannot let your intuition get involved, especially when the disease is rarer than the test's mistakes. If we assume that 10 out of 1000 women have breast cancer, then 90% of them, or 9, will be detected, but about 90 of the 990 women without the disease will also test positive. Thus only 9 of the roughly 99 who test positive actually have breast cancer. I know this, but give me a new disease or a slightly different scenario, let a month pass, and I will still be tempted to shortcut the absolute frequencies and get it wrong.
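Here is the same arithmetic written out as a small Python check, using the round numbers from the paragraph above (real mammography figures vary by age group and study):

```python
# Natural-frequency check of the screening example above (illustrative numbers).
population  = 1000
base_rate   = 10 / 1000    # women who actually have breast cancer
sensitivity = 0.90         # true positives among those with the disease
false_pos   = 0.09         # positives among those without the disease

with_disease    = population * base_rate          # 10
without_disease = population - with_disease       # 990
true_positives  = with_disease * sensitivity      # 9
false_positives = without_disease * false_pos     # about 89

ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true positives, {false_positives:.0f} false positives")
print(f"Chance a positive test really means cancer: {ppv:.0%}")  # roughly 9%
```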


Gigerenzer – Risk Savvy

Gerd Gigerenzer has a 2014 book out entitled Risk Savvy: How to Make Good Decisions, which is a refinement of his past books for the popular press. It is a little too facile, but it is worthwhile. Gigerenzer has taught me much, and he will likely continue to. He is included in too many posts to provide the links here (you can search for them). My discussion of the book will be divided into two posts. This one will be a general look, while the next post will concentrate on Gigerenzer's take on medical decision making.

As in many books like this, the notes provide insight. Gigerenzer points out his disagreements with Kahneman over whether heuristics are all part of the unconscious system. As he notes, heuristics, such as the gaze heuristic, can be used consciously or unconsciously. This has been a major issue in my mind with Kahneman's System 1 and System 2: Kahneman throws heuristics exclusively into the unconscious system. I also side with Gigerenzer over Kahneman, Ariely, and Thaler on whether the unconscious system should be equated with bias. As Gigerenzer states: "A system that makes no errors is not intelligent." He interestingly points out Sully Sullenberger's use of the gaze heuristic to decide not to return to LaGuardia, but instead to land in the Hudson River.


Mass Hysteria to the Rescue

This post is a reaction to the column by Bret Stephens that appeared in the October 21, 2014, Wall Street Journal, entitled "What the Ebola Experts Miss." The column starts out:

Of course we should ban all nonessential travel from Liberia, Guinea, Sierra Leone and any other country badly hit by the Ebola virus.



David Brooks on Expertise

David Brooks has a way of irritating me. For some reason, he seems like a very serious person, so I cannot dismiss him out of hand. But on June 16, 2014, he wrote "The Structures of Growth—Learning is no Easy Task" in the New York Times, about certain human activities having logarithmic learning functions and others having exponential ones. I realize that I am envious of his being able to push such sloppy work out the door to millions of readers. It is just a column, but read it for yourself.

His basis was a blog post by Scott H. Young in early 2013, who, as far as I can tell, made much less outlandish representations about learning or domains of growth. Young explains that anything you try to improve will have a growth curve, and that it is a mistake to assume it will be linear. Young says that athletic performance, productivity, and mastery of a complex skill tend to be logarithmic. Early progress on logarithmic-growth activities can make you overconfident if you do not realize that the curve will soon flatten. He notes that exponential functions tend to be limited to certain ranges and apply to technological improvement, business growth, wealth, and rewards to talent.
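For what it is worth, a few lines of Python make the two shapes Young describes easy to see; the numbers are purely illustrative:

```python
# A rough sketch of the two growth shapes (illustrative numbers only):
# logarithmic progress flattens after fast early gains, while exponential
# progress starts slowly and then accelerates.
import math

for effort in [1, 2, 5, 10, 20, 50]:
    log_progress = math.log(effort + 1)        # e.g., mastery of a complex skill
    exp_progress = math.exp(effort / 10) - 1   # e.g., compounding growth
    print(f"effort {effort:3d}: logarithmic {log_progress:5.2f}   "
          f"exponential {exp_progress:7.2f}")
```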
