This is a mild revolution for me. I was always irritated when someone suggested that people should pull themselves up by their bootstraps, which seemed quite impossible to me. But apparently even my computer is bootstrapping when it boots. According to Wikipedia, bootstrapping usually refers to starting a self-sustaining process that is supposed to proceed without external input. In computing, the term (usually shortened to booting) refers to loading the basic software into the memory of a computer after power-on or a general reset, especially the operating system, which then takes care of loading other software as needed. "Bootstrapping" alludes to Baron Munchausen, who claimed to have escaped from a swamp by pulling himself up by, depending on who tells the story, his own hair or bootstraps.
I first came upon the term in the post What has Brunswik's Lens Model Taught? The authors noted the book Winning Decisions by J. Edward Russo and Paul Schoemaker, who provide the pyramid shown above. They note that the higher a method sits in the pyramid, the more accurate, complex, and transparent it is; the higher approaches are used less frequently, and for more important decisions, than the lower ones. I would argue that certain types of decisions fit a certain level of the pyramid, so accuracy might not increase as you go up, but the pyramid of choice approaches has general value. I would place checklists and fast and frugal trees on the heuristic procedures level, while bootstrapping sits one step above as importance weighting. It is a variation of Benjamin Franklin's weighted pros and cons list.
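The importance-weighting idea is simple enough to sketch in a few lines. This is a hypothetical illustration, not anything from Russo and Schoemaker: the option names, attributes, weights, and ratings below are all invented. Each option is rated on a few attributes, each attribute gets an importance weight, and the weighted sums are compared, which is Franklin's pros-and-cons list with numbers attached.

```python
# Hypothetical weighted pros-and-cons comparison in the spirit of
# Franklin's method. All names, weights, and ratings are invented.
weights = {"cost": 0.5, "quality": 0.3, "convenience": 0.2}  # sum to 1

options = {
    "Option A": {"cost": 7, "quality": 6, "convenience": 9},
    "Option B": {"cost": 5, "quality": 9, "convenience": 6},
}

# Each option's score is the importance-weighted sum of its ratings.
scores = {
    name: sum(weights[attr] * rating for attr, rating in ratings.items())
    for name, ratings in options.items()
}

best = max(scores, key=scores.get)
print(scores, best)  # Option A scores 7.1, Option B scores 6.4
```

The point of the weights is that a big advantage on a minor attribute (Option A's convenience) can matter less than a modest advantage on a major one, which the unweighted pros-and-cons list obscures.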
According to Russo, bootstrapping means building a model based on expert judgment. By deriving the weights of your model from the best judgments of experts, you can then use that model to outperform those same experts. Thus, you are bootstrapping: pulling yourself to a higher level of performance by your own bootstraps. In the 1950s, Paul Meehl was the most prominent originator of the idea that a model of the experts could beat the experts.
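A minimal sketch of why the model of the expert can beat the expert, using purely synthetic data (the cues, implicit weights, and noise level are assumptions for illustration): fit a linear model to an expert's own past ratings; the model applies the expert's implicit weights with perfect consistency, so it tracks the underlying signal better than the noisy expert does.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup (all numbers are assumptions for illustration):
# each case is described by three cues the expert attends to.
n_cases, n_cues = 200, 3
cues = rng.normal(size=(n_cases, n_cues))

# The expert implicitly weights the cues but adds inconsistent noise.
implicit_weights = np.array([0.6, 0.3, 0.1])
signal = cues @ implicit_weights                       # consistent core of the judgment
expert = signal + rng.normal(scale=0.5, size=n_cases)  # noisy human ratings

# Bootstrapping: recover the expert's implicit weights by least squares
# on the expert's own past ratings.
X = np.column_stack([cues, np.ones(n_cases)])  # cues plus an intercept
coef, *_ = np.linalg.lstsq(X, expert, rcond=None)
model = X @ coef                               # the model "of" the judge

# The consistent model correlates with the underlying signal
# more strongly than the inconsistent expert does.
r_expert = np.corrcoef(expert, signal)[0, 1]
r_model = np.corrcoef(model, signal)[0, 1]
print(round(r_expert, 2), round(r_model, 2))
```

The model cannot know anything the expert does not; it simply strips out the expert's inconsistency, which is the whole trick behind bootstrapping.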
Meehl, along with Dawes and Faust, wrote a paper that appeared in Science in 1989, "Clinical versus Actuarial Judgment." It is a readable paper for the layman, and I will summarize some of its high points. In the clinical method, the decision maker combines information in his head. In the actuarial or statistical method, the human judge is eliminated and conclusions rest solely on empirically established relations between the data and the condition or event of interest. Comparative studies have come out uniformly in favor of the actuarial method. Even when given an information edge, the clinical judge still fails to surpass it. When operating freely, clinicians seem to identify too many exceptions; that is, the actuarial conclusions they correctly modify are outnumbered by those they incorrectly modify. If clinicians were more conservative in overriding actuarial conclusions, they might gain an advantage.
Why do actuarial methods have an advantage? First, they are consistent: they do not suffer from fatigue, recency effects, or whatever other flaky things humans are prone to. Those employing the clinical method often do not get the feedback that would help them eliminate invalid variables. Humans also like to forget their invalid initial predictions, so they often do not learn from whatever happens to occur. Clinicians also tend to see a skewed cross section of cases, so they may misestimate the base rate in the population. Given the limited feedback when they are wrong, clinicians also tend to be overconfident in their judgments.
Meehl, Dawes, and Faust note that although actuarial methods beat clinical methods, the results are often still modest. Clearly, an actuarial method must be periodically reevaluated in its original setting, and it cannot be extended to a new setting without quality controls. Nevertheless, even 25 years ago when the paper was published, the authors were confident that actuarial methods could be applied to predicting violent behavior and parole violation, diagnosing disorders, and identifying effective treatments, and that millions of dollars and a great deal of inadvertent harm could be saved. In the post Minimizing Diagnostic Error: The Importance of Follow-up & Feedback, specific measures are discussed without mentioning the term bootstrapping.
Hogarth and Karelaia found the application of bootstrapping (replacing judges with their linear models) to be less advantageous when cues are highly correlated or equally weighted, and when judges have some experience (post What has Brunswik's Lens Model Taught?).
In a 2009 article, Herzog and Hertwig propose that people can enhance the quality of their quantitative judgments by averaging their first estimate with a second, dialectical estimate. So apparently they believe that after a dialogue with yourself you can improve quantitative judgments. They suggest that, although it originates from the same person, a dialectical estimate has a different error than the first estimate to the extent that it is based on different knowledge and assumptions. They call this approach to boosting accuracy in quantitative estimation dialectical bootstrapping.
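The error-cancellation logic behind dialectical bootstrapping can be sketched with a simulation (the truth value, noise level, and the correlation between the two estimates are assumptions for illustration, not Herzog and Hertwig's data): when a person's two estimates are only partially correlated, their average tends to land closer to the truth than the first estimate alone.

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 100.0      # the quantity being estimated (assumed for illustration)
n_judges = 10_000  # simulated people, each making two estimates
noise_sd = 15.0    # spread of each estimate around the truth (assumed)
rho = 0.4          # partial correlation between first and dialectical estimate

# Draw correlated (first, dialectical) estimate pairs around the truth.
cov = noise_sd**2 * np.array([[1.0, rho], [rho, 1.0]])
pairs = rng.multivariate_normal([truth, truth], cov, size=n_judges)

first = pairs[:, 0]
averaged = pairs.mean(axis=1)  # dialectical bootstrapping: average the two

mae_first = np.abs(first - truth).mean()
mae_avg = np.abs(averaged - truth).mean()
print(round(mae_first, 1), round(mae_avg, 1))
```

With partially correlated errors, the average's variance is (1 + rho)/2 times that of a single estimate, so the benefit shrinks as the second estimate becomes more similar to the first, which is why Herzog and Hertwig stress basing it on different knowledge and assumptions.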
Russo, J.E., & Schoemaker, P.J.H. (2002). Winning Decisions: Getting It Right the First Time. New York: Currency Doubleday.
Herzog, S.M., & Hertwig, R. (2009). "The Wisdom of Many in One Mind: Improving Individual Judgments with Dialectical Bootstrapping." Psychological Science, 20(2), 231-237.
Dawes, R., Faust, D., & Meehl, P. (1989). "Clinical versus Actuarial Judgment." Science, 243, 1668-1673.