Emre Soyer and Robin Hogarth have written a new book, The Myth of Experience: Why We Learn the Wrong Lessons, and Ways to Correct Them. The book is aimed at a general audience, although its copious, detailed notes and index allow for deeper looks. I have much respect for their past work, both individually and together.
The key idea, which they have developed elsewhere, is that some learning environments are kind, so that what you learn by experience is helpful (say, riding a bike), while other environments are wicked, so that experience cannot be relied upon to make good decisions. Robin Hogarth's Educating Intuition develops this idea. (See the posts What Has Brunswik's Lens Model Taught? and Kind and Wicked Learning Environments.)
XD, or experience design, was a new term for me, although an obvious one. Clearly, one reason it is difficult to learn from experience is that much of our experience is designed for us, not to aid our freedom of choice but to influence our decisions. The instruments of that influence are emotions, options, and games.
My favorite example from the book begins on p. 164 and is credited to David McRaney's You Are Not So Smart blog. The experience was that of watching planes return from missions day after day in World War II with obvious bullet holes. There were a couple of natural reactions: reinforce the parts that had bullet holes, or simply repair them. But Abraham Wald, a statistician, suggested that the places without bullet holes were the ones to reinforce, since the planes hit there had not returned. He realized that taking into account only the returning planes missed the most serious failures.
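Wald's insight is easy to see in a toy simulation. The sketch below is purely illustrative (the area names, hit counts, and loss probability are my own assumptions, not from the book): planes take hits uniformly at random, planes hit in one critical area usually go down, and we then tally the holes only on the planes that come back.

```python
import random

random.seed(42)

AREAS = ["wings", "fuselage", "tail", "engine"]  # "engine" is the assumed critical area
P_DOWN_IF_ENGINE_HIT = 0.9                        # assumed loss probability for a critical hit

returned_hits = {a: 0 for a in AREAS}
lost = 0

for _ in range(10_000):
    hits = [random.choice(AREAS) for _ in range(3)]  # 3 hits per plane, uniform over areas
    if "engine" in hits and random.random() < P_DOWN_IF_ENGINE_HIT:
        lost += 1
        continue  # downed planes are never observed back at base
    for a in hits:
        returned_hits[a] += 1

# Among returning planes, engine hits look rare -- not because the engine
# is seldom hit, but because engine hits rarely make it home.
print(returned_hits, "lost:", lost)
```

The tally over returning planes shows few engine holes, even though the engine was hit as often as anywhere else, which is exactly the selection effect Wald corrected for.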
Soyer and Hogarth in their normal straightforward manner succinctly provide their conclusions on the last three pages of the book.
- In less than kind learning environments, we cannot rely only on readily available experience. We need to look beyond personal observations and gather a diversity of counterfactuals and insights. We cannot base decisions only on outcomes: processes, contexts, and degrees of randomness matter. And even when looking at outcomes, we cannot look only at failures or only at successes if we are to get it right.
- We must not focus on irrelevant or inappropriate details in our experience. If experience leads to a simple causal story, it is more likely to be reliable. If experience is focused on a subset of issues, it is less likely to endure as helpful. Finally, if experiences involve alluring comforts, emotions, options, and/or games, we may be swayed from our personal morality or objectives.
- Experience can lead to flawed convictions. To avoid that we need to ask questions about the learning environment including:
- How has our experience been filtered or distorted?
- Do we believe the future will resemble the past for this type of experience?
- Does expertise in this type of decision usually rely on experience and intuition, or on more formal methods of learning?
- What is missing or irrelevant in our experience?
Then we need to ask questions about the decision maker:
- Are you really convinced that experience taught you a certain lesson?
- Was it only personal experience? That makes it less reliable as a teacher.
- Do you think it taught you a lesson because it makes you correct about past decisions?
- Is your statistical literacy good enough to be certain about it?
One idea that is not discussed specifically in this book, but that the authors have developed elsewhere, is linearity versus nonlinearity. It seems that the more linear a learning environment is, the kinder it also is. (See the posts Nonlinear and Nonlinear Ecology.) Even in environments that are not strictly nonlinear, but where decisions are broken into discrete steps whose order is uncertain, we tend to fail. That is one reason the idea of relative risk can be so deceiving: doubling relative risk is not that important if it changes the chance of failure from 1 in 1000 to 2 in 1000.
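The relative-risk point reduces to simple arithmetic, which a few lines make concrete (the numbers are the ones from the example above, nothing more):

```python
# A "doubled risk" can be a tiny absolute change when the base rate is small.
baseline = 1 / 1000   # 0.1% chance of failure
doubled = 2 / 1000    # the "doubled" risk

relative_risk = doubled / baseline      # sounds alarming: 2.0x
absolute_increase = doubled - baseline  # in fact: +0.1 percentage points

print(f"relative risk: {relative_risk:.1f}x")
print(f"absolute increase: {absolute_increase:.3%}")
```

A headline reporting "risk doubled" and one reporting "risk rose by 0.1 percentage points" describe the same facts; only the second conveys how little the odds of failure actually moved.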