What really makes us smart is not our ability to pull facts from documents or decipher statistical patterns in arrays of data. It’s our ability to make sense of things, to weave the knowledge we draw from observation and experience, from living, into a rich and fluid understanding of the world that we can then apply to any task or challenge.
Nicholas Carr, The Glass Cage: Automation and Us
What is this human intelligence that enables man to make complex decisions based on intuition and feelings, without any formal consideration of options, probabilities, or the relative desirability of outcomes? Are we incredibly sophisticated, rule-based decision-making machines? Is it even possible to get better at decision making, or has evolution already done all of the work?
These fundamental questions continue to drive my interest in decision making. It’s a practical matter for me, since drug development requires making plans, spending money, and making innumerable decisions on the path of turning a chemical into a new medicine for clinical use, all with near-total ignorance of the chances of success.
To further On Deciding, Better, I’m delving into the origins of decision theory for the first time. I learned the concepts of making decisions under conditions of uncertainty from consultants and practitioners in the field. The tools of decision trees and simulation were presented as simple consequences of statistical principles. I worked through a basic textbook, Robert Clemen’s Making Hard Decisions: An Introduction to Decision Analysis, and began writing about making decisions here at ODB.
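To see what those tools reduce to, here is a minimal sketch of a decision tree as it was presented to me: each option is just a set of probability-weighted outcomes, and the recommended choice is the one that maximizes expected value. The options, probabilities, and payoffs below are invented purely for illustration.

```python
# A decision tree reduced to its statistical core. Every option is a list
# of (probability, payoff) branches; the "rational" choice is the one with
# the highest expected value. All numbers here are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical choice for a drug candidate: advance it or license it out.
options = {
    "advance": [(0.1, 500.0), (0.9, -50.0)],  # small chance of a big win
    "license": [(1.0, 20.0)],                 # certain, modest payoff
}

for name, outcomes in options.items():
    print(f"{name}: EV = {expected_value(outcomes):.1f}")
# advance: EV = 5.0, license: EV = 20.0 -> the tree says license
```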
After becoming expert, I realized that no one outside the world of management consulting and corporate strategy offices really used these tools. Eventually I came to a fundamental reimagining of decision making based on belief models and my training in neuroscience. The tools of decision analysis, mathematical models, and simulation seemed best characterized as tools to augment human imagination and understanding. They were useful fictions to guide thinking, merely simple representations of the complex world.
Working through the subject more deeply now, I realize that similar lines of argument run throughout the original works in the field. In Statistical Rethinking, the book I’m working through to learn Bayesian statistical methods, Richard McElreath provides a wonderful introduction to the Bayesian interpretation of probability. Decision theory is predicated on a specific interpretation of probability: probability as the subjective likelihood of a particular future event, whether it’s the outcome of a coin flip or a political party’s choice of nominee. These things happen only once, so probability is prediction.
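In this view, a probability is a belief you update as evidence arrives. McElreath teaches the mechanics with grid approximation; his code is in R, so the sketch below is my rough Python translation of his globe-tossing idea, with 6 successes observed in 9 trials.

```python
import numpy as np

# Grid approximation of a Bayesian posterior. Start with a prior belief
# over a parameter p, weight it by the likelihood of the observed data,
# and normalize. The result is an updated subjective prediction.

p_grid = np.linspace(0, 1, 1000)          # candidate values of p
prior = np.ones_like(p_grid)              # flat prior: every p equally plausible
likelihood = p_grid**6 * (1 - p_grid)**3  # 6 successes in 9 trials (up to a constant)
posterior = prior * likelihood
posterior /= posterior.sum()              # beliefs must sum to 1

# The posterior mean is the updated prediction for the next trial.
print("posterior mean:", (p_grid * posterior).sum())  # ~0.64
```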
There has been a decades-long discussion about whether the probability of heads in a coin flip and the probability of a complex event in the real world can really be the same kind of probability. McElreath presents the formulation of Leonard Jimmie Savage, one of the foundational thinkers in the field. Savage proposed that there is a difference between the “small world” of the coin flip, which can be accurately reduced to a simple mathematical model, and the “large world”, where simple models don’t necessarily hold.
Where does the subjective judgement of probability made by the human brain fit into this scheme? The brain itself is clearly an unpredictable large world, but it can contain within it small world models, both explicit and implicit, used for decision making. The brain can easily imagine the mechanics of the Bernoulli distribution. But the brain can also contain the mental model of a neurological disease and the potential effect of a new medicine. Perhaps machines and other algorithmic, mathematical “small world” systems can never match the “large world” human decision-making brain. If I had to guess why, I’d say it’s because the brain itself is part of the even larger real world.
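The Bernoulli small world really is that easy to hold in the mind, as a sketch makes plain; the coin’s bias below is invented for illustration, and nothing comparably complete exists for a large-world question like whether a drug candidate will succeed.

```python
import numpy as np

rng = np.random.default_rng(42)

# The Bernoulli "small world": a single parameter p captures everything
# there is to know about the process. The value of p is hypothetical.
p = 0.5
flips = rng.random(100_000) < p  # simulate 100,000 coin flips
print("observed frequency of heads:", flips.mean())  # ~0.5

# No such one-parameter model exhausts a "large world" event; any p we
# assign there is a useful fiction, not a property of the process.
```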