The Devil’s Triangle: Fear, Uncertainty, Control

The great goal of Deciding Better is to escape the trap of fear, uncertainty and control.

Decision making is hard when the outcome is uncertain. What’s so bad about a little doubt? Joined to uncertainty are two interacting factors: Fear and Control.

Uncertainty provokes anxiety. When we don’t know how things will turn out, the emotion of fear comes into play, getting us ready for fight or flight. This is anxiety, a neurochemically induced cognitive state. It’s a deep-seated brain mechanism with great adaptive utility. A little fear can be a very good thing at the right time.

The problem is that we experience this fear constantly. Then we call it stress and anxiety. Our big brains help us see how uncertain the world really is. I talked about it the other day in a discussion of what makes decisions hard. Decisions aren’t only hard, they provoke fear because of the associated uncertainty.

So what’s scary about uncertainty? Ultimately, it’s having to face a loss of control. When we’re masters of our environment and in control, we know what to expect. Lose that certainty and we lose control, causing anxiety and stress.

Deciding better must include embracing uncertainty without engaging the other two sides of this triangle of fear and control. At least not any more than necessary.

The more we understand about the world and its complexity, the more profound our appreciation of how unpredictable the world really is. We are never really in control of outcomes and we are truly powerless to bend the world to our will. We can powerfully influence the world through our actions, but we can’t control anything other than how we choose to act in the moment.

I believe this is at the core of why what Stephen Covey called the world’s “wisdom literature” emphasizes humility and releasing the illusion that we’re in control of the future. At the same time, Covey started with his First Habit, “Be Proactive” as a step in controlling ourselves rather than controlling the world.

Emergent Behavior of Links and Clicks

One of the most interesting chapters in Mark Bernstein’s The Tinderbox Way is on links, both in Tinderbox and on the Internet. Mark provides a personal and historical overview of approaches and attitudes toward linking, beginning with the early days of hypertext and leading up to our current environment.

Linking evolved, guided by the users of the net, in a way suited to navigation within and between sites for readers. It has since adapted and grown to enable search advertising and social networking systems.

What’s interesting to me is how difficult it is to show the utility of linking in a Tinderbox document. One ends up pretty quickly with a spaghetti plot of links between boxes. Mark provides some illustrations that look interesting but don’t seem to mean much at all as a map. There’s actually a site that collects these pretty network pictures: Visual Complexity.

As I read Mark’s discussion, I was struck by the similarity between these links and the interconnections of metabolic pathways within a cell or the interconnections between neurons. Mapped, we see spaghetti. But there is an emergent behavior from the network that only arises from the functioning of those interactions. On the web perhaps these are communities of shared interest.

We need a large amount of computational power to visualize the emergent network. It’s easier if it’s geographical:

Via GigaOm:

If there’s one thing you get when you have close to 600 million users the way Facebook does, it’s a lot of data about how they are all connected — and when you plot those inter-relationships based on location, as one of the company’s engineers found, you get a world map made up of social connections.

We’re used to seeing maps as geographical metaphor. Maps of meaning are not well developed as mental models. I submit that Google’s algorithms for advertising and ranking are providing semantic functions that are such maps. The actual movement of people through the network as measured by following user clicks across sites is another even more important map. The data is massive and difficult to display simply, but the emergent behavior can be detected and used.

Revisiting Searle’s Chinese Room

This is a retraction. I no longer think that John Searle’s Chinese Room is trivial. It is a powerful demonstration of the failure of materialism to provide an adequate explanation for consciousness.

The Chinese Room is Searle’s most famous argument against materialism. He asks us to imagine that we are in a sealed room, communicating by text with the outside. We have a manual that allows us to respond to questions in Chinese even though we have no knowledge of the language. If asked in English, we respond in the usual way.

Thus, we’d be answering in English or Chinese appropriately. The outside observer can’t distinguish how we’re coming up with replies. But inside, the two are totally different. One is done mechanically, by rote; the other is done with awareness and thought. This is analogous to the observation of a person, obviously. Is there a mind responding, or just mechanical response without consciousness?

Materialism says that only the physical exists. But such a view cannot account for the difference between responses by someone who understands and mechanical responses. This seemingly most scientific and rational approach fails to admit the simple fact: we know that there is such a thing as awareness and consciousness because we experience it constantly. Any theory of mind that fails to account for it is incomplete.

Dualism accounts for consciousness, but in its separation of mind from material, it loses all of its explanatory power and becomes unacceptable.

Here’s what I wrote in the comments to Aaron Swartz’s description of the argument:

Searle’s Chinese Room experiment is a trivial misdirection. He focuses on the man in the room matching symbols rather than the creator of the semantic and syntactic translation rules. That designer was conscious. The man in the room is working unconsciously. When I speak my mouth and vocal cords do the translation from nerve impulses to sound patterns but it is entirely unconscious. You have to follow the trail back into the brain where you get lost because consciousness is an emergent property of the neural networks, not a property of the machinery at all.

posted by James Vornov on March 15, 2007 #

I don’t actually remember whether I wrote that before or after I read Searle’s The Rediscovery of the Mind, but at some point I did come to agree with him. The simple way out of the problem is to admit that mind does indeed exist. As evidenced by my comment, I had already decided that mind was real and that it was emergent from brain activity. Interestingly, using different terminology, I think that Searle points out the same irreducibility in the later book, The Mind.

Why Enrichment Designs Don’t Work in Clinical Trials

Last week I was discussing a clinical trial design with colleagues. This particular trial used an enrichment design. A few years ago I did some simulation work to show that you can’t pick patients to enroll in a clinical trial in order to improve the results.

People are probabilistic too.

The idea of an enrichment design is to winnow the overall patient group down to those individuals who are likely to respond to therapy. One way is to give all of the candidates a placebo and eliminate placebo responders. Another strategy is to give a test dose of drug and keep only those who respond. Either way, the patients who pass the screening test go on to a double-blind test of active drug versus placebo.

Sounds like a great idea, but it doesn’t really work most of the time in practice. While this screening is meant to select better patients, it turns out that it mostly just excludes patients whose complaints vary over time. You can’t really tell who is going to be a better patient from the screening test. It turns out that most patients look different at one time point compared to any other.

The mistake that we make is in thinking that people can be categorized by simple inspection. We think of patients as responders or non-responders, an intrinsic characteristic they have or don’t have. Trying to screen out patients we don’t want falls into the trap of thinking that a single set of tests can successfully discriminate between classes.

The way I think of it is that we need relatively large clinical trials to prove the value of a modestly effective drug. So it seems odd to think that one could easily categorize individual patients with a single test. You can see this by looking at how well a test dose of drug, used to find drug responders, would be able to enrich a patient population. Variability over time makes this nearly impossible.

Let’s walk through an example: an imaginary trial of a drug to treat migraine attacks.

Let’s say we know the truth and this candidate is in reality a pretty good treatment for a migraine attack. But the patient varies in headache severity and responsiveness to treatment.

Some headaches are mild and will resolve without treatment. That mild attack will act no differently whether the active drug or placebo was administered. Some headaches are very bad and even a really effective drug might not touch that kind of headache. So again the attack will be the same whether placebo or treatment is given.

And what about the headaches that are in between and could respond? Well, if a drug worked half the time, then out of every two of those attacks, the active drug would show an effect where the placebo did not. The other half of the time, it would look just like placebo again.

Add up these cases; there are four of them. For only one attack did the active drug work where the placebo would fail. One out of four times, a 25% overall response rate. All just because, in the same patient, the headache and its response to drug change over time. So if I gave a test treatment to see if I had a responder, I would eliminate half of the responders, because they either had the headache that was too severe to respond or the one that happened not to respond that time.

Of course you’d eliminate some of the non-responders. But we know that even non-responders may have one in four headaches that are mild enough not to need the treatment anyway, and those would pass the screen. So you eliminate 75% of the non-responders with a test dose, which is better than the 50% of responders that were eliminated. You’ve done better. How much better depends on the ratio of responders to non-responders in the population, a ratio that is completely unknown.

What’s nice is that while you can see the logic by reading the story I’ve told, a mental simulation, one can also create an explicit mathematical model of the clinical trial and simulate running it hundreds of times. It turns out that there are very few conditions where this kind of enrichment really works. It turns out it’s simpler, and just as informative, to see whether or not the drug is effective in the overall population without trying to prejudge who is a responder with a test dose.
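The four-case story above can be checked with a small Monte Carlo sketch. This is my own illustrative model, not the original simulation work; the function names (`attack_resolves`, `screen_pass_rate`, `enriched_prevalence`) are hypothetical labels for this sketch, which assumes the four equally likely attack types described in the example.

```python
import random

random.seed(0)

# The four equally likely attack types from the example above.
ATTACKS = ["mild", "severe", "moderate_responsive", "moderate_refractory"]

def attack_resolves(is_responder, on_drug):
    """Does a single attack resolve? Mild attacks resolve on their own,
    severe attacks respond to nothing, and the responsive moderate attack
    resolves only when a true responder gets active drug."""
    kind = random.choice(ATTACKS)
    if kind == "mild":
        return True
    if kind == "moderate_responsive":
        return is_responder and on_drug
    return False  # severe, or the refractory moderate attack

def screen_pass_rate(is_responder, n=100_000):
    """Fraction of patients who would pass a single test-dose screen."""
    return sum(attack_resolves(is_responder, True) for _ in range(n)) / n

def enriched_prevalence(p):
    """Responder fraction after screening, given true prevalence p.
    Responders pass the screen half the time (a mild or responsive attack);
    non-responders pass a quarter of the time (a mild attack only)."""
    return 0.5 * p / (0.5 * p + 0.25 * (1 - p))

print(screen_pass_rate(True))    # ~0.5: half of true responders discarded
print(screen_pass_rate(False))   # ~0.25: 75% of non-responders discarded
print(enriched_prevalence(0.3))  # ~0.46: modest enrichment from 30%
```

With a 30% responder prevalence, for example, screening raises the enriched fraction only to about 46%, while half of the true responders have been discarded before randomization — exactly the trade-off described above.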

The irony? This is exactly the opposite of clinical practice. In the real clinic, each patient is their own individual clinical trial, an “N of 1” as we say. N is the symbol for the number in a population; an individual is a population of one. N of 1. We treat the patient and over time judge whether or not they respond in a personal clinical trial — not to see whether the drug works but whether the patient is a responder. If they don’t respond, therapy is adjusted or changed. But in our migraine example, multiple headaches of various intensities would have to be treated to see the benefit.

Perhaps variability across a population is easily grasped. People are tall or short, have dark or light hair. Variability within an individual over time is perhaps more subtle, but just as important.

Trust is Simplifying

The outrage directed toward the TSA reflects a breakdown in trust.

With terrorists trying to bring down planes, we don’t trust our fellow passengers. Every fresh attempt, even when not successful, lowers that trust even further. The government and its TSA become the vehicle to demonstrate that lack of trust. As trust declines, surveillance increases. In a decade it’s gone from identity and magnetometer checks to direct body searches, either by technology or by direct physical contact.

As discussed in the NYT today, there’s also a lack of trust between the government and the citizenry. We feel angry that government is being so intrusive, and body searches seem to cross a personal limit for us. And the TSA doesn’t trust us to just go along and let them do their job.

The loss of trust in air travel creates hassle and uncertainty. Everything being carried onto a plane must be checked. Every person must be checked. No one is trusted in this system. Calls for more targeted surveillance are really calls for more trust of at least some individuals. After all, I know they can trust me. It’s those suspicious-looking young men I’m worried about. That would remove lots of hassle. Actually, all of my hassle, if they would trust me somehow.

Trust is a great simplifying principle. I trust my bank to keep my accounts private and secure. I trust other drivers on the road to stay in their lanes. As trust goes down, complexity goes way up. I have to worry about more and more because so much more could go wrong in so many unexpected ways.

I was introduced to the importance of trust by Francis Fukuyama’s book Trust. In it he looks across different cultures and describes the structure of trust in each one and how it affects politics, economics and quality of life. Not surprisingly, the higher the level of trust, the better off people are. And one of his theses is that the U.S., with its frontier-driven communitarianism, is one of the highest-trust societies in the world.

Most simply, trust transforms an uncertain, potentially hazardous environment into a safe, reliable, socially driven model. It’s such a powerful simplifying principle that the desire to cooperate in a fair way is a deeply felt human quality, wired into our brains it seems.

Since I’m currently exploring ideas about extended cognition, let’s turn the view 180 degrees. Usually we think of trusting the external environment, looking for predictability. I think there’s an important aspect of self-trust that contributes to simplicity. If I can rely on myself to remember how to do something complex, I approach it with confidence.

That sense of mastery and self-confidence dispels fear just as trust in the world does.

On Packing Better

There is a difference between reducing complexity by deciding better and just artificially reducing choice through enforced “simplicity”. It is better, from a decision theory point of view, to have three shirts to choose from than to own only one shirt and lack choice.

With choice comes the chance for a better outcome. But don’t make the mistake of preserving choice instead of making choices.

I always think of packing as a great example of this. Better to decide well what to pack and travel light than to postpone choice and drag around too much for just-in-case scenarios. I see the same thing in project planning. There are situations that call for robust plans with low failure probability and times for fast, flexible plans that may need a trip back to the drawing board.

In the spirit of minimalism, I support the use of folios.

OTC Recommends: The Leather Document Folio | Off the Cuff:

True, folios have limited space and can never really compete with the functionality of a messenger bag or roomy elegance of a soft sided brief bag. You always have to hold it, or tuck it under your arm, and often there is no outside slash pocket for a paper or metro pass. But such limitations are to me a big part of their charm.

By necessity I am forced to shed most of the stuff I habitually carry around but never really use. It is simplification by requirement.

Part of the charm of the folio is enforcing the discipline to decide better. When appropriate.

The Challenge of the Blank Sheet of Paper

A clean sheet of paper.

The open road. A new programming language.

All examples of limitless possibilities. And where decisions can’t be made because alternatives are not refined.

Here even values don’t help, because there is simultaneously everything to choose from but nothing to do.

Create a plan? Doodle and wait for direction from within or without?

The first principle of Deciding Better is to decide to decide. We’re making decisions all the time, whether we are aware of the choices or not. In order to decide better, it’s critical to become conscious of our decisions. And we know that decisions can only be made in the present. A choice is an action, and actions by definition are events in the “now”. You can’t do anything in the past or the future and, by extension, it’s impossible to decide to do something in the future. It’s impossible to change a decision made in the past as well, of course.

The blank sheet of paper challenges this approach. How can decisions be made when there are no choices on offer? A blank sheet of paper provides no list of alternatives. Decision Theory suggests that the proper procedure is to brainstorm to create a list of all possible alternatives and then use some value weighting system to choose the best of the alternatives. Am I supposed to list all of the possible things to write? Fiction, non-fiction, lists, drawings . . .  Drawings of what? Fish, birds, building, people, microbes, maps . . .

Way too many possibilities to enumerate. More buckets than I have at my disposal.

It’s well known that too many choices can be as much of a problem as too few. In fact, we feel most comfortable when there is no choice at all. But at least give me clear alternatives. This, I believe, is part of what’s behind the flight to simplicity we see these days. As a response to excess, we reject complexity altogether. Just simplify my life. Make it easy for me. Clear alternatives that represent real values.

Faced with the blank sheet of paper, I believe the right place to look is in the opposite direction: not at the paper but into the viewer of the paper. Look within. The blank paper, the tool on the bench or the computer language has nothing to offer except the possibility of action. It is the actor, not the tool, that needs simplification.

There has to be some model that is inside us that provides the list of possible actions to take with that blank sheet of paper. This is a reduction of complexity within ourselves which in principle is no different from reducing complexity in any other domain of making decisions, creating simplified models.

Extended Cognition

Wouldn’t it be nice to extend your brain with technology? Improved memory? More acute vision? The ability to see distant places without moving?

But don’t we actually do these things every day with our available technology?

The Path From Apple’s Newton to Evernote

The basic idea was really simple. We figured that no one is really fully satisfied with our normal brains, with our normal memory. Everyone wants a better brain. And a few years ago, it looked like technology was finally at a point where it would be viable to try to build a service to be your secondary brain – your external brain.

This is in some sense what Andy Clark means by Extended Cognition.

We view ourselves as a mind limited within a body. Subjectively, we generally feel like we’re located behind our eyes, between our ears. This is the self that perceives with the senses and controls the motor apparatus.

There are several hard questions about this sense of consciousness. What exactly is it? Where is it located? Does it really exist in a physical sense or is it just an illusion, a byproduct of a complex functioning brain? Is it unique to brains or could a computer possess it? Animals? Is it dependent on language?

In 1998, Andy Clark and David Chalmers proposed what I think turns out to be a new and useful perspective. Instead of the disembodied mind of Descartes’ dualism or the embodied mind of Lakoff’s neurobiological conception, they place the mind across both the brain and its extended environment. It had seemed to me over the last few years that there had to be some reality to the conceptual world. I think this was Plato’s intuition as well, but he didn’t have a good metaphor for understanding why, for example, mathematics is real. The embodied mind exists in a world where math works, so the metaphor of math is a useful mental model in the brain. But on reflection, it seems that these metaphors have a fuzzy boundary and aren’t purely interior. When I read and become absorbed in the text, or listen to music and see the patterns of sound, I lose my sense of being located in my head. Flow, according to Csikszentmihalyi, is that sense of immersion when the boundary of self dissolves.

Extending the location of consciousness, the mind, to include objects outside of the borders of the body leads to some interesting ways to look at clarifying values and making decisions. In essence, once the borders of in here and out there are made less absolute, then it becomes easier to understand how abstractions and concepts can be influential in the real world.

And it blurs the line between self and object, whether computer or notebook; between self and other people and organizations. A broader sense of identity.