The Devil’s Triangle: Fear, Uncertainty, Control

The great goal of Deciding Better is to escape the trap of fear, uncertainty and control.

Decision making is hard when the outcome is uncertain. What’s so bad about a little doubt? Joined to uncertainty are two interacting factors: Fear and Control.

Uncertainty provokes anxiety. When we don’t know how things will turn out, fear comes into play, readying us for fight or flight. This is anxiety, a neurochemically induced cognitive state. It’s a deep-seated brain mechanism with great adaptive utility. A little fear can be a very good thing at the right time.

The problem is that we experience this fear constantly; then we call it stress and anxiety. Our big brains help us see how uncertain the world really is. I talked about this the other day in a discussion of what makes decisions hard. Decisions aren’t only hard; they provoke fear because of the associated uncertainty.

So what’s scary about uncertainty? Ultimately, it’s having to face a loss of control. When we’re masters of our environment and in control, we know what to expect. Lose that certainty and we lose control, causing anxiety and stress.

Deciding better must include embracing uncertainty without engaging the other two sides of this triangle of fear and control. At least not any more than necessary.

The more we understand about the world and its complexity, the more profound our appreciation of how unpredictable the world really is. We are never really in control of outcomes and we are truly powerless to bend the world to our will. We can powerfully influence the world through our actions, but we can’t control anything other than how we choose to act in the moment.

I believe this is at the core of why what Stephen Covey called the world’s “wisdom literature” emphasizes humility and releasing the illusion that we’re in control of the future. At the same time, Covey started with his First Habit, “Be Proactive,” as a step toward controlling ourselves rather than controlling the world.

Emergent Behavior of Links and Clicks

One of the most interesting chapters in Mark Bernstein’s The Tinderbox Way is on links, both in Tinderbox and on the Internet. Mark provides a personal and historical overview of the approaches and attitudes toward linking, beginning with the early days of hypertext and leading up to our current environment.

Linking evolved, guided by the users of the net, into a form suitable for navigation within and between sites for readers. It has since adapted and grown to enable search advertising and social networking systems.

What’s interesting to me is how difficult it is to show the utility of linking in a Tinderbox document. One ends up pretty quickly with a spaghetti plot of links between boxes. Mark provides some illustrations that look interesting but don’t seem to mean much at all as a map. There’s actually a site that collects these pretty network pictures: Visual Complexity.

As I read Mark’s discussion, I was struck by the similarity between these links and the interconnections of metabolic pathways within a cell, or the interconnections between neurons. Mapped, we see spaghetti. But there is an emergent behavior of the network that arises only from the functioning of those interactions. On the web, perhaps, these are communities of shared interest.
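
The spaghetti isn’t meaningless, though; its structure is just invisible to the eye. As a minimal sketch (the toy link graph is invented for illustration), community detection with the networkx library can recover groupings, rough analogues of communities of shared interest, directly from the tangle:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy link graph: pages as nodes, links as edges, with two densely
# linked clusters joined by a single bridge.
edges = [
    ("a1", "a2"), ("a1", "a3"), ("a2", "a3"), ("a2", "a4"), ("a3", "a4"),
    ("b1", "b2"), ("b1", "b3"), ("b2", "b3"), ("b2", "b4"), ("b3", "b4"),
    ("a4", "b1"),  # the bridge between the two clusters
]
G = nx.Graph(edges)

# Drawn as nodes and edges this is already spaghetti, but modularity-based
# community detection recovers the hidden grouping from connectivity alone.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```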

We need a large amount of computational power to visualize the emergent network. It’s easier if it’s geographical:

Via GigaOm:

If there’s one thing you get when you have close to 600 million users the way Facebook does, it’s a lot of data about how they are all connected — and when you plot those inter-relationships based on location, as one of the company’s engineers found, you get a world map made up of social connections.

We’re used to seeing maps as geographical metaphor. Maps of meaning are not well developed as mental models. I submit that Google’s algorithms for advertising and ranking are providing semantic functions that are such maps. The actual movement of people through the network, as measured by following user clicks across sites, is another, even more important map. The data is massive and difficult to display simply, but the emergent behavior can be detected and used.

What makes decisions hard?

Let’s start out with the simplest possible definition of a decision: in a situation with multiple possible courses of action, the choice is the behavior performed.

Animals do things that are remarkably purposeful and directed even with simple nervous systems. I think especially of animals that alter their environment to suit their own purposes. People build buildings, birds build nests, and ants, bees, and termites cooperate to build large and complex communal nests.

It’s not hard for ants and birds to choose how to build these structures. They seem to do it based on internal rules that are pre-built into the nervous system. I imagine there must be good and bad places to build an anthill, but the colony isn’t particularly bothered by the decision. They just get to work as a group, without a blueprint. There is uncertainty about the final quality of the structure, but it doesn’t make deciding hard.

An architect has much harder decisions to make in choosing where to build a house, what kind of house to build, and how to build it. For some of these decisions, the number of potential pathways is large, but not all of the options are actually available. The home buyer wants a colonial, not a modern house, so the range of potential choices is immediately restricted. Alignment of structures along north-south lines has well-established rules that limit choice further. There are building codes that force choices.

The limitations on house building arise from bias and established practice: knowledge of what will happen depending on the choices made. It’s crystal clear that only layouts for colonials will yield colonial houses. You’ll never end up with a modernist cube.

But there are tons of hard decisions here as well. A single- or two-zone heating system? Well, there are differences in cost and potential comfort. The cost is clear, but the benefits are much more uncertain. How will the areas of the house be used? Maybe three zones are really needed. Should the floor plan be modified for energy efficiency? Maybe a heat pump for some areas and separate systems for others? And gas or electric? Geothermal?

Picking just one detail, we can wander out into a decision space where nothing is clear. Trading off cost and value is subjective, and ultimate benefit is hard to predict. Now start looking at the interactions of this one decision with all of the others that need to be made, and deciding gets even harder. The number of windows and the choice of insulation type create structural decisions of their own.
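
To make the explosion concrete, here’s a toy enumeration (the decisions and options are invented for illustration) of how a handful of independent choices multiplies into a decision space:

```python
from itertools import product

# Hypothetical option lists for a few house-building decisions.
choices = {
    "heating_zones": [1, 2, 3],
    "heat_source": ["gas", "electric", "heat pump"],
    "window_count": [8, 12, 16],
    "insulation": ["fiberglass", "cellulose", "spray foam"],
    "floor_plan": ["standard", "energy optimized"],
}

combinations = list(product(*choices.values()))
print(len(combinations))  # 3 * 3 * 3 * 3 * 2 = 162 distinct designs
```

And that is with only five decisions treated as independent; every added decision multiplies the space again, and interactions between choices make the terrain even harder to survey.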

Decision making is hard because the choices are complex, and the results of particular choices are uncertain and may have unintended consequences later on that we never even thought about.

Decision making is hard for us, compared to ants and birds, because of our ability to contemplate the complexity and imagine a future we can’t control.

Revisiting Searle’s Chinese Room

This is a retraction. I no longer think that John Searle’s Chinese Room is trivial. It is a powerful demonstration of the failure of materialism to provide an adequate explanation for consciousness.

The Chinese Room is Searle’s most famous argument against materialism. He asks us to imagine that we are in a sealed room, communicating by text with the outside. We have a manual that allows us to respond to questions in Chinese even though we have no knowledge of the language. Or if asked in English we respond in the usual way.

Thus, we’d be answering in English or Chinese appropriately. The outside observer can’t distinguish how we’re coming up with the replies. But inside, the two are totally different: one is done mechanically, by rote; the other with awareness and thought. This is analogous to observing a person, obviously. Is there a mind responding, or just a mechanical response without consciousness?

Materialism says that only the physical exists. But such a view cannot account for the difference between a response by someone who understands and a mechanical response. This seemingly most scientific and rational approach fails to admit a simple fact: we know that there is such a thing as awareness and consciousness because we experience it constantly. Any theory of mind that fails to account for it is incomplete.

Dualism accounts for consciousness, but in its separation of mind from material, it loses all of its explanatory power and becomes unacceptable.

Here’s what I wrote in the comments to Aaron Swartz’s description of the argument:

Searle’s Chinese Room experiment is a trivial misdirection. He focuses on the man in the room matching symbols rather than the creator of the semantic and syntactic translation rules. That designer was conscious. The man in the room is working unconsciously. When I speak my mouth and vocal cords do the translation from nerve impulses to sound patterns but it is entirely unconscious. You have to follow the trail back into the brain where you get lost because consciousness is an emergent property of the neural networks, not a property of the machinery at all.

posted by James Vornov on March 15, 2007 #

I don’t actually remember whether I wrote that before or after I read Searle’s The Rediscovery of the Mind, but at some point I did come to agree with him. The simple way out of the problem is to admit that mind does indeed exist. As evidenced by my comment, I had already decided that mind was real and emergent from brain activity. Interestingly, using different terminology, I think Searle points out the same irreducibility in his later book, Mind: A Brief Introduction.

Clipping Curves

Leaf with reflection, originally uploaded by jjvornov.

My primary photographic mentor is Vincent Versace. His book “Welcome to Oz” is a relatively short book, written in an unusual style that is more workshop than manual, but it is full of techniques that permit manipulation of light within photographs.

I’ve had many influences over the years both as models of how to pursue this art and as technical inspirations. Vince is pretty accessible through his Flickr group in particular.

I’ve been struggling to suppress light in photographs. His advice has been to clip the light end of the curve in Photoshop. And he’s right: it achieves the goal of lowering contrast and killing the brightest highlights. In this image I brushed back the darkness over the central leaf to have it emerge from the darker background.
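
In digital terms the move itself is simple, even if using it well isn’t. A minimal sketch, assuming an image array normalized to the 0 to 1 range (the white-point value is my arbitrary example, not Vince’s prescription):

```python
import numpy as np

def clip_highlight_end(img: np.ndarray, out_white: float = 0.85) -> np.ndarray:
    # Pull the top of the tone curve down to out_white: no pixel can
    # reach pure white, so the brightest highlights are suppressed and
    # the overall contrast range is compressed.
    return np.clip(img, 0.0, 1.0) * out_white
```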

Thanks Vince.

Why Enrichment Designs Don’t Work in Clinical Trials

Last week I was discussing a clinical trial design with colleagues. This particular trial used an enrichment design. A few years ago I did some simulation work to show that you can’t pick patients to enroll in a clinical trial in order to improve the results.

People are probabilistic too.

The idea of an enrichment design is to winnow the overall patient group down to those individuals who are likely to respond to therapy. One way is to give all of the candidates a placebo and eliminate the placebo responders. Another strategy is to give a test dose of drug and keep only those who respond. Either way, the patients who pass the screening test go on to a double-blind test of active drug versus placebo.

Sounds like a great idea, but most of the time it doesn’t really work in practice. Screening mostly just excludes patients whose complaints vary over time. You can’t really tell during the screening test who is going to be a better patient, because most patients look different at one time point compared to any other.

The mistake that we make is in thinking that people can be categorized by simple inspection. We think of patients as responders or non-responders, an intrinsic characteristic they have or don’t have. Trying to screen out patients we don’t want falls into the trap of thinking that a single set of tests can successfully discriminate between classes.

The way I think of it: we need relatively large clinical trials to prove the value of a modestly effective drug, so it seems odd to think that individual patients could be easily categorized by a single test. You can see this by looking at how well a test dose of drug, used to find drug responders, would be able to enrich a patient population. Variability over time makes this impossible.

Let’s walk through an example: an imaginary trial of a drug to treat migraine attacks.

Let’s say we know the truth, and this candidate is in reality a pretty good treatment for a migraine attack. But patients vary in headache severity and responsiveness to treatment.

Some headaches are mild and will resolve without treatment; that mild attack will turn out no differently whether active drug or placebo is administered. Some headaches are very bad, and even a really effective drug might not touch that kind of headache, so again the attack will be the same whether placebo or treatment is given.

And what about the headaches that are in between and could respond? Well, if the drug worked half the time, then one out of every two of those attacks would show an effect on active drug where placebo would fail. The other half of the time, it would look just like placebo again.

Add up these cases; there are four of them: one mild attack, one severe attack, and two in between. For only one of the four does the active drug work where placebo would fail. One out of four, a 25% drug-attributable response rate, all just because within the same patient the headache and its response to drug change over time. So if I gave a test treatment to see whether I had a responder, I would eliminate half of the true responders, because either they had the attack too severe to respond or the in-between attack that happened not to respond that time.

Of course, you’d eliminate some of the non-responders too. But even non-responders have 1 in 4 headaches that are mild enough to resolve without any treatment, so a test dose eliminates only 75% of the non-responders, against the 50% of responders that were eliminated. You’ve done better, but how much better depends on the ratio of responders to non-responders in the population, a ratio that is completely unknown.
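
The arithmetic is worth writing down. A minimal sketch (the function and numbers are mine, taken straight from the story above, purely for illustration):

```python
def enriched_responder_fraction(p_responder: float) -> float:
    # True responders pass a single test dose half the time (a mild
    # attack resolves on its own, or an in-between attack responds);
    # non-responders pass a quarter of the time (a mild attack only).
    pass_resp = 0.50 * p_responder
    pass_nonresp = 0.25 * (1.0 - p_responder)
    return pass_resp / (pass_resp + pass_nonresp)

for p in (0.2, 0.5, 0.8):
    print(f"{p:.0%} responders before screening -> "
          f"{enriched_responder_fraction(p):.0%} after")
# 20% -> 33%, 50% -> 67%, 80% -> 89%: real but modest enrichment.
```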

What’s nice is that while you can see the logic by reading the story I’ve told, a mental simulation, one can also create an explicit mathematical model of the clinical trial and simulate running it hundreds of times. It turns out that there are very few conditions where this kind of enrichment really works. It’s simpler, and just as informative, to see whether or not the drug is effective in the overall population without trying to prejudge who is a responder with a test dose.
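
A minimal version of such a simulation might look like this. It’s my illustrative reconstruction of the toy migraine model above, not the original simulation work; all numbers and names are invented for the sketch:

```python
import random

P_MILD, P_SEVERE = 0.25, 0.25  # the remaining 50% are in-between attacks

def attack_resolves(is_responder: bool, on_drug: bool) -> bool:
    """One headache attack under the toy model from the story."""
    r = random.random()
    if r < P_MILD:
        return True             # mild: resolves no matter what
    if r < P_MILD + P_SEVERE:
        return False            # severe: nothing touches it
    # in-between: the drug works half the time, but only in true responders
    return on_drug and is_responder and random.random() < 0.5

def run_trial(n: int, p_responder: float, enrich: bool) -> float:
    """Drug-minus-placebo response rate for one simulated trial."""
    drug, placebo = [], []
    while len(drug) + len(placebo) < n:
        is_resp = random.random() < p_responder
        # Enrichment: a test dose of active drug; non-resolvers are screened out.
        if enrich and not attack_resolves(is_resp, on_drug=True):
            continue
        arm = drug if len(drug) <= len(placebo) else placebo
        arm.append(attack_resolves(is_resp, on_drug=(arm is drug)))
    return sum(drug) / len(drug) - sum(placebo) / len(placebo)

random.seed(1)
for enrich in (False, True):
    diffs = [run_trial(200, 0.5, enrich) for _ in range(500)]
    label = "enriched" if enrich else "unenriched"
    print(f"{label}: mean drug effect = {sum(diffs) / len(diffs):.3f}")
```

Under these assumptions the enriched trial shows only a modestly larger drug-placebo difference, and it pays for it by exposing extra patients to a test dose and discarding many of them.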

The irony? This is exactly the opposite of clinical practice. In the real clinic, each patient is their own individual clinical trial, an “N of 1” as we say. N is the symbol for the number in a population; an individual is a population of one. We treat the patient and, over time, judge whether or not they respond in a personal clinical trial, not to see whether the drug works but to see whether the patient is a responder. If they don’t respond, therapy is adjusted or changed. But in our migraine example, multiple headaches of various intensity would have to be treated to see the benefit.
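
How many attacks might that take? Here’s a back-of-the-envelope sketch of my own (not from the original post), reusing the toy rates above: a true responder’s treated attacks resolve 50% of the time, a non-responder’s 25%:

```python
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Call the patient a responder if more than 3/8 of treated attacks
# resolve, a threshold halfway between the 25% and 50% rates.
for n in (4, 8, 16, 32):
    cutoff = int(0.375 * n) + 1
    sensitivity = p_at_least(cutoff, n, 0.50)      # true responder flagged
    specificity = 1 - p_at_least(cutoff, n, 0.25)  # non-responder cleared
    print(f"{n:>2} attacks: sensitivity {sensitivity:.2f}, "
          f"specificity {specificity:.2f}")
```

Even after a few dozen treated attacks, the classification is still probabilistic. It’s the same within-patient variability that defeats the single test dose, just averaged down over more observations.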

Perhaps variability across a population is easily grasped: people are tall or short, have dark or light hair. Variability within an individual over time is more subtle, but just as important.

Topaz InFocus

In the Mud, originally uploaded by jjvornov.

InFocus is a Photoshop plugin that uses deconvolution to sharpen images by refocusing, as opposed to edge-based methods like unsharp masking. Back in my microscopy days, these methods were just starting to come into use, often with multiple focus planes for virtual confocal microscopy.
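
Topaz doesn’t publish InFocus’s algorithm, but the classic textbook deconvolution it resembles is Richardson-Lucy. A minimal sketch using scikit-image (the image and point-spread function are toy stand-ins; in older scikit-image versions the num_iter argument is named iterations):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

# Toy point-spread function: a small uniform blur kernel.
psf = np.ones((5, 5)) / 25

# Stand-in for a grayscale photo, normalized to the 0..1 range.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
blurred = convolve2d(image, psf, mode="same", boundary="symm")

# Richardson-Lucy iteratively estimates the sharp image that, once
# blurred by the psf, best explains what the camera actually recorded.
restored = restoration.richardson_lucy(blurred, psf, num_iter=30)
```

Unlike unsharp masking, which just exaggerates edges, deconvolution tries to undo the blur itself, which is why it can recover detail rather than merely accentuate it.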

This is not the best example as a photo, since the water is causing blur, but it’s a good test of how the plugin works to recover detail.

The Mind of a Cat

The other day, provoked by reading Iain M. Banks’s latest SF novel, Surface Detail, I thought that perhaps one of the practical applications of philosophical meditation on the nature of mind is the nagging question of whether a machine could ever be conscious or self-aware like you and me.

On Twitter, Mark Bernstein of Eastgate and Tinderbox fame asked the obvious question of how one would ever know a machine was self-aware. A very good question, because the nature of subjective experience is that it is accessible only to one mind, the one experiencing it.

Now, when it comes to other people, I can never experience what it’s like to be them subjectively. Yet I make a very strong assumption that they experience a mind pretty much exactly the way I do.

The reason I assume other people share subjective awareness is analogy. While I can read descriptions written by others or directly query my family and friends about their subjective experience, why should I trust them? I trust them because they look and act just as I do. So it’s a pragmatic assumption that they experience the same cognitive function as I do.

A machine could be self-aware and try to convince me, but it would be a very hard sell because of the lack of analogous processes. Its claim may or may not be true; I just don’t know what it would take to convince me.

Looking in an entirely different direction provides further insight into the power of analogy. We look to animals as models of our own cognition. In my own current field of drug development, we use a large toolbox of animal cognition models to test new drugs. We test drugs on animal behaviors that reflect the targeted internal human states. For example, drugs to improve memory in patients with Alzheimer’s Disease are examined in rats swimming in water mazes, where they have to remember the right way to go. We can’t read a rat a story and ask recall questions, so behavioral tests are substituted.

While we know that these animal models of human cognition have a variable track record in predicting drug effects in human disease, the philosophical point is that we rely on animals because of analogy to the human brain. Similarly, I think that by analogy, we assume that animals, mammals at least, see, hear, taste, smell, and touch much as we do.

My cats may be without computers, words, and music, but I believe they are conscious, experiencing minds. When we look at each other, there’s someone home on both sides. My technological props put me way ahead as a successful organism.

When Common Sense Fails

I’m afraid of people whose position is simply that we need some common sense in Washington. Good old common-sense conservatism is likely to lead to worse, or at least different, problems than the ones we currently face.

I’m a great fan of common sense and decisions “made from the gut”. When I was using the formal techniques of Decision Analysis or working with very talented modeling and simulation experts, everyone always realized that there was a gut check that had to be made before accepting the output of a model.

After all, it was not all that uncommon for an error to creep into the modeling at some stage, leading to a completely wrong conclusion. Call it what you will, reality testing or sense checking, but no one would follow the analytic techniques blindly. It’s kind of like letting the GPS unit tell you to drive off the road into a lake or a forest.

More subtly, though, one realizes how much bias creeps into these rational analytic decision tools. After all, if we didn’t like the outcome of a simulation, there were parameters to fiddle with that might produce “more sensible results”. More troubling was the realization that mistaken but favorable outcomes were not going to be questioned. In fact, if such an error was detected, it would be defended vigorously, to preserve a mistaken but desirable belief about the world and the outcomes of particular decisions.

As I left the world of analytic decision tools and focused more on mental models, I realized that our own metaphors for the world carry these same biases, often completely hidden from us. In a physiological modeling and simulation analysis, at least, the underlying data can be examined and all of the model assumptions are explicit. If you understand the methods well enough, the biases can be identified and perhaps addressed.

The beliefs we hold about the world aren’t so accessible to us. For example, other people are experienced as mental models of other brains. By analogy with our own thoughts and language use, we believe we can understand what someone else is telling us. After all, their language is run through the language systems in our own brains, transferring thought from one brain to another through the medium of speech phonemes. The sounds themselves are meaningless; it’s the process of transfer that carries the meaning.

Optimism and hopefulness are biases. Prejudice and expectations are biases. They color perception and influence decision making.

Clearly, if we have an incorrect model of someone else, we can make poor decisions. If my model of that car salesman is that he’s my buddy with my best interests at heart, I will probably suffer a financial loss compared to a model that sees him purely as the intermediary for the larger organization that is the auto dealership.

So let’s be careful about elevating “common sense” to the status of ultimate truth. There’s a populism in the US today that wants to ignore the complexities of economics and large interdependent systems (banks, global trade, health care, public assistance) and simply rely on common sense.

I’m convinced that simplifying assumptions are always necessary in models. In fact, models that can’t be understood intuitively because of complexity or emergence are not as useful as models that can be internalized as intuition. That’s a big part of what real expertise is all about.

But simplifying must be pragmatic, that is, proven to work in the real world across some set of conditions. Simplification that is ideologically driven, because some principle or other “must be true”, is ideology, not pragmatism, and it is likely to fail. And failure commonly arrives through unintended consequences.