Image: Tiles Waiting



Tiles Waiting, originally uploaded by jjvornov.

Like many others, I’m finding the 13 inch MacBook Air a very capable substitute for my 15 inch MacBook Pro.

The two main obstacles both relate to file storage. I’ve been running a dual Aperture Library strategy for a few years now: a project library on the internal hard drive and a large reference library on an external FW800 disk. There are also slower USB drives used for Vaults. The Air lacks FireWire, so I need to use USB 2.0 to access the big library, but it works well enough to transfer a project onto the internal SSD in the Air. It just takes some planning and time.

The other obstacle is saving Photoshop files on the Air, which is a lot slower than on the MBP. This is presumably memory and processor dependent. Since I work in bulk with Aperture, it’s not a big workflow issue, just the only big, noticeable step backward in moving from one system to the other. Filter and layer speed on the 2GB Air is perfectly fine.

It’s been suggested that the SSD helps the photo workflow because it provides fast virtual memory. This may be true. In the past I’ve always had Macs hang because of paging memory to disk, a situation improved by adding memory to the Mac. An SSD may be a more cost-effective way of dealing with this than adding actual RAM.

The Limits of Reductionism

Reductionism can be powerful.

Through careful study, a component of a system can be identified and its role in the function of the system defined. Manipulation of that component can be shown to affect the system in a predictable way. It’s often possible to generalize: the heart is a pump in mice, cats and elephants. At a molecular level in the brain, the established role of CREB in Aplysia neuronal function predicts the role of CREB in the mouse hippocampus. The human hippocampus? Well, we can’t know, because the experiments can’t be done, but the body of available evidence generates a strong belief that it does. The “scientific fact” that CREB is involved in human learning and memory is really a strongly held belief.

These scientific theories are not facts; they are statements with a probability of being true and a complementary probability of being false. Scientists recognize this at least implicitly, because the method of science is to collect additional data that will either falsify or support the theory. This data will either increase or decrease belief in the truth of the theory. Sometimes theories are completely abandoned; our earlier belief turns out to have been unwarranted. This is straightforward pragmatism.
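This updating of belief can be made concrete with Bayes’ rule. Here’s a minimal sketch in Python with invented numbers: a theory starts at even odds, and each supporting result is assumed to be more likely if the theory is true (0.8) than if it is false (0.3).

```python
def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Posterior probability the theory is true after one supporting result."""
    evidence_if_true = prior * p_data_if_true
    evidence_if_false = (1 - prior) * p_data_if_false
    return evidence_if_true / (evidence_if_true + evidence_if_false)

belief = 0.5  # start at even odds that the theory is true
for _ in range(3):  # three supporting experiments in a row
    belief = bayes_update(belief, p_data_if_true=0.8, p_data_if_false=0.3)
# belief climbs from 0.5 to about 0.95; a contrary result would be fed
# through the same rule and push it back down. Belief moves, but it
# never reaches exactly 1 or 0.
```

The point of the sketch is that belief is always a probability between the extremes, strengthened or weakened by data, never converted into certainty.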

It’s a major mistake to ignore the probability element in scientific theory. We don’t even need to consider how reliable the data really are; the nature of these complex systems makes their significance uncertain. The influence of one system component may vary in hard-to-predict ways because of changes in other components.

And because statements about complex systems are not simply true or false, we must make decisions based on belief, the probability of the statement being true. Since just about everything we deal with is a complex system, this impossibility of knowing the future is everywhere. Uncertainty can be found in the very structure of the world.

Image: Umbrian Wall



Umbrian Wall, originally uploaded by jjvornov.

These flat, often bisected texture and contrast studies have been a constant in my images since I picked up my Minolta SRT200 in 1979. I’ve always struggled with how easily I can make these. These days I look at an image like this and simply appreciate how it recreates the pleasure I personally take in being in a receptive state, looking at the world around me. It’s a re-creation, a re-experiencing of a place and time now distant.

Image: In the Frantoio



In the Frantoio, originally uploaded by jjvornov.

We’re too snow and cold bound in Baltimore for new image capture, so I’m wandering through Umbria again with these D300 images from 2009. The Oz 2.0 techniques are giving me a second chance to see and enhance the light in the captures.

In Defense of Reductionism

Chaos Theory was developed by exploring dynamical systems in computers. It’s worthwhile considering this idea of a system itself in our exploration of deciding better.

Systems theory is an approach to studying a collection of parts by considering them as a whole. Each part has its role to play and somehow interacts with the other parts of the system. Consideration of systems is vital to understanding where uncertainty comes from, whether in our world or in simple clockwork universes like computer programs.

There is a much stronger tradition in empirical science of reductionism: understanding the functioning of a system by taking it apart to study its component pieces. The contribution of a part is considered by examining its interaction with other parts of the system. If the functioning of those parts is understood, an overall picture of the system is built up. For example, the biochemical processes of a cell can be understood by analyzing the metabolites and how they are processed by cellular enzymes.

Reductionism has proven an extraordinarily powerful way to understand the world. For the most part, when an enzyme is blocked in a cell, its product disappears and its precursor builds up exactly as expected. You don’t need to know much about the function of the metabolites or the habitat and behavior of the animal. The function of these components is likely to be the same in locust, rat, cat and man.

A powerful reductionist approach is to study simple systems. The study of memory in the brain of humans or even rats remains much too complicated to explain as a system. It was possible to trace the system to particular brain areas like the hippocampus by studying brain injury in man and experimental lesions in rats. But after establishing that a rat without its hippocampal formations can’t remember how to rerun a maze, how can you figure out the circuitry within the hippocampus that stores that memory? And even if you do, how do you trace that function in the full functioning of a rat in a maze with its visual input and motor output?

Eric Kandel took the approach of finding a much simpler system to study memory, choosing the sea slug Aplysia as an experimental model. This classic reductionist approach provided important insights into how connections between neurons are changed by activity and eventually many of the same mechanisms were found to be operating in the rat brain. Eventually manipulation of these mechanisms in rats demonstrated that they were critical for memory formation.

Reductionism often works well in science. It shows that a component or mechanism in one system serves a similar purpose in another system even though these systems may be too complex themselves to understand fully. This can serve as valuable information if it turns out that manipulation of this one particular component has a consistent effect on the functioning of the overall system.

Complexity and the Edge of Chaos Revisited

I’ve just finished re-reading M. Mitchell Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos to follow up on the Chaos discussion. About halfway through, I realized that the book is now 20 years old. Perhaps because it was written so close to the founding of the Santa Fe Institute, and was based primarily on interviews with key figures in that exciting flowering of ideas, it still provides a vivid read.

I was struck by how little impact these big ideas seem to have had on the usual way of seeing the world. Perhaps chaos, complexity and emergence have entered the language, but dreams of improved prediction tools or appreciation of principles like unintended consequences don’t seem to have been realized. There was a feeling when the book was written that we were on the verge of new forms of artificial intelligence and new approaches to economics that would help us understand the interconnectedness of the global economy. We have Google and mobile devices like the iPad. Sadly, it seems like more of the same, only faster and in more places.

I had hoped that tools of decision theory, modeling and simulation would change difficult research and development projects like drug development. In my current job I get a pretty fair overview of the industry on a daily basis and can report that little has changed.

Insights from behavioral economics and advances in cognitive science have had even less impact on the way we see the world. There’s a constant stream of media reports about the science, but little evidence that these fundamental insights are informing our discussions about human behavior and ethics.

My original impulse when I started writing On Deciding . . . Better in late 1999 was to be at least one voice discussing what I thought were important implications of decision theory and Bayesian approaches to probability theory. Over the years, I’ve explored the sources of uncertainty in the world and most recently the emerging insights of Cognitive Neuroscience. I admit that mostly I write for myself, to get ideas out into better organized form and critically review them for myself.

My view of the value of writing and publishing on the net hasn’t changed in the last decade. I have a free, universally accessible publishing platform for my ideas. I’ve been fortunate over the years to actually have kindred spirits interested enough to read and comment on my efforts. I’ve been further encouraged over the last few months by finding that Twitter, as a microblogging environment, provides a new venue to widen that circle of interaction, like a virtual interdisciplinary conference.

The world of ideas is still vibrant. It’s bigger and noisier than it was in 1992 or at the founding of the Santa Fe Institute in 1984. Certainly it’s bigger than the world of physics was at the time of either Einstein or Newton. However, I’m brave enough to suggest that, like our world, those worlds were also ruled by a power law dictating the impact of ideas.

Making Decisions Under Conditions of Chaos

In 1961, Edward Lorenz discovered chaos in the clockwork universe.

Lorenz was running a computer simulation of the atmosphere to help forecast the weather. He wanted to rerun just part of one sequence, so instead of starting at the beginning, he started the run in the middle. In order to start in the middle, he used the output of the program at its midpoint as a new starting point, expecting to get the same result in half the time.

Unexpectedly, even though he was using a computer following strict deterministic rules, the second run started with the same values as before but produced a completely different result. It was as if uncertainty and variability had somehow crept into the orderly, deterministic world of his computer program.

As it turned out, the numbers used from the middle of the run were not quite the same as the ones the program had carried at that point the first time, because of rounding and truncation errors. The resulting theory, Chaos Theory, described how, for certain kinds of systems, small changes in initial conditions could result in large changes later on. These systems change over time, with each state leading to the next. This dependence on initial conditions has been immortalized as “the butterfly effect”: a small change in initial conditions, the wind from a butterfly’s wings in China, can have a large effect later on, rain in New York.
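Lorenz’s accident is easy to reproduce. The sketch below uses the logistic map, a textbook chaotic system (standing in for Lorenz’s actual atmosphere model), and restarts it from a rounded copy of its own value:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a simple chaotic dynamical system."""
    return r * x * (1 - x)

x_exact = 0.123456789  # the value the program was carrying internally
x_rounded = 0.123457   # the same value as it appeared on the printout

max_gap = 0.0
for step in range(50):
    x_exact = logistic(x_exact)
    x_rounded = logistic(x_rounded)
    max_gap = max(max_gap, abs(x_exact - x_rounded))
# A difference of about 2e-7 at the start grows step by step until
# the two runs bear no resemblance to each other.
```

Both runs follow exactly the same deterministic rule; the divergence comes entirely from the tiny difference in starting values.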

This sensitivity to the exact values of parameters in the present makes it very hard to know values in the future. As it’s been formalized mathematically, chaos theory applies to a “dynamical system,” which is simply a system that changes over time according to some rule. The system starts in some initial state; for our purposes, think of it as now, time zero. Rules are applied and the system changes to its new state: wind is blowing, temperature is changing, and so on, based on rules applied to the initial state of the atmosphere. The rules are then applied to the new state to produce the next state, and so on.

Chaos may not have been the best word to describe this principle, though. To me it suggests complete unpredictability. Most real or mathematically interesting dynamical systems don’t blow up like that into complete unpredictability. Using the weather, for example: even if the butterfly, or small differences in ocean surface temperature, makes it impossible to know whether the temperature in Times Square in New York will be 34 degrees or 37 degrees on February 7th, either one is a likely value to be found in the system at that time in that place. Measuring a temperature of 95 degrees F in New York in February is impossible or nearly so.

Dynamical systems like the weather often show recurrent behavior, returning to similar but non-identical states over and over as the rules are applied. Following the values over time traces a path that wanders through a range of values, returning after some time to the same neighborhood. Not exactly the same place, because it started in a slightly different place than the last time around, but in the same neighborhood. Just unpredictably in the same neighborhood.

This returns us to the distinction between knowing the future and predicting it. The future state of a chaotic system can’t be known, because small changes in initial conditions result in large changes in outcome. But those large changes recur within a predictable range of values: a chaotic system can be predicted even though its future state can’t be known. When it comes to the Times Square temperature, climate data tells us what range the chaotic values move within from one season’s cycle to the next. In drug development, the chaotic system of taking a pill every day and measuring drug levels in the blood allows prediction of the range of likely values, but because initial conditions change and cause large, unpredictable effects, one can’t know in advance whether today’s measure will be high or low. It’s almost never the average; it varies around the average.
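The same toy system can illustrate predicting the range without knowing the value. In the sketch below, the logistic map stands in for real climate or drug-level data: runs started from nearly identical points end up anywhere within the system’s bounds, but never outside them.

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a simple chaotic dynamical system."""
    return r * x * (1 - x)

# 1,000 runs whose starting points differ only in the seventh decimal place.
finals = []
for i in range(1000):
    x = 0.3 + i * 1e-7
    for _ in range(100):
        x = logistic(x)
    finals.append(x)

lo, hi = min(finals), max(finals)
# No single final value could have been known in advance, but the range
# could: every run stays between 0 and 1, and the runs spread out to
# cover most of that range.
```

This is the drug-level situation in miniature: each measurement is unknowable in advance, yet the envelope of likely values is entirely predictable.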

It’s important to see how central prediction is to making decisions when the future is unknown. Because the uncertain future is orderly, we actually know a lot about it; we just don’t know it in all of its particulars. We must make decisions knowing what range of possibilities the future can assume. Chaos Theory suggests that this kind of uncertainty is in the very nature of the world because of the behavior of dynamical systems, in which rules dictate how a system changes over time.

The Difference Between Predicting and Knowing

There’s a difference between predicting and knowing the future.

Predicting decreases uncertainty but does not eliminate it; eliminating it entirely is what we’ll call “knowing the future.” Predicting involves beliefs about the future state of the world and should be probabilistic, dealing in likelihoods of events and often describing a range of possible outcomes.

I can make an excellent, accurate prediction about the card that will be drawn from a deck: it will be one of the 52 cards in the deck.

Trivial? Not really. I’ve used my knowledge of the nature of card decks to constrain the range of outcomes to only those that are possible.

With more information about the particular deck, I might be able to narrow the odds of various cards being drawn. For example, armed with the exact order of cards in the deck and a historical dataset describing how often each deck position is chosen, I could actually know which card is the single most likely to be drawn. Perhaps if we studied how people tend to draw cards, we’d find that the center 25% of positions got 60% of the draws, increasing the probability of those cards being selected over the top and bottom of the deck. I’d probably be able to provide a list of the cards from most likely to be chosen through least likely.
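This narrowing can be sketched as a shift from a uniform distribution to a weighted one. The positions and percentages below are the hypothetical ones from the example, not real data:

```python
DECK_SIZE = 52
CENTER = range(20, 33)  # a middle 25% of positions: 13 cards

# Naive prediction: any of the 52 cards, all equally likely.
uniform = [1 / DECK_SIZE] * DECK_SIZE

# Informed prediction: the center 25% of the deck receives 60% of draws,
# the remaining 39 positions share the other 40%.
informed = [0.6 / 13 if pos in CENTER else 0.4 / 39
            for pos in range(DECK_SIZE)]

# Both are valid predictions; the informed one just concentrates belief.
# The best single guess is still far from certain: about 4.6% per center
# card versus about 1.9% under the uniform model.
best_odds = max(informed)
```

Either way the probabilities sum to 1; the informed model hasn’t named the card, it has only redistributed belief across the possibilities.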

Just because I know which card is most likely doesn’t mean that if the most likely card is not drawn in any particular trial, my prediction was wrong or inaccurate. The prediction only allowed me to strengthen my belief in some cards being drawn over others.

In selling any prediction service, if I only get one chance to test my ability to predict, there’s no way to prove why I happened to be right or wrong. If my knowledge of the order of the deck and of human behavior in selecting cards improved my ability to predict the card selected from 1 in 52 to 1 in 20, I’d be more than twice as good at picking the card to be drawn in advance, but it’s still overwhelmingly probable that my guess on any particular trial will be wrong.

It makes for a lousy magic trick but a very good way to make better decisions.

Image: The Glamour Tile



The Glamour Tile, originally uploaded by jjvornov.

I’ve probably photographed my front walk more than any other subject in the last few years. There’s a nice combination of textures in the brick and garden elements, particularly when there’s been rain or snow.

Today the world here is buried under a few inches of snow with more on the way tonight. If I want to do some more practice with coach Vince (Versace), I’ll need to use some archive images.