Image: Edge of the Ocean 2



Edge of the Ocean 2, originally uploaded by jjvornov.

Another take on the subject.

These were taken looking down on the break from the Malibu Fishing Pier. The Tamron zoom is racked out all the way.

One of the tricks here is using FocalPoint to create blur that makes the perspective look closer, the way a tilt-shift lens creates images that look miniature.

This is one of the fundamentals of Vincent Versace’s “cinematic” approach. Capture the image knowing what you can do in post processing, but preserve everything in the capture that you’ll need for further manipulation. On film, you needed the tilt-shift lens. In the digital realm, one can easily simulate the effect.
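
FocalPoint’s own algorithm is proprietary, so here is only a rough sketch of what “simulating the effect” can mean, using the Pillow library: blur a copy of the image, then blend it back through a feathered mask that keeps a horizontal band sharp. The filename and band proportions are made up for illustration.

```python
from PIL import Image, ImageFilter, ImageDraw

img = Image.open("edge_of_the_ocean.jpg")             # hypothetical filename
blurred = img.filter(ImageFilter.GaussianBlur(8))      # heavily blurred copy

# Mask: white (255) where the blurred copy shows through, black (0) where the
# original stays sharp. The sharp band here runs from 40% to 60% of the height.
mask = Image.new("L", img.size, 255)
draw = ImageDraw.Draw(mask)
top, bottom = int(img.height * 0.40), int(img.height * 0.60)
draw.rectangle([0, top, img.width, bottom], fill=0)
mask = mask.filter(ImageFilter.GaussianBlur(img.height // 10))  # feather the transition

Image.composite(blurred, img, mask).save("edge_of_the_ocean_tiltshift.jpg")
```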

Another principle is to apply these effects just enough to get the brain to look, but not so far that they break believability. This is really a cinematic principle. We want suspension of disbelief, not disbelief itself.

Image: Edge of the Ocean



Edge of the Ocean, originally uploaded by jjvornov.

I had a few hours to capture some images during a trip this week to LA. This is the ocean at Malibu.

As a workflow evolves, image capture should too. At the time of capture there are the basics of exposure, focus and composition, but the post processing starts at capture as well. When I started capturing some simple waves on sand, I knew that I’d be working with a contrast in tone and texture between green ocean and warm sand.

Approaching Complexity

The whole is greater than the sum of the parts.

This is the essence of a complex adaptive system. Any system that is straightforward enough to be a simple adding up of the effects of each part really isn’t worth contemplating as a system. It’s a collection of independent agents. A stack of checkers of different thicknesses is such a linear system. Stack them up and the height simply adds up in a linear way.

Once the components start acting on each other and on themselves, behavior becomes complex and increasingly difficult to predict from knowledge of the components and their connections. This is not due to ignorance. Collecting more and more data doesn’t help at all. There is some aspect of the whole that is not just the linear addition of the parts.

Once a system is made of connected components with inputs and outputs, components that process information, its behavior can become difficult to predict with precision. A mechanical system like a thermostat connected to a heating system, a computer program with subroutines, nerve cells connected in brain circuits, stock traders in a market, the atmosphere: all are complex adaptive systems. The mechanical and computer-level examples are the most useful for study because they are clearly in the mechanistic, Newtonian, deterministic world and yet their future state cannot be known.

The difference between an additive system and a complex system is in the relationships. Negative and positive feedback create unexpected behaviors in the system. Small changes in one component produce large effects elsewhere because the connections are not simply proportional but non-linear.
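
To make the contrast concrete, here is a minimal sketch of my own, not from the original post: in an additive system a small nudge to one part changes the whole by exactly that amount, while in a coupled system with non-linear feedback the same nudge is amplified as the rule is applied over and over.

```python
# Additive system: perturb one part by 0.01 and the whole changes by exactly 0.01.
parts = [1.0, 2.0, 3.0]
print(sum(parts), sum(parts) + 0.01)            # 6.0 vs 6.01

# Coupled system: two components feed back on each other through a
# non-linear (squared) term, so a 0.01 nudge grows into a much larger difference.
def run(x, y, steps=10):
    for _ in range(steps):
        x, y = x + 0.5 * y, y + 0.1 * x * x     # non-linear coupling
    return x + y

print(run(1.0, 1.0), run(1.01, 1.0))            # outputs differ by far more than 0.01
```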

We’re surrounded by complex systems. Arguably, simple linear systems are the exceptions and may be idealized models rather than real functioning systems out in the world. As thinkers, we study simple systems, or simplify the complex into idealized simple systems, because they are easy to deal with in a deterministic and reductionist manner.

We are ignorant of the state of the past and the future. That creates uncertainty. Because of complexity, even if we had perfect knowledge, we’d still be unable to know the future.

Image: Tiles Waiting



Tiles Waiting, originally uploaded by jjvornov.

Like many others, I’m finding the 13 inch MacBook Air a very capable substitute for my 15 inch MacBook Pro.

The two main obstacles both relate to file storage. I’ve been running a dual Aperture Library strategy for a few years now. I have a project library on the internal hard drive and a large reference library on an external FW800 disk. There are also slower USB drives used for Vaults. The Air lacks FireWire, so I need to use USB 2.0 to access the big library. But it works well enough to transfer a project onto the internal SSD drive in the Air. It just takes some planning and time.

The other obstacle is saving Photoshop files on the Air. It’s a lot slower than on the MBP, presumably because of memory and processor differences. Since I work in bulk with Aperture it’s not a big workflow issue, just the one noticeable step backward in moving from one system to the other. Filter and layer speed on the 2GB Air is perfectly fine.

It’s been suggested that the SSD helps the photo workflow because it provides fast virtual memory. This may be true. In the past I’ve always had Macs hang because of paging memory to disk, a situation improved by adding memory to the Mac. An SSD may be a more cost-effective way of dealing with this than adding actual RAM.

The Limits of Reductionism

Reductionism can be powerful.

Through careful study, a component of a system can be identified and its role in the function of the system defined. Manipulation of that component can be shown to affect the system in a predictable way. It’s often possible to generalize: the heart is a pump in mice, cats and elephants. At a molecular level in the brain, the established role of CREB in Aplysia neuronal function predicts the role of CREB in the mouse hippocampus. The human hippocampus? We can’t know for sure, because the experiments can’t be done, but the body of available evidence generates a strong belief that it does. The “scientific fact” that CREB is involved in human learning and memory is really a strongly held belief.

These scientific theories are not facts; they are statements with a probability of being true and a complementary probability of being false. Scientists recognize this at least implicitly, because the method of science is to collect additional data that will either falsify or support the theory. This data will either increase or decrease belief in the truth of the theory. Sometimes theories are completely abandoned. Our earlier belief turns out to have been unwarranted. This is straightforward pragmatism.
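
As a hedged sketch of what “increase or decrease belief” can look like in Bayesian terms (the numbers here are invented stand-ins for real experiments), belief is updated by weighing how likely the observed data would be if the theory were true against how likely it would be if the theory were false.

```python
def update(prior, p_data_if_true, p_data_if_false):
    """Posterior probability that the theory is true, after seeing one piece of data."""
    evidence = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / evidence

belief = 0.60                          # prior belief that the theory is true (assumed)
belief = update(belief, 0.80, 0.30)    # a supportive result raises belief
print(round(belief, 2))                # 0.8
belief = update(belief, 0.10, 0.60)    # a surprising negative result lowers it
print(round(belief, 2))                # 0.4
```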

It’s a major mistake to ignore the probability element in scientific theory. We don’t even need to consider how reliable the data really is; the nature of these complex systems makes its significance uncertain. The influence of one system component may vary in hard-to-predict ways because of changes in other components.

And because statements about complex systems are not simply true or false, we must make decisions based on belief, the probability of the statement being true. Since just about everything we deal with is a complex system, this impossibility of knowing the future is everywhere. Uncertainty can be found in the very structure of the world.

Image: Umbrian Wall



Umbrian Wall, originally uploaded by jjvornov.

These flat, often bisected texture and contrast studies have been a constant in my images since I picked up my Minolta SRT200 in 1979. I’ve always struggled with how easily I can make these. These days I look at an image like this and simply appreciate how it recreates the pleasure I personally have being in a receptive state, looking at the world around me. It’s a re-creation, a re-experiencing of a place and time now distant.

Image: In the Frantoio



In the Frantoio, originally uploaded by jjvornov.

We’re too snow- and cold-bound in Baltimore for new image capture, so I’m wandering through Umbria again with these D300 images from 2009. The Oz 2.0 techniques are giving me a second chance to see and enhance the light in the captures.

In Defense of Reductionism

Chaos Theory was developed by exploring dynamical systems in computers. It’s worthwhile considering this idea of a system itself in our exploration of deciding better.

Systems theory is an approach to studying a collection of parts by considering them as a whole. Each part has its role to play and somehow interacts with other parts of the system. Consideration of systems is vital to understanding where uncertainty comes from, whether in our world or in simple clockwork universes like computer programs.

There is a much stronger tradition in empirical science of reductionism: understanding the functioning of a system by taking it apart to study its component pieces. The contribution of a part is considered by examining its interaction with other parts of the system. If the functioning of those parts is understood, an overall picture of the system is built up. For example, the biochemical processes of a cell can be understood by analyzing the metabolites and how they are processed by cellular enzymes.

Reductionism has proven an extraordinarily powerful way to understand the world. For the most part, when an enzyme is blocked in a cell, its product disappears and its precursor builds up exactly as expected. You don’t need to know much about the function of the metabolites or the habitat and behavior of the animal. The function of these components is likely to be the same in locust, rat, cat and man.

A powerful reductionist approach is to study simple systems. The study of memory in the brain of humans or even rats remains much too complicated to explain as a system. It was possible to trace the system to particular brain areas like the hippocampus by studying brain injury in man and experimental lesions in rats. But after establishing that a rat without its hippocampal formations can’t remember how to rerun a maze, how can you figure out the circuitry within the hippocampus that stores that memory? And even if you do, how do you trace that function in the full functioning of a rat in a maze with its visual input and motor output?

Eric Kandel took the approach of finding a much simpler system to study memory, choosing the sea slug Aplysia as an experimental model. This classic reductionist approach provided important insights into how connections between neurons are changed by activity and eventually many of the same mechanisms were found to be operating in the rat brain. Eventually manipulation of these mechanisms in rats demonstrated that they were critical for memory formation.

Reductionism often works well in science. It shows that a component or mechanism in one system serves a similar purpose in another system even though these systems may be too complex themselves to understand fully. This can serve as valuable information if it turns out that manipulation of this one particular component has a consistent effect on the functioning of the overall system.

Complexity and the Edge of Chaos Revisited

I’ve just finished re-reading M. Mitchell Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos to follow up on the Chaos discussion. About halfway through I realized that the book is now 20 years old. Perhaps because it was written so close to the founding of the Santa Fe Institute and was based primarily on interviews with key figures in that exciting flowering of ideas, it still provides a vivid read.

I was struck by how little impact these big ideas seem to have had on the usual way of seeing the world. Perhaps chaos, complexity and emergence have entered the language, but dreams of improved prediction tools or appreciation of principles like unintended consequences don’t seem to have been achieved. There was a feeling when the book was written that we were on the verge of new forms of artificial intelligence and new approaches to economics that would help us understand the interconnectedness of the global economy. We have Google and mobile devices like the iPad. Sadly, it seems like more of the same, only faster and in more places.

I had hoped that tools of decision theory, modeling and simulation would change difficult research and development projects like drug development. In my current job I get a pretty fair overview of the industry on a daily basis and can report that little has changed.

Insights from behavioral economics and advances in cognitive science have had even less impact on the way we see the world. There’s a constant stream of media reports about the science, but little evidence that these fundamental insights are informing our discussions about human behavior and ethics.

My original impulse when I started writing On Deciding . . . Better in late 1999 was to be at least one voice discussing what I thought were important implications of decision theory and Bayesian approaches to probability theory. Over the years, I’ve explored the sources of uncertainty in the world and most recently the emerging insights of Cognitive Neuroscience. I admit that mostly I write for myself, to get ideas into better organized form and to review them critically.

My view of the value of writing and publishing on the net hasn’t changed in the last decade. I have a free, universally accessible publishing platform for my ideas. I’ve been fortunate over the years to have kindred spirits interested enough to read and comment on my efforts. I’ve been further encouraged over the last few months by finding how Twitter, as a microblogging environment, provides a new venue to widen that circle of interaction, like a virtual interdisciplinary conference.

The world of ideas is still vibrant. It’s bigger and noisier than it was in 1992 or at the founding of the Santa Fe Institute in 1984. Certainly it’s bigger than the world of physics was at the time of either Einstein or Newton. However, I’m brave enough to suggest that, like our world, those worlds also were ruled by a power law dictating the impact of ideas.

Making Decisions Under Conditions of Chaos

In 1961, Edward Lorenz discovered chaos in the clockwork universe.

Lorenz was running a computer simulation of the atmosphere to help forecast the weather. He wanted to rerun just part of one sequence, so instead of starting at the beginning, he started the run in the middle. In order to start in the middle, he used the output of the program at its midpoint as a new starting point, expecting to get the same result in half the time.

Unexpectedly, even though he was using a computer following strict deterministic rules, the second run started with the same values as before but produced a completely different result. It was as if uncertainty and variability had somehow crept into the orderly, deterministic world of his computer program.

As it turned out, the numbers used from the middle of the run were not quite the same as the ones the program had used internally at that point the first time, because of rounding or truncation errors. The resulting theory, Chaos Theory, described how for certain kinds of computer programs, small changes in initial conditions could result in large changes later on. These are systems that change over time, where each state leads to the next. This dependence on initial conditions has been immortalized as “the butterfly effect”: a small change in initial conditions, the wind from a butterfly’s wings in China, can have a large effect later on, rain in New York.
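
Lorenz’s program was a small model of the atmosphere; as a stand-in, this sketch uses the logistic map (my choice of toy system, not his model) to reproduce the shape of his accident: restart the run from a rounded copy of the midpoint state and the two runs soon part company.

```python
def step(x, r=3.9):
    return r * x * (1 - x)              # a simple deterministic update rule

x = 0.2
first_run = [x]
for _ in range(60):
    x = step(x)
    first_run.append(x)

# Restart from the midpoint, but truncated to three decimal places,
# the way Lorenz's printout carried fewer digits than the machine's memory.
y = round(first_run[30], 3)
for n in range(31, 61):
    y = step(y)
    if n % 5 == 0:
        print(n, round(first_run[n], 3), round(y, 3))   # the runs soon disagree
```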

This sensitivity to the exact values of parameters in the present makes it very hard to know values in the future. As it’s been formalized mathematically, chaos theory applies to a “dynamical system,” which is simply a system that changes over time according to some rule. The system starts in some initial state at the beginning. For our purposes, think of it as now, time zero. Rules are applied and the system changes to its new state: wind blows, temperature changes, and so on, based on rules applied to the initial state of the atmosphere. The rules are then applied to the new state to produce the next state, and so on.

Chaos may not have been the best word to describe this principle, though. To me it suggests complete unpredictability. Most real or mathematically interesting dynamical systems don’t blow up like that into complete unpredictability. Using the weather as an example, even if the butterfly or small differences in ocean surface temperature make it impossible to know whether the temperature in Times Square in New York will be 34 degrees or 37 degrees on February 7th, either one is a likely value to be found in the system at that time and place. Measuring a temperature of 95 degrees F in New York in February is impossible or nearly so.

Dynamical systems like the weather often show recurrent behavior, returning to similar but non-identical states over and over as the rules are applied. Following the values over time traces a path that wanders, returning after some time to the same neighborhood. Not exactly the same place, because it started in a slightly different place than the last time around, but in the same neighborhood. Just unpredictably somewhere in that neighborhood.

This returns us to the distinction between knowing the future and predicting it. The future state of a chaotic system can’t be known, because small changes in initial conditions result in large changes in the result. But those large changes recur within a predictable range of values. A chaotic system can be predicted even though its future state can’t be known. When it comes to the Times Square temperature, climate data tells us what range the chaotic values move within from one seasonal cycle to the next. In drug development, the chaotic system of taking a pill every day and measuring drug levels in the blood allows prediction of the range of likely values, but because initial conditions change and cause large, unpredictable effects, one can’t know in advance whether today’s measurement will be high or low. It’s almost never the average; it varies around the average.
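
Here is a short sketch of that distinction, using the same kind of toy map as above rather than real climate or pharmacokinetic data: two runs that start a hair apart disagree on any given step, yet their long-run ranges nearly match.

```python
def step(x, r=3.9):
    return r * x * (1 - x)

def orbit(x, n=5000):
    values = []
    for _ in range(n):
        x = step(x)
        values.append(x)
    return values

a, b = orbit(0.2), orbit(0.2000001)     # nearly identical starting points
print("value 1000 steps in:", round(a[999], 3), "vs", round(b[999], 3))   # unpredictable
print("range of run a:", round(min(a), 3), "to", round(max(a), 3))        # predictable
print("range of run b:", round(min(b), 3), "to", round(max(b), 3))
```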

It’s important to see how central prediction is to making decisions when the future is unknown. Because the uncertain future is orderly, we actually know a lot about it; we just don’t know it in all of its particulars. We must make decisions knowing what range of possibilities the future can assume. Chaos Theory suggests that this kind of uncertainty is in the very nature of the world because of the behavior of dynamical systems, wherever rules dictate how a system changes over time.