Author: James Vornov
Topaz InFocus
InFocus is a Photoshop plug-in that uses deconvolution to sharpen images by refocusing, a different approach from edge-based methods like unsharp masking. Back in my microscopy days, these methods were just starting to come into use, often with multiple focus planes for virtual confocal microscopy.
This is not the best example as a photo, since water is causing the blur, but it’s a good test of how the plug-in works to recover detail.
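The core idea behind deconvolution sharpening can be sketched with the classic Richardson-Lucy algorithm. This is a minimal NumPy/SciPy illustration, not Topaz’s actual implementation, and the Gaussian point-spread function is my own assumed stand-in for real lens or motion blur:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=50):
    """Iteratively estimate the sharp image that, when blurred
    by psf, would reproduce the observed image."""
    estimate = np.full(blurred.shape, 0.5)  # flat initial guess
    psf_mirror = psf[::-1, ::-1]
    eps = 1e-12  # guard against division by zero
    for _ in range(num_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a bright square with a Gaussian point-spread
# function, then "refocus" it by deconvolution.
sharp = np.zeros((32, 32))
sharp[12:20, 12:20] = 1.0
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
blurred = fftconvolve(sharp, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

On this noiseless toy image the restored square lands measurably closer to the original than the blurred one; real tools have to add regularization so that noise isn’t amplified along with detail.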
The Mind of a Cat
The other day, provoked by reading Iain Banks’s latest SF novel, Surface Detail, I thought that perhaps one of the practical applications of philosophical meditation on the nature of mind was the nagging question of whether a machine could ever be conscious or self-aware like you and me.
On Twitter, Mark Bernstein of Eastgate and Tinderbox fame asked the obvious question of how one would ever know a machine was self-aware. A very good question, because the nature of subjective experience is that it is accessible only to one mind, the one experiencing it.
Now when it comes to other people, I can never experience what it’s like to be them subjectively. Yet I make a very strong assumption that they are experiencing a mind pretty much exactly the way that I do.
The reason I make the assumption that other people share subjective awareness is by analogy. While I can read descriptions written by others or directly query my family and friends about their subjective experience, why should I trust them? I trust them because they look and act just like I do. So it’s a pragmatic assumption that they experience the same cognitive function as I do.
A machine could be self-aware and try to convince me, but it would be a very hard sell because of the lack of analogous processes. The machine intelligence’s claim may or may not be true. I just don’t know what it would take to convince me.
Looking in an entirely different direction provides further insight into the power of analogy. We look to animals as models of our own cognition. In my own current field of drug development, we use a large toolbox of animal cognition models to test new drugs. We test drugs on animal behaviors that reflect target internal human states. For example, drugs to improve memory in patients with Alzheimer’s Disease are examined in rats swimming in water mazes where they have to remember the right way to go. We can’t read a rat a story and ask recall questions, so behavioral tests are substituted.
While we know that these animal models of human cognition have a variable track record in predicting drug effects in human disease, the philosophical point is that we rely on animals because of analogy to the human brain. Similarly, I think that by analogy, we assume that animals, mammals at least, see, hear, taste, smell, and touch much as we do.
My cats may be without computers, words, and music, but I believe they are conscious, experiencing minds. When we look at each other, there’s someone home on both sides. My technological props put me way ahead as a successful organism.
When Common Sense Fails
I’m afraid of people whose position is simply that we need some common sense in Washington. Good old common-sense conservatism is likely to lead to worse, or at least different, problems than we currently face.
I’m a great fan of common sense and decisions “made from the gut”. When I was using the formal techniques of Decision Analysis or working with very talented modeling and simulation experts, everyone always realized that there was a gut check that had to be made before accepting the output of a model.
After all, it was not all that uncommon that an error crept into the modeling at some stage, leading to a completely wrong conclusion. Call it what you will, reality testing or sense checking, no one would follow the analytic techniques blindly. Kind of like letting the GPS unit tell you to drive off the road into a lake or the forest.
More subtly, though, one realizes how much bias creeps into these rational analytic decision tools. After all, if we didn’t like the outcome of a simulation, there were parameters to fiddle with that might produce “more sensible results”. More troubling was the realization that mistaken but favorable outcomes were not going to be questioned. In fact, if an error was detected, it would be vigorously defended to preserve a mistaken but desirable belief about the world and the outcome of particular decisions.
As I left the world of analytic decision tools and focused more on mental models, I realized, of course, that our own metaphors for the world carry these biases too, often completely hidden from us. In a physiological modeling and simulation analysis, at least, the underlying data can be examined and all of the model assumptions are explicit. If you understand the methods well enough, the biases can be identified and perhaps addressed.
The beliefs we hold about the world aren’t so accessible to us. For example, other people are experienced as mental models of other brains. By analogy with our own thoughts and language use, we believe we can understand what someone else is telling us. After all, their language is run through the language systems in our own brains, transferring thought from one brain to another through the medium of speech phonemes. The sounds themselves are meaningless. It’s the process of transfer that is meaning.
Optimism and hopefulness are biases. Prejudice and expectations are biases. They color perception and influence decision making.
Clearly, if we have an incorrect model of someone else, we can make poor decisions. If my model of that car salesman is that he’s my buddy with my best interests at heart, I will probably suffer a financial loss compared to a model that sees him purely as the intermediary for the larger organization that is the auto dealership.
So let’s be careful about elevating “common sense” to the status of ultimate truth. There’s a populism in the US today that wants to ignore the complexities of economics and large interdependent systems (banks, global trade, health care, public assistance) and simply rely on common sense.
I’m convinced that simplifying assumptions are always necessary in models. In fact models that can’t be understood intuitively because of complexity or emergence are not as useful as models that can be internalized as intuition. That’s a big part of what real expertise is all about.
But simplifying must be pragmatic, that is, proven to work in the real world across some set of conditions. Simplification that is ideologically driven, because some principle or other “must be true”, is ideology, not pragmatism, and is likely to fail. And failure commonly comes through unintended consequences.
Unique
Take Two
The Astounding Quality of the iPhone 4 Camera
I knew from the first days with my iPhone 4 that I wasn’t going to need a small camera for snaps because of its quality.
Imagine my surprise when I discovered that the Yellowstone image I posted yesterday showed up on Flickr as geotagged. Why? It was an iPhone image.
I ran some noise reduction on the image before posting because of pattern noise in the trees in the upper third of the frame, but I thought that came from aggressive post-processing of the shadows, which would be unusual for the Nikon D300.
Just astounding really.
Digging Deeper Holes
Making decisions always limits future options. Choosing one of two forks in the road precludes taking the other without the added cost of backtracking and starting over. Moving into the future, the decision space is always changing. In some ways it collapses, because choices not made disappear and become unavailable. But at the same time, the decision space expands as the chosen path is traveled.
I love thinking about making decisions at the start: a clean sheet of paper and infinite possibilities. Yet that is an entirely artificial metaphor. We always find ourselves in the middle of the story, and here there are many constraints that are the consequences of decisions made previously, often by others.

Whenever I hear discussions about US federal budget deficits, I think about these constraints. Large systems have been created over the years (Social Security, Medicare, and Medicaid) to prevent the widespread poverty and lack of medical care that were once commonplace among the elderly. Having created these systems, it becomes unthinkable (impossible?) to end them, even as they require larger and larger resources every year. Having been created with no built-in limits or budgets, these entitlements grow and grow, limited only by the ingenuity of those in my industries, medical care and drug development.
The decisions made early on, when these programs were smaller, have led to unintended consequences which could be catastrophic in a few years or decades. But now it seems that changing paths to avoid these outcomes may not be among the choices that can be made by the government.
I wonder whether there is an inevitability to certain outcomes once choices are made and systems created. Are these outcomes some kind of local minimum from which escape is impossible? Must it be catastrophe that opens up new decision space? I use the metaphor of digging yourself into a hole. The hole gets so deep that one can no longer climb out, so that the more you dig, the deeper and more inescapable it becomes.
I can’t quite explain why we feel compelled to keep on digging when it’s clear that the path does not lead out, only deeper.
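The local-minimum metaphor can be made concrete with a toy gradient descent sketch. This is purely illustrative; the function and step size are my own choices. A digger who only ever moves downhill settles into whichever hole is nearest, however shallow, and never climbs out to find the deeper one:

```python
def descend(grad, x, lr=0.05, steps=200):
    """Plain gradient descent: always dig in the locally downhill direction."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# f(x) = x**4 - 3*x**2 + x has a shallow local minimum near x ≈ 1.13
# and a deeper global minimum near x ≈ -1.30.
grad = lambda x: 4 * x**3 - 6 * x + 1

stuck = descend(grad, x=2.0)     # starts on the shallow side, settles there
escaped = descend(grad, x=-2.0)  # starts on the other side, finds the deep hole
```

Which hole you end up in is decided entirely by where you start digging; once at the bottom of the shallow one, every local move is uphill and the strategy has no way out.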
Making it cloudy
Mind As Mosaic
One of my most vivid insights during my first years of training as a neurologist was the realization that the brain functions as a mosaic. The many divisions of the brain each have their own function; they collaborate but work largely independently. When part of the brain is damaged by a stroke, after a period that’s like shock, the rest of the brain carries on as before, just missing an ability due to the loss of function. It’s a lot like losing a limb. Lose the part of the brain that produces speech, moves the left arm, or sees the right side of the world, and those particular abilities are deleted.
It’s hard to reconcile this view of the brain as mosaic with our subjective sensation of a single “I”. There seems to be a single identity inside each of us that we identify as ourselves, as our consciousness, our mind. We wonder whether animals have a similar unitary experience of identity. And we wonder whether a machine could ever experience self.
Putting aside these interesting questions about animal and computer minds, there is a related question of where the mind resides. To my pragmatic way of thinking, asking where these sensations are experienced is like asking where in someone’s body their personhood resides. I am my body, and no matter how many parts of my body might be lost or replaced, my personhood is my body. Simply. The truly remarkable subjective illusion of conscious unity makes it seem as if the mind has to be something or somewhere other than just the function of the brain in its totality. But I say that this mind is a mosaic of functioning brain areas.
I haven’t read Damasio’s latest, Self Comes to Mind: Constructing the Conscious Brain, but I did see the review in the NYT by Ned Block. It’s interesting how Block wants to define consciousness much as I do, as this odd subjective sensation of inhabiting a brain that interacts with an environment, criticizing Damasio for emphasizing “knowledge of one’s own existence and of the existence of surroundings.”
Damasio has been interested in how the subjective sensation of awareness arises in the brain. It seems unfair for Block to review a book by criticizing the author’s choice of subject rather than his approach to it. These questions of self-awareness and more abstract thought really are more interesting than mapping experience in the brain. Damasio’s subject here seems to be the one that I’ve been writing about recently: how we view the metaphors and artifacts that we create in a uniquely human way. Language, metaphor, and, above all I think, models capable of running internal mental simulations are what let us imagine, plan, and coordinate activity in ways never before seen on the planet.
I want to link this neurobiology of experience with the growing understanding of how language and metaphor are embodied in the brain and environment. I’m more and more convinced that the more we see these abstractions as real, the better we can deal with them in the world. My best example of this currently is equating fear, the emotion, with loss of control, loss of the feeling of mastery.
After all, making decisions under conditions of uncertainty cannot be done well if motivated by fear of unknown, uncontrollable future events.