What’s wrong with my homunculus?

My homunculus is tired of being dismissed so casually. In neurology, we’re introduced to the homunculus in reference to body maps in neuroanatomy. For example, there’s a map of the body superimposed on the motor cortex, the part of the cerebral cortex that initiates the final common pathway for volitional movements from the cortex to the muscles. The map’s a funny one, being upside down and distorted so that the regions with the greatest control (face and hands) take up the largest area:

Motor Homunculus

There’s a neat 3D model of what this distorted little man would look like:

Motor Homunculus Model

There’s a similar somatosensory map for touch sensation from the skin. These funny looking little men are the internal representations of our body in the brain.

I’m reading Alison Gopnik’s excellent book The Philosophical Baby. She does a great job of tackling some real philosophy in the context of developmental neuroscience and psychology. But like lots of scientists, when up against that very difficult barrier of materialism and functionalism, she bails.

I’ve pointed out before that materialists, faced with a complete inability to explain the subjective feeling of consciousness, turn into a strange kind of dualist, asserting that only the brain exists. Yet they can’t help speaking of mind as something that exists, even while implying that it can’t exist because it isn’t material and can’t be pointed at; only the brain can be.

We know that there are models of the body in the brain. The motor homunculus is a well established example. It is the motor map in the cortex. It seems clear to me that I experience voluntary movement through that map. It is the homunculus, and I don’t need another map to control that map. There are also maps of the world in the brain, and maps of concepts, like the analogous ideas of distance and the addition of numbers.

The existence of the motor homunculus doesn’t mean there’s a little man sitting on the motor cortex controlling movement. It’s the cortex that’s doing the work, organized as a map of the body.

When she dismisses out of hand the concept of the homunculus, Gopnik makes the mistake of confusing the general concept of a homunculus with an infinite regress. That’s the fallacy of the little man controlling movement. If I have a little man in my head, then that little man needs a little man to control him and on into an infinite regress.

Many also confuse the concept of a homunculus with dualism. Certainly a theory that requires a separate homunculus, a little man who controls movement or who sees what is projected into the brain, isn’t useful. This kind of dualism accounts for consciousness but is unacceptable because it has no explanatory power.

The tough question is how mind, consciousness, arises from its substrate, the brain.

There’s a common misunderstanding of John Searle’s Chinese Room argument, mostly made by materialists. Searle argues strongly and directly against materialism, believing as I do that mind exists and must itself be explained. It is a different level of analysis of the material world, the way the hardness or smoothness of rocks can’t be explained solely with reference to reductionist information from inorganic chemistry. Hardness and smoothness are systems-level qualities that are dependent on the chemistry, not explained by it. Similarly, mind is dependent on brain, not explained by it.

In the Chinese Room argument, a translator appears the same to an outside observer whether he understands the language or is just looking up phrases in a big book. This demonstrates that information about function can’t distinguish between the conscious and the mechanical. Materialist explanations, without reference to a larger systems-level examination, are missing something. Mind is something different from the functional activity, here translation, which can be done with understanding or without it. The brain need not itself be conscious to generate the events; the substrate can be aware or not aware in the black box.

The difference between the translator who knows Chinese and the one who is translating mechanically, without awareness, is understanding.

So I’ll point to my motor homunculus as one part of the brain that I know my mind is dependent on for fine voluntary movement. I can’t yet point to where Searle’s Chinese Room argument is modeled in my brain. Sorry.

Why Buy a MacBook Air?

I’ve got a 13″ MacBook Air on the way. It was an interesting decision, long in development. It was worth considering formally because a Mac purchase is a 3 year technology commitment. And I think worth considering in detail here to illustrate a decision process.

iPhones are a one year commitment. At this point we all believe that a new version is coming around each summer. AT&T has always given me the upgrade choice. I bought the original, had to upgrade to the 3G, skipped the 3GS since I saw little reason to spend any money on it, then grabbed the iPhone 4, which was a major advance. I await the iPhone 5.

My iPad Experience: The Good

I’m glad I grabbed a WiFi+3G iPad on release. It’s been a useful device, worth the money for the family just as an attractive media and browsing package. As a personal device it’s useful, but not my companion. I never bonded with the iPad the way I have with my iPhone.

The iPad is great at a few things: Netflix streaming video, reading mainstream news sites (New York Times, Washington Post, etc.), and Flickr. I think that iPad-directed media will be an area for development, and as a musical instrument it’s got quite a nice start with synthesizer and drum machine apps. I’ve got a small collection. The Twitter app is great.

Twitter is an interesting case because the experience is driven by the integration of the app with iOS. For interesting reasons, I had little interest in Twitter until just a few months ago. But the Twitter app on the iPad is one of my favorites. Its best feature is the integrated web browser: click on a tweet with a link in it and a browser window slides in with the web page displayed. The iPhone app does the same. How oddly painful to have the Twitter app spawning web page after web page where there’s no similar integration.

My iPad Experience: The Indifferent

In the end, for me the iPad is a limited-use satellite device. There are things I’d rather do on the iPhone or Mac. RSS feed reading has been a big problem for me on the iPad. I’m a longtime user of Google Reader, but I hated the need to do a two finger flick to scroll. It was something I constantly fought against, and I never found a better interface in any of the RSS reader apps that I tried. It’s odd that the Google interfaces work well in the browser and on the iPhone, but the iPad interface just didn’t. Reeder gets me part of the way there, but not enough that I don’t wait till I’m sitting with the laptop to look at feeds.

I was also disappointed with the iPad as an eReader. It’s legible and fast, but it’s just too heavy for reading. Actually, my guess on the ergonomics is that it’s too dense. It’s a bit heavier than a 500 page hardcover, as judged by just picking them up, but the thinness and weight distribution of the iPad tire my fingers and wrists if I fail to support it. Web surfing sessions last minutes, but reading a book is a less dynamic activity and can stretch much longer.

A few months ago, when the latest Kindle was released, I bought one for reading. It’s light, fast enough, and the controls are easy to use after a short period of gaining the right habits. At this point, there are no books on the iPad. I travel with a laptop and Kindle, generally leaving the iPad behind.

The lack of multiuser capabilities is another nagging problem with the iPad. While the iPad is a great cheap extra screen for watching movies or surfing the web, there’s no way to prevent other users from having access to the main user’s email and personal accounts like Twitter. Since it’s family using the device, I worry more about accidental deletions than malicious use, but it remains a situation in which a personal device is used by a group without a way to hide sensitive or important information.

Writing on the iPad

Extended writing on the iPad has also never really worked for me. The onscreen keyboard works well enough, but has shortcomings for anything but casual text entry. There’s no apostrophe on the QWERTY keyboard, and I think I’m not alone in considering the apostrophe a pretty important part of English, what with its use in both possessives and contractions. Autocorrect is nice for simple writing, but get technical or expand the vocabulary and it increasingly makes its own odd substitutions. Placement of the insertion point with the touch screen is really a pretty big step backward from the earliest onscreen editors like vi and emacs, with their keyboard control.

I like the either/or world of mouse, cursor keys, and keyboard shortcuts. The iPad enforces the tyranny of a touch interface that’s slow. For a while I hoped that using a Bluetooth keyboard would help with extended writing, freeing up screen space and giving me a full key set, but it doesn’t remove the conflict between keyboard and touch interface. I’d like to see how a Bluetooth trackpad would do as a substitute for the touchscreen.

The Bluetooth keyboard also locks out the onscreen keyboard, so that if you pick up the iPad from its stand and move to the sofa to read and edit, there’s no keyboard available until Bluetooth is disabled. This was unexpected to me, since with my MacBook Pro the Bluetooth keyboard and trackpad don’t affect the function of the laptop’s own input devices.

Apps that keep their data in the cloud make notetaking on the iPad possible, though. I mostly live in Evernote for note taking. It syncs perfectly across all of my devices: iPhone, iPad, MacBook Pro, and the Windows XP laptop supplied by my employer.

Reaching a decision

Over the last month, I performed a face-off between three options: iPad, MacBook Pro, and a theoretical MacBook Air. The first two started with the huge advantage that I already own them. If I could come to a workable solution without the Air, I could postpone the purchase until the next round of Apple product announcements, postponing the next hardware commitment.

I optimized both as well as I could, adding apps and honing workflows. I found in the end that the MacBook Pro was my solution of choice, except for work email and document generation or editing. My inability to get writing working well enough on the iPad meant that I needed more.

I decided that a little more formal exploration would be useful. My usual approach would be to build a Tinderbox map, but since I had been working on systematically exploring the tools I had at hand, I decided to create a mind map on the iPad using iThoughtsHD. Here’s the result of an hour’s work:

Contexts.png

If you’ve never used mind maps at all, I’d recommend giving them a try. Once you get the right motor movements for iThoughts, it becomes a powerful idea sketcher. In fact, I’d hold it up as a great example of a perfect use of the iPad. It’s graphical, perfectly suited for a touch interface, and needs only small amounts of text input.

A mind map is a hierarchically structured document, always a tree. It resolves into a standard outline, with the central node representing the outline itself and the first set of branches the highest level of the outline. Child nodes are children in the outline. The advantages of mind mapping are due to the graphic nature of the technique. In the map here, I started by considering contexts for device use, but realized that I’d need to represent devices within the map. So the pink nodes are devices or storage spots.
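The tree-to-outline correspondence described here can be sketched in a few lines of Python. This is just an illustration, and the node labels are hypothetical stand-ins for my actual map:

```python
# A mind map is a tree: each node is a (label, children) pair.
# Walking it depth-first yields a standard indented outline,
# with the central node as the outline's top level.

def to_outline(node, depth=0):
    """Render a mind-map node and its children as indented outline lines."""
    label, children = node
    lines = ["  " * depth + "- " + label]
    for child in children:
        lines.extend(to_outline(child, depth + 1))
    return lines

# Hypothetical fragment of a contexts/devices map.
mind_map = ("Contexts", [
    ("Home", [("MacBook Pro", []), ("iPad", [])]),
    ("Travel", [("iPhone", []), ("Kindle", [])]),
])

print("\n".join(to_outline(mind_map)))
```

The depth-first walk is exactly the "resolves into a standard outline" operation: every branch becomes an indented child entry under its parent.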

iThoughtsHD is nice because it has some nontraditional mind map elements. There are callouts and links (the red lines with arrowheads at each end). I didn’t use any here, but it also allows floating nodes, so one could create the kind of flat map that’s so natural in Tinderbox. Adornments, the background map zones in Tinderbox, would be a great addition to a mind mapping app like iThoughts. Mind mapping would be an interesting addition to Tinderbox, since it too is outline based.

What I learned from the mind mapping was that I have a huge number of tools, represented both by devices (iPhone, iPad, Kindle, MacBook Pro, Windows XP, notebook) and places (Evernote, the weblog, Twitter, etc.). Location and connectivity matter, but they influence the choice between tools more than the nature of the tools themselves does.

But I actually have fewer tasks. There’s work email and documents. There’s reading, photographing, and writing for this weblog and related projects. Then there’s note taking, which is information harvesting from the environment, both real and virtual.

For me, there ends up being no way around the conclusion that I need a corporate laptop for work and a creative laptop for content generation. So the best solution for writing is the smallest Mac that will run Scrivener, Tinderbox, Evernote, and MarsEdit well. So the 13″ MacBook Air is on its way.

Attaining Mastery

We must have evolved to judge risk and benefit well. So why are we so bad at understanding risk, particularly as presented in the medical literature?

Gavin de Becker, in “The Gift of Fear”, advises us to trust the feeling of knowing without knowing why. We know what to do without thinking about it. I think these gut feelings engage the brain’s innate risk-judging systems. We don’t have cognitive access to them other than through that gut feeling.

Mathematical models of risk and uncertainty don’t map onto the mind’s innate systems very well, particularly as odds. What does 1 in 100 mean? We are 1, not 100. Relative risk is a more intuitive way of expressing risk, but it focuses on chances of failure. Often we have no idea of the context and the real probabilities of success.

Instead we tend to adopt simplifying, non-probabilistic interpretations: “Cigarettes cause cancer.” “Obesity will kill you.” “Violent rhetoric causes violence.” Causation is assumed to be certain and mechanistic, and exceptions disprove the simplified model: my uncle smoked every day and lived to age 90.

Maybe rationality doesn’t really work too well in the world. Probability may be a more accurate way to model the world, but it causes fear and doubt because it can’t be controlled. The logic of reductionism and cause-and-effect thinking at least allows mental certainty, even if it doesn’t work consistently. After all, what’s the difference between being wrong but knowing the outcome was uncertain and just plain being wrong? Is being wrong for the right reason actually better?

I believe that I can make a compelling argument against adopting the simple cause and effect analysis model. What do you do when you’re wrong? If you believe that going out in the cold with wet hair causes colds, how do you explain all of those guys leaving the gym with wet hair day after day returning perfectly healthy the next day? Abandon your belief? Start creating more complicated chains of events that include age and diet quality? We often end up defensive when what we profess to be the truth is not reflected in reality.

Is there something that is more natural than fixed logic but is closer to the way the world really works? Maybe cultivating the feeling of knowing without knowing why is that way.

But what do we call feeling through decisions? Neuroscience makes it “instinct”, some kind of low level, subconscious, inferior urge. Rationality is always placed on a pedestal, elevated to the ideal. But adopting the view of an embodied mind, these feelings are no more or less important or integrated than seeing or understanding. Thirst and hunger come to mind as body signals. But where does the gut intuition about choice come from? It must be analysis and decision making by systems we don’t have conscious access to but that use facts and perception. These are deep skills of judgment and decision making that start with natural ability honed by years of practice.

Rationality is very democratic. Knowing without analysis takes expertise, practice, and a commitment to learning over years. Perfect practice that leads to perfecting practice. So against “analysis”, I’ll place “mastery”.

Be Do Have

I’ve never been able to find a good attribution for the concept of Be Do Have, but my best understanding is that current use arose from est and the Human Potential Movement. At least I heard it from a business consulting group that had its roots in the est world. It’s been traced back at least to a 1912 book, The Master Key System. Perhaps we can simply attribute it to what Stephen Covey called “The Wisdom Literature”.

The idea is that if we embrace mindfulness and living in the eternal Now, we turn the common mode of behavior on its head.

My own best example is buying gear. I love to buy things. Usually it’s photography equipment or outdoors equipment. I really enjoy taking photographs. I love being out in the wilderness, hiking and camping. But like many of us these days, I live an over committed life. I’m focused on getting the groceries, driving the kids to school, meeting my work commitments. Months go by and I realize I haven’t been out in the woods. I haven’t posted a single new image online.

A typical response for me to this frustration is to buy a new camera or new lens in order to take more photos. Or I buy my fourth pair of hiking shoes in order to hike more. The latest is better, lighter perhaps. Or maybe more like the old school boots that I had in school, when I was up in the mountains weekend after weekend. If I have those boots, I know I’ll hike more.

The logic is that if I have the photography or outdoor equipment, I will have what I need to do what photographers or hikers do, and therefore be a photographer or hiker. In business, having a corner office or a VP title will clearly enable you to do what a powerful executive does, and you will be that person of importance. The logic is based on have, do, be: if you have the things, you can do the actions and be the person you want to be.

Maybe the logic is really lacking. It may just be a psychological shortcut: focusing on the lack and acquiring things to avoid confronting the real reasons why I’m not what I profess to want to be.

The concept of “Be, Do, Have” turns this around. First we reflect on who we want to be. And then start being.

I have to decide to be a photographer, a hiker, or a leader. Once I’ve decided who I want to be and assumed that personal identity, it follows that I will do what that person would do. If I decide I want to be a photographer, I will simply do what a photographer does: create images. Obviously part of what a photographer does is use a camera, but the camera becomes a tool for doing. Finally, if I decide who I want to be, and do what that person would do, I will have what that person should have. In the end, that’s how I’ll have a collection of images, experiences of wilderness, or the power and satisfaction of leadership.

This is a mental habit of mine when I’m in conflict and need to decide what to do. I need to ask myself, “Who do I want to be here?” It serves as a cue to evaluate what I really want and frames the decision in the context of real values bigger than the moment. It cuts through the rationalization and avoidance, generally revealing a clear way forward.

On Risk

Risk analysis is a well developed theory and important in a wide range of fields from medicine to engineering. While it’s true that risk estimates are often based on very sparse real data, there’s often no better way of talking about somewhat rare bad outcomes. So even though there have been very few nuclear accidents, it’s important to estimate just how to build your new power plant to make another event like Three Mile Island or Chernobyl as unlikely as possible, even if the risk can’t be reduced to zero.

Risk focuses on what can go wrong. If we add up all of the risks in life, we know that the probability of a bad outcome reaches 100% since the probability of living forever is in fact zero. No one survives life. The probability of living to 120 years old is pretty close to zero. Death, along with taxes perhaps, becomes our only certainty.

Focusing on the innumerable potential risks in every undertaking and making choices solely on avoiding bad outcomes is paralyzing. One can easily come to spend all of one’s time avoiding and mitigating risk rather than assessing the probability of success and making choices based on reaching desired goals. If you focus on the risk of air travel, you’ll miss that trip of a lifetime. In truth, the risk of dying during that trip is so low as to not be a factor in the decision at all.

I propose that risk information be used to choose between alternatives when the chances of success can be improved without great cost. But looking at the path of life as paved with risk leading to an inevitable death is, I think, to be avoided.

We know there is a relationship between risk and reward. Betting on drawing an ace from a deck of cards should pay more than betting on drawing a spade. The ace is a one in thirteen chance; the spade is one in four. The cost of failure is small because the odds should, over time, make losses even out, even though the risk of failure on a single trial is much higher betting on the ace. In no way is betting on drawing the spade a “safer bet” than drawing the ace.
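The card-betting arithmetic can be made explicit. A minimal sketch, assuming fair payouts set in proportion to the odds, shows why neither bet is "safer" in expectation:

```python
from fractions import Fraction

def expected_value(p_win, payout):
    """Expected profit per unit staked: a win pays `payout`, a loss costs the stake."""
    return p_win * payout - (1 - p_win) * 1

p_ace = Fraction(4, 52)      # 4 aces in 52 cards -> 1 in 13
p_spade = Fraction(13, 52)   # 13 spades in 52 cards -> 1 in 4

# Fair payouts make both bets break even over time:
# 12-to-1 for the ace, 3-to-1 for the spade.
print(expected_value(p_ace, 12))
print(expected_value(p_spade, 3))
```

Both print zero: the ace bet fails more often per trial, but the larger payout exactly compensates, which is the sense in which losses even out over time.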

Risk is the flip side of probability. Risk, the probability of failure looked at in isolation, leads to fear because we’re examining a bad outcome not in our control. But looked at over many trials, failures should even out.

Making decisions under conditions of uncertainty is hard because we have just one chance to act. It seems to be all risk; the things that can go wrong loom large. Deciding better involves a change in perspective: having decided to decide, and having decided to act, it becomes a matter of which action to choose. By embracing the uncertain nature of the world, the fear that comes from lack of control can be managed.

On Causation and Content

Vaughan Bell of Mind Hacks has written a nice examination of the link between violence and mental illness at Slate:

If suspect Jared Lee Loughner has schizophrenia, would that make him more likely to go on a shooting spree in Arizona? – By Vaughan Bell – Slate Magazine: “Seena Fazel is an Oxford University psychiatrist who has led the most extensive scientific studies to date of the links between violence and two of the most serious psychiatric diagnoses—schizophrenia and bipolar disorder, either of which can lead to delusions, hallucinations, or some other loss of contact with reality. Rather than looking at individual cases, or even single studies, Fazel’s team analyzed all the scientific findings they could find. As a result, they can say with confidence that psychiatric diagnoses tell us next to nothing about someone’s propensity or motive for violence.”

It’s important that this be said, because the belief in a strong link between psychiatric diseases and violence makes life even harder for those unlucky enough to clinically manifest mental illness.

I took the opportunity to leave a comment at Slate to reflect my own thoughts on how best to think about causation in a case like this:

Vaughan,

While I appreciate your efforts to look at the big picture causation question of violence and mental illness, at the individual level there are at least three strongly interacting factors: the disease, external stressors, and content of thought.

Of the disease, little more need be said, as we know this is a complex interaction between genetics and environment. You discussed important stressors like drugs and alcohol, which ironically are often also attempts at self medication that go wrong.

Here, many of us are concerned about the content of the abnormal thought. We know that delusions have strong cultural and social content, and to my way of thinking a strong metaphorical component as well, because that’s how the mind works.

This was predicted. Political rhetoric using violence metaphors like gun sights, and media reports of people bringing guns to political rallies, are not in themselves incitements to violence, but perhaps they provide content for the disordered mind and lead to a choice of political violence among the other options.

William James and John Maynard Keynes on Deciding Better

I’m indebted to Glen Alleman for pointing out that John Maynard Keynes wrote a book on probability, A Treatise on Probability, which starts with the Bayesian view of probability as belief and moves on to explain how Frequentist concepts fit into the Bayesian world view.

Herding Cats: Books for Project Managers: “Each paragraph in the book provides insight like this. Two paragraphs later is the core of the current ‘black swan’ of probabilistic management. There is a distinction between part of our rational belief which is certain and that part which is only probable. The key here is there are degrees of rational belief and if we fail to understand, and more  important, fail to ‘plan’ in the presence of these degrees, then were are taking on more risk and not knowing it. This is a core issue in the financial crisis and managing projects in the presence of uncertainty. “

This relationship between belief and probability is an important basis for decision making, forming the bedrock of what I see as the American philosophy of Pragmatism. It’s a bottom-up point of view rooted in experience and practicality. William James, who codified this point of view as “Pragmatism”, famously said, “Truth is what works.”

I’ve explored a bit of the Keynes book already, and I know that Keynes was influenced by his Cambridge associations with G.E. Moore, who similarly took this bottom-up, individual-belief-based approach, most famously in Principia Ethica. So perhaps this Cambridge-Bloomsbury connection makes this really an Anglo-American philosophy.

There was a time when our search for truth as a culture led us into periods of severe doubt and Continental philosophies like Existentialism and Deconstructionism. These were times of great shifts in values and cultural upheaval. Arguments from first principles were swept aside by feelings of being without roots in a world without intrinsic meaning.

For James, Moore and Keynes, there’s a grounding in the pragmatic idea that there is a real world out there that we can know and predict however imperfectly. Decisions based on our beliefs have consequences so we had better work on refining those beliefs and improve our decision making.

Perhaps we’re ready for a return to a more practical Anglo-American philosophy based on experience, culture, belief and the scientific approach to finding meaning in the world. At least I know I am.

The Metalevel: Computational Paths to Human Understanding

I’ve got to hand it to Evan Williams. It’s not an accident that he’s been at the start of these large Internet developments.

GigaOm:

“OM: Do you think that the future of the Internet will involve machines thinking on our behalf?

Ev: Yes, they’ll have to. But it’s a combination of machines and the crowd. Data collected from the crowd that is analyzed by machines. For us, at least, that’s the future. Facebook is already like that. YouTube is like that. Anything that has a lot of information has to be like that. People are obsessed with social but it’s not really “social.” It’s making better decisions because of decisions of other people. It’s algorithms based on other people to help direct your attention another way.”

This is exactly where I think we are in the evolution of the extension of the mind by global computer networks. We are individual nodes or modules taking in information and producing behaviors. We’re individually limited in bandwidth both coming in and going out. Those are human constraints that have some fixed limit based on biology and time.

There’s an aggregated metalevel that’s too big for us to see individually, but can be looked at computationally by our machines and fed back to us as processed information. By analogy, the visual part of the brain has only indirect access to somatosensory input from the fingers, but gets access to the synthesized information to refine visual analysis.

This isn’t just limited to the internet. Another good example is the genome: endless strings of four bases can’t be interpreted by inspection. It’s just too much information. But we very quickly developed computational methods to show us the patterns for human interpretation.

These are examples of complex, networked systems that require computational analysis for human understanding.

Steven Strogatz: Sync

Steven Strogatz has been one of the leading figures in the mathematics of biological systems. While synchronization of independent elements is the thread that brings his book Sync together, it’s all in the context of the new systems view of biology.

His process is to frame a question about complex systems and then look for answers by running computer simulations of the process. When order emerges in the simulation, he and his colleagues try to discern the mathematics underlying the order. These connections are so complicated that one can’t predict their behavior by inspection and reason. In general, it’s easier to recreate aspects of them in a simple model in order to understand how they behave.

This is a very basic demonstration of the emergent behavior of a system. The behavior of the larger system can’t be predicted by understanding the behavior of the components and their interconnections. Even more interesting is how small changes to the individual units or their connection strengths can radically change the emergent behavior of the system. Once you have a simple working model, deeper understanding of the possible states of the system can be gained by looking at behavior over a wide range of assumptions and conditions. Here Strogatz is interested in how synchronous activity emerges in networks.
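The classic example of this kind of simulation is the Kuramoto model of coupled oscillators, which Strogatz has studied extensively. The sketch below (all parameter values are illustrative, not taken from the book) shows the key emergent effect: below a critical coupling strength the oscillators stay incoherent, and above it they pull into sync, measured by an order parameter r that runs from 0 (incoherent) to 1 (fully synchronized):

```python
import math, random

def kuramoto_order(n=100, coupling=2.0, dt=0.05, steps=400, seed=1):
    """Simulate n coupled phase oscillators; return the final order parameter r."""
    rng = random.Random(seed)
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]           # natural frequencies
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # random initial phases
    for _ in range(steps):
        # Mean-field form: each oscillator is pulled toward the mean phase psi
        # with a force proportional to the current coherence r.
        rx = sum(math.cos(p) for p in phases) / n
        ry = sum(math.sin(p) for p in phases) / n
        r, psi = math.hypot(rx, ry), math.atan2(ry, rx)
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    rx = sum(math.cos(p) for p in phases) / n
    ry = sum(math.sin(p) for p in phases) / n
    return math.hypot(rx, ry)

print(kuramoto_order(coupling=0.1))  # weak coupling: stays incoherent, small r
print(kuramoto_order(coupling=3.0))  # strong coupling: synchronizes, r near 1
```

This is exactly the "small changes radically alter emergent behavior" point: nothing about an individual oscillator changes between the two runs, only the coupling strength, yet the system-level behavior flips from disorder to synchrony.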

These simplified systems aren’t real, but they are useful tools. Just as a map is not the terrain, a system model is not the system itself. The model is useful only if it predicts the behavior of the real system, just as a map is only useful if it predicts the proper route through the landscape. This is the iterative nature of science and a reflection of William James’ Pragmatism: truth is what works.

As a scientist, I gained a bit of insight into why it’s easy to manipulate the state of some biological systems. I spent many years in the lab studying mechanisms of cell death. I could never understand why so many investigators were able to find so many different ways to halt the process once it had been set in motion by an experimental perturbation. Surely all of these processes couldn’t be independently responsible for the cell death? If they were independent, then blocking just one wouldn’t help cells survive. Other, unaffected processes would carry out the deed.

The many interactions within a cell place some events at nodes that have broader effects. The other day, an accident on a highway here in Baltimore managed to tie up much of the traffic north of the city. There were cascading events as traffic was shunted first here and then there by blockage and congestion in one key pathway after another. Similarly, cell processes or cell state can be shifted from one state to another by strategic triggers.

Tools like network maps and simulations promise to provide a means for understanding complexity that won’t yield to simple cause and effect diagrams. Strogatz ends the book with some contemplation of how consciousness arises from the network of neural connections. It may be that synchronization across the cerebral cortex is responsible for the binding of shape and color in visual objects, or the binding of object and word.

Of course, it’s this idea of mind and meaning as the emergent effect of complex systems that has interested me for some time now. As a neurobiologist, I find that calling meaning an emergent quality of brain is a neat way to bridge the material and immaterial worlds.