Maps and Legends: Brain as world model

## Can you map decision theory onto brain mechanisms?
It’s clear the brain doesn’t make decisions in the way that’s been formulated as “rational” by decision theory. You won’t find branching decision trees composed of options, and there’s no probability calculation that weights the potential payoff of different options. It’s a complex system built of networked neurons, quite opaque as to where it hides meaning. Yet somehow the brain makes decisions that, within limits, appear pretty optimal.
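Just to make the contrast concrete, here is a minimal sketch (in Python, with invented options, probabilities, and payoffs) of the kind of expected-value calculation decision theory has in mind. Nothing in the cortex computes this explicitly.

```python
# Toy expected-value calculation in the style of classical decision theory.
# Options, probabilities, and payoffs are invented for illustration.
options = {
    # [(P(rain), payoff if rain), (P(no rain), payoff if no rain)]
    "take the umbrella":  [(0.3, -1), (0.7, -1)],
    "leave the umbrella": [(0.3, -10), (0.7, 0)],
}

def expected_value(outcomes):
    """Weight each payoff by its probability and sum."""
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in options.items():
    print(f"{name}: expected value = {expected_value(outcomes):.2f}")

best = max(options, key=lambda name: expected_value(options[name]))
print("choose:", best)
```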

## Are brain maps central?
It’s been known since the beginning of modern neuroscience that the cerebral cortex is organized as a series of maps. There are maps of the body surface in the primary sensory area for touch, maps of the retina for vision, and tonotopic maps for hearing. Of course the primary motor cortex responsible for fine movement is also mapped across the body.

Flattened out, it’s an area of about 2.5 square feet, but we see it folded into gyri to fit compactly in the skull. Other than the sensory maps, the rest of the cortex, the “association areas”, doesn’t have explicit physical maps, but instead maps other kinds of space: movement or meaning, much of which is still bound to a sensory or motor channel such as vision (by far the largest in the human brain) or touch.
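As an aside on how such maps are usually modeled: a standard computational toy is Kohonen’s self-organizing map, in which a small sheet of units gradually arranges itself so that neighboring units come to prefer neighboring points of an input surface. This is a textbook abstraction, not a claim about the cortical mechanism, and the grid size, learning rates, and random inputs below are arbitrary.

```python
import numpy as np

# Toy self-organizing map: a 10x10 sheet of units learns a topographic
# layout of 2-D inputs (a stand-in for a patch of "skin" or "retina").
rng = np.random.default_rng(0)
grid = 10
weights = rng.random((grid, grid, 2))  # each unit's preferred input point
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij"), axis=-1)

def train(weights, steps=5000, lr=0.5, radius=3.0):
    for t in range(steps):
        x = rng.random(2)                      # random point on the input surface
        # Find the best-matching unit: the unit whose weights are closest to x.
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Shrink the learning rate and neighborhood radius over time.
        frac = 1.0 - t / steps
        sigma = radius * frac + 0.5
        h = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma ** 2))
        # Pull the winning unit and its neighbors toward the input.
        weights += (lr * frac) * h[..., None] * (x - weights)
    return weights

weights = train(weights)
# After training, neighboring units prefer neighboring inputs: a crude map.
print(weights[0, 0], weights[0, 1], weights[9, 9])
```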

Perhaps these interconnected maps of the world are central to how decisions are made in the brain, because we experience consciousness as a representation of the world through these maps. It’s as if the brain is a simulation of our body moving through space. The global simulation I’m thinking about isn’t just a sensory image of the world built of reflected light and air pressure changes; it has an implicit understanding of physics and meaning (semantics) built into it. See an apple and know that it is something that doesn’t weigh much and is good to eat. It seems attentional mechanisms limit our access to everything going on across the cortex because of limitations on working memory or other real-time control mechanisms, but the simulation is there to provide the options for action available moment to moment.

So maybe map is a bit limiting as a term. That’s the two-dimensional representation of the skin, the retina, or the frequency scale. The brain assembles that raw information into shapes and objects with qualities like color and geometry that don’t vary with the quality of illumination or the angle of view. We actually see letters and words even though language is metadata cued by visual input.

## Content, not mechanism of mind
While I like the map analogy, I’m not enthusiastic about “theories of consciousness” in general. I think they are mostly category errors where someone tries to explain an emergent observation, mind, in terms of the component parts of the system, neurons and networks. It’s useful to try to understand underlying mechanisms, but fruitless in general to go the other way. I can tell you how a clock moves in a regular pattern so that I can tell the time. A clock, however, doesn’t have in it the idea of time or hours or of being late for my next appointment.

This was the challenge understood by early systems theory thinkers. As they saw very simple robot systems exhibit complex and unpredictable behavior, they quickly realized that while the behavior was contained in the system, it hadn’t been designed in and wasn’t there explicitly. Each part has a limited role to play, but in interacting a complex behavior emerges. No individual ant knows how to signal to others how to get to a food source or build a network of tunnels. Implicit knowledge is built into each one. The DNA of a single cell has all the information needed to build a whale or a platypus. But no one reading the string of nucleotides would imagine there was a potential mammal there.

I’d put the theorizing of Tozzi, Friston and others into the systems theory camp. For example, in “Towards a Neuronal Gauge Theory”, they attempt to formalize this mapping idea, casting the brain in the role of minimizing uncertainty about the external world. In fact they cite Conant and Ashby’s good regulator hypothesis [34], which states that every good regulator of a system must be a model of that system.
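A cartoon of what “minimizing uncertainty about the external world” means in this framing: an internal model keeps a probability distribution over hidden world states, and observations, run through Bayes’ rule, reduce the expected entropy of that belief. The states and likelihoods below are invented for illustration, and this is only a shadow of the formal treatment in the paper.

```python
import numpy as np

# A cartoon "internal model": a belief over three hidden world states,
# and a likelihood table P(observation | state). All numbers are invented.
states = ["apple", "ball", "shadow"]
prior = np.array([0.5, 0.3, 0.2])        # current belief P(state)
likelihood = np.array([                  # rows: states, columns: observations
    [0.7, 0.2, 0.1],                     # P(obs | apple)
    [0.2, 0.6, 0.2],                     # P(obs | ball)
    [0.1, 0.1, 0.8],                     # P(obs | shadow)
])

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Predictive distribution over observations: P(obs) = sum_s P(obs|s) P(s).
p_obs = likelihood.T @ prior

# Expected entropy of the posterior, averaged over possible observations.
expected_posterior_entropy = 0.0
for o, po in enumerate(p_obs):
    posterior = likelihood[:, o] * prior / po   # Bayes' rule
    expected_posterior_entropy += po * entropy(posterior)

print(f"uncertainty before observing: {entropy(prior):.3f} bits")
print(f"expected uncertainty after:   {expected_posterior_entropy:.3f} bits")
# The second number is never larger than the first: on average,
# each observation reduces the model's uncertainty about the world.
```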

## Where choice comes from
So choice is implicit in the brain’s modeling of the world. The maps provide the options, values and probabilities that have been formalized as decision theory. There’s a neural calculus going on, but one that is far from the small world of even our most sophisticated models and mathematics. Fundamentally, the brain is a functioning part of a bigger system that includes other brains and a real environment, feeding a network of meaning and physical complexity that can’t be captured in the static numbers we use for computation.
