I chose this paper as the first paper for the virtual Journal Club because it is the first description of a concept central to my formulation of decision making. It’s available here.
W. Ross Ashby was born in England, where he earned an MD and trained as a psychiatrist. He published this paper after he moved to the US, where he was Professor in the Departments of Biophysics and Electrical Engineering at the University of Illinois at Urbana-Champaign. Ashby was a foundational thinker in the creation of the way of thinking variously called “systems theory”, “cybernetics” and “complexity”. Broadly speaking, thinking about systems is a useful approach in a number of scientific disciplines, including ecology, evolution, neuroscience, engineering, and medicine. Stated simply, in a system the whole is analyzed as being greater than the sum of the parts; we focus not only on the reductionist step of taking the system apart, but also on how interactions between the parts can lead to behavior not predicted from the reductionist approach.
The crux of the argument in the paper is as follows:
under very broad conditions, that any regulator that is maximally both successful and simple must be isomorphic with the system being regulated. (The exact assumptions are given.) Making a model is thus necessary.
Conant and Ashby explicitly mention what interests me: that this implies the brain is a model of the world, of the body and its environment. In the past, I’ve tried to understand where the idea of modeling and simulation originated. Based on discussions in the ’60s and ’70s, when the approach was made explicit, as in this paper, I’ve come to the conclusion that the term “model” became a shorthand for a simpler, abstracted notion of reality. It seems to have arisen simultaneously across mathematics, physics and biology without anyone really thinking rigorously about what it meant. Sometimes it’s a map, sometimes an equation and sometimes a causal diagram. I grant this paper a central place because it took the question in a more rigorous direction, and I think it was influential at the time.
The particular group of systems thinkers around Ashby were concerned with control of complex systems. Since he was a psychiatrist, Ashby, like me, always looked toward understanding the brain as the ultimate goal. For him and other “cyberneticists”, control systems, robots and automata were simpler objects that could be studied even as more complicated systems like the brain, ecologies and social groups were the real areas of interest.
So before considering the idea of a model, they first codify the idea of regulation. The paper itself attempts to be rigorous, using diagrams, logical statements and theorems. I’m going to try to briefly put the arguments into words rather than attempt to reproduce them in detail for now.
To define a regulator, we assume a system that has a collection of potential states. Some of those states are desired, so the purpose of the regulator is to move the system toward that goal. For a heating or air conditioning system, it’s moving the room air temperature toward the desired temperature. For a man wielding a hammer, it’s hitting the nail on the head. The thermostat is a simple feedback system, but the hammer blow is a more complex, predictive “cause-controlled type” (in the language of the paper).
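The thermostat case can be sketched in a few lines of code. This is a minimal, hypothetical simulation, not anything from the paper; the room physics, constants, and function names are all invented for illustration:

```python
# A minimal bang-bang thermostat: a feedback regulator that only reacts
# to the error it senses right now. All constants here are invented.

def thermostat_step(temp, setpoint, heater_on, hysteresis=0.5):
    """Switch the heater based solely on the currently sensed temperature."""
    if temp < setpoint - hysteresis:
        return True            # too cold: turn the heater on
    if temp > setpoint + hysteresis:
        return False           # too warm: turn the heater off
    return heater_on           # inside the dead band: leave it alone

def simulate(setpoint=20.0, steps=200):
    temp, heater_on = 10.0, False
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        # toy room physics: the heater adds heat, the room leaks heat
        # toward a 5-degree exterior
        temp += (1.0 if heater_on else 0.0) - 0.05 * (temp - 5.0)
    return temp

final = simulate()  # settles into a narrow band around the 20-degree setpoint
```

A feedback regulator like this only reacts to an error it has already sensed; the paper’s “cause-controlled” regulator would instead act on the disturbance itself (the open window, the cold front) before the error ever appears.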
So how do we prove that the regulator, so defined, has to contain a model of the system being regulated? First we need to agree that an optimal regulator comes as close as possible to successfully moving the system to its goal. Otherwise it’s not a good regulator, and if another regulator would do better, it’s not optimal. So the goodness, or optimality, of the regulator is how close it gets the system to the goal. In physical or informational terms, this error is entropy, or disorder. While the use of this kind of information-theory terminology isn’t too relevant for the argument being made, it foreshadows later formulations of this idea relative to the brain, like Friston’s Free Energy Principle. (You can read about it here on Wikipedia, but this paper is a nice description supporting the idea that the brain contains a model of the world.) Here’s the formal statement of an “optimal regulator”:
We consider the regulatory situation described earlier, in which the set of regulatory events R and the set of events S in the rest of the system (i.e. in the ‘reguland’, S, which we view as R’s opponent) jointly determine, through a mapping ψ, the outcome events Z. By an optimal regulator we will mean a regulator which produces regulatory events in such a way that H(Z) is minimal.
R is the “model” that is in the regulator, acting against the disturbances S so that the outcome Z in the real system lands on the goal. H(Z) is the entropy of the outcomes, the residual uncertainty or disorder stated in terms of information; minimizing it means pinning the system as close to the desired state as possible.
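To make the formalism concrete, here is a toy version in code. The outcome table ψ(r, s) below is hypothetical, not the paper’s own example; the point is just that a regulator whose action depends on the disturbance can drive H(Z) to zero, while one that ignores S cannot:

```python
from collections import Counter
from math import log2

# Hypothetical outcome table psi(r, s): the regulator's move r and the
# disturbance s jointly determine the outcome z (not the paper's own table).
psi = {
    ('a', 1): 'x', ('a', 2): 'y', ('a', 3): 'z',
    ('b', 1): 'y', ('b', 2): 'z', ('b', 3): 'x',
    ('c', 1): 'z', ('c', 2): 'x', ('c', 3): 'y',
}
disturbances = [1, 2, 3]  # assume each disturbance occurs equally often

def H(policy):
    """Entropy H(Z) of the outcomes when the regulator plays policy[s] against s."""
    counts = Counter(psi[(policy[s], s)] for s in disturbances)
    total = sum(counts.values())
    return -sum(n / total * log2(n / total) for n in counts.values())

ignore_s = {1: 'a', 2: 'a', 3: 'a'}  # same move regardless of s: outcomes x, y, z
h = {1: 'a', 2: 'c', 3: 'b'}         # mapping h: S -> R that pins the outcome at 'x'

H(ignore_s)  # about 1.58 bits of leftover disorder
H(h)         # zero bits: the optimal regulator tracks S through the table
```

The policy `h` is exactly the mapping from disturbance events to regulatory events that the theorem is about: to hold the outcome constant, the regulator’s moves must mirror what S does.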
Defining a model, they confess, is actually more difficult than defining a regulator. The definition they end up with is vague enough to allow anything that permits optimal regulation of the system. It is a black box without any internal requirements, just the necessary function of accepting information and outputting action to minimize the difference from the desired state.
Now maybe that’s fair and suited for purpose, but it is far from what you might imagine as “isomorphism” in the sense of a one-to-one mapping. To be fair, they consider the simple idea of mapping, like a scale model of a cathedral, where the model is a linear transformation of the system into a different set of coordinates, and then push this to non-linear mappings, like the projection of the surface of the earth from a sphere onto a 2D map that has to be distorted to force a point-to-point correspondence. But clearly there’s no point-to-point correspondence in most of the regulators we’d be interested in, just some representation that can be used to tell the regulator how the system has been disturbed and how to guide it back toward its goal. And we’re relieved of having to find where 3D space is modeled in the brain to know that it is modeled: if the brain can guide the hand in 3D, it must embody a model in there.
So in the end:
Theorem: The simplest optimal regulator R of a reguland S produces events R which are related to the events S by a mapping h: S → R.
Restated somewhat less rigorously, the theorem says that the best regulator of a system is one which is a model of that system in the sense that the regulator’s actions are merely the system’s actions as seen through a mapping h.
So we are left to say that the disturbance to the system is well mapped by the inputs of the regulator, and the actions of the regulator map onto the system as the right corrective actions. The model itself remains a black box, but because input accurately translates into action in the regulated system, there is a model in the black box.
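The “simplest” qualifier in the theorem can also be illustrated with a toy computation (again with a hypothetical outcome table, not one from the paper): a regulator whose events are a deterministic function h of S leaves no disorder in Z, while one that adds its own randomness, and so is not a mapping of S, pays for it in entropy:

```python
from math import log2

# Hypothetical 2x2 outcome table psi(r, s); not taken from the paper.
psi = {('a', 1): 'x', ('a', 2): 'y',
       ('b', 1): 'y', ('b', 2): 'x'}

def entropy(dist):
    """Shannon entropy of an outcome distribution, in bits."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def outcome_dist(policy):
    """policy[s] maps each action r to its probability; s is uniform on {1, 2}."""
    dist = {}
    for s in (1, 2):
        for r, p in policy[s].items():
            z = psi[(r, s)]
            dist[z] = dist.get(z, 0.0) + 0.5 * p
    return dist

# The mapping h (1 -> a, 2 -> b) always yields outcome 'x'.
mapping_h = {1: {'a': 1.0}, 2: {'b': 1.0}}
# A coin-flip regulator picks its move at random, ignoring its best move half the time.
coin_flip = {1: {'a': 0.5, 'b': 0.5}, 2: {'a': 0.5, 'b': 0.5}}

entropy(outcome_dist(mapping_h))  # no disorder: Z is pinned at 'x'
entropy(outcome_dist(coin_flip))  # a full bit of leftover disorder
```

Any regulator doing strictly more than the mapping h, such as randomizing over actions, either matches h’s performance or falls short of it, which is why the simplest optimal regulator is a mapping of S.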
The developing knowledge of regulation, information-processing, and control is building similar criteria for the brain. Now that we know that any regulator (if it conforms to the qualifications given) must model what it regulates, we can proceed to measure how efficiently the brain carries out this process. There can no longer be question about whether the brain models its environment: it must.
And so too for the brain. Just as a heart has to be a pump, the brain has to be a good regulator: it must accurately sense events in the system and output actions to achieve its goals. The sensing of input has to be close to the real disturbance, and the output has to accurately change the system for it to have any effect, with closer being optimal. The brain will search to find that optimum.
In closing, let me be clear. This necessary existence of a model has nothing to do with consciousness or awareness. We have the subjective experience of the brain providing a detailed model of the world. Somehow we’re actually subjectively aware of the model, one that appears solid, consistent and real, even though it is a brain construct. This rather simple discussion of Conant and Ashby just tells us that it is necessary that a model be embodied in the brain, but nothing about how it’s constructed or how we get this subjective awareness of it.