Editing progress, emergence, prediction

So it’s been a month since I last posted here. Time flies.

After Thanksgiving, I took a break for some important family activities, but on that break I actually got back to editing my manuscript. I finished the first draft back in June and started the first round of editing. In my writing I’ve been following the guidance of Tucker Max at Scribe Media. In his editing method, the first pass is a “Make It Right” edit, where you make sure everything is there and it makes sense.

For me, that includes some pretty big chapter reorganizations and filling out some key introductory discussions in the first three chapters. Toward the end of the third chapter, which discusses where uncertainty comes from, I realized that there wasn’t a really good discussion of emergence and its role in making complex systems both unpredictable and at the same time understandable. Depending on how you look at it, Sean Carroll had Anil Seth on his podcast, which resulted in a few weeks spent delving into Seth’s and others’ interesting approaches to formalizing the idea of emergence in complex systems, including ideas around simulation, compressibility, and Granger causality.
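Granger causality, at least, is something you can play with directly. Here’s a minimal sketch in Python using statsmodels. It only shows the underlying statistical test (does one series’ history improve predictions of another?), not Seth’s actual emergence measure, and the toy series and coupling strength are invented for illustration.

```python
# Toy Granger-causality check: does knowing x's past help predict y?
# Illustrates the statistical test only, not a measure of emergence.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)  # driver series
y = np.zeros(n)  # series we try to predict
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # y depends on x's past

# statsmodels convention: test whether the SECOND column
# Granger-causes the FIRST. Prints F-tests for each lag.
grangercausalitytests(np.column_stack([y, x]), maxlag=2)
```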

Plus, in preparation for editing the next chapter, on the nature of probability, I started to develop a deeper appreciation for Bayesian inference and its relation to brain mechanisms. Our perception is an active process where incoming sensory data either matches or doesn’t match the brain’s current model of the world. In other words, we experience a hypothetical world, a set of beliefs that in the language of probability is a Bayesian prior.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
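That trade-off falls straight out of Bayes’ rule. Here’s a minimal sketch in Python, assuming Gaussian beliefs and Gaussian sensory noise (all the numbers are made up for illustration): each source of information is weighted by its precision, so the noisier the input, the closer the percept stays to the prior.

```python
# Conjugate Gaussian update: prior belief x noisy observation -> percept.
# Precision (1/variance) weights each source; illustrative numbers only.
def posterior(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior and likelihood."""
    prior_prec = 1.0 / prior_var
    obs_prec = 1.0 / obs_var
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * obs)
    return post_mean, post_var

# Prior belief: the object sits at position 0. Sensory report: position 2.
for obs_var in [0.5, 2.0, 8.0]:  # increasingly ambiguous input
    m, v = posterior(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=obs_var)
    print(f"sensory noise {obs_var:4.1f} -> percept at {m:.2f}")
```

With low sensory noise the percept lands near the observation; crank the noise up and it drifts back toward the prior belief.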

Some important new results comparing machine learning algorithms with neural mechanisms started me reading some of the new literature on cortical analysis and representation, an area that is really making progress, as summarized in this article in Wired:

Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models show some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.
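To make the idea concrete, here’s about the smallest “prediction machine” I can write down: a single unit that predicts the next sample of a signal and learns only from its prediction error. It’s a toy caricature of my own, not any specific model from that literature.

```python
# A bare-bones prediction machine: one unit predicts the next sample of
# a signal, and only the prediction error drives learning.
import numpy as np

rng = np.random.default_rng(1)
w = 0.0    # learned prediction weight: prediction[t] = w * signal[t-1]
lr = 0.01  # learning rate
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.normal(size=2000)

errors = []
for t in range(1, len(signal)):
    prediction = w * signal[t - 1]
    error = signal[t] - prediction   # prediction error, the "surprise"
    w += lr * error * signal[t - 1]  # learn only from the error
    errors.append(error ** 2)

print(f"mean squared error, first 100 steps: {np.mean(errors[:100]):.4f}")
print(f"mean squared error, last 100 steps:  {np.mean(errors[-100:]):.4f}")
```

The squared error shrinks as the unit’s model of the signal improves, which is the whole game: perception as continuously corrected prediction.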

So unlike metaphors like your brain is “a switchboard” or “a computer,” it seems we’re converging on an understanding from two different directions this time, rather than just using the current technology to describe brain function.

Since the idea of writing the manuscript is to collect my own thoughts, I can’t be too hard on myself in trying to make sure it’s all there. I have no deadlines or pressing need to get this out there. It’s a commitment to the process, not the product.

It’s a very long-term project for me and, as David Perell recently wrote:

Long story short, commitment is undervalued. 

So here’s how I suggest responding to this trend: whatever your tolerance for commitment is, raise it. 

If today you’re comfortable committing to something for two hours, try committing for a weekend. If you’re comfortable committing for two weeks, then raise it to two months; once you’re comfortable with two months, raise it to two years; and once you’re comfortable with two years, raise it to two decades. It’s okay to start small. All big things do. But they have to start somehow and with commitment comes momentum. Commitment happens in stages, and only by embracing it can you stop hugging the X-Axis and climb the compounding curve.