Editing progress, emergence, prediction

So it’s been a month since I last posted here. Time flies.

After Thanksgiving, I took a break for some important family activities, but on that break I actually got back to editing my manuscript. I finished the first draft back in June and started the first round of editing. In my writing I’ve been following the guidance of Tucker Max at Scribe Media. In his editing method, the first pass is a “Make It Right” edit, where you make sure everything is there and it makes sense.

For me, that’s included some pretty big chapter reorganizations and filling out some key introductory discussions in the first three chapters. Toward the end of the third chapter, which discusses where uncertainty comes from, I realized that there wasn’t a really good discussion of emergence and its role in making complex systems both unpredictable and at the same time understandable. Luckily (or not, depending on how you look at it), Sean Carroll had Anil Seth on his podcast, which has resulted in a few weeks of delving into Seth’s and others’ interesting approaches to formalizing the idea of emergence in complex systems, including ideas around simulation, compressibility, and Granger causality.

Plus, in preparation for editing the next chapter, on the nature of probability, I started to develop a deeper appreciation for Bayesian inference and its relation to brain mechanisms. Our perception is an active process where incoming sensory data either matches or doesn’t match the brain’s current model of the world. In other words, we experience a hypothetical world, a set of beliefs that in the language of probability is a Bayesian prior probability.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
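
To make this concrete for myself, here’s a toy sketch in Python (my own illustration, not anything from Seth or the manuscript) of the simplest Gaussian case: a prior belief combined with a noisy observation. The only point is that as the sensory input gets noisier, the estimate stays closer to the prior.

```python
# Toy Bayesian cue combination: Gaussian prior belief + Gaussian sensory evidence.
# The noisier (more ambiguous) the input, the more the posterior sticks with the prior.

def posterior(prior_mean, prior_var, obs, obs_var):
    """Posterior mean and variance for a Gaussian prior and Gaussian likelihood."""
    k = prior_var / (prior_var + obs_var)      # weight given to the observation
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var
    return post_mean, post_var

prior_mean, prior_var = 0.0, 1.0               # the brain's current hypothesis
obs = 2.0                                      # incoming sensory evidence

for obs_var in (0.1, 1.0, 10.0):               # crisp -> ambiguous input
    m, _ = posterior(prior_mean, prior_var, obs, obs_var)
    print(f"sensory noise {obs_var:>4}: posterior mean {m:.2f}")

# Crisp input (noise 0.1) pulls the estimate toward the data;
# ambiguous input (noise 10) leaves it sitting near the prior.
```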

Some important new results comparing machine learning algorithms with neural mechanisms started me reading some of the new literature on cortical analysis and representation, an area that is really making progress, as summarized in this article in Wired:

Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models show some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.
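
As an outsider to the modeling literature, I find it helps to strip the idea down to a toy. The sketch below is my own minimal version of a “prediction machine,” not any of the published models: a linear predictor whose only learning signal is the error in its prediction of the next incoming sample.

```python
import numpy as np

# Toy "prediction machine": a linear model learns to predict the next sample
# of a signal from the previous few samples, driven only by prediction error.
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)

order, lr = 5, 0.05
w = np.zeros(order)                  # the model's internal expectations
squared_errors = []

for t in range(order, len(signal)):
    context = signal[t - order:t]    # recent sensory history
    prediction = w @ context         # what the model expects to come next
    error = signal[t] - prediction   # surprise: mismatch with the incoming data
    w += lr * error * context        # update beliefs in proportion to surprise
    squared_errors.append(error ** 2)

print(f"mean squared error, first 100 steps: {np.mean(squared_errors[:100]):.4f}")
print(f"mean squared error, last 100 steps:  {np.mean(squared_errors[-100:]):.4f}")
# Surprise shrinks as the internal model comes to anticipate its input.
```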

So unlike metaphors such as the brain being “a switchboard” or “a computer” (and speaking of computation), it seems we’re converging on an understanding from two different directions, rather than just using the technology of the day to describe brain function.


Since the idea of writing the manuscript is to collect my own thoughts, I can’t be too hard on myself in trying to make sure it’s all there. I have no deadlines or pressing need to get this out there. It’s a commitment to the process, not the product.

It’s a very long-term project for me and, as David Perell recently wrote:

Long story short, commitment is undervalued. 

So here’s how I suggest responding to this trend: whatever your tolerance for commitment is, raise it. 

If today you’re comfortable committing to something for two hours, try committing for a weekend. If you’re comfortable committing for two weeks, then raise it to two months; once you’re comfortable with two months, raise it to two years; and once you’re comfortable with two years, raise it to two decades. It’s okay to start small. All big things do. But they have to start somehow and with commitment comes momentum. Commitment happens in stages, and only by embracing it can you stop hugging the X-Axis and climb the compounding curve.

The Explanatory Power of Convergent Models

There’s interesting research emerging comparing our ever-improving machine learning models to data generated from brains. Not surprisingly, we do best at replicating what the brain can do when the computer models begin to model the underlying neural machinery. Now the substrate is entirely different, but the predictive approaches appear to be similar. We used to think that the brain was creating representations of the world, with features being abstracted at each level of cortical processing.

The problem that everyone saw from the beginning with this concept is that there’s no little man in the theater of the mind to look at the representations. Instead, the brain is representing hypotheses, and these predictions are constantly updated by the stream of incoming sensation from the world.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.

And so too with language: the brain guesses what word comes next, settling on a fixed interpretation when we hear unclear or ambiguous language. Of course, we often guess wrong, famously with song lyrics (e.g. “There’s a bathroom on the right”), mishearings known as mondegreens.

I’m beginning to appreciate just how important this is, as our ability to look at brain activity improves just as our computational ability to create these models begins to match it. We’re not recreating higher brain function from the bottom up by understanding circuits and connections, but instead from the top down. Perhaps that’s not surprising, as this is how physical sciences like chemistry and physics have advanced. They create formulas and equations, mathematical models of the world, that have remarkable predictive power. Once systems get too complex, these methods seem to fall apart and numerical simulation is needed. Nevertheless, when those models start converging on the behavior of the real thing, they seem to tell us about what’s actually going on in the complex system being modeled. Truly a remarkable time for brain science.

Hard to be a saint in the city

I hadn’t really thought about how our social media environment might affect music and art criticism until I read this Eleanor Halls interview:

Where do you see music journalism headed?

I think we need to have honest conversations about the role of music journalism and whether much of it still has any value. I worry that music journalism—interviews and reviews—is becoming PR for some musicians. Most journalists are freelance and don’t have the support of editors or publishers, and rely on publicists for talent access so they can get work. It’s no wonder they often feel too intimidated by an artist and their team to write what they really think.

There’s always been a bargain between critics and artists regarding access and cooperation. It’s only natural that an artist would share insights with a sympathetic journalist and not one who has little enthusiasm for the style or approach of the artist. Personal relationships have always played a big role in what we read as criticism and commentary.

While some nasty letters from fans may have been the price a critic paid for publishing a negative take, I can see how the amplification of opinion on social media makes the pressure far more real. And without a publication behind them, freelance writers are much more dependent on these relationships for access to artists, creating a competition to curry favor with creators and their fans.

I think it’s true that the tone of discussion across the internet tends to be more promotional than print publications ever were. Editorial independence is lost. I don’t think it’s even a real bias, necessarily, but a function of writers choosing to write about what they like. It’s often just another symptom of our fragmentation. Sites team up with companies for synergy.

I like the idea of these personal blogs being islands of authenticity. I try to be positive in general, but that’s a personal bias. We’re all in this together, so my aim has to be to inform and teach a bit so we all do a bit better.

Abstract

For a very long time I had the practice of putting an image at the top of each post here, as decoration for the most part, since my images carry very little semantic content. This is another image from the San Francisco trip. I continue to think about the contradictions inherent in abstract photography, but I’ve concluded that the answers lie in making more images, not in contemplation.

The Bedrock of Knowledge

I’ve enjoyed Scott Young’s writing since he’s the kind of interested amateur who dips into all kinds of areas without committing to professional work. So it was interesting to read his impression of literature research: What if You Don’t Feel Smart Enough?

The expectation is that as you learn more and more, you’ll eventually hit a bedrock of irrefutable scientific fact. Except usually, the bottom of one’s investigation is muck. Some parts of the original idea get sharpened, others blur as more complications and nuance are introduced.

And it’s true that it’s not well appreciated how tentative scientific explanation is as new areas are explored. It’s been exciting for me to watch COVID-19 science develop in real time, so quickly. Yes, scary and polarized in ways that we generally don’t see in medicine, but a predictable back and forth on the properties of the virus, its propagation, and treatment.

We generally know what we know

Scott misses the important point that there is a bedrock of knowledge; the literature just doesn’t bother to discuss it. In neuroscience, the basic physical architecture and cellular makeup of the brain was established with great clarity over the last 100 years or so. As new techniques have been introduced, new areas have opened up and taken a while to settle into bedrock, but much of that is done now. In fact, my first published paper, in 1983, was part of a major chapter in that story, when labs used retrograde tracing techniques to map brain connections. My paper established the identity of all of the areas that sent connections to the motor trigeminal nucleus in the rat. That’s the collection of motor neurons that innervate the jaw-closing muscles.

We’re in an interesting era where cognitive science is successfully exploring its underlying neuronal circuitry. As is typical, the process is messy, but the picture is getting filled out, even in some very tricky areas like working memory and perception.

It’s of little importance to my day job in drug development at this point, but these are the kinds of questions that sparked my interest in brain science at the beginning. So while I look on as a spectator, I’m spending time reading papers and developing at least a superficial understanding of the techniques and progress.

Building models to explore the unknown

Neuroscience Twitter is a great resource for keeping up with trends across cognitive science. Case in point: I’m reading through Bayesian models of perception and action, a draft of a book by Wei Ji Ma, Konrad Kording, and Daniel Goldreich, to be published by MIT Press. I’ve been dipping into papers published by the three authors to get a feel for the deeper applications of the approach. I learned about it on Twitter.

I think this is an important area to watch. I’ve talked about the idea that the brain, in order to control behavior, has to contain a model of the system. One approach is to create computer models of circuitry based on the connectivity and activity observed in animals when these systems are active. If some models can reproduce the brain activity, then they are candidates for hypothesized mechanisms and can be used to make predictions about how the real neural circuits behave. Think of it like a physicist using equations to model physical laws and then testing the predictions from those equations against new observations. Except that for the brain we don’t have any such equations, so we use the immense computer power at our disposal to do the same kind of abstraction as the physicist.

Just as the equations of physics describe reality but aren’t reality, these neural models describe little bits of the brain; they aren’t thinking. But interestingly, some of these brain-inspired models can be put to work on real-life tasks like image or speech recognition, because they escape simple algorithmic approaches to analysis and classification.
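
Here’s a toy sketch of that model-comparison logic, entirely my own and far simpler than anything in the real literature: simulate a small rate network with a hypothesized connectivity matrix, score how well it reproduces a “recorded” activity trace (synthetic here, purely for illustration), and check that it beats a control model with shuffled connections.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 8, 500, 0.01

W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # hypothesized connectivity
inputs = np.sin(np.linspace(0, 8 * np.pi, steps))    # shared external drive

def simulate(weights, noise=0.0):
    """Leaky rate dynamics: dr/dt = -r + tanh(W r + input)."""
    r = np.zeros(n)
    trace = np.zeros((steps, n))
    for t in range(steps):
        drive = weights @ r + inputs[t] + noise * rng.standard_normal(n)
        r = r + dt * (-r + np.tanh(drive))
        trace[t] = r
    return trace

recorded = simulate(W, noise=0.5)                    # stand-in for measured activity
candidate = simulate(W)                              # the candidate model's prediction
control = simulate(rng.permutation(W.ravel()).reshape(n, n))  # shuffled connections

def score(model, data):
    """Correlation between simulated and 'recorded' activity."""
    return np.corrcoef(model.ravel(), data.ravel())[0, 1]

print(f"candidate model vs recording: r = {score(candidate, recorded):.2f}")
print(f"shuffled-connectivity model:  r = {score(control, recorded):.2f}")
```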

The Coming Knowledge Work Salt Mine

It dawned on me yesterday that, just like the developing “Creator Economy,” we’re seeing the creation of the “Knowledge Worker Industry.”

A recent episode of Mac Power Users was full of the usual interesting workflow ideas and productivity hacks. But then it got real serious for me real fast.

You see, Sean McCabe has for years been using automation and workflow tools to improve his own productivity. Now he’s using them to gleefully industrialize knowledge work. His current company takes podcasts and videos and hacks them up into nice promo bites for social media. It’s the kind of work that takes some editing skill, editorial ability, and thought. If you can get that down to a process, an assembly line, you have knowledge workers sitting in front of a screen doing industrial line work: get an assignment, process it, send it down the line, and get the next job on the line. It sounds so nerdy and innocent: Mac Power Users #613: The Future of Work, with Sean McCabe

David and Stephen talk with Sean McCabe about how he runs his businesses from what can only be described as a Mac battle station while stitching together macOS apps and several cloud services to be more productive.

But I can see how the productivity hack industry can be used to maximize the productivity of knowledge workers. And it brought to mind Cal Newport’s vision of a world without email. Rather than summarizing Cal’s latest book, here’s an interview where he makes the point: Cal Newport on an industrial revolution for office work

On Cal’s account, those opportunities are staring us in the face. Modern factories operated by top firms are structured with painstaking care and two centuries of accumulated experience to ensure staff can get the greatest amount possible done.

Our productivity grew 50x in the 20th century. Why? Because, the early 20th century is when we got really serious about process engineering. Like hey, wait a second. If we use an assembly line, we can build cars better. We really started to get serious about building things as a process we could get better and better at. And as he underscored, he’s like, 50x growth is almost inconceivably large.

I’m a highly paid knowledge worker because of my 40-plus-year career in science, medicine, and drug development. I work in a company that Cal thinks is a “hyperactive hive mind,” when what it really is is an organization of shifting expert teams loosely tied together around the world by a combination of asynchronous (email, document sharing) and synchronous (Teams, Zoom, teleconferences) communication, in which I leverage my knowledge by contributing in dozens of different ways every day.

This is how teams work. I understand that Cal, as an academic, wants to be left alone and leave the operational stuff to a theoretical knowledge industry. But I see that as a digital salt mine: assembly-line knowledge work like Sean’s company, where creativity and improvisational problem solving go to die.

Understanding through teaching

Just processing notes and blog posts has clarified some of my recent interests. In neuroscience, it’s issues of mapping, perceptual decisions and genetic influence on cognition. Then there’s fitness and workflow and some tech news discussion.

One of the functions this casual blogging serves is providing a way for me to write in short form about some of the big ideas in the ODB manuscript. I spent some time editing the section on chaos, complexity, and emergence. The subject is top of mind right now, so I’ve been tending to link to related stories on the net. But I’m hoping that my brief descriptions here help to clarify my thinking and improve the presentation I’m putting in the manuscript.

I’ve also been dipping into Twitter conversations a bit more often, since Dave Rogers says it’s the Town Square of the internet. It has seemed a more comfortable place to be over the last few months.

If I can’t describe concepts of ecology and systems here or on Twitter in a few lines, I doubt I’ll do much better in the long-form manuscript. This clarifying of understanding by teaching goes back to the ancient world. It’s often attributed to physicist Richard Feynman, but Feynman actually described learning by synthesizing the essentials of a subject in a notebook rather than reading a textbook. That’s closer to the Zettelkasten concept of synthesis than to blogging to understand. Hence, Zettelblogging, to capture both the synthesis and the teaching.

Ecologies are stable until they’re not

Systems Theory is central to my approach to understanding decision making, whether looking at mental activity or brain function. Systems Theory shows us how uncertainty arises even in fully deterministic systems. When cause and effect feed back on each other and small changes produce non-linear effects, the future behavior of a system becomes harder to predict, because linear correlation and simple cause and effect lose their explanatory power.
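
A one-line deterministic rule with feedback makes the point. This little sketch (mine, not from the manuscript) iterates the logistic map from two starting points that differ by one part in a billion; within a few dozen steps their trajectories have nothing to do with each other.

```python
# Deterministic but unpredictable: the logistic map x -> r * x * (1 - x).
r = 3.9                                # parameter in the chaotic regime
x, y = 0.400000000, 0.400000001        # starting points differ by 1e-9

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: difference = {abs(x - y):.2e}")

# The rule is fully deterministic, yet long-range prediction is hopeless
# without impossibly precise knowledge of the starting state.
```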

More often than not, I look at complex systems through the filter of ecology. Ecology is the term we use for the study of interacting biological organisms and their habitat in the real world. But by analogy, this mode of thinking becomes useful in thinking about how our brains interact with other brains and the environment. And of course we like to think of technologies, like Apple products and services, as being an ecosystem where users, devices, and information interact in a system.

Ecologies, like the organisms that live in them, are generally resilient. If they weren’t, they wouldn’t last long enough to be recognized as ongoing, functioning systems. But that’s not to say that ecologies don’t change over time in response to external inputs or changes in the environment.

A nice example comes from Ed Yong, whose writing I treasure: an ecology of whales, krill, and the ocean floor. Introduce man and the ecology collapses. The system is hurt but not gone, and it may be possible to restart it.

To Save the Whales, Feed the Whales – The Atlantic

Just as many large mammals are known to do on land, the whales engineer the same ecosystems upon which they depend. They don’t just eat krill; they also create the conditions that allow krill to thrive.

That’s the key to ecologies. They are constructed and maintained by their participants: all the animals in the environment have to reach a stable balance to persist over time. With change, a new stable state may be reached if the system persists. It may be diminished or barely recognizable, but it changes until it comes to rest in some new set of relationships.

Yong understands ecology and evolution, making his writing rich and deep. By the way, that’s a pointer to the article where it appeared in The Atlantic, behind a paywall. I read his writing in the Apple ecosystem, in Apple News+. As a source that enriches my information environment, Apple provides a great service.

As an ecology, I can’t see how these paywalls and Substack subscriptions reach a long-term stable state. We used to have newsstands where you could dip into a copy of The Atlantic for a small price. You got to read The New Yorker or Reader’s Digest in the doctor’s or dentist’s waiting room. Sure, once you have a readership you can move behind a paywall. But once the NYT and Washington Post have aggregated all the readers behind their paywalls, they become just another monopoly like Facebook, YouTube, and Google.

Not a stable information ecosystem.


By the way, I’m glad I read the Foundation Trilogy before starting the Apple TV+ series. They are very different works telling related stories.

The Apple series is way more coherent and focused. They’ve collapsed lots of Asimov’s threads into a real fabric, but along the way it has become much more conventional modern sci-fi than Asimov’s experimental imaginings, which made less sense but were wilder, with huge story gaps.

The Apple show introduces some of the fantasy elements that Asimov added only in the second book and amplified in the third. Just so you know, I can enjoy stories with mind reading and mind uploading, but I find it much more likely that antigravity and faster-than-light travel are possible than that neuronal networks can ever be instantiated in computers or that their activity can be read out externally. But all fiction requires suspension of disbelief.