Granger Causality and Emergence in the Brain

(Note: Another publication out of my Zettelblogging Tinderbox file. This comes from notes on reading a paper in the scientific literature. I’m seeing that my notes need to be cleaned up a bit for publication, even when they are written to be understood by future me, since current you may need a bit more help understanding terms and logic. Plus, I have links in Tinderbox to files like PDFs in other apps. Those links need to be taken out or redirected to web sources. Publishing notes takes a few extra steps.)

See: Granger Causality

This is a summary of a nice review of Granger Causality: “Wiener-Granger causality: a well established methodology” by Bressler and Seth, 2011.

Even though Granger Causality is quite limited in its utility, it’s a good starting point for understanding how to view cause and effect in complex systems. As a method, it only works with linear models, where any input causes a proportional effect. We know that most of the world isn’t linear; exponential, non-linear effects are the rule in the real world rather than the exception.

Granger Causality also requires the time series to be stationary, that is, its statistical properties must not change over time. Over shorter intervals, some complex systems may be stable, but again, the nature of complex systems is to change and be unpredictable over time. That’s what makes prediction hard, so it’s not surprising that assigning causal effect would also be hard.

And finally, this kind of analysis can’t account for hidden variables. We might measure Y and see whether it predicts future states of X, but it’s entirely possible that a hidden variable Z is the real driving factor behind both, so we mistakenly say that Y causes X because we were entirely ignorant of the real significance of Z.

The more general approach is called “Transfer Entropy”, based on time-asymmetric information flow. It is nonparametric, using Shannon entropy to measure the amount of information passed between two processes. It can be used when the Granger assumptions (linearity, stationarity) don’t hold, as it is a generalization of Granger’s autoregressive method.
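
To make that less abstract, here is a minimal plug-in estimator of transfer entropy in Python. The function name, the lag-1 history, and the two-bin discretization are my own simplifying choices for illustration, not anything from the Bressler and Seth review:

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in estimate of transfer entropy TE(Y -> X) in bits.

    Discretizes both series, then sums
    p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]
    where x1 = X(t+1), x0 = X(t), y0 = Y(t).
    """
    # Equal-width discretization into `bins` symbols
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    x1, x0, y0 = xd[1:], xd[:-1], yd[:-1]

    te = 0.0
    triples = np.stack([x1, x0, y0], axis=1)
    for a, b, c in set(map(tuple, triples)):
        p_abc = np.mean((x1 == a) & (x0 == b) & (y0 == c))  # p(x1, x0, y0)
        p_bc = np.mean((x0 == b) & (y0 == c))               # p(x0, y0)
        p_ab = np.mean((x1 == a) & (x0 == b))               # p(x1, x0)
        p_b = np.mean(x0 == b)                              # p(x0)
        te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.5 * rng.normal(size=5000)  # X copies Y with a one-step lag

te_forward = transfer_entropy(x, y)  # information flows Y -> X: clearly positive
te_reverse = transfer_entropy(y, x)  # little flow X -> Y: near zero
```

Dedicated packages handle bias correction and longer histories; this sketch only shows the bare definition.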

But if you have time series and want a description of effective connectivity, then Granger Causality may be a good method.

There are lots of time series in neuroscience, like EEG, neural spike trains, and fMRI. We can look at causal interaction between brain areas or between different types of data. For example, we might want to predict behavior from spike train recordings of individual neurons. If the data contains predictive information beyond past events plus everything else, then it is causal in this G-causality sense.

If neural activity precedes and predicts an event, like the reporting of conscious perception, it shows “Granger Causality”. This is bottom-up, weak emergence: we can say that the neural activity caused the behavior even though we know that the pure physical causal chain was at a lower level. With a coarser-grained analysis, brain activity causes behavior and subjective experience.

This is a first step in linking causality to emergence.

Granger Causality

(Note: What follows is an example of a topic note in my Zettelblogging Tinderbox file. I was able to drop it into the revision of the ODB manuscript pretty much as is. I’m posting it here as an example, pending building out a way to more directly publish these notes on a dedicated Zettelblogging site.)

Clive Granger won the 2003 Nobel Prize in Economics for the idea we know as Granger Causality. Causality seems intuitively obvious when a system can be explicitly understood. But in complex systems, or systems that appear to us as a black box (like the brain), how do you define cause and effect?

In the early 1960’s, Granger was looking at how two time series could seem to be related over time. Did one cause the other? Norbert Wiener had suggested that a causal relationship could be defined simply by prediction: if series Y together with series X predicts the future of X better than X alone, then Y causes X.


This is causality defined purely on the basis of predictive information, with the predictor a possible explanatory variable coming before the effect in time. Granger expanded it to say that:

If you have Xt, Yt and Wt and try to forecast Xt+1 from Xt and Wt, and if Xt, Wt and Yt together prove a better prediction than Xt and Wt alone, then we can say that Yt provides some predictive information. Think of W as what you know about the world in general (which should be really large, reflecting everything you know). If adding the very specific Yt predicts better than X plus W alone, then Yt is passing a stringent test of containing information that we can call “causal”.
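
That forecasting comparison is easy to sketch numerically. Below is a toy example, with names and coefficients of my own choosing, where X is driven by its own past, by W, and by Y; adding Y’s past to the regression shrinks the forecast error, which is exactly the G-causality signal:

```python
import numpy as np

def residual_var(target, predictors):
    """Least-squares regression; return the variance of the forecast errors."""
    A = np.column_stack([np.ones(len(target))] + predictors)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

rng = np.random.default_rng(1)
n = 2000
w = rng.normal(size=n)   # W: everything else you know about the world
y = rng.normal(size=n)   # Y: the candidate cause
x = np.zeros(n)
for t in range(n - 1):   # X(t+1) is built from its own past, W, and Y
    x[t + 1] = 0.5 * x[t] + 0.3 * w[t] + 0.8 * y[t] + 0.2 * rng.normal()

x1, x0, w0, y0 = x[1:], x[:-1], w[:-1], y[:-1]
v_without_y = residual_var(x1, [x0, w0])    # forecast from Xt and Wt alone
v_with_y = residual_var(x1, [x0, w0, y0])   # forecast adding Yt
gc = np.log(v_without_y / v_with_y)         # > 0: Yt carries predictive information
```

The log ratio of residual variances is one common way to quantify G-causality; a statistical test would then ask whether that ratio is larger than chance.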

Granger created analysis methods for time series, using earlier events to predict later events. He had created a systems definition of causality based on information. It’s a weak causality, as it is not understood mechanistically, so we like to refer to it specifically as Granger Causality, sometimes G-Causality.

Writing about doing

One of the aims of my writing here is to reflect on the work, both the process and the substance. While I certainly appreciate all of the work by those who just write about the tools and workflows, I value even more those who do work and bring insight from the real worlds of academics, science, and programming to the discussion. I could cite a long list of old-school bloggers with day jobs who write online to contribute to the community without seeking to make it a full-time occupation. I know I’ve mentioned Cal Newport in this regard, but it includes podcasters like Sean Carroll as well. If Sean gave up being a theoretical physicist for just writing and podcasting, I think his contribution would be diminished.

All by way of mentioning that David Sparks, MacSparky, is no longer practicing law. While I can’t fault him for devoting more time to what he loves, his role as writer and practicing lawyer brought a level of real-world domain expertise to his online output. Now I fear the work becomes meta-work: talking about how to talk about the tools that others use for work. I’ve seen this in the photography world. There’s a difference in the value that the working photographer brings to an online presence compared to the hobbyist photographers who make a living writing about the process and tools of photography. I wouldn’t go so far as calling it navel gazing, but it has that self-referential quality, as the work serves no outside purpose beyond being online content.

My advice is always to do the work. Then use writing about the work as a way to improve the work.

Journaling Academic Work

I’ve realized that there is a gap in my casual blogging since I’ve started regular posting here. I’ve written about technology and workflow for the most part, just touching at times on neuroscience and philosophy.

When I started this site back in late 1999 at Dave Winer’s, I explicitly excluded writing about family and work. The internet was for everything else, we decided here at home. Working at a public company at the time, I had an intuition even then that public and private could collide online, with unpleasant consequences.

On the other hand, I think I’ve been neglecting discussions about basic neuroscience. These are exciting times as computational techniques are bringing together computer models of brain function and data collection from functioning brains. I started my research on the brain in the early 80’s at a time when knowledge of basic brain anatomy, neurotransmitters and receptors was exploding. Brain circuitry was being mapped with detail that was astonishing. We were beginning to understand the fundamentals of brain circuitry as building blocks.

Then, as I was exiting for my career in drug development, molecular biology was taking center stage. It was an era of more granular exploration of gene expression and cellular messaging that I found was moving away from the work on structure and function that had led me into neuroscience in the first place.

While that molecular approach has certainly paid off handsomely in understanding human neurological disease, it is computational approaches that seem to be where the excitement is. For me, it’s way more interesting than reading about the latest collection of messaging cascades and molecular explanations of neuronal function. We seem to be gaining fundamental insight into how interactions in brain circuitry lead to behavior.

Sadly, in my reading of Neuroscience Twitter today, I learned that David Linden, a Professor at Johns Hopkins and a contemporary in the Department when I was on the faculty, has been diagnosed with terminal cancer, writing about it in the Atlantic.

I’m simultaneously furious at my terminal cancer and deeply grateful for all that life has given me. This runs counter to an old idea in neuroscience that we occupy one mental state at a time: We are either curious or fearful—we either “fight or flee” or “rest and digest” based on some overall modulation of the nervous system. But our human brains are more nuanced than that, and so we can easily inhabit multiple complex, even contradictory, cognitive and emotional states.

In the article, he also marvels at our recent advances:

Now we know that rather than merely reacting to the external world, the brain spends much of its time and energy actively making predictions about the future—mostly the next few moments. Will that baseball flying through the air hit my head? Am I likely to become hungry soon? Is that approaching stranger a friend or a foe? These predictions are deeply rooted, automatic, and subconscious. They can’t be turned off through mere force of will.

The reminder is always there, as we learn in Pirkei Avos: “Rabbi Tarfon said: The day is short, the work is great, the workers are lazy, the reward is great, and the Master of the house presses.”

I take it as a push to bring a bit more brain science here.

Publishing Your Notes

Now that I have my Zettelblogging workflow down and a Tinderbox file with about 100 notes in it, I’m thinking about next steps. That is, besides staying current with the workflow and creating index and summary notes as areas get built up a bit.

I’ve been thinking about publishing the output of the process. I’m journaling at the front end, collecting links and making notes on the information. But once I’ve synthesized some thoughts on a few months of Zettelblogging, what’s the best way of publishing the result?

The simplest idea is to publish the summary note. But the summaries in Tinderbox link to other topical notes, not back to the original journal posts here on the blog. It makes sense because I treat these posts as casual and temporary. It’s the notes file that I’m curating and organizing.

That all leads to the idea of publishing the summary note with all of its supporting notes. Most of the note-taking software that supports links between notes can output an HTML version of the notes, respecting the links and transforming them into relative HTML links. Tinderbox can do this, of course. In fact, many years ago I put together a website to publish notes. DEVONthink also can do this, and can actually run a server so that notes can be read via browser on the local website. I’m sure this can be done with some of the new crop of “tools for thought” as well.
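
As a rough illustration of the kind of transformation involved (I’m assuming wiki-style [[links]] here as a stand-in for whatever link markup a given tool exports; the function and slug scheme are mine), here’s how note links might be turned into relative HTML links:

```python
import re

def note_to_html(text, known_titles):
    """Turn [[Wiki Link]] markers into relative HTML links.

    Links to notes we don't have are left as plain text.
    """
    def link(match):
        title = match.group(1)
        if title in known_titles:
            slug = title.replace(" ", "-").lower()
            return f'<a href="{slug}.html">{title}</a>'
        return title

    body = re.sub(r"\[\[([^\]]+)\]\]", link, text)
    return f"<html><body><p>{body}</p></body></html>"

titles = {"Granger Causality", "Transfer Entropy"}
html = note_to_html("See [[Granger Causality]] and [[Missing Note]].", titles)
```

A static-site build would just run something like this over every note in the folder, preserving the folder structure in the output.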

This idea of working notes in public certainly isn’t new. It’s related to journaling, but in line with the philosophy of taking notes that have more durable use and curation of ideas over time.

I’m shopping tech to implement now. Clearly the first step is publishing HTML versions of finished notes with their links and folder structure intact. Sadly, WordPress, this site’s platform, is just too beholden to its origins in blogging and its themes. The approach I need is really a static site, where the site is built anew out of the current set of notes, either right out of Tinderbox, like Dave Rogers and Mark Anderson do, or using one of the fancy new static site builders like Jekyll or Hugo. We shall see.

Zettelblogging Combines Capture, Journaling, and Reference

As much as we like to complain about distraction and the evils of social media algorithms, I think we need to recognize just how information rich our environment has become. David Allen was inspired to write Getting Things Done at the beginning of the connectedness revolution in the mid 90s when email and platforms like Lotus Notes and GroupWise started creeping into the work environment. (And can you believe that both platforms still exist 30 years later?)


The first step of GTD is capture. I think the explosion we’re seeing in note-taking apps is a reaction to the proliferation of information channels we are tuned into. I love Neuroscience Twitter and Philosophy Twitter. I have a set of podcasts that send me off in interesting directions. And RSS feeds and our blogging revival are now growing information flows once again. Did I mention Reddit? And a handful of old-fashioned message boards?


As I’ve chronicled here over the last few months, when I started using Drafts as my main capture tool, I found myself with a better inbox system that led directly to this casual approach to blogging. I have a stream of inbox notes in Drafts.

Now I’ve learned that if I take some time off, my inbox gets backed up and it’s a project to get it back under control. But the process of working through it is modeled on my process for getting my email inbox empty. I look at each note and evaluate it as a potential entry in the journal here. This entry, for example, came from a note that just said this:

  • Capture
  • Journal
  • Explode

I jotted that down when thinking about the workflow. But now I’ve expanded that thought into the idea of capture followed by journaling. And as I’ve mentioned, once I write a journal post like this in Drafts, I publish directly to WordPress. There I review the formatting and publish without trying to edit to perfection. This is a casual writing flow.

Some of the notes don’t get journaled, as they are project-oriented work. These get filled out similarly into usable notes if they are telegraphic, and they, along with the posts, get pushed to the DEVONthink database.

Explode and Edit Notes

This final step occurs in DEVONthink and is the Zettel part of the Zettelblogging. I went back and forth about how to write journal posts that were Zettels, but realized that the purpose of this narrative form and the purpose of that note-keeping form were just too different to consolidate. The natural order seems to be to narrate what I’m seeing and thinking about, then, as a second step, abstract out from the journal what I want to preserve as reference. For example, out of this entry I’ll probably edit down to a simple explanation of the Capture, Journal, Explode concept with some notes as to how I arrived at the conclusion.

Save Reference Notes

One final step.

I’ve learned over 20 years of writing at this site that a blog is not a good way to keep notes, even if it’s searchable by Google or a tool like Dave Winer’s new Daytona, which loads a site into a MySQL database for local search. Since WordPress runs off a MySQL database to begin with, I have decent search functionality right at the top of the site, which I use from time to time to find an older post to reference.

But a dedicated notes database in DEVONthink or Tinderbox is a different, curated reference library. It’s the equivalent of my old file cabinet full of the papers I copied at the library for research and my notes torn from the yellow legal pads I used to use for note taking from high school all the way through my years on a University Medical School faculty.

I’ve said it many times before: my current workflows are just digital refinements of those xeroxed papers and sheets from legal pads. Whenever I stray too far from those habits, I tend to spend too much time on the tools and less than I should on doing the work.

Editing progress, emergence, prediction

So it’s been a month away from posting here. Time flies.

After Thanksgiving, I had a break for some important family activities, but on that break I actually got back to editing my manuscript. I finished the first draft back in June and started the first round of editing. I’ve been helped by following the guidance of Tucker Max at Scribe Media in my writing. In his editing method, the first pass is a “Make It Right” edit, where you make sure everything is there and it makes sense.

For me, that includes some pretty big chapter reorganizations and filling out some key introductory discussions in the first three chapters. Toward the end of the third chapter, discussing where uncertainty comes from, I realized that there wasn’t a really good discussion of emergence and its role in making complex systems both unpredictable and at the same time understandable. Depending on how you look at it, Sean Carroll had Anil Seth on his podcast, which has resulted in a few weeks delving into Seth’s and others’ interesting approaches to formalizing the idea of emergence in complex systems, including ideas around simulation, compressibility, and Granger Causality.

Plus, in preparation for editing the next chapter, on the nature of probability, I started to approach a next-level appreciation for Bayesian inference and its relation to brain mechanisms. Our perception is an active process where incoming sensory data either matches or doesn’t match the brain’s current model of the world. In other words, we experience a hypothetical world, a set of beliefs that, in the language of probability, is a Bayesian prior.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
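
That trade-off between prior and evidence can be sketched with the textbook conjugate Gaussian update. The numbers below are arbitrary, chosen only to show that the noisier, more ambiguous the sensory evidence, the more the percept stays near the prior:

```python
def posterior_mean(prior_mu, prior_var, obs, obs_var):
    """Conjugate Gaussian update: a precision-weighted average of prior and data."""
    w_prior = 1.0 / prior_var   # precision of the prior belief
    w_obs = 1.0 / obs_var       # precision of the sensory evidence
    return (w_prior * prior_mu + w_obs * obs) / (w_prior + w_obs)

prior_mu, prior_var = 0.0, 1.0   # the brain's current model of the world
obs = 4.0                        # incoming sensory evidence

clear = posterior_mean(prior_mu, prior_var, obs, obs_var=0.1)   # sharp input
vague = posterior_mean(prior_mu, prior_var, obs, obs_var=10.0)  # ambiguous input
# clear ≈ 3.64: the percept is dominated by the data
# vague ≈ 0.36: the percept stays close to the prior belief
```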

Some important new results comparing machine learning algorithms with neural mechanisms started me reading some of the new literature on cortical analysis and representation, an area that is really making progress, as summarized in this article in Wired:

Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models show some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.

So, speaking of computation: unlike metaphors like your brain is “a switchboard” or “a computer”, it seems we’re converging on an understanding from two different directions, rather than just using current technology to describe brain function.

Since the idea of writing the manuscript is to collect my own thoughts, I can’t be too hard on myself in trying to make sure it’s all there. I have no deadlines or pressing need to get this out there. It’s a commitment to the process, not the product.

It’s a very long-term project for me, and as David Perell recently wrote:

Long story short, commitment is undervalued.  So here’s how I suggest responding to this trend: whatever your tolerance for commitment is, raise it.  If today you’re comfortable committing to something for two hours, try committing for a weekend. If you’re comfortable committing for two weeks, then raise it to two months; once you’re comfortable with two months, raise it to two years; and once you’re comfortable with two years, raise it to two decades. It’s okay to start small. All big things do. But they have to start somehow and with commitment comes momentum. Commitment happens in stages, and only by embracing it can you stop hugging the X-Axis and climb the compounding curve.

The Explanatory Power of Convergent Models

There’s interesting research emerging comparing our ever-improving machine learning models to data generated from brains. Not surprisingly, we do best at replicating what the brain can do when the computer models begin to model the underlying neural machinery. Now the substrate is entirely different, but the predictive approaches appear to be similar. We used to think that the brain was creating representations of the world, with features being abstracted at each level of cortical processing.

The problem that everyone saw from the beginning with this concept is that there’s no little man in the theater of the mind to look at the representations. Instead, the brain is representing hypotheses, and these predictions are constantly updated by the stream of incoming sensation from the world.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.

And so too with language: the brain guesses what word comes next, providing a fixed interpretation when we hear unclear or ambiguous speech. Of course, we often guess wrong, famously with song lyrics (e.g. “There’s a bathroom on the right”), mistakes known as Mondegreens.
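
A toy bigram counter, in no way a model of cortex, shows the flavor of that next-word expectation. Trained on the real lyric, it expects “rise” after “the”, which is exactly the expectation the Mondegreen violates (the lyric string and function names are my own illustration):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word pairs: a crude table of expectations for the next word."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the most expected next word, like the brain filling in a lyric."""
    following = model[word.lower()]
    return following.most_common(1)[0][0] if following else None

lyric = "there's a bad moon on the rise on the rise on the right"
model = train_bigram(lyric)
# predict_next(model, "the") favors "rise" (seen twice) over "right" (seen once)
```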

I’m beginning to appreciate just how important this is, as our ability to look at brain activity improves just as our computational ability to create these models begins to match it. We’re not recreating higher brain function from the bottom up by understanding circuits and connections, but instead from the top down. Perhaps that’s not surprising, as this is how physical sciences like chemistry and physics have advanced. They create formulas and equations, mathematical models of the world, that have remarkable predictive powers. Once systems get too complex, these methods seem to fall apart and numerical simulation is needed. Nevertheless, when those models start converging on the behavior of the real thing, they seem to tell us about what’s actually going on in the complex system being modeled. Truly a remarkable time for brain science.

Hard to be a saint in the city

I hadn’t really thought about how our social media environment might affect music and art criticism until I read this Eleanor Halls interview:

Where do you see music journalism headed?

I think we need to have honest conversations about the role of music journalism and whether much of it still has any value. I worry that music journalism—interviews and reviews—is becoming PR to some musicians. Most journalists are freelance and don’t have the support of editors or publishers, and rely on publicists for talent access so they can get work. It’s no wonder they often feel too intimidated by an artist and their team to write what they really think.

There’s always been a bargain between critics and artists regarding access and cooperation. It’s only natural that an artist would share insights with a sympathetic journalist and not one who has little enthusiasm for the style or approach of the artist. Personal relationships have always played a big role in what we read as criticism and commentary.

While some nasty letters from fans may have been the price a critic paid for publishing a negative take on something, I can see how the amplification of opinion in social media makes the pressure way more real. And without a publication behind them, freelance writers are much more dependent on these relationships for access to artists, creating a competition to curry favor with creators and their fans.

I think it’s true that the tone of discussion across the internet tends to be more promotional than print publications ever were. Editorial independence is lost. I don’t think it’s even a real bias, necessarily, but a function of writers choosing to write about what they like. It’s often just another symptom of our fragmentation. Sites team up with companies for synergy.

I like the idea of these personal blogs being islands of authenticity. I try to be positive in general, but that’s a personal bias. We’re all in this together, so my aim has to be to inform and teach a bit so we all do a bit better.