(Note: What follows is an example of a topic note in my Zettelblogging Tinderbox file. I was able to drop it into the revision of the ODB manuscript pretty much as is. I’m posting it here as an example, pending building out a way to more directly publish these notes on a dedicated Zettelblogging site.)
Clive Granger won the 2003 Nobel Prize in Economics for the idea we know as Granger Causality. Causality seems intuitively obvious when a system can be explicitly understood. But in complex systems, or systems that appear to us as a black box (like the brain), how do you define cause and effect?
In the early 1960’s, Granger was looking at how two time series processes could seem to be related over time. Did one cause the other? Norbert Wiener had suggested that a causal relationship could be defined purely in terms of prediction: if series Y together with series X predicts the future of X better than X alone does, then Y causes X.
This is causality defined purely on the basis of predictive information, with the predictor being a possible explanatory variable that comes earlier in time. Granger expanded it to say that:
If you have Xt, Yt and Wt and try to forecast Xt+1, and forecasting from Xt, Wt and Yt proves better than forecasting from Xt and Wt alone, then we can say that Yt provides some predictive information. Think of W as what you know about the world in general (which should be really large, to reflect everything you know). If adding the very specific Yt does better than X plus W alone, then Yt is passing a stringent test of containing information that we can call “causal”.
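To make the definition concrete, here is a minimal sketch in Python (using only numpy; the simulated system, coefficients, and variable names are my own illustration, and the “everything else you know” term W is left out for simplicity). It compares the error of forecasting Xt+1 from X’s own history with the error when lagged Y is added:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system in which Y "Granger-causes" X:
# X(t+1) depends on X(t) and Y(t); Y itself is just noise.
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = 0.5 * x[t] + 0.8 * y[t] + rng.normal(scale=0.5)

def forecast_error(target, predictors):
    """Least-squares fit of target on predictors; return residual variance."""
    A = np.column_stack([np.ones(len(target))] + predictors)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ coef
    return resid.var()

x_next = x[1:]   # X(t+1)
x_lag = x[:-1]   # X(t)
y_lag = y[:-1]   # Y(t)

err_without_y = forecast_error(x_next, [x_lag])
err_with_y = forecast_error(x_next, [x_lag, y_lag])

print(f"residual variance, X alone:  {err_without_y:.3f}")
print(f"residual variance, X plus Y: {err_with_y:.3f}")
# If adding Y(t) meaningfully shrinks the forecast error, Y carries
# predictive information about X's future -- the core of G-causality.
```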
Granger had created methods of time series analysis that use earlier events to predict later events. He had created a systems definition of causality based on information. It’s a weak causality, as it is not understood mechanistically, so we like to refer to it specifically as Granger Causality, sometimes G-Causality.
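In practice this test is available off the shelf. For example, statsmodels implements it as grangercausalitytests, which runs F-tests on whether lags of the second column improve prediction of the first; the simulated data and lag choice below are just illustrative:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# Same kind of toy system: Y(t) feeds into X(t+1).
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = 0.5 * x[t] + 0.8 * y[t] + rng.normal(scale=0.5)

# Column order matters: the test asks whether the SECOND column (Y)
# Granger-causes the FIRST column (X).
data = np.column_stack([x, y])
results = grangercausalitytests(data, maxlag=2)
# Small p-values on the F-tests indicate that lags of Y add
# predictive information about X beyond X's own history.
```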