Recent Reads and Watches

I’m with Dave. I enjoyed both Don’t Look Up and The Matrix Resurrections.

It took me a little while to find the time to watch them, but I enjoyed both. Don’t Look Up was some of the best social satire I’ve seen in a long while. In 20 years, if someone wants to know what living through the last few years in America has been like, I think this movie will be a nice cultural artifact capturing the impact of social media, post-truth media, and the divide between real life and public fantasy. Plus, it offers a depiction of religious belief with real sympathy, so rare in our media of public fantasy yet so pervasive in real life. It’s funny to me that the focus was so much on the climate change/comet metaphor. Maybe those who believe they have influence are upset at seeing how little power they actually have in the face of self-serving delusion.

As for The Matrix, I’ve been a fan and re-watched the trilogy before getting around to Resurrections. I’m glad I did, because the movie plays to the fans and those who know the canon. I always thought the movies were about the concept of “Savior,” with Neo a Christ-like figure somehow born to be the savior and giving his life for humanity, or at least what was left of it. The new movie is the logical second coming of the savior. I’ve read other metaphorical readings of the movies, but on rewatching I was struck once more by my earlier interpretation.

I finished William Gibson’s Agency this afternoon. Unfortunately, in this case I didn’t go back and reread The Peripheral, so there was the usual lost context from a first book read some years ago that I gradually pieced back together. A lesson I’ve learned and apparently forgotten: don’t start trilogies until they’re completed, or at least nearly so. Hopefully, when I get around to the final Expanse book, Leviathan Falls, I’ll still remember enough, since the previous book came out almost three years ago. When the third book in Gibson’s series comes out, maybe I’ll reread the first two as a warmup.

Gibson has created a unique approach in Science Fiction where he places stories in the present but, in these last two books, simultaneously portrays a future. Since these are branching timelines, to get around time travel paradoxes of course, it’s really like present and future as fantasy. But, maybe like Don’t Look Up it provides a more direct way to reflect on our times and where we may be going. Within the first pages of the book, we meet an AI arising from an interesting technology, much closer to the way I would see AI agency coming about than the spontaneous awakening stories or the mind upload yarns, none of which I find remotely plausible.

Working in the Background on Zettelblogging

While I’ve got the workflow down to capture through Drafts and build my collections in a DEVONthink database, I’m a bit stuck on the step of publishing the results.

On the one hand, the simplest way forward is to just craft posts and publish them right here on the WordPress DecidingBetter.com site. Using MarsEdit, I have a full list of blog posts going back as far as I want. With tags I can create collections of related posts for editing and linking, with a trivial way to repost my changes. The truth, which I already know, is that this is the preferred path.

This fits well with my realization that publishing and note taking have two different audiences. Publishing is for you, my imaginary, ideal reader: smart, curious, generally knowledgeable, but not an MD, PhD with decades of neuroscience, neurology, and philosophy work filling your head. The notes are for me, that guy who just needs to be reminded what I was thinking at the time.

But then I’m known to be over-ambitious. And I really want to create some friction-free ways of republishing after pulling together ideas in the Zettelblogging repository. So there’s an output of summary and index notes that is a well-reasoned, more complete version of the notes, more like lecture notes or a paragraph in a book or review. My last couple of posts on Granger Causality and Emergence in the Brain were those kinds of posts.

Basically what it means is that I want to take the linked notes in Tinderbox and turn them into a set of interlinked web pages. This fits in well with ideas of moving to static pages and simple website construction.

For example:
This Page is Designed to Last: A Manifesto for Preserving Content on the Web

Return to vanilla HTML/CSS – I think we’ve reached the point where html/css is more powerful, and nicer to use than ever before. Instead of starting with a giant template filled with .js includes, it’s now okay to just write plain HTML from scratch again. CSS Flexbox and Grid, canvas, Selectors, box-shadow, the video element, filter, etc. eliminate a lot of the need for JavaScript libraries. We can avoid jquery and bootstrap when they’re not needed. The more libraries incorporated into the website, the more fragile it becomes. Skip the polyfills and CSS prefixes, and stick with the CSS attributes that work across all browsers. And frequently validate your HTML; it could save you a headache in the future when you encounter a bug.

And Craig Mod is an advocate for simple, static website construction:

Running a Successful Membership / Subscription Program — by Craig Mod

I still generate my own website using the Hugo static-site tool. It has gotten a bit too complex over the years, though, and were I starting again today, I’d consider 11ty. My sites are hosted on a Digital Ocean vps (if you sign up with that link you get $100 in free credit (and I get a sweet $25, too)). After 15+ years, I stopped using Google Analytics and switched to Plausible for more privacy-friendly webstats. Fathom is also a good option with spectacular, heartwarming support.

For tools? I’ve been leaning very heavily on Drafts over the last few months. My web clippings land there from my iPhone, iPad, and Mac. So a simple idea would be to just push notes from Tinderbox back out to Drafts and publish as usual.

On the other hand, static sites want a folder-based set of content, which isn’t how Drafts is designed to work. I can keep the writing in Tinderbox and export it as text or even HTML. Maybe move back to text editors after export? BBEdit.
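If I did go the export route, the glue could be tiny. Here’s a sketch of the idea, with made-up folder names and minimal front matter, not a pipeline I actually have running:

```python
from pathlib import Path

# Hypothetical locations: a folder of notes exported from Tinderbox as plain
# text, and a Hugo-style content directory for the static site.
EXPORT_DIR = Path("tinderbox_export")
SITE_CONTENT = Path("site/content/notes")
SITE_CONTENT.mkdir(parents=True, exist_ok=True)

for note in sorted(EXPORT_DIR.glob("*.txt")):
    title = note.stem
    body = note.read_text()
    # Minimal front matter; a real pipeline would carry dates, tags, and links too.
    front_matter = f'---\ntitle: "{title}"\n---\n\n'
    (SITE_CONTENT / f"{title}.md").write_text(front_matter + body)
```

Hugo or 11ty would then rebuild the interlinked pages from that folder on every run.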

Obviously this whole Tools for Thought business goes back a ways. Maybe go old school?

How to Learn Emacs: A Hand-drawn One-pager for Beginners / A visual tutorial :: Sacha Chua

I thought I’d draw a one-page guide for some of the things that people often ask me about or that would help people learn Emacs (and enjoy it). You can click on the image for a larger version that you can scroll through or download.

Org-mode for Writing: Structure & Focus | The Aware Writer

Org-mode is a structured editor that combines the best features of a powerful outliner and a powerful editor in one package. I’ve been fooling with org-mode a lot lately, digging into capabilities, solving issues and fine tuning, always asking the question — is org-mode the best environment for my writing? The answer is an unqualified yes.

I see our friend Jack Baty, now using Tinderbox, is a long-time Emacs user:

Weaning Myself From Emacs and Org-mode – Jack Baty’s Weblog Archive (2000-2020)

Whenever I try moving out of Emacs I have to find replacements for all sorts of tools and processes. Things like task management, journaling, email, project notes, text editing, and general note taking are all things that I’ve been doing in Emacs for a while now and if I’m ever going to move away from it I’ll need to find replacements.

If Emacs is so powerful, maybe it’s my solution? Then again, I’m suspicious of “Theory of Everything” apps.

Monday Musings: MaxThink, The Only Idea Processor | The Aware Writer

At first glance, MaxThink is a powerful outliner, but the real power is under the hood. MaxThink came with a fat, printed manual that by some miracle, I still have. Neil’s book is more than a user manual for MaxThink. It’s a well written tutorial on ways of thinking: Evaluative thinking with the Prioritize command, synthesis thinking using Binsort and Randomize to combine information in new ways, curiosity or experimental thinking with the Lock command, systematic thinking using Get, Put and Gather, creative uses of the Sort command, and one of my favorites, segmented lists.

So for now, expect more of this working in public mode.

Granger Causality and Emergence in the Brain

(Note: Another publication out of my Zettelblogging Tinderbox file. This comes from notes on reading a paper in the scientific literature. I’m seeing that my notes need to be cleaned up a bit for publication even when they are written to be understood by future me, since current you may need a bit more help understanding terms and logic. Plus I have links in Tinderbox to files like PDFs in other apps. Those links need to be taken out or redirected to web sources. Publishing notes takes a few extra steps.)

See: Granger Causality

This is a summary of a nice review of Granger Causality: “Wiener-Granger causality: a well established methodology” by Bressler and Seth, 2011.

Even though Granger Causality is quite limited in its utility, it’s a good starting point for understanding how to view cause and effect in complex systems. As a method, it only works with linear models, where any input causes a proportional effect. We know that most of the world isn’t linear; exponential, non-linear effects are the rule in the real world rather than the exception.

Granger Causality also requires the time series to be stationary, that is, its statistical properties don’t change over time. Over shorter intervals some complex systems may be stable, but again, the nature of complex systems is to change and be unpredictable over time. That’s what makes prediction hard, so it’s not surprising that assigning causal effect would also be hard.

And finally, this kind of analysis can’t account for hidden variables. We might measure Y and see whether it predicts future states of X, but it’s entirely possible that some unmeasured Z is the real driving factor, loosely connected to X, so we mistakenly say that Y causes X because we were entirely ignorant of the real significance of Z.
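To make the mechanics concrete, here is a minimal sketch of a Granger test in Python using statsmodels, with a quick check of the stationarity assumption first. The simulated series and the lag choice are placeholders of my own, not anything from the Bressler and Seth review:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

# Simulate two stationary series where y drives x with a one-step lag.
rng = np.random.default_rng(0)
n = 500
y = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Check the stationarity assumption with an augmented Dickey-Fuller test.
for name, series in (("x", x), ("y", y)):
    p_value = adfuller(series)[1]
    print(f"ADF p-value for {name}: {p_value:.3f}")  # small p-value -> reject a unit root

# Does y help predict x beyond x's own past? Columns are ordered [x, y]:
# the test asks whether the second column Granger-causes the first.
grangercausalitytests(np.column_stack([x, y]), maxlag=2)
```

The test compares an autoregressive model of x’s own past against one that also includes lagged y; a small p-value means the lagged y terms add predictive information.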

The more general approach is called “Transfer Entropy,” based on time-asymmetric information flow. It is nonparametric, built on Shannon entropy, and measures the amount of information transferred between two processes. It can be used when the Granger assumptions (linearity, stationarity) don’t hold, since it generalizes Granger’s linear autoregressive approach.
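For reference (my own gloss, not a quote from the review), transfer entropy with a one-step history measures how much knowing the past of Y reduces uncertainty about the next value of X beyond what X’s own past already tells us:

```latex
T_{Y \to X} \;=\; \sum_{x_{t+1},\,x_t,\,y_t} p(x_{t+1}, x_t, y_t)\,
\log \frac{p(x_{t+1} \mid x_t, y_t)}{p(x_{t+1} \mid x_t)}
```

For jointly Gaussian data this reduces to the linear Granger measure (up to a constant factor), which is the sense in which it generalizes the method.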

But if you have time series and want a description of effective connectivity, then Granger Causality may be a good method.

There are lots of time series in neuroscience like EEG, neural spike trains, and fMRI.
We can look at causal interaction between brain areas or between different types of data. For example, we might want to predict behavior from spike train recordings of individual neurons. If the spike data contains predictive information beyond the behavior’s own past plus everything else we measure, then it is causal in this G-causality sense.

If neural activity precedes and predicts an event, like the report of a conscious perception, it shows “Granger Causality.” This is a bottom-up, weak emergence: we can say that the neural activity caused the behavior even though we know the actual physical causation happens at a lower level, but with a coarser-grained analysis, brain activity causes behavior and subjective experience.

This is a first step in linking causality to emergence.

Granger Causality

(Note: What follows is an example of a topic note in my Zettelblogging Tinderbox file. I was able to drop it into the revision of the ODB manuscript pretty much as is. I’m posting it here as an example, pending building out a way to publish these notes more directly on a dedicated Zettelblogging site.)

Clive Granger won the 2003 Nobel Prize in Economics for the idea we know as Granger Causality. Causality seems intuitively obvious when a system can be explicitly understood. But in complex systems, or systems that appear to us as a black box (like the brain), how do you define cause and effect?

In the early 1960s, Granger was looking at how two time series processes could seem to be related over time. Did one cause the other? Norbert Wiener had suggested that a causal relationship could be defined simply by prediction: if series Y together with series X predicts the future of X better than X alone does, then Y causes X.

This is causality defined purely on the basis of predictive information, with the predictor, a possible explanatory variable, coming earlier in time. Granger expanded the idea to say that:

If you have X_t, Y_t, and W_t and try to forecast X_{t+1}, and the combination of X_t, W_t, and Y_t gives a better prediction than X_t and W_t alone, then we can say that Y_t provides some predictive information. Think of W as what you know about the world in general (which should be really large, to reflect everything you know); then, if adding the very specific Y_t still does better than X plus W alone, Y_t passes a stringent test of containing information that we can call “causal.”
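Stated a bit more formally (a standard way of writing the definition, not Granger’s original notation): Y Granger-causes X when adding Y’s history shrinks the forecast error,

```latex
\operatorname{Var}\!\left[\, X_{t+1} \mid X_{\le t},\, W_{\le t},\, Y_{\le t} \,\right]
\;<\;
\operatorname{Var}\!\left[\, X_{t+1} \mid X_{\le t},\, W_{\le t} \,\right]
```

where each variance is the error of the best prediction given that set of information.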

Granger had created analysis methods for time series that use earlier events to predict later events. In doing so, he created a systems definition of causality based on information. It’s a weak causality, since it isn’t understood mechanistically, so we like to refer to it specifically as Granger Causality, sometimes G-Causality.

Writing about doing

One of the aims of my writing here is to reflect on the work, both the process and the substance. While I certainly appreciate all of the work by those who just write about the tools and workflows, I value even more those who do the work and bring insight from the real worlds of academia, science, and programming to the discussion. I could cite a long list of old-school bloggers with day jobs who write online to contribute to the community without seeking to make it a full-time occupation. I know I’ve mentioned Cal Newport in this regard, but the list includes podcasters like Sean Carroll as well. If Sean gave up being a theoretical physicist for just writing and podcasting, I think his contribution would be diminished.

All by way of mentioning that David Sparks, MacSparky, is no longer practicing law. I can’t fault him for devoting more time to what he loves, but his dual role as writer and practicing lawyer brought a level of real-world domain expertise to his online output. Now I fear the work becomes meta-work, talking about how to talk about the tools that others use for work. I’ve seen this in the photography world. There’s a difference in the value that the working photographer brings to an online presence compared to the hobbyist photographers who make a living writing about the process and tools of photography. I wouldn’t go so far as to call it navel gazing, but it has that self-referential quality, as the work serves no outside purpose beyond being online content.

My advice is always to do the work. Then use writing about the work as a way to improve the work.

Journaling Academic Work

I’ve realized that there is a gap in my casual blogging since I’ve started regular posting here. I’ve written about technology and workflow for the most part, just touching at times on neuroscience and philosophy.

When I started this site back in late 1999 at Dave Winer’s EditThisPage.com, I explicitly excluded writing about family and work. The internet was for everything else, we decided here at home. Working at a public company at the time, I had an intuition even then that public and private could collide online, with unpleasant consequences.

On the other hand, I think I’ve been neglecting discussions about basic neuroscience. These are exciting times as computational techniques are bringing together computer models of brain function and data collection from functioning brains. I started my research on the brain in the early 80’s at a time when knowledge of basic brain anatomy, neurotransmitters and receptors was exploding. Brain circuitry was being mapped with detail that was astonishing. We were beginning to understand the fundamentals of brain circuitry as building blocks.

Then, as I was exiting academia for my career in drug development, molecular biology was taking center stage. It was an era of more granular exploration of gene expression and cellular messaging that I found was moving away from the work on structure and function that had led me into neuroscience in the first place.

While that molecular approach has certainly paid off handsomely in understanding human neurological disease, computational approaches seem to be where the excitement is. For me, it’s way more interesting than reading about the latest collection of messaging cascades and molecular explanations of neuronal function. We seem to be gaining fundamental insight into how interactions in brain circuitry lead to behavior.

Sadly, in my reading of Neuroscience Twitter today, I learned that David Linden, a Professor at Johns Hopkins and a contemporary in the Department when I was on the faculty, has been diagnosed with terminal cancer, writing about it in the Atlantic.

I’m simultaneously furious at my terminal cancer and deeply grateful for all that life has given me. This runs counter to an old idea in neuroscience that we occupy one mental state at a time: We are either curious or fearful—we either “fight or flee” or “rest and digest” based on some overall modulation of the nervous system. But our human brains are more nuanced than that, and so we can easily inhabit multiple complex, even contradictory, cognitive and emotional states.

In the article he also marvels at our recent advances:

Now we know that rather than merely reacting to the external world, the brain spends much of its time and energy actively making predictions about the future—mostly the next few moments. Will that baseball flying through the air hit my head? Am I likely to become hungry soon? Is that approaching stranger a friend or a foe? These predictions are deeply rooted, automatic, and subconscious. They can’t be turned off through mere force of will.

The reminder is always there, as we learn in Pirkei Avos: “Rabbi Tarfon said: The day is short, the work is great, the workers are lazy, the reward is great, and the Master of the house presses.”

I take it as a push to bring a bit more brain science here.

Publishing Your Notes

Now that I have my Zettelblogging workflow down and a Tinderbox file with about 100 notes in it, I’m thinking about next steps. That is, besides staying current with the workflow and creating index and summary notes as areas get built up a bit.

I’ve been thinking about publishing the output of the process. I’m journaling at the front end, collecting links and making notes on the information. But once I’ve synthesized some thoughts over a few months of Zettelblogging, what’s the best way of publishing the result?

The simplest idea is to publish the summary note. But the summaries in Tinderbox link to other topical notes, not back to the original journal posts here on the blog. That makes sense, because I treat these posts as casual and temporary. It’s the notes file that I’m curating and organizing.

That all leads to the idea of publishing the summary note with all of its supporting notes. Most of the note taking software that supports links between notes can output an HTML version of the notes, respecting the links and transforming them into relative HTML links. Tinderbox can do this, of course. In fact, many years ago I put together a website to publish notes. DEVONthink can also do this, and it can actually run a server so that notes can be read via a browser as a local website. I’m sure this can be done with some of the new crop of “tools for thought.”
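The heart of that export step is simple enough to sketch. This is my own illustration, assuming a made-up folder name and a wiki-style [[Note Title]] link convention; it is not how Tinderbox or DEVONthink actually implement their exporters:

```python
import re
from pathlib import Path

NOTES_DIR = Path("exported_notes")            # hypothetical folder of exported notes
WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")   # assumes [[Note Title]] style links

def slugify(title: str) -> str:
    """Turn a note title into a filename-safe slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def rewrite_links(text: str) -> str:
    """Replace internal [[Note Title]] links with relative HTML links."""
    return WIKI_LINK.sub(
        lambda m: f'<a href="{slugify(m.group(1))}.html">{m.group(1)}</a>', text
    )

for note in NOTES_DIR.glob("*.txt"):
    html_body = rewrite_links(note.read_text()).replace("\n\n", "</p><p>")
    page = f"<html><body><h1>{note.stem}</h1><p>{html_body}</p></body></html>"
    (NOTES_DIR / f"{slugify(note.stem)}.html").write_text(page)
```

Everything beyond that, templates, styling, uploading the folder of pages, is the exporter’s or the static site builder’s job.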

This idea of working notes in public certainly isn’t new. It’s related to journaling, but in line with the philosophy of taking notes that have more durable use and curation of ideas over time.

I’m shopping for tech to implement this now. Clearly the first step is publishing HTML versions of finished notes with their links and folder structure intact. Sadly, WordPress, this site’s platform, is just too beholden to its origins in blogging and its themes. The approach I need is really a static site, where the site is built anew out of the current set of notes, either right out of Tinderbox like Dave Rogers and Mark Anderson do, or using one of the fancy new static site builders like Jekyll or Hugo. We shall see.

Zettelblogging Combines Capture, Journaling, and Reference

As much as we like to complain about distraction and the evils of social media algorithms, I think we need to recognize just how information rich our environment has become. David Allen was inspired to write Getting Things Done at the beginning of the connectedness revolution in the mid 90s when email and platforms like Lotus Notes and GroupWise started creeping into the work environment. (And can you believe that both platforms still exist 30 years later?)

Capture

The first step of GTD is capture. I think the explosion we’re seeing in note taking apps is a reaction to the proliferation of information channels we are tuned into. I love Neuroscience Twitter and Philosophy Twitter. I have a set of podcasts that send me off in interesting directions. And RSS feeds and our blogging revival are growing information flows once again. Did I mention Reddit? And a handful of old-fashioned message boards?

Journal

As I’ve chronicled here over the last few months, when I started using Drafts as my main capture tool, I found myself with a better inbox system that led directly to this casual approach to blogging. I have a stream of inbox notes in Drafts.

Now I’ve learned that if I take some time off, my inbox gets backed up and it’s a project to get it back under control. But the process of working through it is modeled on my process for getting my email inbox to empty. I look at each note and evaluate it as a potential entry in the journal here. This entry, for example, came from a note that just said this:

  • Capture
  • Journal
  • Explode

I jotted that down when thinking about the workflow. But now I’ve expanded that thought into the idea of capture followed by journaling. And as I’ve mentioned, once I write a journal post like this in Drafts, I publish directly to WordPress. There I review the formatting and publish without trying to edit to perfect. This is a casual writing flow.

Some of the notes don’t get journaled because they are project-oriented work. These get filled out similarly into usable notes if they’re telegraphic, and they, along with the posts, get pushed to the DEVONthink database.

Explode and Edit Notes

This final step occurs in DEVONthink and is the Zettel part of the Zettelblogging. I went back and forth about how to write journal posts that were Zettels, but realized that the purpose of this narrative form and the purpose of that note keeping form were just too different to consolidate. The natural order seems to be to narrate what I’m seeing and thinking about and then, as a second step, abstract out from the journal what I want to preserve as reference. For example, out of this entry I’ll probably edit down to a simple explanation of the Capture, Journal, Explode concept with some notes as to how I arrived at the conclusion.

Save Reference Notes

One final step.

I’ve learned over 20 years of writing at this site that a blog is not a good way to keep notes, even if it’s searchable by Google or a tool like Dave Winer’s new Daytona, which loads a site into a MySQL database for local search. Since WordPress runs off a MySQL database to begin with, I have decent search functionality right at the top of the site, which I use from time to time to find an older post to reference.

But a dedicated notes database in DEVONthink or Tinderbox is a different, curated reference library. It’s the equivalent of my old file cabinet full of the papers I copied at the library for research and my notes torn from the yellow legal pads I used to use for note taking from high school all the way through my years on a University Medical School faculty.

I’ve said it many times before: my current workflows are just digital refinements of those xeroxed papers and sheets from legal pads. Whenever I stray too far from those habits, I tend to spend too much time on the tools and less than I should on doing the work.

Editing progress, emergence, prediction

So it’s been a month away from posting here. Time flies.

After Thanksgiving, I had a break for some important family activities, but on that break I actually got back to editing my manuscript. I finished the first draft back in June and started the first round of editing. I’ve been helped by following the guidance of Tucker Max at Scribe Media in my writing. In his editing method, the first pass is a “Make It Right” edit, where you make sure everything is there and it makes sense.

For me, that includes some pretty big chapter reorganizations and filling out some key introductory discussions in the first three chapters. Toward the end of the third chapter, which discusses where uncertainty comes from, I realized that there wasn’t a really good discussion of emergence and its role in making complex systems both unpredictable and at the same time understandable. Depending on how you look at it, Sean Carroll had Anil Seth on his podcast, which has resulted in a few weeks delving into Seth’s and others’ interesting approaches to formalizing the idea of emergence in complex systems, including ideas around simulation, compressibility, and Granger Causality.

Plus, in preparation for editing the next chapter, on the nature of probability, I started to reach a next-level appreciation for Bayesian inference and its relation to brain mechanisms. Our perception is an active process where incoming sensory data either matches or doesn’t match the brain’s current model of the world. In other words, we experience a hypothetical world, a set of beliefs that, in the language of probability, is a Bayesian prior probability.

Those hypotheses — and not the sensory inputs themselves — give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
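To make that concrete with a toy calculation of my own (not from the article): when both the prior belief and the sensory evidence are Gaussian, the posterior estimate is a precision-weighted average, so the noisier the input, the more the estimate falls back on the prior.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Precision-weighted average of a Gaussian prior and a Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    return (prior_mean * prior_precision + obs * obs_precision) / (
        prior_precision + obs_precision
    )

# Prior belief: the stimulus is at 0. The sensory input reports 10.
print(posterior_mean(0.0, 1.0, 10.0, 1.0))    # clear input: estimate 5.0, halfway
print(posterior_mean(0.0, 1.0, 10.0, 100.0))  # noisy input: estimate ~0.1, near the prior
```

The same arithmetic is what gives ambiguous input so little pull against a confident prior.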

Some important new results comparing machine learning algorithms with neural mechanisms started me reading some of the new literature on cortical analysis and representation, an area that is really making progress, as summarized in this article in Wired:

Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models show some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.

So unlike metaphors such as the brain being “a switchboard” or “a computer,” and speaking of computation, it seems we’re converging on an understanding from two different directions, rather than just using current technology to describe brain function.


Since the idea of writing the manuscript is to collect my own thoughts, I can’t be too hard on myself in trying to make sure it’s all there. I have no deadlines or pressing need to get this out there. It’s a commitment to the process, not the product.

It’s a very long-term project for me, and as David Perell recently wrote:

Long story short, commitment is undervalued. 

So here’s how I suggest responding to this trend: whatever your tolerance for commitment is, raise it. 

If today you’re comfortable committing to something for two hours, try committing for a weekend. If you’re comfortable committing for two weeks, then raise it to two months; once you’re comfortable with two months, raise it to two years; and once you’re comfortable with two years, raise it to two decades. It’s okay to start small. All big things do. But they have to start somehow and with commitment comes momentum. Commitment happens in stages, and only by embracing it can you stop hugging the X-Axis and climb the compounding curve.