How to make a zombie

Philosophers are fascinated by the idea of zombies. Their thought experiment supposes hypothetical beings that behave indistinguishably from humans but lack consciousness, hence “zombies”. For some reason, they think that if such beings existed, it would prove that there is something besides brain activity that produces subjective experience. I don’t get it, since I know that people can walk around, have conversations and then deny any recollection of conscious experience when told afterward what they did.

Understanding the brain by what’s missing

One of my main tools as a neurologist is to examine patients with a deficit and figure out what part of the brain has been injured. It’s our tradition to understand how different parts of the brain participate in behavior by looking at patients in whom some function is missing or impaired and correlating the lost function with the damaged area of the brain. For example, a small stroke in Broca’s area, located in the frontal lobe of the cerebral cortex (specifically the posterior part of the inferior frontal gyrus, Brodmann areas 44 and 45), causes what we call an expressive aphasia, a loss of fluency when trying to speak. If the injury is limited to this area, the patient understands speech perfectly well and can read text with no problems. So, by seeing this correlation over and over again, we conclude that the critical brain area for speech production resides in Broca’s area.

Of course that’s not to say that the ability to produce speech is represented only there, like some computer subroutine. The content of speech is fed through Broca’s area from a wide range of other areas that know about the world. The decision to speak is triggered from the prefrontal cortex, particularly the dorsolateral prefrontal cortex (DLPFC), which is associated with higher-order executive functions such as decision-making, planning, and goal-setting. A lesion in the prefrontal cortex causes apathy and lack of initiative, as was seen in patients who underwent the psychosurgery of prefrontal lobotomy in the 1940s. That surgery was eventually replaced by a pharmacological dampening of the broader dopamine control system with antipsychotics.

The localization of consciousness

We now know that the maintenance of consciousness is localized to one particular brain network consisting of the cerebral cortex and a more deeply located structure called the thalamus. We need to be careful to separate this network, which controls the level of consciousness, from the mechanisms that provide the content of consciousness, the particular sensory channel being activated for conscious awareness. The level of consciousness is how conscious the person is, ranging from coma to sleep to being wide awake and attending to some aspect of the sensory environment.

While there are brain lesions with profound effects on the level of consciousness, we also have an array of drugs that we use to alter the level of consciousness for medical procedures. These drugs are quite capable of creating the zombie the philosophers are always hypothesizing, that is to say, a person who looks awake and is behaving as if they’re conscious but lacks awareness of their actions.

There’s actually a choice of drugs for creating zombies, all of which activate the GABA inhibitory neurotransmitter system in one way or another. Among them are alcohol, gamma-hydroxybutyrate (GHB), benzodiazepines (like Valium, midazolam and many others used for anxiety or insomnia) and general anesthetics, both inhaled (like halothane) and injectable (like propofol).

Selective effects of propofol on consciousness

In the neuroscience literature on level of consciousness, you’ll see the intravenous anesthetic propofol studied most commonly. That’s a matter of convenience and suitability. It’s easy to use infusions in animal and human studies, the dose is easily controlled by rate of infusion, and the effects are very rapid, both coming on and wearing off.

The effects of propofol on the cerebral cortex are most easily seen by EEG, a recording of voltage differences at the scalp which reflects the electrical activity of the neurons under the electrodes as conducted through the skull and scalp. In an awake person, the electrical waves are chaotic and fast, reflecting all of the fluctuating activity across the cortex as sensory information comes in, is relayed to association areas and motor activity is initiated. Even though our awareness is limited to one channel at a time through attentional systems, there’s activity across all of the systems and they are talking to each other.

Start a propofol infusion and the activity starts to slow. The EEG shows a drop in frequency across the spectrum. With enough propofol, we can induce a profound coma in which the signal becomes very nearly flat. We do this clinically at times to treat brain trauma and uncontrollable seizures.

Zombies are an in-between state where awareness is lost

An awake person is interesting to interact with, while someone in profound coma isn’t so engaging. But it’s the in-between zone where we create a zombie. If you’ve ever had general anesthesia, whether with propofol or an inhaled anesthetic, you’ve had the unique experience of having your mind suddenly switched off and then back on again in what seems subjectively like no time passing. Even though hours may have elapsed on the clock in the operating room, one second they’re wheeling you in, the next second you wake up in the recovery room. It’s a disturbing interruption of self that doesn’t happen when you’re drowsy or asleep.

So yes, many of us can subjectively confirm that these drugs turn consciousness off. You have no experience of anything during that time. The EEG is slowed, but the cortex is continuing its business without awareness. In fact, most electrical recordings from neurons in the lab are done on anesthetized animals. I did that during my PhD studies. It turns out that light anesthesia has very little effect on information flow through the visual system or the autonomic control system. Hubel and Wiesel’s pioneering recordings from the visual system, where they found edge-detecting neurons, cortical columns and surround inhibition, were all done in anesthetized animals. True, spontaneous behavior disappears so it can’t be studied, but most brain circuits function pretty normally, well enough that their basic characteristics can be studied.

Behavior during sedation without subjective awareness = Zombie!

But you’ll object that the anesthetized person, even though their cortex continues processing sensory signals, is not a zombie since there’s no behavior. Well, at just the right level of infusion, a level often called “Twilight Sleep” by the medical profession, but more appropriately just “sedation”, you can ask the patient to perform simple tasks like squeezing your hand or giving short answers to questions. That much of the cortical processing for input and output is working. If sedation gets too light, you get the problem that spontaneous behavior returns but the patient is still not conscious. They’ll try to get off the procedure table or at least move around to get comfortable. Not good during a colonoscopy. The frontal lobe system that triggers behavior is active enough to try to get out of bed, but the thalamo-cortical network for awareness and attention is selectively turned off by the propofol infusion.

Unfortunately, this state of being unconscious but behaving is not uncommon in the real world when alcohol, benzos or GHB are circulating in the blood and activating the brain’s GABA system. People drink to excess, take pills or are even slipped a dose of a powerful sedative like GHB. They’ll continue to act like they are awake but, just as in anesthesia or sedation, have a gap in the continuity of their awareness, suggesting that they were behaving but not aware. Clearly some supervisory, attentional mechanisms are active when the drinker gets a ride home from the bar and awakens with a gap. You tell the drinker how much fun they had last night and they recall none of it.

Memory is consciousness is self identity

You may realize that we’ve ended up conflating continuous awareness with memory of awareness. Since the subjective report relies on recall, the two can’t be untangled. And of course, knowing who you are, that you’re the same person this morning who went to sleep last night, is dependent on memory.

Actually, turning off memory storage is another way to create a zombie pharmacologically. But as I’ll argue in the next posts, much of our day passes in the zombie state. Most of the time our brains attend to controlling behavior, processing sensory input and responding to the environment, but without awareness of self. Most of the time, we don’t need to be anything other than a zombie. It only feels strange when self-awareness is removed by external causes like sedation, not when we disengage the mechanism ourselves.

We’re already in an augmented reality

My cycling and fitness activities are enabled by a set of technologies that were not widely available a few years ago: online coaching through internet-enabled analysis of power files, and streaming fitness sessions monitored by a wearable measuring movement and heart rate.

Twenty years ago we started using heart rate monitors widely. Before that there was just subjective effort. With a few chips we could see inside ourselves and augment the experience by seeing another, physiological dimension. Now my bike has a power meter, so I can see my actual output and compare it to the physiological effect and the subjective effort. My experience of riding and my ability to train are augmented. The winner of the Tour de France last year was only 23. The cycling pros of previous generations ascribe the rise of younger champions to their use of these technologies to skip the years of learning and slow builds that earlier riders had to go through.

The key is that you need to use the technology to achieve real personal goals, not just enrich tech platforms.

One concern is that a machine would not need to be more intelligent than humans in all things to pose a serious risk. “It’s something that’s unfolding now,” he said. “If you look at social media and the algorithms that choose what people read and watch, they have a huge amount of control over our cognitive input.”

Do you want to blindly give away your data to others and allow their algorithms to manipulate you? Or seek real experiences augmented by technology? Do we live in the real, augmented world or in their “metaverse”?

I’m hoping that Apple’s augmented or virtual reality device will be more along the lines of a way to enhance the experience of real or imagined worlds and not a way to enslave us in their artificial environment.

The Augmented Environment

One of my conclusions from studying how the brain interprets the world and how people actually make decisions is that the single most important decision we make is the choice of our environment. And by that I mean both physical and semantic environment. Who are the people we surround ourselves with? What are the books and websites we read? The key is that our beliefs strongly condition how the world occurs to us. We can’t decide in the moment how to react to the statement of a politician or writer.

In statistics, we call these beliefs “priors” because they determine the probabilities we assign to events. We update those priors with new information all of the time, so the information we’re exposed to on an ongoing basis determines perception.
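To make that concrete, here is a toy Bayesian update in Python (my own illustration with made-up numbers, not drawn from any study): a prior belief drifts toward whatever the evidence stream favors.

```python
# Toy Bayesian update: a prior belief is revised by each new piece of evidence.
# The likelihood numbers are invented purely for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability that a claim is true after one observation."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

belief = 0.5  # start undecided about some claim
for _ in range(5):  # five exposures to a source that leans toward the claim
    belief = update(belief, likelihood_if_true=0.7, likelihood_if_false=0.4)

print(round(belief, 2))  # ~0.94: the prior has drifted toward the source's slant
```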

At the brain level, we can see this in the most basic forms of perception, like how we see ambiguous figures, for example here: Long-term priors influence visual perception through recruitment of long-range feedback

A computational model incorporating hierarchical-predictive-coding and attractor-network elements reproduced both behavioral and neural findings. In this model, a bias at the top level propagates down the hierarchy, and a prediction error signal propagates up.
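A minimal toy sketch of that two-way traffic, under drastic simplifying assumptions of my own (one scalar feature, one top level, nothing like the published model): the prediction goes down, the prediction error comes back up, and the biased top-level belief is pulled toward the input.

```python
import numpy as np

# Toy two-level predictive-coding loop (an illustrative simplification only).
# The top level starts with a biased belief about a single scalar feature.
np.random.seed(0)

true_signal = 1.0        # what the stimulus actually contains
belief = -0.5            # top-level prior bias, initially wrong
learning_rate = 0.2

for step in range(20):
    sensory_input = true_signal + np.random.normal(0, 0.1)  # noisy evidence
    prediction = belief                                      # prediction propagates down
    prediction_error = sensory_input - prediction            # error propagates back up
    belief += learning_rate * prediction_error               # top level shifts to reduce error

print(round(belief, 2))  # the biased prior has been pulled close to 1.0
```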

It’s reasonable to extend this kind of biased perception to how we perceive what people say to us or their motives. If you believe you live in a violent environment or that some classes of people are inherently violent, your priors will influence your interpretation of the words and actions of everyone around you. No choice in the matter, because belief comes from experience and experience largely comes from environment.

The trouble is that in our augmented reality, we don’t experience much of the world at all. We read reports of the world and interpretations of events. That’s an overlay that we experience as part and parcel of the real world, even though it’s just an overlay providing an interpretation, augmenting the pure sensory event.

So choose your friends and your information sources carefully.

First Draft of On Deciding Better Manuscript Complete

I surprised myself this morning by writing what seemed to be the final words in my first draft of the book I call “On Deciding . . . Better: The New Ecology of Mind and Brain”.

This version of the book started with an outline in Google Docs on May 19, 2020 and took about a year to turn into a very rough first draft. As I’ve talked about before, the process of creating what Scribe Writing calls a “vomit draft” got me unstuck from all of my previous attempts.

The manuscript as it stands is pretty close to how I outlined it, which may not be surprising given that this is a project I’ve been mulling over for a long time. More than 20 years if you start counting from when I started this blog in late 1999. The central ideas in the book were mostly present in the earliest posts I made here, but I’ve made some progress since and wanted to create something coherent from all of the work.

I’ve struggled with the problem of expressing the ideas contained in the book as short posts in a blog. It’s clear that a manuscript provides the space to build the concepts that can guide a reader through this journey out of simple Enlightenment rationality. So I tried to document the journey I’ve made from the rational theories of decision theory to the ways we really make decisions, which are grounded in ecology and evolution.

The book is a bit over 60,000 words, which they tell me makes a 300 or so page published book. Now comes editing, which I’m guessing will both remove a good bit of text and identify some gaps that need more material, so net-net this seems about right.

For the edit, I hope I can focus on coherency and finding those gaps so that I can have a manuscript I trust someone could read and understand, following the general flow and arguments I’m making. It’s only at that point that I can imagine anyone else reading it. I’d like to get it into some final form eventually and move on to another long-term project to explore this ever fascinating area of what it means to be human. It will be fun to get back into an exploratory mode rather than the grind of 250 to 500 words a day, aimed at filling in an outline of what I’ve learned.

I’m not sure what I’ll do with the ODB book once it’s done. At this point I don’t have much interest in all of the book promotion activity that goes along with publishing these days, and I doubt a major publisher would be interested in this niche neuroscientific-philosophy book on decision making. On the other hand, I would like it to reach as large an audience as possible, so I guess that means releasing it on some platform or other and doing some promotion.

So on to editing!

Why You Need to Publish Your Notebook

Note taking seems to be having one of its many turns as an internet topic. The world of analog journaling is flourishing, propelled by (mostly) Japanese stationery brands like Midori with its Traveler’s Notebook and the Hobonichi Techo, but also some stalwart European and British brands like Rhodia and even Filofax. On the digital side there’s a new crop of backlinking, networked note-taking apps that join the still relevant categories of GTD task managers, plain text approaches, wikis and hypertext (like my favorite, Eastgate’s Tinderbox), and integrated note systems from Evernote to Ulysses to Markdown editors to the latest like Craft. And then there are the handwriting-on-iPad apps like Notability and GoodNotes. I almost forgot the emerging ePaper device category, which has tempted me a few times ((But I know that fountain pen and paper just work better for me than a stylus on the iPad, and probably better than one of these cool, single-purpose devices)).

I ask myself, “With all this note taking, where’s the content being shared?”

Lots of notes, lots of systems, lots of discussion. Not so much content coming back out of these notes, apart from work product like academic publications, corporate reports and the commercial content universe of the internet, which has replaced magazines and newspapers but tends toward the slick and superficial.

I miss the individual voices of the early internet and it seems that all of this note taking should provide some path for sharing what we’re all learning outside of commercial sites.

An Appeal To Contribute

I often see mention of how Google/YouTube has turned the web into a closed system by controlling search and advertising. For a long time, it’s been noticeable how many searches return no real results on the first or even the second page. I get advertising, shopping and how-to YouTube videos.

If I see real content it’s often a link to Reddit, a company support forum or, sometimes, a post on Medium. In fact, I’ve subscribed to Medium because so often search results take me there. Since most of the web writing going on is on commercial sites selling courses or subscription content, even when I get sent to a content page, it’s often a teaser for some course or software. I rarely get linked to one of the sites I follow through RSS feeds via NetNewsWire. In fact, I’ll often add a word or two to the search to make sure that I get an answer from Thom Hogan on photography, plus others who may have commented on his opinion.

Curiously, huge swaths of the internet are completely cut out of these searches: Twitter, Instagram, Facebook. And email subscription newsletters. And Substack. We’re in a world of walled gardens and click farms.

Sadly, this isn’t entirely the fault of the machine learning algorithms that now rule the world and decide how to rank the results on a Google search page. There’s definitely less for the search engines to point to and less interchange between websites that leads to links and interest.

Where’s the useful content then?

Now I’ll admit my own guilt here. I lost the drive to create useful content here when that early blogging community dissipated. I do learn things I ought to be sharing. There’s a steady stream of traffic here, so I could contribute more. I read, take notes, come to conclusions. My notebooks are full, my blog is empty.

An example? Is it worth the cost to buy a 16GB M1 Mac or should I get the readily available 8GB memory model? Once I found a few examples of limitations, it seemed clear that for my use, processing big RAW files in Photoshop with Nik filters, the 16GB upgrade was going to be worth it. And it is. I don’t have a suite of test results, but I did see how fast the Apple Silicon Photoshop beta processed Gaussian filters to create the now infamous Orton effect ((If you haven’t heard, the Orton effect is that glowy landscape look, taking over from the overdone HDR workflow (which is not at all natural looking!) as the way to get likes on Flickr. Previously, I participated in the oversaturated landscape movement, to my everlasting shame.))
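The Orton effect itself is easy to sketch in code, by the way: blur a copy of the image heavily and blend the glow back over the original. Here is a minimal Pillow version of that idea (my own illustration, not the Photoshop/Nik workflow; the file name, blur radius and blend amount are placeholders to adjust to taste):

```python
from PIL import Image, ImageFilter, ImageChops

# Rough Orton-effect sketch: screen-blend a heavily blurred copy over the
# original to get the glowy landscape look, then dial the strength back.
# "landscape.jpg", the radius and alpha are placeholder values.
original = Image.open("landscape.jpg").convert("RGB")
blurred = original.filter(ImageFilter.GaussianBlur(radius=15))
glow = ImageChops.screen(original, blurred)       # lighten-only blend of the two layers
result = Image.blend(original, glow, alpha=0.5)   # mix the glow back in at 50%
result.save("landscape_orton.jpg")
```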

Another one? Should I switch to editing this website with the WordPress Gutenberg editor or keep my existing MarsEdit staging workflow? It looks like it makes sense to do only early drafts in MarsEdit to take advantage of creating links and pulling in media from Photos, but then publishing is best out of the new WordPress interface. And it looks like there’s no reason to move off of WordPress to any other system.

Workflows. How to use tools. Hacks, tricks and lessons learned in the course of exploration and implementing. All interesting reads and worth sharing.

If I needed any evidence of the value of sharing useful lessons, I need look no further than the content that gets read here. By far, the most viewed page is How Bill Gates Takes Notes, a short piece about the tale of his use of the Cornell note-taking system during meetings. I wrote that page because I found it interesting and there was nowhere else on the web where it was well documented. I also have some photography pages and note taking workflow pages that get a hit or two a day. Mostly from Google, based on questions asked about how to do something or what something is, bypassing all the shopping and YouTube content.

3 Keys To Writing More

During the pandemic I seem to have figured out a few things. Like how to actually write a book manuscript. The method is simple really, just hard to sustain. But know that most everyone has a book or two in them already.

Last year, I mentioned I was renewing my effort to do some longer form writing on what I’ve learned over the last 20 years. It’s turned out to be a busy year for my work in drug development even with travel curtailed by the COVID-19 pandemic. I’ve found myself working from home for most of the year, but with a pretty packed day of calls and the need to get out work product. How was I going to push the writing project forward?

Fortunately, about this time last year I found Tucker Max’s website ((Check out the web site Scribe Writing. Download the free ebook. Follow the directions. In a few months, you’ll have a first draft of a manuscript.)). His company provides services to authors like you and me who have books in them but perhaps no real ambition for a career in writing. Books can boost careers, publicize businesses and influence public discourse. I don’t have any of these motives really. I’m just interested in sharing what I’ve learned from what I think is a unique perspective as a neuroscientist working in business environments.

Tucker provides a lot of free content on the site besides selling courses and services to writers. I extracted three key insights:

  1. Use the tools at hand to get words into the computer. Word, Google Docs, Notes- anything. I know I have way too many tools. So I picked Ulysses because it’s plain text, semistructured and syncs across Mac and iOS.
  2. Writers write. A book of 100 to 200 pages is 20 to 40 thousand words. Write 250 words a day, every day for at least 30 minutes a day. 60 minutes a day is reasonable and 120 is optimal. Since my schedule is different every day given project meetings, fitness schedule and work product due dates, I simply decided that my first 30 minutes free at my desk would be dedicated to getting at least 250 words out. Sometimes I’ve gotten 500 or more if I had an hour, but I just put fingers on keyboard and got the ideas down.
  3. The first draft is a marathon to get the words out. A lousy first draft. A vomit draft. At 250 words a day, 100 days of work equals a short book (the quick arithmetic is sketched below). What I have is nothing I’d want to share with anyone, but I think that after the first editing round I could serialize the chapters on the website here. Ulysses tells me I’ve gotten 30,348 words down and I’m about halfway through the outline. So half of a 200 page book done.
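For what it’s worth, here is that word-count arithmetic spelled out (my own rough numbers; published books vary, but roughly 250 words per page is a common rule of thumb):

```python
# Rough arithmetic behind the daily writing targets. All figures are approximate.
words_per_day = 250
days = 100
words_per_page = 250            # rule-of-thumb words on a published page

total_words = words_per_day * days           # 25,000 words after 100 days
pages = total_words / words_per_page         # ~100 pages, i.e. a short book
print(total_words, round(pages))
```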

I found that Tucker will actually answer questions, just as he promises on his website. One of my biggest obstacles was how much material I already had at hand: 20 years of blog posting and at least two previous attempts at turning the material into a book. I wanted to edit it into a book to save the time of writing one.

So I asked Tucker how to deal with the mass of material I’ve collected over all this time. Several manuscripts in volume really. He suggested I use it as reference, but start with a fresh outline and start writing anew.

Starting over turned out to be exactly the right approach. His assertion that people like me have a book in our heads already is absolutely right. And I’ve done this many times before, as it turns out. I’ve often written research papers, review articles and book chapters by referring to references, but then doing the careful citation and fact checking during editing once the ideas and flow were down on the page. The principle is that the author has the book inside already; it just needs to get out of their head and into that linear computer file called a book manuscript.

As I’ve been writing, I’ve gotten a clearer idea where this is all going, so I spend some other time diving into some of the newest insights into brain mechanisms for decision making as well as my guidepost books on self improvement and making better decisions.

I hope to share the effort at some point, but given that I have no ambition to be an author, the whole exercise has been a personal project to clarify some of the ideas I’ve had over the years about the relationship between decision making, brain function and the philosophy of ethics.

Our Limited Capacity to Decide

With over 30,000 words done on my manuscript and about halfway through plowing through the outline, the thesis of the book has become ever more clear. Here’s the essential question about decision making from the point of view of neuroscience:

The ability of our brain’s executive function to make decisions is limited not only by the model it creates out of experience, but by the decisions made by brain systems that are inaccessible to awareness or executive control. We then can ask: How do you make better decisions when agency is so limited?

This morning I drafted a few paragraphs about eating that seemed to encapsulate much of the argument and seemed worth sharing here. I think it provides a little idea of the style and approach I’m taking in a longer form.

Recognize this? Your visual system will make a perceptual decision.

Embodied cognition

We’re on the subject of what has been called “the embodied mind” ((The Stanford Encyclopedia of Philosophy makes the historical attribution of embodied cognition to a short list of authors including George Lakoff and Andy Clark. Their books have been important influences on me in formulating the abstract world of thought as metaphor in the brain’s model of the real world)). The brain is the body’s regulator of behavior of all kinds, including not only attention and voluntary movement but every regulatory system in the body that keeps us alive and healthy. There’s no line between “mind” and “body”, so our experience has to include input from those systems, and our behavior has to be adjusted to take care of them appropriately without involvement of the executive network in the cerebral cortex.

For example, while I can’t control my heart rate directly, the feeling of my heart pounding is an important aspect of the world. My internal model of the world needs to account for whether it’s pounding because I’m sprinting in a friendly competition on my bike in a group ride, or because I’m finishing a high stakes race, or because I’m angry with and/or afraid of the driver of the car that just sideswiped me and who’s continuing to threaten me ((Sadly, confrontations with motorists are all too common out there on the bike)). There’s context to heart rate that’s important in the bigger world model, beyond regulation of the cardiovascular system.

The exquisite control of appetite

And so too with appetite. When I was in medical school, they had just introduced lectures on nutrition into the curriculum. There was one fact I took away from the lectures that I think about very frequently. Now you have to understand that the two families that owned the Coca-Cola Company (the Candlers and then, in 1919, the Woodruffs) have been major benefactors of Emory University, where I got my MD, PhD training. ((In fact my father, who had been a Pepsi drinker all his life, switched to Coke after I was accepted to the Medical Scientist Training Program there to show his gratitude. It was a program with full scholarship and stipend, after all.))

A 12 oz can of Coke has 140 calories. If you decided to add a can of Coke a day to your daily diet, perhaps with lunch at the campus cafeteria, that would be 365 cans of Coke or 51,100 extra calories a year. Over a decade, more than half a million additional calories from that can of Coke. We know that there are about 3,500 calories in a pound of body fat. So that half million extra calories would add 146 extra pounds. Drink that can of Coke for a few decades and you’ll be hundreds of pounds overweight. Looked at another way, the average caloric intake for a man in the US is 2,500 calories per day. That can of Coke that caused so much theoretical havoc is only 5.6% of daily caloric intake. For most of us, with relatively stable body weight year by year, that means that our average daily intake of calories is regulated down to single digit percentage points! Not only that, but the great difficulty that we find with dieting, and the empiric data showing that diets don’t work for most, demonstrate that consciously trying to regulate caloric intake is almost impossible over the long term.
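Here is the same arithmetic spelled out step by step (the figures are the ones used above; 3,500 calories per pound of fat and 2,500 calories per day are the usual rough rules of thumb):

```python
# The Coke arithmetic from the paragraph above, spelled out step by step.
calories_per_can = 140
cans_per_year = 365
calories_per_pound_fat = 3500     # rough rule of thumb
average_daily_intake = 2500       # rough average for a US man

extra_per_year = calories_per_can * cans_per_year            # 51,100 calories
extra_per_decade = extra_per_year * 10                       # 511,000 calories
pounds_gained = extra_per_decade / calories_per_pound_fat    # ~146 pounds
share_of_diet = calories_per_can / average_daily_intake      # ~5.6% of daily intake

print(extra_per_year, extra_per_decade, round(pounds_gained), f"{share_of_diet:.1%}")
```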

Some of the ability to maintain stable body weight is due to cellular metabolic control regulating basal metabolic rate, the burning of calories at rest. But most of it is behavioral- how active we are choosing to sleep, sit or move around. And of course what and how much we eat. We are no more in control of eating than we are of breathing or respiratory rate or blood pressure.

A complex set of signals exchanged between the body’s fat stores, the gastrointestinal tract, the endocrine system and the brain allows hormones and levels of blood nutrients (sugar, amino acids, fats) to trigger food acquisition and consumption. We’re really good at knowing how much to eat. We’re fully in charge, yet not aware of the expertise we have implicitly and not in any kind of long-term control of it. Within a few percentage points, every day, year after year. Despite the best efforts of our executive network to influence body weight.

Perceptual Choices

Deciding Without Thinking

The original premise of this site is that it is possible to actually make better decisions. That’s why I called it “On Deciding . . . Better” to begin with. After all, we are the agents responsible for our actions, so who else is responsible for making better choices? I’ve written about the false promises made by Decision Theory, which asserts that if choices are made more rationally, the decisions will be more successful. The problem isn’t the mathematical basis of Decision Theory, it’s the problem of implementing it when people actually need to make decisions in the real world. There are valuable habits and tools in the Decision Theory literature, but it’s clear to me that when our brains are engaged in rapidly making decisions, we generally are not aware of the decision process. If we’re not aware of the decisions, then those deliberative executive function mechanisms can’t be brought online as they are being made.

Perceptual Decision Making

This is the Kanizsa Triangle, created by the Italian psychologist Gaetano Kanizsa in 1955. I like it as an example because it is so simple and yet so stable. The brain creates the contours of the second white triangle. When I first heard this kind of illusion being called a “perceptual choice”, I rejected the notion. After all, a choice is an act of will, of mental agency.

Yet calling this “a perceptual choice” makes a lot of sense from a brain mechanism point of view. A simple set of shapes and lines is projected on the retina and relayed back through the visual cortex and the surrounding cortical areas. That part of the brain makes an unshakable choice to interpret the center of the figure as containing a triangle. Similarly, seeing the face of my son, a different area of cortex decides who it is. Further, circuits are activated with all kinds of associated information, some brought up into consciousness, others not, but just ready to be used when needed.

Are We Responsible for Perceptual Choice?

If perceptual choice is like most other choices, like choosing a breakfast cereal or a spouse, it seems I’m advocating abandoning a lot of perceived responsibility for one’s own actions. It seems that we walk through the world mostly unaware of how perceptions are constructed and don’t have access to why we act the way we do. Actions are based on perceptions that were chosen without awareness in the first place. And it goes without saying that we have no responsibility for the perceptions and actions of everyone around us. Their brains, wired mostly in the same way as ours, chose how to perceive our words and our acts.

It seems to me that to make better decisions there have to be rather deep changes in those perceptual brain processes. Any decision tools have to become deeply embedded in how our brains work; any rules to guide how we perceive, choose or act must lie as deep habits in those automatically functioning circuits of the brain. Some, like the Kanizsa Triangle, are in the very structure of the brain and can’t be changed. Others are strongly influenced by experience and deepened by practice.

Lessons in Science and Culture

John Nerst at Everything Studies provides a long and thoughtful analysis of a discussion of a dangerous idea: A Deep Dive into the Harris-Klein Controversy. I think it’s worth a comment here as well.

As a neuroscientist and reader of all of these public personalities (Charles Murray, Sam Harris and Ezra Klein), I’ve followed the discussion of race and IQ over the years. We know that intelligence, like many other traits such as height or cardiovascular risk, is in part inherited and strongly influenced by environment. Professionally, I’m interested in the heritability of complex traits like psychiatric disorders and neurodegenerative diseases. The measured differences in IQ between groups fall squarely into this category of heritable traits where an effect can be measured, but the individual genes responsible have remained elusive.

I’m going to side with Ezra Klein, who in essence argues that there are scientific subjects where it is a social good to politely avoid discussion. One can learn about human population genetics, even with regard to cognitive neuroscience, without entering an arena where the science is used for the purpose of perpetuating racial stereotypes and promoting racist agendas of prejudice. The data has a social context that cannot be ignored.

Sam Harris, on the other side of the argument, has taken on the mantle of defender of free scientific discourse. He takes the position that no legitimate scientific subject should be off limits for discussion based on social objections. His view seems to be that there is no negative value to free and open discussion of data. He was upset, as was I, at Murray’s treatment at Middlebury College and invited Murray onto his podcast. Sam was said by some to be promoting a racist agenda by promoting discussion of the heritability of IQ in the context of race.

In fact, Ezra Klein joined the conversation after his website Vox published a critique of the podcast portraying Harris as falling for Murray’s pseudoscience. But that’s nothing new really; Murray surfaces and his discussion of differences in IQ between populations is denounced.

As one who knows the science and has looked at the data, it bothers me, as it bothers Harris, that the data itself is attacked. Even if Murray’s reason for looking at group differences is to further his social agenda, the data on group differences is not really surprising. Group differences for lots of complex inherited traits are to be expected, so why would intelligence be any different than height? And the genes responsible for complex traits are being explored, whether it’s height, body mass index or risk for neurodegenerative disease. Blue eyes or red hair, we have access to genomic and phenotypic data that is being analyzed. The question is whether looking at racial differences in IQ is itself racist.

I’ve surprised myself by siding with Klein in this case. His explanation of the background is here and his discussion after his conversation directly with Harris is here. Klein convincingly makes the argument that social context cannot be ignored in favor of some rationalist ideal of scientific discourse. Because we’re human, we bring our cultural suppositions to every discussion, every framing of every problem. Culture is fundamental to perception, so while data is indifferent to our thought, the interpretation of data can never be free of perceptual bias. Race, like every category we create with language, is a cultural construct. It happens to be loaded with evil, destructive context and thus is best avoided if possible, unless we’re discussing the legacy of slavery in the United States, which I think is Klein’s ultimate point.

Since these discussions are so loaded with historical and social baggage, they inevitably become social conversations, not scientific ones. Constructive social conversations are useful. Pointless defense of data is not useful; we should be talking about what can be done to overcome those social evils. No matter how much Sam would like us to be rational and data driven, people don’t operate that way. I see this flaw, incidentally, in his struggle with how to formulate his ethics. He argues with the simple truth that humans are born with basic ethics wired in, just like basic language ability is wired in. We then get a cultural overlay on that receptive wiring that dictates much of how we perceive the world.

Way back when, almost 20 years ago, I named this blog “On Deciding . . . Better” based on my belief that deciding better was possible, but not easy. In the 20 years that have passed I’ve learned just how hard it is to improve and how much real work it takes. Work by us as individuals and work by us in groups and as societies.

Lack of Authority

It's hard to believe how little we trust what we read in this age of the internet.

The US election of 2016 demonstrated just how profoundly our relationship to authority has changed. We're exposed to conflicting opinions from the online media. We hear facts followed by denial and statement of the opposite as true. Everyone lies, apparently. There's no way to make sense of this online world in the way one makes sense of a tree or a dog or a computer.

Perhaps relying on confirmation bias, we are forced to interpret events without resort to reasoned argument or weight of evidence. We have to fall back on what we already believe. You have to pick a side. Faced with a deafening roar of comments on Twitter, cable news and news websites, we shape what we hear to create a stable, consistent worldview.

Welcome to a world of narrative where the truth is simply the story we believe. And pragmatically, it seems not to matter much. Believe what you will, since we mostly yield no power in the world.

So what am I to make of this nice new MacBook Pro that I'm using right now? Is it really evidence of Apple's incompetence or their desire to marginalize or milk the Mac during its dying days? Again, believe what you will, but I've got some work to do.

Mind In The Cloud

“Technology changes ‘how’ not ‘what.’ Expands in space, compresses in time. The results are sometimes breathtaking.”

Notebooks as Extended Mind

In 1998, Andy Clark and David Chalmers made the radical suggestion that the mind might not stop at the borders of the brain. In their paper, The Extended Mind, they suggested that the activity of the brain that we experience as consciousness is dependent not only on brain but also on input from the rest of the world. Clark’s later book, Supersizing the Mind clarifies and expands on the idea. Taken to its logical conclusion, this extended mind hypothesis locates mind in the interactions between the brain and the external world. The physical basis of consciousness includes the brain, the body and nearby office products.

I mean to say that your mind is, in part, in your notebook. In the original paper, Clark and Chalmers use the hypothetical case of Otto. Otto has Alzheimer’s Disease and compensates for his memory deficit by carrying a notebook around with him at all times. They argue for externalism: that Otto’s new memories are in the notebook, not in his brain. The system that constitutes Otto’s mind, his cognitive activities, depends not only on his brain but on the notebook. If he were to lose the notebook, those memories would disappear just as if removed from his brain by psychosurgery. It should make no difference whether memory is stored as physical traces in neuronal circuitry or as ink marks on paper, since the use is the same in the end.

The paper actually opens with more extreme cases like neural implants that blur completely whether information is coming from the brain or outside. We have brain mechanisms to separate what is internally generated and what is external. The point is that these external aids are extensions. In medical school I learned to use index cards and a pocket notebook reference, commonly referred to as one’s “peripheral brain”. Those of us who think well but remember poorly succeed only with these kinds of external knowledge systems.

In 1998, when The Extended Mind was published, we used mostly paper notebooks and computer screens. The Apple Newton was launched in August 1993. The first Palm Pilot device, which I think was the first ubiquitous pocket computing device, arrived in March 1997.

The Organized Extended Mind

When David Allen published Getting Things Done in 2001, index cards and paper notebooks were rapidly being left behind as the world accelerated toward our current age of email and internet. I’ll always think of the Getting Things Done system as a PDA system because the lists in my system lived on mobile devices. First it was the Palm, then Blackberry and most recently, iPhone. @Actions, @WaitingFor and @Projects were edited on the PC and synced to a device that needed daily connection to the computer. I had a nice collection of reference files, particularly for travel, called “When in London”, “When in Paris”, etc.

My information flow moved to the PC as it became connected to the global network. Two communication functions really: conversations and read/write publishing. Email and message boards provided two-way interaction that was generally one to one or among a small community. Wider publishing was to the web. Both of these migrated seamlessly to handheld devices that replicated the email apps or the browser on the PC. Eventually the mobile device became combined with the phone. Even though capabilities have grown with faster data rates, touch interfaces, bigger screens and large amounts of solid state storage, the first iPhones and iPads showed their PDA roots as tethered PC devices in the way they backed up and synced information. That world is rapidly fading as the internet becomes a ubiquitous wireless connection.

Access to email and internet through smartphones has served to further “expand time” and “compress space” as Dave put it. I adopted a text file based approach so that I could switch at will between my iPhone, iPad and MacBook Air and have my external thoughts available. The set of synced plain text files seems transformational, but feels like my old Palm set of lists.

The age of the cloud is one of information flakes. Much of what we know is now latent and external requiring reference to a small device. Is it any wonder that our streets and cafes are filled with people peering down into a screen rather than out into the world?

It was a rapid transition. One that continues to evolve and that demands frequent reconsideration of the means and methods for constructing the extended mind.

A Mind Released

The SimpleNote, Notational Velocity and Dropbox ecosystem was the enabling technology for me. Suddenly there was seamless syncing between the iPad or iPhone and the Mac. The rapid adoption of Dropbox as the de facto file system for iOS broke the game wide open so that standard formats could be edited anywhere: Mac, Windows, iPhone, iPad, Unix shell. This was a stable, fast data store available whenever a network was available.

Editing data on a server is also not a new idea. Using a shell account to edit text with vi or Emacs on a remote machine from anywhere is as old as computer networking. I started this website in late 1999 on Dave Winer’s Edit This Page service, where a text editor in the browser allowed simple website publishing for the first time.

Incremental searching of text files eliminates the need for databases or hierarchical structure. Text editors like Notational Velocity, nvAlt, SimpleNote or Notesy make searching multiple files as effortless as the brain’s recall from long-term memory. Just start typing associations or, for wider browsing, tags embedded in metadata, and unorganized large collections become useful. Just like the brain’s free recall of red objects or words that begin with the letter F. Incremental searching itself is not a new idea for text editors. What’s new is that we’re not seeing just a line of text, but rather multiline previews and instant access to each file found. Put incremental searching together with ubiquitous access, and the extended mind is enabled across time and space.
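A minimal sketch of what that kind of incremental search does (my own toy version, not any particular app’s implementation; the notes folder path is a placeholder): each additional keystroke narrows the set of matching notes and shows a short preview of each.

```python
from pathlib import Path

# Toy incremental search over a folder of plain-text notes, in the spirit of
# Notational Velocity-style apps. "~/notes" is a placeholder path.
NOTES_DIR = Path("~/notes").expanduser()

def matching_notes(query: str):
    """Return (filename, preview) pairs for notes whose name or text contains the query."""
    query = query.lower()
    results = []
    for path in sorted(NOTES_DIR.glob("*.txt")):
        text = path.read_text(errors="ignore")
        if query in path.name.lower() or query in text.lower():
            preview = " ".join(text.split())[:80]   # first ~80 characters as a preview
            results.append((path.name, preview))
    return results

# Each keystroke narrows the result list further.
for partial in ["p", "pro", "propofol"]:
    print(partial, "->", [name for name, _ in matching_notes(partial)])
```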

What seems to have happened is that the data recorded as external memory has finally broken free from its home in notebooks or on the PC and is resident on the net, where it can be accessed by many devices. My pocket notebook and set of GTD text lists are now a set of text files in the cloud. Instantly usable across platforms, small text files have once again become the unit of knowledge. Instant access to personal notebook knowledge via sync and search.