You want to upload a human mind, but you can’t even upload a worm

Reposted from my Substack as an experiment

Our friend, C. elegans

“What do you consider the largest map that would be really useful?”

‘About six inches to the mile.’

‘Only six inches!’ exclaimed Mein Herr.

‘We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!’

‘Have you used it much?’ I inquired.

‘It has never been spread out, yet,’ said Mein Herr: ‘the farmers objected: they said it would cover the whole country, and shut out the sunlight!’

—Lewis Carroll from Sylvie and Bruno Concluded, 1893

If you’ve been following my posts over the last few weeks on why uploading a mind fails, yielding at best an emulation of what would appear to be a person, you might object that this is all very sci-fi and not grounded in reality. Maybe the technology just can’t be imagined yet, but someday a high-resolution scan or some very precise mapping function will provide a model so exact that the emulation will be, for all intents and purposes, an uploaded individual mind. Who knows, it might be self-aware with preserved self-identity.

I don’t think so. I think this is, as I’ve said, a confusion between copying a thing and building a model of a thing. An uploaded toaster can’t make toast, and a model of a hurricane won’t make you wet. The idea of uploading a brain is seductive — but it confuses a structural map with the thing itself.

Uploading the world’s simplest brain

Let’s put aside the human brain for now. We can see this confusion of map and territory clearly in the failures and successes of uploading the brain of the worm Caenorhabditis elegans into a computer. We’ll see that the bottom-up approach of mapping the C. elegans connectome didn’t work, but a top-down use of the model as explanation has been increasingly useful as a way of understanding how a simple nervous system generates complex, state-dependent behaviors. Models can be powerful tools to understand and predict behavior, but they aren’t a replication of the thing itself, an uploaded copy simply living in the computer.

C. elegans is a transparent nematode, just 1 mm long, that lives in soil. It thrives in moist, decomposing environments like leaf litter that are rich in microbial food. It’s entirely possible I stepped on a few on my last hike here in Maryland, although at a millimeter in length, I never would have noticed.

C. elegans was chosen by Sydney Brenner in the 1960s as a model for nervous system development and behavior because of its small size, genetic tractability, and simplicity. The idea is that the human brain, really any mammalian nervous system, is too complicated to fully understand, so these simple invertebrate systems can serve as a test bed to work out theories and principles of synaptic connectivity and motor control.

A brief history

Biophysical modeling goes back to the roots of electrophysiology with Hodgkin and Huxley. In 1952, they used another invertebrate, the squid, to model the action potential, the neuron’s basic unit of excitability, with differential equations for voltage-dependent changes in conductance of ions across the membrane. They used the squid’s giant axon as a model just because it was so large that you could thread a wire inside and record across the axonal membrane.
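For readers who want the math, the heart of their model is a single current-balance equation, given here in its standard textbook form:

```latex
C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h \,(V - E_{\mathrm{Na}})
                    - \bar{g}_{\mathrm{K}}\, n^4 \,(V - E_{\mathrm{K}})
                    - \bar{g}_L \,(V - E_L) + I_{\mathrm{ext}}
```

Each gating variable (m, h, n) obeys its own first-order equation, dx/dt = α_x(V)(1 − x) − β_x(V)x, with voltage-dependent rate constants fitted to their squid axon recordings. Four coupled differential equations, and out comes the action potential.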

I was an undergraduate at Columbia in the mid-1970s, uptown at the medical school. Eric Kandel was doing his landmark work on learning in the sea slug, Aplysia. He chose Aplysia as a model system because of the simplicity of its nervous system, about 10,000 neurons that are large and individually identifiable in organized ganglia. He used Aplysia successfully to tease out mechanisms of synaptic plasticity that were impossible to study in mammalian brains at the time.

By the way, practical young man that I was, I was much more drawn to what was called psychophysics at the time. I was able to study photoreceptor responses in the isolated frog retina that were pretty much the same as the responses of a volunteer looking at lights in a dark lab. It was astounding to me that I could see the mechanism of vision in the isolated retina. I think this experience forever gave me the idea that neural mechanisms are expressed directly in the behavior of an organism.

By the time I was a graduate student in the early 1980s, the field had progressed to compartmental models of the neurons themselves, which mapped the complex shape of neurons, with their large cell bodies and tapering, branching dendrites, to study how the position and size of dendrites influence the signalling of synapses on the cell body versus at various distances along the dendritic tree. During my graduate work, a new faculty member joined us who was studying the lobster stomatogastric ganglion and how it functions as an autonomous neural pacemaker. The STG is a useful model of pattern generation and synaptic modulation because it’s a fully self-contained ganglion of 30 identifiable neurons, large enough to record from and fill with dyes to trace the connections.
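To give a flavor of what a compartmental model is, here’s a toy sketch, with invented parameters rather than any published model: just two passive compartments, soma and dendrite, coupled by an axial conductance. The same brief synaptic current produces a smaller somatic response when injected into the dendrite:

```python
# Toy two-compartment passive neuron (invented parameters, not a published
# model): soma and dendrite coupled by an axial conductance, integrated
# with simple Euler steps.
import numpy as np

dt = 0.01            # time step, ms
T = int(50 / dt)     # simulate 50 ms
C = 1.0              # membrane capacitance per compartment (arbitrary units)
g_leak = 0.1         # leak conductance
g_axial = 0.05       # axial coupling between the compartments

def run(inject_at_dendrite: bool) -> np.ndarray:
    v_soma, v_dend = 0.0, 0.0
    trace = np.zeros(T)
    for t in range(T):
        i_syn = 1.0 if 10 <= t * dt < 12 else 0.0   # brief synaptic current pulse
        i_s = 0.0 if inject_at_dendrite else i_syn
        i_d = i_syn if inject_at_dendrite else 0.0
        axial = g_axial * (v_dend - v_soma)          # current flowing soma-ward
        v_soma += dt / C * (-g_leak * v_soma + axial + i_s)
        v_dend += dt / C * (-g_leak * v_dend - axial + i_d)
        trace[t] = v_soma
    return trace

print("peak somatic response, input at soma:    ", run(False).max())
print("peak somatic response, input at dendrite:", run(True).max())
```

Real compartmental models chain hundreds of compartments along a reconstructed dendritic tree and add active conductances, but the principle is the same: geometry shapes what the soma sees.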

But it is the nervous system of C. elegans that has taken center stage in the effort to simulate a complete invertebrate nervous system. In 1986, this little nematode became the first organism, and for decades the only one, with a completely mapped connectome. That is to say, we have the full circuit diagram of its nervous system, reconstructed in 3D from serial-section electron microscopy.

The OpenWorm Project

It seems the perfect test case for upload and simulation in a computer. C. elegans has only 302 neurons. Remarkably, neurons are one-third of all the cells in this little creature, which is made of only about 1000 cells overall. The layout of its nervous system is completely determined, with the same number, position, and identity of neurons in every individual. They have no personality, right? You’d see them as little machines, perfect to upload into a computer.

For such a simple organism, it has a reasonably broad range of sensory input and motor behavior. Its movement is forward, reverse, or turning, guided mostly by chemosensation (looking for food or avoiding hazards), but it also senses temperature, touch, salt concentration, oxygen levels, and even light, although it has no eyes as such. So it can maneuver to optimal temperature and oxygen levels as it seeks out food. Besides locomotion, its basic behaviors are feeding and egg laying. Pretty basic overall, but subtle: its locomotion is adaptive, dependent on bodily state like hunger and on environmental conditions. It even shows simple associative learning and habituation.

By 2010, computers were becoming powerful enough to seriously contemplate simulating the whole animal in a computer. And since every one of these worms is the same, simulate one and you’re done. So OpenWorm was born. The idea was to assemble the accumulated tools into a working digital model, combining biophysical modelling of each neuron, connection weights from physiology, and the connectome.

OpenWorm never really got very far toward its ultimate goal of building a bottom-up model from biophysics and the connectome that reproduces C. elegans behavior. With available tools, the simulations could produce basic rhythmic undulations or simple forward movement, but only when connectivity parameters were tuned by hand. Even then, the simulations never approached any kind of goal-directed behavior. The limited intelligence of the worm, with its complex, adaptable behavior, never emerged from the model.
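To make concrete what “biophysics plus connectome” amounts to, here’s a toy sketch, not OpenWorm’s actual code, with a random stand-in for the wiring map and invented numbers throughout: each neuron is a leaky integrator, wired by a fixed matrix whose nonzero entries come from the anatomy. The signs and gains are exactly the free parameters that had to be tuned by hand:

```python
# Toy bottom-up simulation (not OpenWorm's code; the wiring matrix and all
# numbers are invented stand-ins): 302 leaky-integrator neurons wired by a
# fixed connectome. Anatomy says WHO connects to whom; the signs and
# strengths below are the free parameters that had to be hand-tuned.
import numpy as np

rng = np.random.default_rng(0)
N = 302                                        # C. elegans neuron count
wired = rng.random((N, N)) < 0.03              # stand-in for the anatomical map
W = wired * rng.normal(0.0, 1.5, (N, N))       # hand-tuned signs and gains

tau, dt = 10.0, 1.0                            # membrane time constant, step (ms)
x = rng.normal(0.0, 0.1, N)                    # neuron activations
trace = []
for _ in range(500):
    drive = np.tanh(W @ x)                     # summed, saturating synaptic input
    x += dt / tau * (-x + drive)               # leaky-integrator dynamics
    trace.append(x[:6].copy())                 # watch a few "motor" neurons

print(np.array(trace).shape)                   # (500, 6) activity traces
```

With lucky parameters you can coax rhythmic output from traces like these; goal-directed behavior never falls out of the wiring alone.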

In my previous discussions of brain uploading by mapping, I’ve already alluded to the many things a wiring map leaves out, but the OpenWorm exercise makes it clear that a deeper level of copying is needed to simulate emergent neural network behavior. There are gaps that prevent direct translation of the wiring diagram into a simulation.

The gaps

First, synaptic strength can be measured and modeled biophysically, but its magnitude isn’t a fixed parameter; it’s dynamic. Synaptic strength is state-dependent. That’s how a simple hardwired nervous system is able to support multiple behaviors without changing connections. To switch or modulate behavior requires changes in the strength of specific connections. We know that C. elegans uses modulators like circulating serotonin, dopamine, and neuropeptides to relay state information like hunger or distress.

Second, since behavior is context-dependent, the model also has to have context. The recent history of activity in the circuit modulates responsiveness, giving simple forms of learning and memory. There are likely longer-term synaptic plasticity mechanisms in the worm that fine-tune function over time based on feedback. It’s not like the fundamental role that activity plays in pruning and tuning the developing mammalian brain, but there are still important forms of synaptic plasticity that need to be built into the system for it to work.
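A minimal sketch of these two gaps, with illustrative numbers rather than measured worm physiology (the function and its parameters are invented for illustration): the effective strength of one and the same synapse depends both on a modulatory state variable, standing in for something like circulating serotonin, and on its own recent history:

```python
# Sketch of the two gaps (illustrative numbers, not worm physiology): the
# "same" synapse transmits differently depending on modulatory state and
# on its own recent use.
def synaptic_gain(base_weight: float, serotonin: float, recent_use: float) -> float:
    modulation = 1.0 + 0.8 * serotonin       # state, e.g. fed vs. hungry
    habituation = 1.0 / (1.0 + recent_use)   # depresses with repeated use
    return base_weight * modulation * habituation

w = 1.0
print(synaptic_gain(w, serotonin=0.0, recent_use=0.0))  # naive, unmodulated: 1.0
print(synaptic_gain(w, serotonin=1.0, recent_use=0.0))  # modulated state: 1.8
print(synaptic_gain(w, serotonin=1.0, recent_use=3.0))  # same state, habituated: 0.45
```

A connectome tells you the base weight exists; it tells you nothing about the other two factors.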

Let’s not call OpenWorm a failure, since it provided a useful exercise in understanding what causes behavior beyond biophysics and the connectome. Its lack of realistic behavior showed clearly and publicly that the map is not the territory; in this case, the map is the shape and connections of neurons.

The complex world of 302 neurons

In the years since OpenWorm started, the complex emergent behavior of even this fully specified small nervous system has become even more apparent, demonstrating why bottom-up simulation is limited. There are gap junctions controlled by cellular second-messenger systems that expand the wiring diagram beyond the connectome reconstructed from anatomy.

But there have been more surprises that illustrate just how hard it is to predict emergent behavior in a complex system from looking at the connections of the parts. When modern calcium imaging and optogenetic techniques are used to directly probe connectivity by activating neurons and recording the responses, the behavior of the network turns out to be emergent.

For example, we would expect that if one neuron is connected to the next one in the network by an excitatory synapse, stimulating the first would activate the second. However, in many cases you see neurons firing synchronously that are not directly connected, and direct connection does not guarantee sequential activation. When you think about the emergent behavior that needs to be embodied in the network, this makes sense. In order to move forward, you need to orchestrate sequential activity on both sides of the body. To move backward, a similar pattern may be needed but with a different network choreography, instantiated in the same set of neurons and connections. It is function that rules, not the rules of logic.

In a way, there are just too many free parameters to build a bottom-up model. But since you know the behavior you want to model, and since actual neuronal activity can be mapped during various behaviors, you can combine top-down model training with the predetermined connectome that has to instantiate the behavior. The goal shifts. You’re no longer uploading the structure of the neurons and connections into a computer as the model; you’re taking behavior and known activation patterns in circuits and tuning parameters to see how they emerge from the model.

This very idea of “uploading” is misleading. The brain isn’t a snapshot of physical connections, and it’s not a computer program. It’s a dynamic system that is state- and path-dependent. It’s not transferable like data or instructions.

The success of top-down modeling

Where investigators have been successful more recently is in starting with behaviors, including patterns of activity, and then using machine learning techniques to build models of neuronal behavior, very much like the LLMs we’ve become so familiar with. So you start at the top, with observable behaviors; for our worm, locomotion toward or away from a stimulus. Then you drop down a level to the neurons, connections, and activation patterns that can be observed during the behavior, tuning parameters with machine learning algorithms to get the system to behave as expected. It’s an emulation of worm behaviors built on a foundation of biophysics, connections, and neurochemistry. As the model gets better, we learn more about the underlying mechanisms responsible for the emergent behavior at the level of the worm. This is data-driven simulation, which may be less physically realistic than bottom-up simulation, but it reproduces behavior more accurately and is more relevant to understanding how the system works. It generates testable hypotheses.
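In code terms, the shift looks something like the cartoon below, with invented numbers and a deliberately crude optimizer, not any published pipeline: the connectome fixes which synapses exist, a recorded activity trace supplies the target, and the fitting loop tunes the allowed weights until the model reproduces the trace:

```python
# Cartoon of top-down fitting (not a published pipeline; all numbers are
# invented): the connectome fixes WHICH synapses exist, and a crude
# hill-climbing search, standing in for the machine learning step, tunes
# HOW STRONG they are so an output neuron reproduces a recorded rhythm.
import numpy as np

rng = np.random.default_rng(1)
N, T = 10, 100
mask = rng.random((N, N)) < 0.3                 # anatomy: allowed synapses
target = np.sin(np.linspace(0, 4 * np.pi, T))   # stand-in for recorded activity

def simulate(W: np.ndarray) -> np.ndarray:
    x = np.zeros(N)
    out = np.zeros(T)
    for t in range(T):
        kick = 1.0 if t == 0 else 0.0           # brief stimulus, then free-running
        x = np.tanh(W @ x + kick)
        out[t] = x[0]                           # the "recorded" neuron
    return out

def loss(W: np.ndarray) -> float:
    return float(np.mean((simulate(W) - target) ** 2))

W = mask * rng.normal(0.0, 0.5, (N, N))
best = loss(W)
for _ in range(3000):
    cand = W + mask * rng.normal(0.0, 0.05, (N, N))  # perturb allowed weights only
    if (trial := loss(cand)) < best:
        W, best = cand, trial
print(f"fit error after tuning: {best:.4f}")
```

The real work uses gradient-based training and far richer neuron models, but the logic is the same: structure constrains, behavior teaches.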

It seems to me that this approach is a broadly applicable way of understanding these complex biological systems, including our own brains. We’re far from having the full human connectome, but we’re gaining insights into the networked function of cortex, thalamus, amygdala, and basal ganglia in the expression of emergent behavior like depression, fear, language, and sensation. Bit by bit, we’re able to give a fuller account of the experience of being human, and we’re becoming able to emulate aspects of human behavior, but we’re far from a full emulation of a human brain. Not surprising, given how far we are from fully emulating even the 302-neuron nervous system of C. elegans. But consider how models like these might help personalize the diagnosis and treatment of psychiatric disease, allowing for the complex interaction of environment, state, behavior, personal history, and biological markers to predict drug effect.

How different these complex biological systems are from our human constructions! And somehow our machine learning models can begin to explain them to us, help us look under the hood at how things work. This is where we’re going.

25 Years of On Deciding . . . Better

I almost let December go by without recognizing the 25th anniversary of the blog. It started on Dave Winer’s Edit This Page experiment and went through a few iterations until this long-stable WordPress instantiation.

The first post.

I managed to finish a book manuscript summing up what I’ve learned in the last 25 years about decision making, neuroscience, and improving how we make decisions. I’m now editing with the help of a professional, Nathan Bransford, and learning the craft of literary non-fiction writing.

It’s fun to see how these activities snowball: book editing, regular Substack posting, and posting here keep my head in the game. Over the years I’ve neglected the site for long periods and had bursts of enthusiasm. The same has been true of my photography, where I’ve failed to carry a camera for months at a time.

I think all activities have their seasons, but this has always been my online home.

Claude.ai on Pharmacological Zombies

Since ChatGPT figured out what I meant by “Pharmacologic Zombie” just from the term, I was curious what another AI might think. I went to Claude.ai, which I have less experience with, but initially ran into a reluctance to engage with the subject at all. It seemed there was some guardrail around discussing sedative hypnotic drugs because of their potential for abuse. I finally linked it to the idea of “Philosophical Zombies” and was able to describe a bit of what I meant. So compared to ChatGPT, it was more than a little suspicious and obtuse.

But then it got really interested:

That’s actually a fascinating observation. You’re making a really interesting empirical challenge to the philosophical zombie thought experiment.

The key difference that philosophers would likely argue is that medical sedation shows we can *impair* consciousness, but doesn’t prove we can fully eliminate it while preserving all behavior. People under sedation typically show reduced, simplified behavioral responses – they’re not carrying on complex conversations or solving math problems while completely lacking consciousness.

But you raise a compelling point: If medications can create states where someone appears conscious but reports no memory or subjective experience of that time, doesn’t this suggest that behavior and consciousness are more separable than philosophers assumed? It challenges the intuition that complex human behavior must be accompanied by conscious experience.

I think this connects to fundamental questions about:
1. How we measure/detect consciousness
2. The relationship between memory and consciousness (is an experience you don’t remember still conscious?)
3. Whether consciousness comes in degrees or is binary

And then

That’s a fascinating intellectual contribution. You’re making the argument that pharmacological interventions serve as a real-world proof of concept that undermines the philosophical zombie thought experiment. Rather than just theorizing about beings that act conscious without inner experience, we can actually create temporary states that demonstrate this separation between behavior and consciousness.

This seems like a meaningful challenge to philosophical arguments that rely on the supposed impossibility or inconceivability of philosophical zombies. Since we can chemically induce states that appear to match key aspects of the philosophical zombie concept, the thought experiment may not serve its intended purpose in consciousness debates.

So while ChatGPT got the idea immediately, Claude.ai went further and endorsed my argument. Maybe we should co-author a paper and submit it to an academic journal.

How to make a zombie

Philosophers are fascinated by the idea of zombies. This thought experiment of theirs supposes hypothetical beings that behave indistinguishably from humans but lack consciousness, hence “zombies”. For some reason, they think that if such beings could exist, it would prove that there is something besides brain activity that produces subjective experience. I don’t get it, since I know that people can walk around, have conversations, and deny any recollection of conscious experience when told what they did after the fact.

Understanding the brain by what’s missing

One of my main tools as a neurologist is to examine patients with a deficit and figure out what part of the brain has been injured. It’s our tradition to understand how different parts of the brain participate in behavior by looking at patients where some function is missing or impaired and correlating that function to the damaged area of the brain. For example, a small stroke in Broca’s area, located in the frontal lobe of the cerebral cortex (specifically the posterior part of the inferior frontal gyrus in Brodmann areas 44 and 45), causes what we call an expressive aphasia, a loss of fluency when trying to speak. If the injury is limited to this area, say by a small stroke, the patient understands speech perfectly well and can read text with no problems. So we conclude, by seeing this correlation over and over again, that the critical brain area for speech production resides in Broca’s area.

Of course, that’s not to say that the ability to produce speech is represented only there, like some computer subroutine. The content of speech is fed through Broca’s area from a wide range of other areas that know about the world. The decision to speak is triggered from the prefrontal cortex, particularly the dorsolateral prefrontal cortex (DLPFC), which is associated with higher-order executive functions such as decision-making, planning, and goal-setting. A lesion in the prefrontal cortex causes apathy and lack of initiative, as was seen in patients who had the psychosurgery of prefrontal lobotomy in the 1940s. The surgery was replaced by a pharmacological dampening of the broader dopamine control system with antipsychotics.

The localization of consciousness

We now know that maintenance of consciousness is located in one particular brain network consisting of the cerebral cortex and the more deeply located structure called the thalamus. Now we need to be careful to separate this network, which controls the level of consciousness, from the mechanisms that provide the content of consciousness, the particular sensory channel being activated for conscious awareness. The level of consciousness is how conscious the person is, ranging from asleep, to comatose, to wide awake and attending to some aspect of the sensory environment.

While there are brain lesions with profound effects on the level of consciousness, we also have an array of drugs that we use to alter the level of consciousness for medical procedures. These drugs are quite capable of creating the zombie the philosophers are always hypothesizing, that is to say, a person who looks awake and is behaving as if conscious but lacks awareness of their actions.

There’s actually a choice of drugs to create zombies, all of which activate the GABA inhibitory neurotransmitter system in one way or another. Among them are alcohol, gamma-hydroxybutyrate (GHB), benzodiazepines (like Valium, midazolam, and many others used for anxiety or insomnia), and general anesthetics, both inhaled (like halothane) and injectable (like propofol).

Selective effects of propofol on consciousness

In the neuroscience literature on level of consciousness, you’ll see the intravenous anesthetic propofol studied most commonly. That’s a matter of convenience and suitability. It’s easy to use infusions in animal and human studies, the dose is easily controlled by rate of infusion, and the effects are very rapid, both coming on and wearing off.

The effects of propofol on the cerebral cortex are most easily seen by EEG, a recording of voltage differences at the scalp which reflects the electrical activity of the neurons under the electrodes as conducted through the skull and scalp. In an awake person, the electrical waves are chaotic and fast, reflecting all of the fluctuating activity across the cortex as sensory information comes in, is relayed to association areas, and motor activity is initiated. Even though our awareness is limited to one channel at a time through attentional systems, there’s activity across all of the systems, and they are talking to each other.

Start a propofol infusion and the activity starts to slow. EEG analysis shows a drop in frequency across the spectrum. With enough propofol, we can induce a coma so profound that the signal becomes very nearly flat. We do this clinically at times to treat brain trauma and uncontrollable seizures.
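That spectral slowing is straightforward to quantify. Here’s a sketch using synthetic signals, not patient data, with invented frequencies and amplitudes, showing the standard move: estimate the power spectrum with Welch’s method and compare how much power sits in the slow delta band:

```python
# Sketch of quantifying EEG "slowing" (synthetic signals, not patient data):
# estimate power spectra with Welch's method and compare the fraction of
# power below ~4 Hz (the delta band) in awake-like vs. sedated-like traces.
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)              # 30 seconds of "EEG"
rng = np.random.default_rng(2)

awake = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)         # alpha + noise
sedated = 3 * np.sin(2 * np.pi * 1.5 * t) + rng.normal(0, 0.5, t.size)  # delta-dominant

for name, sig in [("awake", awake), ("sedated", sedated)]:
    f, pxx = welch(sig, fs=fs, nperseg=fs * 4)
    delta_fraction = pxx[f < 4].sum() / pxx.sum()
    print(f"{name}: {100 * delta_fraction:.0f}% of power below 4 Hz")
```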

Zombies are an in-between state where awareness is lost

An awake person is interesting to interact with, while someone in profound coma isn’t so engaging. But it’s the in-between zone where we create a zombie. If you’ve ever had general anesthesia, whether with propofol or an inhalation anesthetic, you’ve had the unique experience of having your mind suddenly switched off and then back on again in what seems subjectively like no time passing. Even though hours may have elapsed on the clock in the operating room, one second they’re wheeling you in, the next second you awake in the recovery room. It’s a disturbing interruption of self that doesn’t happen when you’re drowsy or asleep.

So yes, many of us can subjectively confirm that these drugs turn consciousness off. You have no experience of anything during that time. The EEG is slowed, but the cortex continues its business without awareness. In fact, most electrical recordings from neurons in the lab are done on anesthetized animals. I did that during my PhD studies. It turns out that light anesthesia has very little effect on information flow through the visual system or the autonomic control system. Hubel and Wiesel’s pioneering recordings from the visual system, where they found edge-detection neurons, cortical columns, and surround inhibition, were all done in anesthetized animals. True, spontaneous behavior disappears so it can’t be studied, but most brain circuits function pretty normally, well enough that their basic characteristics can be studied.

Behavior during sedation without subjective awareness = Zombie!

But you’ll object that the anesthetized person, even though their cortex continues processing sensory signals, is not a zombie since there’s no behavior. Well, at just the right level of infusion, a level often called “twilight sleep” by the medical profession, but more appropriately just “sedation”, you can ask the patient to perform simple tasks like squeezing your hand or giving short answers to questions. That much of the cortical processing for input and output is working. If sedation gets too light, you get the problem that spontaneous behavior returns while the patient is still not conscious. They’ll try to get off the procedure table or at least move around to get comfortable. Not good during a colonoscopy. The frontal lobe system that triggers behavior is active enough to try to get out of bed, but the thalamo-cortical network for awareness and attention is selectively turned off by the propofol infusion.

Unfortunately, this state of being unconscious but behaving is not uncommon in the real world when alcohol, benzodiazepines, or GHB are circulating in the blood and activating the brain’s GABA system. It’s not uncommon for people to drink to excess, take pills, or even be slipped a dose of a powerful sedative like GHB. They’ll continue to act as if they are awake but, just like in the state of anesthesia or sedation, have a gap in the continuity of their awareness, suggesting that they were behaving but not aware. Clearly some supervisory, attentional mechanisms are active when the drinker gets a ride home from the bar and awakens with a gap. You tell the drinker how much fun they had last night and they recall none of it.

Memory is consciousness is self identity

You may realize that we’ve ended up conflating continuous awareness with memory of awareness. Since the subjective report relies on recall, the two can’t be untangled. And of course, knowing who you are, that you’re the same person this morning who went to sleep last night, is dependent on memory.

Actually, turning off memory storage is another way to create a zombie pharmacologically. But as I’ll argue in the next posts, much of our day passes in the zombie state. Most of the time our brains attend to controlling behavior, processing sensory input, and responding to the environment, but without awareness of self. Most of the time, we don’t need to be anything other than a zombie. It feels strange when self-awareness is gone because of external causes like sedation, not when we disengage the mechanism ourselves.

ChatGPT Anticipates My Writing

I continue to play around with the various AIs and am finding ChatGPT increasingly valuable as a ready research assistant for writing. For example, while editing my book manuscript I wanted to be straight on the timeline for the FDA’s requirement for clinical trials, the PDUFA regulations, and the idea of “evidence based medicine”. It was easy to type in a question and get an answer far faster than searching. I did double-check some facts, but everything seemed to be in order.

I did have an interesting experience with my assistant yesterday. I’ve been exploring the nature of self identity in my Substack posts. I came up with the idea of using sedative hypnotics as a real example of what philosophers call zombies, a hypothetical person who appears to behave normally but is devoid of subjective experience. Well, from personal and professional experience, it seemed to me that sedative hypnotics at the right dose can do exactly this. So I coined the term “Pharmacological Zombie”.

I got to thinking, “Did I really think this up? Maybe it’s been discussed before?” After all, most of what I write is compilation and synthesis from others. So I asked my assistant.

Me: Is there such a thing as a pharmacological zombie?

A “pharmacological zombie” could conceptually refer to a state where an individual appears awake and responsive but lacks conscious awareness or subjective experience, akin to philosophical zombies in thought experiments. In pharmacology, this might resemble conditions induced by certain drugs:
1. Sedatives like Midazolam: Benzodiazepines can create anterograde amnesia, where individuals appear to interact but retain no memory of their actions, giving a “zombie-like” impression.
2. Ketamine or Dissociatives: These can induce a dissociative state where patients seem conscious but experience detachment from reality and self.
3. Anticholinergic Delirium: Overdoses of anticholinergic drugs like scopolamine can cause confusion and a trance-like state.
4. Neuromuscular Blockers without Sedation: Rare and unintended situations during surgery where patients are paralyzed but conscious could mimic this condition.

While not literally a “zombie,” these examples demonstrate altered states where conscious awareness and behavior are disconnected, resembling the concept metaphorically.

Me: Can you give me an attribution for this discussion?

If you’re asking about the term “pharmacological zombie” specifically, it does not appear to have a formal attribution in scientific or philosophical literature. It seems to be a metaphorical or colloquial way to describe drug-induced states where behavior and consciousness are disconnected.

If you encountered the term in a specific context, it may have been coined or used creatively by a writer, journalist, or researcher to illustrate the effects of certain medications (e.g., amnesic or dissociative drugs). Without a direct citation or context, there isn’t a definitive source or individual to attribute it to.

And it’s true based on a Google search. I seem to have invented this. Although my thought may have been triggered by the tranquilizer xylazine being termed a “zombie drug” in the media.

This is quite remarkable, really. Simply given my phrase, “Pharmacological Zombie”, ChatGPT came up with exactly the same concept I was thinking about. Somehow the idea is inherent in that two-word phrase. It didn’t come up with the description until I prompted it, but the idea was latent in the training database.

But it also occurred to me that since Google and other search engines don’t index Substack, my discussion is not discoverable except through Substack itself. It’s public in the sense that anyone using the service can read it, but it’s not open to everyone. Yes, yet another walled garden. You’ve got to come in and visit to see what’s here.

So I’m going to work on at least duplicating my writing here. Others have come to the same conclusion, I see: multiple channels seem to be the way to go.

Can We Understand the Brain? – Jeff Lichtman

As part of editing my book manuscript and starting my Substack posts, I turned my attention to being a better communicator at this intersection of neuroscience, philosophy and self-help. It’s really complex and many of the ideas are radical because they peer behind the illusion that the brain presents us through awareness.

As part of that effort, I’ve been reading some of the recently published books on brain and consciousness, like Anil Seth’s Being You and Donald Hoffman’s The Case Against Reality. By the way, I can recommend the former with reservations, but avoid Hoffman at all costs. In their efforts, these philosophers of mind are hopelessly trapped in what I’ve started calling “Mentalism”, the explanation of brain function not through neurophysiology but through metaphor and theory, all of it untestable and without consequences in the real world.

I was so happy to hear a real neuroscientist discuss our understanding of the brain on Sean Carroll’s podcast: Jeff Lichtman on the Wiring Diagram of the Brain – Sean Carroll. Jeff Lichtman is a contemporary of mine, an MD, PhD who stuck with basic research on brain connections through a very productive career.

Like Jeff, I’m astounded at how far we’ve come in understanding the brain compared to where we started, but in awe of how far we are from a real understanding of how the brain functions as a complex system of networked neurons.

it’s an enormous amount of effort. And I don’t want to make fun of this, because this is the field I’ve been in for my entire professional career, and I’m old. And we have learned an enormous amount. The question is, if you’re climbing Mount Everest and you’ve gone three feet, have you made a lot of progress? You have made an infinite amount of progress relative when you started and you had gone zero. But we are still very far from having a deep understanding of how the brain works. And I will probably say at some point, I’m not sure that is what we should be aiming for anyway.

It’s also interesting to hear Sean Carroll, a physicist, ask smart questions about the brain from what is, for us, a very naive perspective. He’s had some of those philosophers on his podcast over the years, and they’ve given him some wrong ideas about what’s represented in the brain and our understanding of encoding. But Jeff put it perfectly:

I think of a nerve cell as a living creature. It’s a single celled organism, like a paramecium or an amoeba, but it’s living in a very weird pond, which is your head. It doesn’t know that it’s inside your head. It doesn’t care about whether you’re eating a sandwich or listening to a podcast. It’s just there. And it has to do certain things to stay alive. And so all the things it does are for its own survival because it’s a single celled organism with a will to survive. And those things end up generating learned based wiring diagrams. That from their perspective, they don’t know that that’s what they’re doing. They just know that if they don’t do that, they’re going to be punished and die.

This is a point I’ve been working on presenting more clearly. We have an introspective sense that I call awareness that provides some access to brain function. But outside of what we’re presented through this limited mechanism, we have no access to the functioning of the brain at all. And when we use our tools to look inside, we don’t understand at a fundamental level what these neurons are doing and how they’re organized. They’ve been assembled into networks through developmental programs to process sensory input through channels of sight, sound, position, etc., and to produce purposeful behavior.

And Jeff makes a great point about the difference between describing the structure of the brain and understanding it in the sense of predicting behavior.

I think even simulating a worm in a computer is a challenge, though, with only 300 cells, so I’m not too worried. I think, as I said, the best we may be able to do is describe in complete detail what’s there, but it’s a big difference from saying, I now can predict its behavior, I understand it. No human being, honestly, no human being could hold this amount of information. The analogy I give is like, do you understand New York City? And you’d say, that’s a stupid question. What do you mean? There’s so many things happening at the same time, and there’s so much complexity, and I would say if you can’t understand New York City, forget about the brain. It’s way more complicated than New York City.

I agree that if we can’t understand a 300-neuron worm brain, understanding the complex biological system of the human brain with its billions of neurons is beyond reduction down to understanding. So we describe: make general statements about regions of the brain, their connections, their participation in aspects of awareness and behavior. The goal is better and better description.

This is what David Chalmers famously termed the “easy problem”: describing the mechanisms of brain function that underlie cognition and behavior. I think Jeff underestimates our progress; we’ve come more than just a few steps, but I agree there is much more to learn. I’m confident that the understanding gained will provide insight into human problems like depression and schizophrenia, ultimately allowing each of us to attain our potential. Because even though the mechanisms are an ad hoc, tangled mess, their purpose is emergent functions like optimism about the environment or fear. And we have already found a number of levers to shift the system toward a more desirable state. I think we just need to be a bit more sophisticated and realistic in how to accomplish change in a complex system we can describe but never truly understand.

Substack as Idea Workshop

A while ago I came up with the idea of Zettelblogging. That would be capturing, note taking and summarizing ideas in public. I think that anyone who tries this kind of approach finds it too disjointed to be public facing. Some of my notes are cryptic and tangential, while others are basically full book reviews and summaries. In the end, as casual as I want to be in my writing for this site, that seemed to just be taking casual blogging too far. I think the standard journaling and linking works well in the format.

Having just posted my weekly entry, Six impossible things before breakfast, I’m thinking a bit about where this is all going. This is another discussion of how to understand this contradiction between our subjective feeling of a unified, consistent identity and the fact that we can change our behavior so easily based on social context, whether we’re alone or observed, and when we engage in lying, acting or otherwise choosing to act in a way that is different from what we know to be our true selves.

I’m finding that producing a weekly post for Substack provides a complementary tool to journaling. I have no idea what I’m doing; I’m just letting topic flow into topic to find out what the effort is about. Right now I’m interested in how we define and improve ourselves, so each week I pick a topic, collect some thoughts, outline a reasonable discussion, write, edit, and publish.

As you might expect, noodling around a topic in public like this is a great way to discover ideas, try them out in front of an audience, and refine them. I’ve made progress in understanding how the brain constructs its representation of self by considering our social behavior. Probably a few more weeks to wrap up this set of ideas and get into a bit more of the nuts and bolts of brain mechanisms.

How ChatGPT and Google Gemini Can Kill Creativity

I’ve written enthusiastically about my use of Google Gemini as a kitchen assistant. And I’ve found a few other uses for it and ChatGPT in summarizing areas of research very quickly at a high level. They miss a lot and are really bad at backing up their information with facts. But as a start, they’re useful.

It seems to me that the reason they’re useful in this way is that they are so median. That is to say, mediocre. When you search recipes on the internet, there’s huge redundancy with bad ideas sprinkled in. The LLMs ignore the outliers and give you a nice middle-of-the-road consensus. Which is my personal method: I look at a bunch of recipes, mentally averaging over the proportions and lists of ingredients until I have my take for the moment. It’s easy to do this with an LLM, asking it to take out the olives or whether lemon juice would be a good substitute for the vinegar. And these models can be quite opinionated if their training set is opinionated. Of course some of those opinions are wrong (don’t salt beans during cooking), but useful.

But I made the mistake a week or so ago of asking them to help write a Substack post. I had a page of text notes and an outline, so basically all the ideas I needed to start composing a first draft. I thought, why not give my notes to Google Gemini and ChatGPT and skip that first draft?

So what I got was totally, as the kids would say, “mid”. I mean it was what a mediocre thinker would do with my notes. It put in all kinds of caveats and “as if” statements to route around my unique take on the relationship between brain and intent.

Not only did it water down the ideas to non-existence, but when I tried to edit either of their essays back to my liking, it was like discovering I held a set of false beliefs, as if an alternate-universe version of me had written something I disagreed with.

I had to erase their efforts, take a walk, and come back to my notes and do that first draft. I’m not sure the product was the same as if I had never let those things near my work. So not only does the LLM flood threaten to dilute the content of the web, it may well threaten our ability to hold opinions far from the median.

In finishing up my manuscript and starting these Substack essays, I’ve realized that my way of looking at being human is now pretty far from that median. I’m in the midst of reading Anil Seth’s Being You, and from the first page I find the approach unhelpful. The idea that these academics who study consciousness are stuck in a false dualist “mentalism” is becoming clearer to me and will probably be my next series of essays over on Substack once I get through the current set of ideas on Self and the Power of Pretending.

Week 3 Substack

I’m on a three-week streak of posting an essay on Substack. This week, I’ve extended the conversation about self and values to where values actually come from.

These posts are experimental, but they’re at least being read on the site at a pretty steady rate. As I suspected, like other social sites, the algorithm is putting my writing directly in front of some readers, something that is hard to do here on the blog other than through search engagement. But I have no plans to give up my casual journal efforts here. I added a Substack subscribe form to this site to push my readers to sign up for the posts there. I’ll probably archive them here at some point, but for now the idea is to see whether there’s really interest in reading my contemplations on brain and mind.

The idea on Substack is to keep to a publishing schedule and put out something of a complete thought in each post. As I’ve written here, it’s a drawback of the blog format that it promotes this kind of running narrative and makes it difficult to communicate more complex, bigger ideas. The truth is that only the book format can be structured and long enough to allow for exposition and elaboration of ideas. But I think I can at least get better at explaining and chunking down ideas by regularly writing these explainers for the general audience on Substack.

Plus, it’s the start of the next intellectual journey, or at least a way of documenting where I’m going after slogging through turning “Deciding Better” into a book, now that I’ve been at it close to 25 years. I’ve ended up in a place that I now realize is both in the mainstream of thought among neuroscientists and pretty shocking to anyone outside the neurosciences. It started when I began to realize that when we feel like we’re deciding, we are really just becoming aware of what the brain is doing and, if we wish, probing and rationalizing the choices we’ve made. The book is all about how to improve the process, but it can’t restore the illusion that there’s an “I” in charge of the brain. It’s quite the other way around.

So no wonder that in these first essays on Substack, I’m examining things from the other side: What is the “I” that I think I am? How is it constructed, and to what extent can I control or change it? Plenty more to come.

In Suspense

I appreciate the comment on my post-election thoughts from Dave at Nice Marmot, saying:

What James suggests smacks of “both sides” equivalency.

This is something I’ll freely admit to, but only insofar as I’m trying to meet people where they are. My life is very oddly divided between members of the two sides. In fact, today, among the Trump supporters, the vibe was very much, “We won the Super Bowl. Season’s over and we’re ready for the next fight”, which just reinforces my hypothesis that so much of this division is driven by group identification and cheering for one side over the other.

I know one thing from personal experience. It’s fun to deflate false views with logic and fact, but it doesn’t make you any friends. In fact, I generally see it harden positions, since it ends up feeling like a personal attack. I still remember getting upset at someone during the COVID pandemic who was complaining to a group that the truth about ivermectin was being suppressed by the authorities. I guess I took it personally as a physician, but to this day my interactions with that person are not the same, even though he was only repeating what he heard in his environment.

I had suspected that the outcome we got was more likely than the polls were showing, and I defended Nate Silver’s complaint about pollsters “herding” their results to suppress outliers, which should be more common than reported just by chance. Mostly, it was a combination of the reaction to social change I kept hearing about and the economy of the Biden presidency with its inflation. I don’t think the achievements were ever really communicated. Per this nice summary: Bidenomics Is Starting to Transform America. Why Has No One Noticed? – The New Yorker. In fact:

“By a steep margin, Americans did not approve of Biden’s presidency. By an even steeper margin they thought the country was heading in the wrong direction. They were demanding a new direction that Democrats never figured out how to offer.” Washington Post

So now we move from uncertainty to suspense. I’m following Dave’s planned move up north and wondering whether he may be part of a growing trend of regional migration in which we cluster by economics, education, and social attitudes. While I had expected demographic shifts to swamp regional divides, it seems our non-proportional systems of governance are also promoting this kind of localization.