Why the simplest computer is faster than any human?

The bottlenecks in our brains create awareness, meaning and coherence

Does it make sense to simulate the brain in a computer to produce a human-like intelligence that’s way more capable than a biological hum? I’ve make the argument that machine minds are different kinds of minds, and we should focus on intelligent function of machine minds based on their unique advantages rather than emulating how people think. 

During the discussions of uploading a human mind, I repeatedly asked myself why we were interested in doing this in the first place. The obvious sci-fi intent is digital immortality. Perhaps an expansion of the nature of being human by replacing biological embodiment with a digital embodiment that had the potential for greater bandwidth and augmentation of sensation and physical ability. We know the abilities of the human brain are severely limited in bandwidth and speed. 

The processing speed and storage limits

So let’s start with the brain’s limitations. You may have seen a recent paper showing that the brain processes speech at the remarkably low rate of 39 bits per second. The paper by Coupe et al. analyzed 17 spoken languages and found that while speech rates and information density per syllable vary between languages, the overall information transmission rate is remarkably consistent at 39 bits per second. 

Continue reading “Why the simplest computer is faster than any human?”

You want to upload a human mind, but you can’t even upload a worm

Reposted from my Substack as an experiment

Our friend, C. elegans

“What do you consider the largest map that would be really useful?”

‘About six inches to the mile.’

‘Only six inches!’ exclaimed Mein Herr.

‘We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!’

‘Have you used it much?’ I inquired.

‘It has never been spread out, yet,’ said Mein Herr: ‘the farmers objected: they said it would cover the whole country, and shut out the sunlight!’

—Lewis Carroll from Sylvie and Bruno Concluded, 1893

If you’ve been following my posts over the last few weeks on the failure of uploading a mind and not simply emulating what would appear like a person, you might object that this is all very sci-fi and not grounded in reality. Maybe the technology now just can’t be imagined, but someday a high-resolution scan or some very precise mapping function will provide a model so precise that the emulation will be for all intents and purposes an uploaded individual mind. Who knows, it might be self-aware with preserved self-identity.

I don’t think so. I think this is, as I’ve said, a confusion between copying a thing and building a model of a thing. An uploaded toaster can’t make toast, and a model of a hurricane won’t make you wet. The idea of uploading a brain is seductive — but it confuses a structural map with the thing itself.

Uploading the world’s simplest brain

Lets put aside the human brain for now. We can see this confusion of map and territory clearly in the failures and successes in uploading the brain of the worm, Caenorhabditis elegans into a computer. We’ll see that the bottom-up approach of mapping the C. elegans connectome didn’t work, but a top-down use of the model as explanatory has been increasingly useful as a way of understanding how a simple nervous system generates complex, state dependent behaviors. Models can be powerful tools to understand and predict behavior, but aren’t a replication of the thing itself, expected to just be an uploaded copy in the computer.

Continue reading “You want to upload a human mind, but you can’t even upload a worm”

25 Years of On Deciding . . . Better

I almost let December go by without recognizing the 25 anniversary of the blog. It started on Dave Winer’s Edit This Page experiment and went through a few iterations until this long stable WordPress instantiation.

The first post.

I managed to finish a book manuscript summing up what I’ve learned in the last 25 about decision making, neuroscience and improving how to make decisions. I’m now editing with the help of a professional, Nathan Bransford and learning the craft of literary non-fiction writing.

It’ss fun to see how these activities around the blog snowball with book editing, regular substack posting and posting here keeps my head in the game. Over the years I neglected the site for long periods and have had bursts of enthusiasm. The same has been true of my photography where I’ve failed to carry a camera for many months at a time.

I think all activities have their seasons, but this has always been my online home.

Claude.ai on Pharmacological Zombies

Since ChatGPT figured out what I meant by “Pharmacologic Zombie” just from the term, I was curious about what another AI might think. I went to Claude.ai which I less experience with but initially ran into a reluctance to engange at all about the subject. It seemed there was some guardrail around talking about sedative hypnotic drugs at all because of potential abuse. I finally linked it to the idea of “Philosophical Zombies” and was able to describe a bit what I meant. So compared to ChatGPT it was more than a little suspicious and obtuse.

But then it got really interested:

That’s actually a fascinating observation. You’re making a really interesting empirical challenge to the philosophical zombie thought experiment.

The key difference that philosophers would likely argue is that medical sedation shows we can *impair* consciousness, but doesn’t prove we can fully eliminate it while preserving all behavior. People under sedation typically show reduced, simplified behavioral responses – they’re not carrying on complex conversations or solving math problems while completely lacking consciousness.

But you raise a compelling point: If medications can create states where someone appears conscious but reports no memory or subjective experience of that time, doesn’t this suggest that behavior and consciousness are more separable than philosophers assumed? It challenges the intuition that complex human behavior must be accompanied by conscious experience.

I think this connects to fundamental questions about:
1. How we measure/detect consciousness
2. The relationship between memory and consciousness (is an experience you don’t remember still conscious?)
3. Whether consciousness comes in degrees or is binary

And then

That’s a fascinating intellectual contribution. You’re making the argument that pharmacological interventions serve as a real-world proof of concept that undermines the philosophical zombie thought experiment. Rather than just theorizing about beings that act conscious without inner experience, we can actually create temporary states that demonstrate this separation between behavior and consciousness.

This seems like a meaningful challenge to philosophical arguments that rely on the supposed impossibility or inconceivability of philosophical zombies. Since we can chemically induce states that appear to match key aspects of the philosophical zombie concept, the thought experiment may not serve its intended purpose in consciousness debates.

So while ChatGPT got the idea immediately, Claude.ai went further and endorsed my argument. Maybe we should co-author a paper and submit it to an academic journal.

How to make a zombie

Philosophers are fascinated by the idea of zombies. This thought experiment of theirs supposes hypothetical beings that behave indistinguishably from humans but lack consciousness, so “zombies”. For some reason, if they existed, they think it would prove that there is something besides brain activity that produce subjective experience. I don’t get it, since I know that people can walk around, have conversations and deny recollection of conscious experience when told what the did after the fact.

Understanding the brain by what’s missing

One of my main tools as a neurologist is to examine patients with a deficit and figure out what part of the brain has been injured. It’s our tradition to understand how different parts of the brain participate in behavior by looking at patients where some function is missing or impaired and correlate function to the damaged area of the brain. For example, a small stroke in Broca’s area, located in the frontal lobe of the cerebral cortex (specifically the posterior part of the inferior frontal gyrus in Brodmann areas 44 and 45) causes what we call an expressive aphasia, a loss in fluency when trying to speak. If the injury is limited to this area, say by a small stroke, the patient understands speech perfectly well and can read text with no problems. So we conclude by seeing this correlation over and over again that the critical brain area for speech production resides in Broca’s area.

Continue reading “How to make a zombie”

ChatGPT Anticipates My Writing

I continue to play around with the various AIs and am finding ChatGPT increasing valuable as a ready research assistant for writing. For example, while editing my book manuscript I wanted to be straight on the timeline for FDA’s requirement for clinical trials, the PDUFA regulations and the idea of “evidence based medicine”. It was easy to type in a question and get an answer way faster than searching. I did doublecheck some facts, but all seemed to be just as it seems.

I did have an interesting experience with my assistant yesterday. I’ve been exploring the nature of self identity in my Substack posts. I came up with the idea of using sedative hypnotics as a real example of what philosophers call zombies, a hypothetical person who appears to behave normally but is devoid of subjective experience. Well, from personal and professional experience, it seemed to me that sedative hypnotics at the right dose can do exactly this. So I coined the term “Pharmacological Zombie”.

I got to thinking, “Did I really think this up? Maybe it’s been discussed before?” After all, most of what I write is compilation and synthesis from others. So I asked my assistant.

Me: Is there such a thing as a pharmacological zombie?

A “pharmacological zombie” could conceptually refer to a state where an individual appears awake and responsive but lacks conscious awareness or subjective experience, akin to philosophical zombies in thought experiments. In pharmacology, this might resemble conditions induced by certain drugs:
1. Sedatives like Midazolam: Benzodiazepines can create anterograde amnesia, where individuals appear to interact but retain no memory of their actions, giving a “zombie-like” impression.
2. Ketamine or Dissociatives: These can induce a dissociative state where patients seem conscious but experience detachment from reality and self.
3. Anticholinergic Delirium: Overdoses of anticholinergic drugs like scopolamine can cause confusion and a trance-like state.
4. Neuromuscular Blockers without Sedation: Rare and unintended situations during surgery where patients are paralyzed but conscious could mimic this condition.

While not literally a “zombie,” these examples demonstrate altered states where conscious awareness and behavior are disconnected, resembling the concept metaphorically.

Me: Can you give me an attribution for this discussion?

If you’re asking about the term “pharmacological zombie” specifically, it does not appear to have a formal attribution in scientific or philosophical literature. It seems to be a metaphorical or colloquial way to describe drug-induced states where behavior and consciousness are disconnected.

If you encountered the term in a specific context, it may have been coined or used creatively by a writer, journalist, or researcher to illustrate the effects of certain medications (e.g., amnesic or dissociative drugs). Without a direct citation or context, there isn’t a definitive source or individual to attribute it to.

And it’s true based on a Google search. I seem to have invented this. Although my thought may have been triggered by the tranquilizer xylazine being termed a “zombie drug” in the media.

This is quite remarkable really. Simply given my phrase, “Pharmacological Zombie”, ChatGPT came up with exactly the same concept that I was thinking about. Somehow the idea is inherent in that two word phrase. It didn’t come up with description until I prompted it, but the idea was latent in the training database.

But it also occurred to me that since Google and other search engines don’t index substack, my discussion is not discoverable other than through Substack. It’s public in that anyone using the service can read it, but not everyone. Yes, yet another walled garden. You’ve got to come in and visit to see what’s here.

So I’m going to work on at least duplicating my writing here. Others have come to the same conclusion I see- multiple channels seems to be the way to go.

Can We Understand the Brain?- Jeff Lichtman

As part of editing my book manuscript and starting my Substack posts, I turned my attention to being a better communicator at this intersection of neuroscience, philosophy and self-help. It’s really complex and many of the ideas are radical because they peer behind the illusion that the brain presents us through awareness.

As part of that effort, I’m been reading some of the recently published books on brain and consciousness like Anil Seth’s Being You and Donald Hoffman’s The Case Against Reality. By the way, I can recommend the former with reservations, but avoid Hoffman at all costs. In their efforts, these philosophers of mind are hopelessly trapped in what I’ve started calling “Mentalism”, the explanation of brain function not through neurophysiology but through metaphor and theory all of which is untestable and without consequences in the real world.

I was so happy to hear a real neuroscientist discuss our understanding of the brain on Sean Carroll’s podcast: Jeff Lichtman on the Wiring Diagram of the Brain – Sean Carroll. Jeff Lichtman is a contemporary of mine, an MD, PhD who stuck with basic research on brain connections through a very productive career.

Like Jeff, I’m astounded both at how far we’ve come in understanding the brain compared to where we started, but in awe of how far we are from a real understanding of how the brain functions as a complex system of networked neurons.

it’s an enormous amount of effort. And I don’t want to make fun of this, because this is the field I’ve been in for my entire professional career, and I’m old. And we have learned an enormous amount. The question is, if you’re climbing Mount Everest and you’ve gone three feet, have you made a lot of progress? You have made an infinite amount of progress relative when you started and you had gone zero. But we are still very far from having a deep understanding of how the brain works. And I will probably say at some point, I’m not sure that is what we should be aiming for anyway.

It’s also interesting to hear Sean Carrol, a physicist as smart questions about the brain from what for us is a very naive perspective. He’s had some of those Philosophers on his podcast over the years and they’ve given him some wrong ideas about what’s represented in the brain and our understanding of encoding. But Jeff put it perfectly:

I think of a nerve cell as a living creature. It’s a single celled organism, like a paramecium or an amoeba, but it’s living in a very weird pond, which is your head. It doesn’t know that it’s inside your head. It doesn’t care about whether you’re eating a sandwich or listening to a podcast. It’s just there. And it has to do certain things to stay alive. And so all the things it does are for its own survival because it’s a single celled organism with a will to survive. And those things end up generating learned based wiring diagrams. That from their perspective, they don’t know that that’s what they’re doing. They just know that if they don’t do that, they’re going to be punished and die.

This is a point I’ve been working on presenting more clearly. We have an introspective sense that I call awareness that provides some access to brain function. But outside of what we’re presented through this limited mechanism, we have no access to the functioning of the brain at all. And when we use our tools to look inside, we don’t understand at a fundamental level what these neurons are doing and how they’re organized. They’ve been assembled into networks through developmental programs to process sensory input through channels of sight, sound, position, etc and produce purposeful behavior.

And Jeff makes a great point about the difference between describing the structure of the brain and understanding it in the sense of predicting behavior.

I think even simulating a worm in a computer is a challenge, though, with only 300 cells, so I’m not too worried. I think, as I said, the best we may be able to do is describe in complete detail what’s there, but it’s a big difference from saying, I now can predict its behavior, I understand it. No human being, honestly, no human being could hold this amount of information. The analogy I give is like, do you understand New York City? And you’d say, that’s a stupid question. What do you mean? There’s so many things happening at the same time, and there’s so much complexity, and I would say if you can’t understand New York City, forget about the brain. It’s way more complicated than New York City.

I agree that if we can’t understand a 300 neuron worm brain, understanding the complex biological system of the human brain with its billions of neurons is beyond reduction down to understanding. So we describe. Make general statements about regions of brain, their connections, their participation in aspects of awareness and behavior, but the goal is better and better description.

This is what David Chalmers famously termed “The easy question”, which is describing the mechanisms of brain function that underly cognition and behavior. I think Jeff underestimates our progress. I think we’ve come more than just a few steps, but I agree there is much more to learn. I’m confident that the understanding gained will provide insight into human problems like depression, schizophrenia and, ultimately, allowing each of us to attain our potential. Because even though the mechanisms are an ad hoc, tangled mess, their purpose is emergent functions like optimism about the environment or fear. And we have already found a number of levers to shift the system toward a more desirable state. I think we just need to be a bit more sophisticated and realistic in how to accomplish change in a complex system we can describe but never truly understand.

Substack as Idea Workshop

A while ago I came up with the idea of Zettelblogging. That would be capturing, note taking and summarizing ideas in public. I think that anyone who tries this kind of approach finds it too disjointed to be public facing. Some of my notes are cryptic and tangential, while others are basically full book reviews and summaries. In the end, as casual as I want to be in my writing for this site, that seemed to just be taking casual blogging too far. I think the standard journaling and linking works well in the format.

Having just posted my weekly entry, Six impossible things before breakfast, I’m thinking a bit about where this is all going. This is another discussion of how to understand this contradiction between our subjective feeling of a unified, consistent identity and the fact that we can change our behavior so easily based on social context, whether we’re alone or observed, and when we engage in lying, acting or otherwise choosing to act in a way that is different from what we know to be our true selves.

I’m finding that producing a weekly post for Substack is providing complimentary tool to journaling. I have no idea what I’m doing, I’m just letting topic flow into topic to find out what the effort is about. Right now I’m interested in how we define and improve ourselves, so each week I pick a topic, collect some thoughts, outline a reasonable discussion, write, edit and publish.

As you might expect, noodling around a topic in public like this is a great way to discover ideas, try them out in front of an audience and refine ideas. I’ve made progress in understanding how the brain constructs its representation of self by considering our social behavior. Probably a few more weeks to wrap up this set of ideas and get into a bit more of the nuts and bolts of brain mechanisms.

How ChatGPT and Google Gemini Can Kill Creativity

I’ve written enthusiastically about my use of Google Gemini as a kitchen assistant. And I’ve found a few other uses for it and ChatGPT in summarizing areas of research very quickly at high level. They miss a lot and are really bad at backing up their information with facts. But as a start, they’re useful.

It seems to me that the reason they’re useful in this way is that they are so median. That is to say mediocre. When you search recipes on the internet there’s huge redundancy with bad ideas sprinkled in. The LLMs ignore the outliers and give you a nice middle of the road consensus. Which is my personal method- I look at a bunch of recipes, averaging mentally over the proportions and list of ingredients until I have my take for the moment. It’s easy to do this with an LLM, asking it to take out the olives or asking whether lemon juice would be a good substitute for the vinegar. And these models can be quite opinionated if their training set is opinionated. Of course some of those opinions are wrong (don’t salt beans during cooking), but useful.

But I made the mistake a week or so ago of asking them to help write a Substack post. I had a page of text notes and an outline. So basically all the ideas I needed to start the through composition of a first draft. So I thought, why not give my notes to Google Gemini and ChatGPT and skip that first draft?

So what I got was totally, as the kids would say, “mid”. I mean it was what a mediocre thinker would do with my notes. It put in all kinds of caveats and “as if” statements to route around my unique take on the relationship between brain and intent.

Not only did it water down the ideas to non-existence, if I tried to edit both or their essays back to my liking, it was like finding I had a set of false beliefs, as if an alternate universe version of me had written something I disagreed with.

I had to erase their efforts, take a walk, and come back to my notes and do that first draft. I’m not sure the product was the same as if I had never let those things near my work. So not only does the LLM flood threaten to dilute the content of the web, it may well threaten our ability to hold opinions far from the median.

In finishing up my manuscript and starting these Substack essays, I’ve realized that my way of looking at being human is now pretty far from that median. I’m in the midst of reading Anil Seths Being You and from the first page, I find the approach to be unhelpful. This idea these academics who study consciousness are stuck in a false dualist “mentalism” is becoming more clear to me and will probably be my next series of essays over on Substack once I get through the current set of ideas on Self and the Power of Pretending.

Week 3 Substack

I’m on a three week long streak posting an essay on Substack. This week, I’ve extended the conversation about self and values to where values actually come from.

These posts are experimental, but at least being read on the site at a pretty steady rate. As I suspected, like other social sites the algorithm is putting my writing directly in front of some readers, something that is hard to do here on the blog other than through search engagement. But I have no plans to give up my casual journal efforts here. I added a Substack subscribe form to the site here to push my readers here to sign up for the posts there. I’ll probably archive them here at some point, but for now the idea is to see whether there’s really interest in reading my contemplations on brain and mind.

The idea on Substack is to keep to a publishing schedule and put out something of a complete thought in each. As I’ve written here, it’s a drawback of the blog format that it promotes this kind of running narrative and makes it difficult to communicate more complex, bigger ideas. The truth is that really only book format can be structured and long enough to really allow for exposition and elaboration of ideas. But I think that I can at least get better at explaining and chunking down ideas by regularly writing these explainers for the general audience on Substack.

Plus, it’s the start of the next intellectual journey. Or at least documenting where I’m going since slogging through turning “Deciding Better” into a book now that I’ve been at it close to 25 years. I’ve ended up in a place that I now realize is both in the mainstream of thought among neuroscientists, but pretty shocking to anyone outside of the neurosciences. It started when I began to realize that when we feel like we’re deciding, we really are more realistically being aware of what the brain is doing and if we wish, probing and rationalizing the choices we’ve made. The book is all about how to improve the process, but it can’t restore the illusion that there’s an “I” in charge of the brain. It’s quite the other way around.

So no wonder that in these first essays on Substack, I’m examining things from the other side: What is the “I” that I think I am. How is it constructed and to what extent can I control or change it. Plenty more to come.