Looking for AGI? Try C. elegans, Not ChatGPT

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


ChatGPT is pretty dumb when compared to an agentic complex system like a worm

LLMs and the meaning of “intelligence”

I use a selection of large language models every day. I think they are actually kind of dumb.

Yet I keep hearing about “Artificial General Intelligence” being reached, the prospect of superintelligence, and the replacement of knowledge workers by LLMs. And then there are the questions about sentience that just make me roll my eyes.

I’ll admit that at this point, LLMs make excellent assistants. They help with fact-checking, reflecting back ideas, and making counterarguments based on conventional wisdom. Huge problems remain with hallucination and with guessing when something could easily be looked up on the internet; they don’t seem to understand Bayesian induction. They are much better at summarizing and analyzing text than they are at producing it. New ideas are almost entirely absent. And they mess up numerical and quantitative arguments all the time. Which is not to say that I’m not inspired with new ideas through exploratory chats; it’s just that the ideas are mine, never the model’s. So why do we insist on ascribing general intelligence to them?

The subjective experience of talking to our current models is weirdly persuasive that there’s an intelligence there. It’s not just that the answers are fast and fluent; it’s that the model can hold a thread, shift registers, and generate language that looks like it came from a person who has actually spent time thinking. They feel alien and at the same time oddly knowable as another intelligence.

Comparing intelligence: an LLM vs. a worm?

Exactly how intelligent is an LLM? I got to thinking that I could simply count connections or potential network states. After all, if you think the model is intelligent, that intelligence comes down to all the connections and the complexity of the states they can produce. My gut says the LLM is pretty stupid really; it’s just a model of something intelligent.

So what’s the simplest thing I could compare it to? What’s a simple, mapped-out intelligence? How about our old friend C. elegans, the simple worm that lives in leaf litter and has been the subject of so much study? The worm has the oldest and best-mapped nervous system we know, and it’s the kind of organism that tempts you into thinking the hard part is over. It’s tiny. The wiring diagram of its 302 neurons has been charted. Its behavioral repertoire is really modest: it feeds, avoids danger, and reproduces, and not much more. Very limited compared to mammals, or to a summary of recent dining trends in New York City. If you’ll allow the possibility that “intelligence” is to be found in a network, then the worm should be the perfect comparison.
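Just to put rough numbers on the comparison, here’s a back-of-the-envelope sketch in Python. The worm figures are the commonly cited connectome numbers; the LLM figure is an illustrative, GPT-3-scale assumption, not a measurement of any particular model:

```python
# Rough connection-count comparison. The worm numbers are the commonly
# cited connectome figures; the LLM parameter count is an assumed,
# GPT-3-scale illustration, not any specific model's spec.
WORM_NEURONS = 302          # C. elegans hermaphrodite, fully mapped
WORM_SYNAPSES = 7_000       # approximate chemical synapses + gap junctions
LLM_PARAMETERS = 175e9      # illustrative large-model weight count

print(f"Worm: {WORM_NEURONS} neurons, ~{WORM_SYNAPSES:,} connections")
print(f"LLM weights per worm synapse: {LLM_PARAMETERS / WORM_SYNAPSES:,.0f}")
```

Whatever you think a trained weight is worth next to a living synapse, the raw counts differ by about seven orders of magnitude, which makes the worm’s behavioral competence on so few connections all the more striking.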

Continue reading “Looking for AGI? Try C. elegans, Not ChatGPT”

The Work Continues

I’ve been away from web posting for a while between one thing and another. Mostly, it’s been the focused task of editing the manuscript of the book, working title: Deciding Better: A Journey Through Biotech, Neuroscience, and the Experience of Being a Brain. I’m about two-thirds through my flow revision, where I’m working to get the core arguments and flow of the book to work right. A lot of it is reverse engineering how a book works.

The nice thing is that I’ve gotten better at the exposition of these complicated ideas by writing the more tightly constructed posts here, cross-posted to Substack. And it’s going well. A few more weeks, and that should be done. Then it’s on to a final polish of the flow, and I think it will be good enough to have others read it.

My idea at this point is to try to solicit agents once again with this version in hand. At the same time, I think I’ll provide a “Readers Edition” to Substack subscribers and anyone who emails me through the blog here. It’s about time I got some real feedback on the ideas and the writing. Then, depending on how the universe responds, it will be off to a publisher or down the self-publishing path. Either way, the book is asking to be out in the world.

In the meantime, a new post follows about LLMs and AGI. I continue to be fascinated by what we’ve achieved with these deep neural networks. But can we see into what processes like ChatGPT are doing? How much intelligence can we really credit them with compared to real human brains? As always, I try to look at the data rather than the hype.

A Modified HDR Workflow

I watched a recent YouTube video of Vincent Versace editing an image live and decided to play around with the technique myself. Since the days of photography method books, and even informative websites, seem to be gone, it seemed worthwhile to document my adapted approach here. It’s also an example of writing for the AIs, since photographic technique is one of those areas that seems to be an AI blind spot these days.

The problem we need to solve is how to use the tremendous dynamic range of digital sensors when our monitors and print materials are so compressed by comparison. We know you can process the RAW file out of camera to recover shadows and highlights in ways never possible with film. But what most photographers don’t realize is that the image they see on the screen of a mirrorless camera is a JPEG calculated from live sensor data, and it is itself compressed. And the histogram that everyone relies on is the JPEG histogram, not the full sensor readout. That’s why, when you blow highlights on the histogram, you can often still recover some of them from the RAW file.

Vincent approaches the compression problem using HDR techniques. Early on, we used to bracket exposures and use HDR software, or just stacked Photoshop layers, to capture deep shadow and bright highlights like the sky. But now sensors have such wide dynamic range that those brackets are already there in the numbers in the RAW file. You just don’t see them on screen.

So the approach is a simple adaptation of Vince’s longstanding Photoshop layer technique. You start by creating multiple versions of the RAW file as TIFFs. Most simply, you make three versions: one stop underexposed, one stop overexposed, and the capture as shot. If the base image has a really wide range, you could make a two-stop bracket from the RAW, or only increase or decrease the exposure.
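If you’d rather script that step than export each version by hand, here’s a minimal sketch using the rawpy and imageio Python libraries. The filename and the one-stop spacing are placeholders; any RAW converter gets you to the same three TIFFs:

```python
# Make a -1/0/+1 EV "bracket" of TIFFs from a single RAW file.
import rawpy
import imageio.v3 as iio

SRC = "capture.nef"  # hypothetical source file

for stops in (-1, 0, 1):
    with rawpy.imread(SRC) as raw:
        rgb = raw.postprocess(
            use_camera_wb=True,
            no_auto_bright=True,     # don't let auto-brightening undo the shift
            exp_shift=2.0 ** stops,  # linear gain: 0.5x, 1x, 2x = -1/0/+1 EV
            exp_preserve_highlights=1.0,
            output_bps=16,           # 16-bit TIFFs for the HDR merge
        )
        iio.imwrite(f"bracket_{stops:+d}ev.tif", rgb)
```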

But now you have real pixels rescuing the highlights and shadows, rather than trying to selectively process them out of the RAW. Vince then uses Nik HDR to create an HDR rendering from the three-frame bracket made from the single RAW file. You’ll see a balanced image where the shadows are brought up, plus various renderings that tend to emphasize different tonal ranges in the image, renderings that couldn’t be reached by simple manipulation of the RAW file.

So you go from a relatively flat, kind of interesting image to one that’s been interpreted as a play of light.

Checking In

I had a solid run here and on Substack, but things got busy as they often will. The good news is that I’ve now finished the revision of the book manuscript and am in a real editing phase for flow and readability. It’s going faster than I expected because the material is hanging together well.

Writing for Substack weekly was a great exercise in working out complex ideas in short form, and it improved my writing to a level where I can begin to achieve what I’m after. I’m sure there’s still a long way to go before I’m done, that point where my changes no longer make it any better. For now, the improvements in this first round are big.

And yes, I picked up a camera again. It’s been too long, and I take that as a good sign of emerging back into a creative mindset.

Updated my “About Page”

In the last few months, the views here at ODB have shifted away from Google searches on note-taking or photo gear to hits on the main page and the “About” page. I’m hoping that’s a result of the last six months of more consistent posting on neuroscience here, with reposting on Substack. So it seemed about time I updated the “About” page to better reflect the more focused mission here.

I’ve finished the first two sections of the manuscript, and it’s greatly improved. With just the last three chapters to rewrite, the finish line is in sight. Trying to post weekly and revise the manuscript has been steady work, but I think it’s been worth it.

From Shoe Polish Tins to Brain Implants: Heroes and Broken Promises



We know that BCIs work and hold great promise, but let’s see what history tells us about the journey.

In my last post, I described the current state of brain computer interfaces (BCIs). I was surprised to realize that we’ve had working devices for twenty years now. So it’s very clear that electrodes can record motor intent from the cerebral cortex, and that this signal can be used to control devices like computer interfaces, keyboards, or robotic mechanisms. And remember that we don’t need to read the real motor intent; we can just record patterns, and the brain is adaptable enough that the intent can be remapped onto a completely different use. We don’t need to find the index finger control region; a spot on the cortex that controls the tongue is easily repurposed to the index finger, or even to controlling a cursor.

The technology we have is relatively simple. We have an electrode either on the surface of the cortex, picking up local activity, or, more invasively, in the depths of the cortex near the neurons themselves, recording ensembles of spike trains. The two seem to work more or less the same when we want to detect a motor intent under conscious control. The signal comes out via wires attached to an amplifier specialized for very low-amplitude signals.

The practical challenge

There are lots of obstacles to implementation. The signals from the electrodes are tiny, just 50 to 100 microvolts. And we’re seeing arrays of 1024 electrodes, sometimes implanted in multiples. Thousands of channels of tiny signals that need to be amplified. And protected from noise and electrical interference; after all, we don’t want the blender or vacuum cleaner to control the robotic arm. Clearly, shielding and high-performance, multichannel amplification are key. Which is why we see the patients in the current trials with backpacks and big power supplies. That’s a lot of electronics and amplification. And yes, that’s just to get the signal out; it still needs to be analyzed and transformed by a deep neural network to control the physical robotic interface.

Nathan Copeland with the Utah array. https://www.wired.com/story/this-man-set-the-record-for-wearing-a-brain-computer-interface/
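To get a feel for why the hardware is so bulky, here’s a back-of-the-envelope calculation. The channel count is from above; the sampling rate and bit depth are typical values I’m assuming, not the specs of any particular device:

```python
# Raw data bandwidth for a multichannel neural implant.
# Channel count from the post; rate and depth are assumed typical values.
channels = 1024
sample_rate_hz = 30_000   # assumed spike-band sampling rate
bits_per_sample = 16      # assumed ADC depth

bytes_per_sec = channels * sample_rate_hz * bits_per_sample / 8
print(f"{bytes_per_sec / 1e6:.1f} MB/s of raw neural data")  # ~61.4 MB/s
```

Tens of megabytes per second of microvolt-level signal, amplified, digitized, and moved off the head continuously, is a lot to ask of a small implanted package.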

Are we anywhere close to the marketing picture of the wire going to a little puck under the scalp? My assumption is that the puck is the amplifier unit, and it would transmit to the control unit wirelessly.

Continue reading “From Shoe Polish Tins to Brain Implants: Heroes and Broken Promises”

Are Brain Computer Interfaces Really Our Future?



We’re making real progress providing brain computer interfaces for patients paralyzed by ALS and spinal cord injury. Less invasive approaches are looking promising.

I saw an interview with Dr. Ben Rapoport, a neurosurgeon and the chief science officer of Precision Neuroscience. The company is one of several, like Elon Musk’s Neuralink, developing brain-computer interfaces: the BCI. The interview centered on selling these brain implants as not as invasive as they sound. It started me thinking once again about whether it’s conceivable that these might actually be how we control computerized devices in the future.

Think about how effortlessly you type. Your intentions route directly to your fingers, bypassing speech entirely. I can type faster than I can talk because the motor pathway from intention to keyboard is so well-trained. But paralyzed patients can’t access those finger pathways—they’re injured or missing entirely.

Continue reading “Are Brain Computer Interfaces Really Our Future?”

When Models Collide: How the Brain Deals with Cognitive Dissonance



Our actions often conflict with our beliefs. The discomfort we feel isn’t moral failure — it’s what happens when valence systems disrupt the brain’s coherent story of personal identity.

Getting back to my exploration of personal identity this week.

As I’ve been writing here weekly, I’m settling into an approach of looking at everyday experience and examining the underlying brain mechanisms at play. Often those mechanisms constrain our thoughts and actions, but it seems to me that even more often, seeing things from the point of view of the brain’s work as a system regulator is really quite liberating. Knowing that our actions rest on physiology, not failure or flaw, lets me feel a bit more comfortable in this human skin.

So I want to return to the subject of my first post on Substack and make another run at explaining what’s called “Cognitive Dissonance”. For our purposes here today, let’s limit the concept to those times when we find ourselves acting and feeling one way, but intellectually finding fault with what we’ve done. So we’re acting in ways contrary to our beliefs.

Cognitive dissonance as conflict between action and belief


No reason not to use a perfectly trivial, but common example. Chicken thighs.

Continue reading “When Models Collide: How the Brain Deals with Cognitive Dissonance”

If Purple isn’t real, then what is?

Let’s talk epistemology. Actually, let’s use the color purple to bid farewell to epistemology altogether.



We’ll have to start with the real world as revealed by spectrometers and their ilk. They reveal the electromagnetic spectrum: photons with wavelengths that range from gamma rays (<0.01 nm) through X-rays (0.01–10 nm) into the ultraviolet (10–380 nm) and finally our visible light spectrum (380–700 nm), those wavelengths that the photopigments in our eyes absorb and transduce into signals for the visual system. Anything longer, out to about 1 mm, is the infrared we feel as heat. Longer still are microwaves (1 mm to ~1 m) and then radio waves, which have wavelengths that stretch literally for miles.

Why the narrow 380 to 700 nm band, you may wonder. Wouldn’t it be cool to see in microwaves? Get some X-ray vision? They tell me it’s where biology, physics, and our particular environment line up for an optimal photon-based sensory system. First of all, the sun puts out photons across the spectrum, but its output peaks in the visible range. So build a visual system based on the most available photons, right? Then the physics of the atmosphere and of optics (our biological lensing and focusing) work together to make this visible range the most suited for image building. Finally, the chromophores, the vitamin A derivatives bound to opsins that absorb light in our photoreceptors, do their cis-trans shift best at these wavelengths. X-rays are too energetic. Microwaves are too weak. The visible spectrum is just right.
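You can put numbers on “too energetic” and “too weak” with the photon energy relation E = hc/λ. A quick sketch, with sample wavelengths of my own choosing:

```python
# Photon energy E = hc/λ, using hc ≈ 1239.84 eV·nm.
HC_EV_NM = 1239.84

samples = {
    "X-ray (1 nm)": 1.0,
    "violet (380 nm)": 380.0,
    "red (700 nm)": 700.0,
    "microwave (1 mm)": 1_000_000.0,
}

for label, nm in samples.items():
    print(f"{label:>17}: {HC_EV_NM / nm:9.4f} eV")
```

A visible photon carries a couple of electron volts, right at the energy scale of that cis-trans shift in the chromophore. An X-ray photon carries hundreds of times more, enough to tear molecules apart, and a microwave photon carries around a thousandth as much, too little to flip anything.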

We all learned the spectrum in school. The colors of the rainbow: ROY G BIV. Red, orange, yellow, green, blue, indigo, violet. It’s seven colors because things come in sevens. Seven seas, seven days. Seven colors. And I’ve always thought they were trying to trick us by naming two colors of shorter wavelength than blue that tend toward purple. We’ll return to indigo and violet in a bit. For now, I want to focus on that classic purple, which is a mixture of red and blue. The bottom and top of the spectrum.

Continue reading “If Purple isn’t real, then what is?”

Bye Bye Binding: Boosted and Redundant Maps

The binding problem goes away not because we solved it, but because we never needed it to begin with.



A little change of pace this week. My views on brain maps were changed recently by an important new reframing of the binding problem in a review by H. Steven Scholte and Edward H.F. de Haan in Trends in Cognitive Sciences (2025). Their paper, Beyond binding: from modular to natural vision, has helped me understand how the many maps we find across the cerebral cortex could ever provide a unified model of the world without coming together in a theater of the mind.

What is the binding problem?

When we look at a scene and see a red car and a blue bicycle, how does the brain associate the right color with the right object? I was taught that the visual system is a pipeline that extracts features. During my career, the process has been mapped in great detail using recordings from awake behaving animals and using non-invasive measures in people like fMRI. We have a very good idea of how we detect color and identify objects in the cerebral cortex.

Scholte and de Haan talk exclusively about the visual system, so let’s stick to that, realizing that this binding issue applies more broadly when we consider the coordination of both neighboring and distant cortical areas in presenting the world in awareness. We know that after preprocessing of contrast and edges in the retina and thalamus, the primary visual cortex is essential for detecting edges and extracting binocular depth information. From there, visual information is further processed by nearby areas, each with its own mapping of the visual field and its own characteristic response pattern: V2, V3, V4, MT.

How are features bound together in perception?

And so we see the problem. If form is in V3, color in V4, and the motion of the bicycle relative to the car in MT, how do you associate the extracted features into a unified perception? Red car and blue bicycle, even though red and blue are extracted by one module and car and bicycle by another? This is what has been called the binding problem.
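Here’s a toy way to see the puzzle in code. This is entirely my own illustration of the problem statement, not Scholte and de Haan’s model: three separate “maps” each hold a feature at shared retinotopic coordinates, and nothing in the data itself ties a color to a shape until you query by location.

```python
# Toy binding problem: separate feature maps over the same coordinates.
form_map = {(2, 3): "car", (7, 5): "bicycle"}     # stand-in for "V3"
color_map = {(2, 3): "red", (7, 5): "blue"}       # stand-in for "V4"
motion_map = {(2, 3): "still", (7, 5): "moving"}  # stand-in for "MT"

# Nothing links "red" to "car" except the shared retinotopic index.
for loc in form_map:
    print(loc, color_map[loc], form_map[loc], motion_map[loc])
# (2, 3) red car still
# (7, 5) blue bicycle moving
```

In the toy, the shared coordinates do the binding for free; the question the review raises is whether the brain needs any machinery beyond that kind of redundant mapping.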

This has intrigued me for many years. I’m not so bothered by emergent qualities like free will and subjective experience. I feel comfortable exploring the underlying mechanism that supports these emergent experiences. I don’t think one can easily explain how the neural activity gives rise to the emergent phenomenon. As Weinberg said, “The arrows of explanation point downward”.

But the binding problem is one of neural activity. How can a modular system give rise to unitary experience? Gamma synchrony was a popular explanation. The idea was that neurons representing features bound to the same object would fire in sync at gamma frequencies (30–70 Hz) while staying desynchronized from neurons representing other objects. Wolf Singer and others pushed this hard in the 90s. But it turned out gamma synchrony is too weak to bind neurons across different brain regions, or even between neurons more than a few millimeters apart. In fact, neuronal activity can be perceived as a single event in awareness whether signals arrive synchronously or spread across 100 ms. Synchronization is not the answer.

Continue reading “Bye Bye Binding: Boosted and Redundant Maps”