Looking for AGI? Try C. elegans, Not ChatGPT

James Vornov, MD PhD
Neurologist, drug developer and philosopher exploring the neuroscience of decision-making and personal identity.


ChatGPT is pretty dumb when compared to an agentic complex system like a worm

LLMs and the meaning of “intelligence”

I use a selection of large language models every day. I think they are actually kind of dumb.

Yet I keep hearing about “Artificial General Intelligence” being reached, the prospect of superintelligence, and the replacement of knowledge workers by LLMs. And then there are the questions about sentience that just make me roll my eyes.

I’ll admit that at this point, LLMs make excellent assistants. They help with fact-checking, reflecting back ideas, and making counterarguments based on conventional wisdom. There remain huge problems with hallucination and with guessing when something could easily be looked up on the internet; they don’t seem to understand Bayesian induction. They are much better at summarizing and analyzing text than they are at producing it. New ideas are almost entirely absent, and they routinely mess up numerical and quantitative arguments. Which is not to say that exploratory chats never inspire me with new ideas; it’s just that the ideas are mine, never the model’s. Why do we insist on ascribing general intelligence to them?

The subjective experience of talking to our current models is weirdly persuasive: it really feels like there’s an intelligence there. It’s not just that the answers are fast and fluent; it’s that the model can hold a thread, shift registers, and generate language that looks like it came from a person who has actually spent time thinking. They feel alien and at the same time oddly knowable as another intelligence.

Comparing intelligence: LLM vs. worm?

Exactly how intelligent is an LLM? I got to thinking that I could simply count connections or potential network states. After all, if you think the model is intelligent, that intelligence comes down to the connections and to how complex the network’s possible states must be to produce the behavior. My gut says the LLM is pretty stupid, really; it’s just a model of something intelligent.

So what’s the simplest thing I could compare it to? What’s a minimal, fully mapped intelligence? How about our old friend C. elegans, the tiny worm that lives in leaf litter and has been the subject of so much study? The worm has the oldest and best-mapped nervous system we know, and it’s the kind of organism that tempts you into thinking the hard part is over. The wiring diagram of its 302 neurons has been charted. Its behavioral repertoire is modest: it feeds, avoids danger, and reproduces, and not much more. Very modest compared to mammals, or to a summary of recent dining trends in New York City. If you’ll allow the possibility that “intelligence” is to be found in a network, then the worm should be the perfect comparison.
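To make the comparison concrete, here’s a minimal back-of-the-envelope sketch in Python. The worm’s figures (302 neurons, roughly 7,000 chemical synapses) come from the published connectome; the 175-billion-parameter weight count is an illustrative assumption at GPT-3 scale, not a claim about any current model.

```python
import math

# C. elegans connectome figures (well established in the literature):
WORM_NEURONS = 302
WORM_SYNAPSES = 7_000  # rough count of chemical synapses in the adult hermaphrodite

# Illustrative LLM weight count: an assumption for scale,
# not any particular model's published number.
LLM_PARAMS = 175_000_000_000

# Connections: treat each synapse or weight as one edge in the network.
print(f"Connection ratio (LLM / worm): {LLM_PARAMS / WORM_SYNAPSES:,.0f}")

# Potential network states: if each neuron were a crude on/off unit,
# the worm could occupy up to 2**302 joint states.
worm_states_log10 = WORM_NEURONS * math.log10(2)
print(f"Worm binary state space: ~10^{worm_states_log10:.0f} possible states")
```

By raw edge count the LLM dwarfs the worm by a factor of tens of millions, yet even a binary caricature of 302 neurons yields roughly 10^91 possible states, a hint that raw counts alone are a slippery yardstick for intelligence.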
