I’ve written enthusiastically about my use of Google Gemini as a kitchen assistant. And I’ve found a few other uses for it and ChatGPT in summarizing areas of research very quickly at a high level. They miss a lot and are really bad at backing up their information with facts. But as a start, they’re useful.
It seems to me that the reason they’re useful in this way is that they are so median. That is to say, mediocre. When you search recipes on the internet there’s huge redundancy, with bad ideas sprinkled in. The LLMs ignore the outliers and give you a nice middle-of-the-road consensus. Which is my personal method: I look at a bunch of recipes, mentally averaging over the proportions and ingredient lists until I have my take for the moment. It’s easy to do this with an LLM, asking it to take out the olives or asking whether lemon juice would be a good substitute for the vinegar. And these models can be quite opinionated if their training set is opinionated. Of course some of those opinions are wrong (don’t salt beans during cooking), but they’re useful.
But I made the mistake a week or so ago of asking them to help write a Substack post. I had a page of text notes and an outline: basically all the ideas I needed to start the through-composition of a first draft. So I thought, why not give my notes to Google Gemini and ChatGPT and skip that first draft?
What I got was totally, as the kids would say, “mid.” It was what a mediocre thinker would do with my notes: it put in all kinds of caveats and “as if” statements to route around my unique take on the relationship between brain and intent.
Not only did they water down the ideas to nonexistence, but when I tried to edit either of their essays back to my liking, it was like finding I had a set of false beliefs, as if an alternate-universe version of me had written something I disagreed with.
I had to erase their efforts, take a walk, and come back to my notes to do that first draft myself. I’m not sure the product was the same as if I had never let those things near my work. So not only does the LLM flood threaten to dilute the content of the web, it may well threaten our ability to hold opinions far from the median.
In finishing up my manuscript and starting these Substack essays, I’ve realized that my way of looking at being human is now pretty far from that median. I’m in the midst of reading Anil Seth’s Being You, and from the first page I find the approach unhelpful. The idea that the academics who study consciousness are stuck in a false dualist “mentalism” is becoming clearer to me, and it will probably be my next series of essays over on Substack once I get through the current set of ideas on Self and the Power of Pretending.