I don’t want to talk to my computer. For that matter, I don’t want to talk to yours, either. Under any circumstances.
More importantly, I do not want these computers to attempt to talk to me.
Shortly after ChatGPT became available, I took the time to check it out and walked away unimpressed. I’ve continued to be unimpressed since, though that impression has both deepened and clarified.
So I’m going to write about it some more.
Alchemy
I’m generally skeptical of anything AI-related, partially because whenever something useful boils out of that field we seem to give it a different name and get on with things. Mostly, though, because I’m familiar enough with a few boom-and-bust cycles in which towering claims were made that ultimately never came to fruition.
The quest for generalized artificial intelligence, some sort of computerized mind that can think, is beginning to strike me much like alchemical attempts at transmutation. We’ve decided that it’s possible with varying degrees of evidence backing that up. The most serious proponent I’m aware of is deep in study with neuroscientists and believes the mind is computable. It’s not that I disagree with this premise as much as I simply don’t care right now.
It may be that I’m missing something, but I don’t understand the preoccupation with getting a machine to think. Someone may be able to explain this to me in terms I’d understand, but I’m more likely to frustrate that someone with questions about the nature of their claims.
Talking to Computers
I mentioned, right at the start, that I don’t want to talk to these things. By that I mean that the back-and-forth, prompt-driven, chatbot nature of LLMs is maddening. Especially because LLMs fail to give you correct information a nontrivial percentage of the time and, for some applications, you wouldn’t even know unless you were already well versed in the subject you’re asking about. “Hallucination” has been pressed into service to explain this away, but if I had launched software with that kind of defect rate, I’d have been fired long ago regardless of what I called it.
More important, though, is that I can’t figure out a use case that wouldn’t be better served by some other approach. If I need information, I’d rather use a (good) search engine. If I need a summary, I’m going to count on my ability as an adult to read and produce my own – it might be more time-consuming, but I am able to trust it. If something really needs a summary, then I’m going to ask myself how important it was in the first place. If I need something to get a simple math problem wrong, I’ll grab a pencil and work it out myself. At no point in my history with computers have I thought to myself, “it’d be just fantastic if this did a reasonable job of imitating a human.” This is much the same way that I’ve never wanted to have a conversation with a pair of pliers.
It strikes me that the real positioning of LLMs is to become the ultimate middleman between you and stuff you want to do with information and it just so happens this service has a fee. Imagine that.
Bias
I will fully admit that I am biased against this technology. Some of that bias comes from my disappointing firsthand experience with it. Others who have had similar experiences were met with protests along the lines of “you’re not prompting it correctly for your desired output.”
That may be, and my reply is: that’s stupid. On one hand, you don’t know what’s going to come out of these things, so you can’t really have great confidence in whether your unique approach to prompting is having a positive effect. On the other hand, there are existing tools against which you can’t levy that accusation. I contend that it’s a failure of the tool!
Mostly I’m biased against them in the same way that I would be biased against the most prolific house thief in the world. The house thief might have things I want or like, but I don’t want to buy them from the house thief. Those items were taken without permission and now someone else intends to profit. That turns my stomach.
And More!
It is likely that I will think of some new, critical angle on this stuff in the future. I didn’t touch on other ridiculous things, like the sheer amount of resources these systems are set to consume.
I will, however, point to some people whose writing I recommend. They don’t promise the moon, or revolutionary changes in the computing world, or unicorns. They do suggest that things could be better, and that achieving incremental gains will take hard work. One of those sets of promises, in my experience, meshes with reality much more harmoniously.
An incomplete list:
Ed Zitron