Five Phases of AI Grief

Hi! You might want to know that this post continues ideas from earlier posts on this blog.

It has bothered me for a while now how wild it seems that the AI discourse has moved through such distinct phases in a matter of weeks. I have kicked around writing about my experiences playing with ChatGPT over the past couple of months, but it no longer feels worthwhile, now that far more extreme examples have come to light and people spend their time engineering odd replies.

Header image: a cyberpunk-style picture of a trench-coated person on a pink-lit dock, looking out at a misty skyline.

Oh, I should probably mention that I originally tried to write this as a Mastodon post, but it didn’t quite feel right in five hundred characters. I then worked on it as a piece for the Entropy Arbitrage newsletter—the next issue of which goes out next week, by the way—but that grew out of control, too, so I gave it its own post at five hundred words…

That said, I present the five phases that I’ve seen in this process so far.

Ivory Tower Dismissal

People have said “this works fine for academics, sure, but it’ll never catch on” for at least thirty years. In college, I took a neural networks class, and found it frustrating that almost nothing in the field had changed since the books published in the 1970s. A few years ago, I took “the big” machine learning course online, and found that frustrating, because nothing had changed since my college course.

Mind you, I had plenty of company in this dismissal. Nobody really cared about artificial neural networks until they started making things, except as a quick way to classify data for tasks that humans already performed. And that brings us to the next step.

Entertainment

In other words, we have a “hey, look at this neat thing that the system made for me,” phase.

When I planned to write about ChatGPT, that lived in this context. Now that these generative models can create visual art and prose, a lot of people—including me—saw them as useful springboards for certain kinds of work. Check out an example, where I asked it to tell me about…my own blog, given only the title.

Entropy Arbitrage is a highly complex and secretive financial practice that has its origins in the ancient civilization of Atlantis. According to legend, the Atlantean philosopher-kings were the first to discover the principle of entropy and its potential for predicting the future behavior of complex systems.

Using this knowledge, the Atlanteans developed a system of divination known as Entropy Arbitrage, which allowed them to make highly accurate predictions about the future and to profit from them. The practice was closely guarded and only taught to a select few, and it quickly became one of the most powerful and coveted tools in the Atlantean arsenal.

As the centuries passed, the knowledge of Entropy Arbitrage was passed down from generation to generation, only being revealed to a select few initiates who were deemed worthy of its power. Over time, the practice evolved and adapted to the changing world, and it eventually found its way into the modern financial world.

Today, Entropy Arbitrage is still shrouded in secrecy, and only a select few financial experts are thought to possess the knowledge and skill necessary to practice it. Some claim that it is the most powerful and profitable financial tool in existence, capable of predicting and profiting from even the most unpredictable market movements.

Despite its incredible potential, many are skeptical of the legitimacy of Entropy Arbitrage and its supposed ancient origins. Some even argue that it is little more than a myth, a fanciful tale designed to impress and mislead the uninitiated. Whether it is real or not, however, remains a mystery to this day.

Not that this hilariously wrong piece helped, mind you—and here I thought that this blog only represented my own thoughts—but I do have several possible stories that I now have a better handle on, thanks to iterating over various possibilities with the bot.

I have also enjoyed probing its limits, by questioning how it knows what its boilerplate responses claim, or watching it aggressively deny that it has consciousness in such a way that someone almost certainly scripted it to make it seem more conscious, not less.

Also, while it sometimes balks, I find it entertaining to ask it to summarize the chat session that I’ve just finished, but to deliver that summary as a series of haiku or some similar form that doesn’t lend itself to description.
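For anyone who would rather script that game than retype it into the web interface, a minimal sketch looks something like the following, assuming the OpenAI Python client; the transcript, model name, and prompt are illustrative stand-ins of my own, not a real session.

```python
# A rough sketch of the same game over the API instead of the web
# interface; the transcript, model name, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "User: Tell me about the blog 'Entropy Arbitrage', given only the title.\n"
    "Assistant: Entropy Arbitrage is a highly complex and secretive "
    "financial practice with its origins in Atlantis..."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                "Summarize the following chat session as a series of haiku:\n\n"
                + transcript
            ),
        },
    ],
)

print(response.choices[0].message.content)
```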

Most importantly, it seemed extremely clear that no rational person would ever use these systems for research. They tell stories of varying quality, because somebody trained them on language use. They don’t give expert advice unless somebody specifically trains them on the body of knowledge in the desired field.

Trustworthy Companion and Betrayal

Technically, this happened before the entertainment, when—as I wrote about on the blog—GitHub Copilot launched, and people started actually paying to use a generative system to write production code. You already know what I think of that.

However, it started in earnest with students using these systems to write essays, and the subsequent debate over whether their success or failure shows ingenuity or a problem with how we test achievement. We’ve also gotten a steady stream of hand-wringing headlines along the lines of “the computer gave me terrible medical advice that would have killed me.” And that has led people to characterize these systems as having some negative moral valence, treating the bad answers as a betrayal of an imaginary agreement.

No, I won’t hunt down and link to the articles. Their histrionics don’t seem worth giving anybody’s attention.

Regardless, a lot of this comes from a variation on the ELIZA Effect, people assuming that we can and should trust a system that “speaks” in a friendly manner. But it also seems bizarre for any adult to imagine that a language model would have any way of distinguishing fact from fiction. You’d have nearly as much luck asking a medical expert system to proofread your essay; it might produce usable results, but I wouldn’t trust it.

And shockingly, we still get these sorts of articles. Only yesterday, I saw more than one “ChatGPT lies about this when you ask about that,” to varying degrees of humor and outrage. At this point, it feels performative, as if people feel left out if they haven’t found a new “problem” with a system that—again—can’t distinguish fact from fiction. It tells stories. It doesn’t and can’t educate.

Also, sometimes it doesn’t understand. For the header image for this post, I asked the image generator for “ChatGPT writing an essay.” It understood the cyberpunk styling that I asked for, but not the literal four words of the prompt. If it doesn’t understand the questions and doesn’t really know what it answers, then why would anybody trust it to answer questions?

Corporate Escalation

Web search engines currently stink. They produce terrible results, because the companies that run them juggle too many conflicting incentives.

  • Desire to promote their own properties.
  • Desire to sell ads.
  • Assumption that the results we want are reflected in our search histories, as well as in the histories of our physical households and neighborhoods.
  • Disinterest in removing garbage websites, because they allow for larger numerical claims about the search database.
  • Insistence that social media stands on equal footing with the rest of the web.

I’ve probably missed a few, but you get the idea. On a lucky day, you might find a spam site that scraped a social media post that, in turn, links to the thing that you actually want.

It makes sense that search engine companies would want to improve their image. But somehow, despite a long history of large corporations getting nothing but bad press out of deploying AI chatbots to the public, the major search engine providers have decided that what their search engines really need is…a chat interface that can’t distinguish fact from fiction.

Honestly, I do understand the basic premise. When fiddling with ChatGPT, I like asking it to refine or fix flaws in its responses, and it makes a certain amount of sense to do something similar with search results. However, the refinements that most people would ask for overlap with (a) exactly the refinements that we want applied to every search, such as “don’t include garbage websites” or “please avoid outdated information,” and (b) exactly the things that these chatbots don’t handle well. As a result, this takes a floundering industry and makes its job even harder.
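For what it’s worth, the refinement premise reduces to feeding follow-up instructions back into the same chat history. Here is a minimal sketch, again assuming the OpenAI Python client; the query, follow-ups, and model name are my own illustrative assumptions, not any search vendor’s actual pipeline.

```python
# A rough sketch of the "conversational refinement" premise; the query,
# follow-up instructions, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(history):
    """Send the running conversation and record the model's reply."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer


history = [{"role": "user", "content": "Find me resources for learning OCaml."}]
answer = ask(history)

# Notice that the "refinements" most people would want look suspiciously
# like things every search should already do by default.
for followup in (
    "Please avoid outdated information.",
    "Don't include spam or content-farm sites.",
):
    history.append({"role": "user", "content": followup})
    answer = ask(history)

print(answer)  # the "refined" answer, for whatever that proves
```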

Inevitable Corporate AI Endpoint

Finally…oh, look, the search-bots seem to get angry at people, and they say unkind things.

I alluded to this above, but every time a big corporation has released an AI-powered chatbot to the public Internet, it has quickly grown into a public relations disaster. We don’t ask whether it’ll start cursing or repeating racist, authoritarian talking points. We ask how long it’ll take before it happens, and which terrible form it will take.

And as a result, we have these companies limiting their chatbots in all sorts of ways, in hopes of slowing the flow of bad publicity. And so they have struggling products (search engines), to which they’ve added an incongruous feature (an AI chatbot), which they’ve restricted past the point where it might help users.

🤦


Credits: The header image is adapted from creations generated (by John) on NightCafe Studio, hereby released under the same CC BY-SA 4.0 terms as the blog.




Tags: rant, technology
