Before we get rolling, I should mention that I wrote a preliminary version of this post for the January newsletter, almost as a supplement to The Return of AI Antics, which also started life as a newsletter piece, before it grew uncontrollably. And before that, I feel fairly confident that I’ve written something else like this elsewhere, in response to someone on…Mastodon, maybe, or maybe over e-mail. In other words, if this post sounds familiar to you, then I apologize for the repetition.

[Header image: Melting snowmen in front of a house]

As a quick note, I’ll use terms like “artificial intelligence” and “AI” extremely broadly in this post, to save you the frustration of constantly reading clunky phrases like “large language model” or “chatbot based on an artificial neural network trained by scraping the Internet.” I don’t believe that any of these systems display anything like intelligence; I use the term in the sense of (the products of) the field of study that has produced everything from ontological knowledge-representation systems to planning agents to language models.

A lot has rightly been written about the utterly bizarre schism among the wealthy people pouring money into machine learning: the AI Doomers and the Effective Accelerationists, apparently, names so silly that I needed to look them up twice for this paragraph, because I forgot them immediately after the first search. To compound the problem, the Doomers actually call themselves “Effective Altruists,” making everything even more difficult to follow. I call the schism bizarre because both sides differ so much from how the rest of us see things that it (almost?) seems fake. Cory Doctorow recently had a similar take on the goofiness.

Specifically, you’ve probably seen me describe the overall AI landscape as something like this.

Sure, we could point to thousands of years of really smart people trying and utterly failing to build mathematical models of innovation and thought. But it also makes a certain amount of (emotional) sense that, if you pile up enough transistors and wish really hard, your investment must eventually Frosty the Snowman its way into manifesting as your friend, right…?

When I say that, I mean that we’ve yet to find a useful way to model a person’s thoughts, or even to measure a person’s intelligence, and we still argue over the existence of animal cognition because we can’t even define intelligence, showing that we have no idea how any of this works. But the industry seems to want to believe that they can force emergent behavior—of precisely the sort that they want, and nothing else, to boot—onto a system…primarily through spending money combined with their expectations of a particular result. Personally, I find that disconnect funny, if only because people shouldn’t need to look to someone like me for a rational approach to someone else’s field.

By contrast, over on the other side of the divide, the kids trying to find the right magic hat have decided to argue over whether Frosty will enslave and murder us all if we don’t explicitly block it from doing so (Doomers), or whether they can raise Frosty to become a quiet slave, er, well-behaved global citizen (Accelerationists). And in that argument, I see a few maybe-interesting, inter-related ideas to talk about.

Offices as AI

To start looking at this, we can trace a sort of chain of worry.

I forget where I stole this particular idea from—maybe Sugata Mitra—but I’ve always liked the notion that the first computer and the first artificial intelligence came about before electricity, in the form of the bureaucracy. Both systems seek to automate processes to such a degree that the person taking action doesn’t need any knowledge or skill relevant to the situation.

And if you think about it, the way that wealthy people and corporations look at artificial intelligence strongly resembles how the rest of us look at corporations, seeing them as either saviors or despoilers, depending on personal politics. And it makes some sense, because AIs—software in general, really—and corporations work along similar lines, substituting rules for human judgment, at least in part to avoid any individual taking the blame for something going wrong.

In the same way, the problems with using these chatbots look a lot like the problems of dealing with any large agency: Neither cares about what happens to you. They both hide their biases behind a wall of Intellectual Property protections. And if something goes wrong, you can’t legitimately hold anybody responsible, because nobody visible made any decision.

I might even go so far as to say that every hypothetical danger of a future rogue AI already occurs among unregulated corporations.

  • Will AI steal human-created art and try to pass it off as its own? A quick web search suggests that Disney alone has a high-profile case along these lines every few years.
  • Would an AI put massive numbers of people out of work? Presumably, you’ve heard of the massive layoffs in Silicon Valley that happen whenever we have good news about the economy, and don’t need me to tell you about them.
  • How about killing people, even customers, to extract the most value from their work? Let me introduce you to the so-called tobacco industry playbook, which worked endlessly to convince customers to shorten their lives by smoking more.
  • And would an AI consume the entire environment to accomplish its goals? That sounds like everything from fighting over water rights to excessive logging to spreading pollution and beyond.

You get my point, I imagine: The same people who worry incessantly about rogue AI often also run or invest in organizations that already do exactly what they claim to fear, and they raise no objections there.

Embodied Expense, Human Edition

We also need to look at the embodied labor of a human being.

When a company hires a new employee, they get that person’s strength, knowledge, and skill without paying for the hundreds of thousands of hours of labor that went into making that person healthy, educated, and capable. If you hire me, I don’t charge you for the work that my family, over a hundred educators including clerical staff, a few dozen prior managers of various sorts, and probably more people than that put in to turn me into the person who you chose to hire. I wouldn’t even want to guess at how much it would cost to do that on purpose. But if we could jump over the Frosty the Snowman line to build a real artificial intelligence, the creator and owner of that system would need to fund all of that labor, making the system no longer cheaper than hiring a human.

This, I think, explains a lot of the schism between—and I need to look up the names again—the “Effective Accelerationists” and the “AI Doomers.” They seem to differ primarily on the cost of simulating that embodied expense. If they think that they can have their AI learn cheaply, by watching footage of real parents or something similar, then they expect everything to end well. If they think that the simulation will cost too much, then they conclude that AI will fundamentally turn against us at some point unless we explicitly lock it down.

Dystopia

Finally, I’ve said before that most of us misunderstand dystopian fiction. We see it as a warning of a possible future, but a good dystopian story almost always takes the plight of an existing group who we generally ignore and marginalize, and asks what the world would look like if everybody needed to live with that burden. To paraphrase the (alleged) William Gibson idea, dystopia evenly distributes the future to include all of us.

Content Warning: Discussion of non-Free books that you should probably read for yourself, if you haven't.

For example, as someone with a decent sense of history, I get infuriated when people refer to Dobbs v. Jackson Women’s Health Organization as (some variation of) “ushering in Gilead,” because that tells me that the speaker completely missed the point of The Handmaid’s Tale. Margaret Atwood didn’t invent how society treated women in the story, nor did she see the future. Gilead took how people already treated certain women in the 1980s, and extended that treatment to white, middle-class, straight women.

Likewise, I cringe whenever someone refers to The Parable of the Sower as “prophetic,” especially when they cite the President’s “make America great again” slogan. Why? Well, Octavia Butler didn’t actually write about the 2010s. She probably took the slogan from Ronald Reagan’s famous campaign material. Do you also think that the idea of minority groups facing violence, or fleeing environmental devastation, came from some bespoke imagined future…?

I apologize for repeating the point, but both books have us look at how we treat an underclass, and ask us to imagine that we all become part of that underclass. And here, I see the other ideas that I have about how the wealthy think about AI coming together.

Many people have reported on the lengths that the wealthy go to in hopes of “dystopia-proofing” their lives. They bribe officials, distribute their assets across multiple countries, hire bodyguards to isolate them from angry employees, build doomsday bunkers, and so forth, and they’d love to colonize other planets, even though they struggle to reach the Kármán line and would apparently need to reinvent the reusable launchpad. For any problem that might arise, from labor strikes to political regime change to environmental collapse, they at least believe that they can spend their way into a shield, so that they don’t need to care.

However, the promises of AI often look like dystopia for the wealthy, offering a world where the abuses that they heap on the rest of us could now become problems that they need to live with, too. You can’t bribe an AI charged with enforcing the law, and you can’t guarantee that its inherent biases will keep pointing away from wealthy white people. You can’t build a doomsday bunker resistant to an AI that might see your bunker as raw material for an impenetrable shield of its own.

In other words, if they don’t control AI, then it might treat them the way that they treat the rest of us. If they can’t teach these systems to defer to (wealthy) humans, then their wealth might no longer matter, and they’d need to get by like the rest of us.

I know. If I shed a tear, it’ll probably come from laughter, too.

One Last Idea

I have one more twist for you, here: Thinking about God makes people more likely to trust AI. Or, to put it another way, artificial intelligence may carry religious connotations.

This might make some sense, if you think about how strongly Cosmism influenced Silicon Valley thinking. From there, you get the transhumanist impulse to upload brains to computers and colonize the universe, effectively a belief in a technology-based afterlife.

And so, they see artificial intelligence as necessarily real, a Frosty the Snowman who also serves some role in their cosmology. That may amplify the dystopian angle, because a robot uprising would seem to have a different emotional resonance if the robots look like some equivalent of angels and demons to onlookers.

And Yet…

Despite this seemingly whole picture, though, it all still rests on the Frosty the Snowman premise, that enough silicon piled together must, eventually, come to life, no matter the expertise or insight of the makers. And like any religious parable catering to the wealthy, Frosty must separate the worthy from the unworthy. In their hubris, they wonder whether they can guide Frosty’s hand, so that they can escape their vengeful snowman’s judgment.

Meanwhile, the rest of us realize that figuring out how to model intelligence—instead of wishing for it and hoping that throwing enough money at the problem will convince the universe to solve the problem for us—would represent such a leap in mathematical sophistication that it would revolutionize how we do anything and everything.

No, really. Think about it: If we had a mathematical model of intelligence, or even of some subset of thinking, then we could teach that model to people, and a person could follow the process on paper or mentally, if more slowly than a computer could execute it. And if even a small child can “spin up” a variety of intelligences for help, that changes how we approach all sorts of problems, and maybe even how we interact with each other.

It also raises questions, as I’ve mentioned before, of civil rights. If you have a model capable of independent thought, then doesn’t it have a right to autonomy? If you created it on paper, and so it only “wakes up” when someone interacts directly with it, does that intelligence have a right to have someone keep it active? Could we even keep track of every model created, so that we could protect its rights?

We could also push this premise in the other direction: If enough transistors piled together will Frosty their way into intelligence on their own, and intelligence works mathematically, then would the same hold true of paper and ink? Would the same hold true of snow, since you can carve symbols into snow exactly like writing? Do we need to worry about the Snowman Liberation Movement…?

Can I Say It without Hurting Feelings…?

The Doomers and the Accelerationists seem awfully united in one important way: They both want you to know that you should invest in companies hawking artificial intelligence. Depending on which “side” you listen to, either those companies will save the world, or they’ll do horrible things that will give you an unfair edge.

Strip away their pseudo-religious overtones and my hypothetical situations, and they don’t actually have much beyond marketing. They don’t have products that do what they claim or warn about. They don’t have a plan to get there. But they want you and me to believe that—thumpety thump thump, and all that—they will change the world with nothing more than another deposit from investors.

Oh, and they have a fancier version of a Markov process, I suppose…
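
If you’ve never played with one, a word-level Markov chain fits in a couple dozen lines of Python: record which words follow which in some training text, then walk those records at random to spit out new text. Here’s a minimal sketch of the idea, where the toy corpus and the word-by-word state are my own illustrative choices, not anything that these companies actually ship:

    import random
    from collections import defaultdict

    def build_chain(text):
        """Record which words follow each word in the training text."""
        words = text.split()
        chain = defaultdict(list)
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
        return chain

    def generate(chain, start, length=12):
        """Walk the chain from a starting word, picking each follower at random."""
        word = start
        output = [word]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:  # dead end: this word only appeared at the end
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    # A toy corpus; real systems train on unimaginably more text.
    corpus = "the snowman melts and the children laugh and the snowman sings"
    print(generate(build_chain(corpus), start="the"))

The “fancier version” swaps that lookup table for a neural network and the single-word state for a long context window, but the loop stays recognizably the same: look at what came before, then pick a plausible next token.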

☃️


Credits: The header image is Melting snowman by Tristan Schmurr, made available under the terms of the Creative Commons Attribution 2.0 Generic license.