It has been a while since I’ve written a Sunday-morning post, because I’ve been working on a lot of future projects. So, to try something new on the blog, I’m launching an extremely irregular series of posts—as in, I only know what the first one, this one, will be, not what comes next or when I might write it—called Let’s Fix…, wherein I take a look at some major property and ramble a little about what I think it would take to…well, fix it.

Mark Zuckerberg testifying in front of the House Energy and Commerce Committee, 2018 April 11

I expect that, in the case of a pop culture property, if it ever comes to that, I’ll use the basic concept as a template to build a Free Culture counterpart that I think makes more sense. In other cases, like today, I’ll talk about regulation to force an entity to behave in a way that fits better with its alleged purposes.

So…Facebook

If you missed it or are reading this post in the distant future, former Facebook employee and whistleblower Frances Haugen testified in front of Congress about the company’s knowing abuses, including such issues as damaging the self-image of teens and enabling political violence.

This revelation comes amid continued calls to regulate large technology companies, and roughly coincided with a massive outage.

Basically, though, any time someone from Facebook talks about Facebook, we find out that the organization is far worse than we had previously known. And let’s remember that the organization started as a system to rate the appearances of college girls, with Zuckerberg comparing his fellow students to farm animals; somehow, it keeps getting worse than even that.

Fixing Facebook

I should probably note that I originally wrote this as a reaction to a recent episode of the Bad Voltage podcast—sorry, I have no idea what they were thinking in selecting that header image, but it’s usually a decent podcast, released under a Free License—which discussed a similar issue.

Like that podcast, this post will keep to the assumption that options like “try Mark Zuckerberg for crimes against humanity” or wiping the company out of existence aren’t magic cure-alls. They might be cathartic, but there is, and will be, other terrible social media besides Facebook.

With that in mind—and the obvious disclaimer that I’m not a lawyer or a legislator, so I may have missed some details in my analysis—I see a handful of potential legal steps that would limit the ability of an organization like Facebook to do harm, but wouldn’t overburden an “insurgent” competitor, such as someone running a Diaspora pod or setting up their own systems.

Actually, let’s just lay out my criteria.

  • The action must be achievable. Any solution that gets thrown out by the courts, is unenforceable, or requires convincing a jury beyond a reasonable doubt isn’t worth bothering with.
  • It must solve a real problem. As much as I’d love to float an idea like “social media should be required to track down users who threaten violence and report them to the local authorities,” I’m not really sure that that scenario genuinely comes up often enough to make such a rule viable.
  • It must be economical, especially at small scales. If someone sets up their own social network, the law can’t prevent that network from gaining enough traction to threaten the entrenched companies, or it becomes counter-productive.
  • Every solution should either disrupt the reliance on making users feel bad or make it more difficult for one company or a few companies to dominate the market. Both of those conditions must hold for Facebook to do significant damage, so disrupting them should be the priority.
  • No solution can be specific to Facebook, since the problems are industry-wide.

So, with those in mind, let’s dive in.

Ban Advertising

Advertising was almost banned in the 1960s in the United States, and the reasons that the Supreme Court didn’t do it are all essentially obsolete. Without advertising revenue, social media no longer cares about “engagement,” because you can’t make money by making people frightened enough that they’ll click on the erectile dysfunction ad in hopes of feeling less…well, impotent.

You can—probably rightly—argue that, without advertising, the Internet as we know it would cease to exist. So would broadcast television, for that matter. It would be more difficult for individuals to fund their “free” projects, for example. I’m sympathetic to that argument. All Around the News has advertising slots open, after all, and I’m considering adding more traditional sidebar ads and maybe native advertising.

However, the arguments against advertising are overwhelming. Advertisements…

  • Are anti-competitive—the wealthier company can saturate its demographic with product placement to squeeze out competitors—and we claim to have a capitalist economy that thrives on competition.
  • Are often misleading, requiring infrastructure to monitor them.
  • Bias news, as journalists worry about losing a sponsor if a report is about that sponsor or a related organization or individual.
  • Actively seek to divert attention from whatever they claim to support.
  • Depend on, and cultivate, the most gullible audiences.
  • Enable surveillance capitalism.
  • Provide no legitimate information that isn’t already easily available in 2021 through a search, especially when social media already gives brands places to present information to prospective consumers.
  • Encourage breaking works into smaller units, to provide more opportunities to show the ads.

Because of the evidence against it, I’m not sure that “but the monopolists sometimes throw a couple of pennies at YouTube stars” is enough to overcome the problem. Just kill it. We’ll find another solution.

Break up Companies

Unwind the mergers and put roadblocks in the way of future mergers, so that no company buys their way into having control over parts of the lives of billions of people.

The counter-argument to breaking up monopolies is the so-called starfish problem: just as starfish regenerate their bodies in astonishing ways, the fragments of a broken-up company, in an economy that doesn’t care about monopolies, will simply grow back into multiple gargantuan companies to worry about.

For example, the United States Department of Justice broke up the original AT&T in 1984, leading to a group of companies that have grown and re-merged into the modern incarnations of AT&T, Verizon, and Lumen. The first two are regional monopolies, media behemoths, and among customers’ least-favorite companies. AT&T in particular is the current owner of WarnerMedia, and Reuters recently revealed that it has invested heavily in creating and sustaining One America News Network.

All that said, there’s a straightforward solution to the starfish problem: monitor the resulting companies. If the Federal Trade Commission hadn’t been co-opted by anti-consumer zealots, the telephone companies wouldn’t have been allowed to merge, would have been forced to compete with upstarts and with each other, and wouldn’t have integrated into vertical monopolies on top of everything else.

Common Carrier

Require web services to follow “common carrier” rules (in the older telecommunications sense): the company can’t forbid non-abusive ways of using the network. If someone wants to write a script to export someone’s entire public profile or unfollow everybody, that should be the user’s choice.

Today, in the United States, common carrier status is tied up with Net Neutrality, but that wasn’t always the case.

Since I already brought up AT&T, one of the criticisms of their monopoly was that they fought against customers connecting “unauthorized equipment” to “their network.” Over the century that the company was in business, that equipment ranged from modems and answering machines to simple cardboard cones to amplify sound for the hard-of-hearing. Answering machines were the point where the courts definitively pushed back against the company, a situation where AT&T was willing to reduce its revenue—they didn’t charge for a call unless it connected, so an answering machine meant that they could charge for otherwise-missed calls and the inevitable follow-up—to spite manufacturers who dared connect to their copper wires. The eventual result of those cases was homes converting to the modular phone connector that some of you might recognize as the RJ-11 plug, and manufacturers producing novelty phones, dialers, answering machines, modems, caller ID boxes, and other phone-enabled gadgets.

This is an important idea, because Facebook and other Internet-based services often ban people caught making use of their sites in ways that the companies didn’t design, such as a browser making it easy to switch user profiles or a tool to unfollow all accounts, not to mention watchdog organizations. If Facebook were required to permit all interaction with the site that doesn’t interfere with their ability to deliver service to someone else—that is, they would still be allowed to limit access rates, enforce privacy rules, and chase after people who harass others—then they would immediately become more transparent, because it would be more difficult for them to hide what their site might do.

The “advanced” version of this concept is to mandate some public API. However, that gives the company more control over what users can do than just allowing users to scrape the site, and it puts smaller companies at a slight disadvantage, because they’d also need to allocate resources to implementing their API.
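To make the scale concrete, here is a minimal sketch (in Python) of the kind of user-side tool a common-carrier rule would protect: a script that exports the text of a public profile by scraping it. The profile URL and the “div.post” selector are hypothetical stand-ins, not Facebook’s actual markup, and a real version would respect whatever rate limits the service enforces.

    # Sketch only: the profile URL and "div.post" selector are invented
    # placeholders, not any real site's markup.
    import time
    import requests
    from bs4 import BeautifulSoup

    def export_public_posts(profile_url, delay_seconds=2.0):
        """Collect the text of every post visible on a public profile page."""
        response = requests.get(profile_url, timeout=30)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        # Pull the text out of each post-like element on the page.
        posts = [node.get_text(strip=True) for node in soup.select("div.post")]
        time.sleep(delay_seconds)  # be polite; don't hammer the service
        return posts

Notice that a tool this small requires the service to build and maintain nothing, which is the sense in which simply permitting scraping burdens an upstart less than a mandated API would.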

Improve Moderation

All decisions regarding moderation should be transparent within some time-frame, and there needs to be a public appeals system that explains results to the community.

As a quick example, I’ll reiterate my story about the time that Quora banned me: some troll reported hundreds of my posts as spam, presumably with a script. Quora gave me no notice and no way to find out what had happened, short of creating a second account to investigate. The appeals process was also a black hole, with no explanation, apology, or even notice that I was allowed back on.

While I understand that a company the size of Quora doesn’t have the resources to do this properly, they—and especially larger companies—should be required to do so. Moderation is what creates a community.

To be clear, I’m not looking for eighty-page Supreme Court decisions. Rather, I’d just expect something like a blog with a daily list of moderation actions, along the lines of the following.

User #84930124: Deleted post #1681234711 for violating §342.78 of the Terms of Service.
User #389539408 ban submitted for manual appeal.
User #1883577287: Banned for the following actions.
    Posts violating §487.3 of the Terms of Service: #556346022, #1184370802, #1817090314, ...
    Multiple accounts
    Attempt to circumvent security restrictions
User #24233988: Reinstated after manual review.
    Report #1102407139 found fraudulent.
    Offending post #883015101 voluntarily deleted.

That is, this only needs to be a place where a user can look up the automated and manual actions taken on their account. In the above scenario, I imagine that every banned user is sent an e-mail with their ID number, so that they can watch for their case. It also allows interested parties to see whether the company is being even-handed in moderation, rather than going easier on people with certain ideologies. And, maybe most importantly, it serves to disrupt one of the more popular genres of social media post these days, the “this famous person is being canceled for daring to speak the truth” complaint, when it really just turns out that repeatedly calling for insurrections and recommending that people snort bleach is blatantly against the terms of service…
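For the machine-readable side of that log, a single entry might look something like the following sketch, reusing the IDs from the example above. The field names are my invention rather than any existing standard, and a real schema would need more detail around appeals and timestamps.

    # Sketch of one machine-readable moderation-log entry; the schema is
    # invented for illustration, not taken from any real service.
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class ModerationAction:
        user_id: int
        action: str               # e.g. "post_deleted", "ban", "reinstated"
        tos_section: str = ""     # Terms of Service clause cited, if any
        post_ids: list = field(default_factory=list)
        note: str = ""

    entry = ModerationAction(
        user_id=84930124,
        action="post_deleted",
        tos_section="342.78",
        post_ids=[1681234711],
        note="Eligible for manual appeal.",
    )
    print(json.dumps(asdict(entry), indent=2))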

Update, 2021 October 21: I’ve been reminded that this clarity of moderation actions—though not necessarily its public-facing aspect—is part of the so-called Santa Clara Principles. I’m taking their “numbers” idea a step further, but the other aspects are similar, and I’ve probably read the principles in the past and forgotten about them.

Research Transparency

It’s probably time to remove the “existing/anonymized data” exemptions from institutional review board mandates, at least for internet services, and (maybe) add a requirement that all such research on Internet users needs to be conducted in public.

One of the reasons that Facebook gets to be a hive of pro-violence disinformation, after all, is that Facebook decimated the journalism ecosystem by defrauding news outlets with its “pivot to video” pitch, which was apparently built on false data. Likewise, Haugen talked about how Facebook’s own metrics show that using the service is harmful to users, but the company opts to ignore and hide that research every time.

If Facebook needed to tell someone outside their organization about the research they perform on user behavior and publish the results, they could still opt to ignore the findings, but that information would become much more difficult to hide.

De-Automate

Users should be allowed to opt out of “the algorithm,” without any question.

I should be able to log in to any social media site and just get a linear record of posts that were created by people I follow, without the systems trying to guess what I “want” to “engage” with.
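To show how little machinery that actually requires, here is a minimal sketch of such a linear feed, assuming hypothetical posts and follows tables; no ranking model touches the result, just explicit follows and timestamps.

    # Sketch of a "no algorithm" feed: the newest posts from explicit
    # follows, nothing more. The posts/follows schema is hypothetical.
    import sqlite3

    def linear_feed(db, user_id, limit=50):
        """Return posts from accounts the user follows, newest first."""
        return db.execute(
            """
            SELECT posts.id, posts.author_id, posts.created_at, posts.body
            FROM posts
            JOIN follows ON follows.followee_id = posts.author_id
            WHERE follows.follower_id = ?
            ORDER BY posts.created_at DESC
            LIMIT ?
            """,
            (user_id, limit),
        ).fetchall()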

Honestly, I suspect that this change would be the big key, because it unravels the “outrage feedback loops.” By allowing users to disable the artificial intelligence work, they can leave the world where a system chooses to show them whatever will make them angry, so that they run out of the energy required for critical thinking, so that they’ll click on the ad about their local political race. The ad is still there, if the user finds it interesting. The angering posts are still there, if the user wants to find them. But there’d no longer necessarily be an algorithm using one to funnel users to the other.

Those still using “the algorithm” should be able to see clear metrics on why they’re being shown what they see in an exportable form, so that users—or, more realistically, researchers—can see what the system finds important at any given time.

Adjust the Relationship

Probably most important, though, is that people need to stop assuming or accepting that Facebook is “making mistakes” when these problems consistently come up. Yes, the aphorism is “never attribute to malice what is more easily attributed to stupidity,” but we’re well past the point where stupidity is the easier attribution.

The company consistently makes decisions that are terrible for its community, even when they don’t make shareholders much money. The product started out as a tool for Zuckerberg and his creepy friends to rate women, and—surprise!—that misogynist garbage heap has gone on to openly support fascists and conspiracy theorists. These people are not here to help anyone.

Likewise, when Zuckerberg says that he welcomes regulation, he means that he wants his lobbyists to draft laws with onerous reporting requirements that Facebook can easily afford but an upstart competitor cannot, not the sort of regulation that I propose above.

Is Facebook Fixed, Yet?

To be fair, given the above changes to law and culture, the resulting Facebook would probably not be the same website that it currently is. Without advertising, for example, Facebook would need to find another way to keep Zuckerberg unnecessarily wealthy, especially with the need to fund a large global staff able to keep up with moderation requests. But assuming that Zuckerberg didn’t just shut down his servers and go home, the resulting Facebook would be usable for its stated intent of bringing people together, rather than being the place where everyone has an account because it’s the only website that their distant relatives overseas use.

Or maybe I’m missing something. That’s what the comments are for…


Credits: The header image is a frame from Facebook: Transparency and Use of Consumer Data, Full Committee.