In thinking about my developer journal posts for July—particularly looking at holidays to use for the post titles—I noticed that last year, touching on the belated anniversary of Bloody Thursday, I mentioned the following.
Over at the Digital Comic Museum, where we make sure comics whose copyrights have lapsed have a home (though I still don’t understand the date cutoff), we’ve been discussing how to handle the fact that many older comic books don’t have the same cultural sensitivities that audiences expect today. So, I’ve been writing something that serves as a disclaimer and content advisory…but is also brief enough that the people who’ll benefit from reading it will see it as useful.
The post goes on to promise that I’d make sure that everybody can make use of what I wrote, and then I just…kind of forgot about everything.
So, I’m using that upcoming anniversary as a pretext to think about content advisories: What they’re for, who they’re for, and how to manage them without allowing someone to come to harm or feel like the writer is a condescending jerk trying to avoid responsibility.
Post-Traumatic Stress Disorder
Am I qualified to talk usefully about PTSD? I freely admit that I am not. Am I qualified to talk about its relevance to the topic at hand? I don’t think that’s nearly as high a bar to clear.
So, despite what right-wing pundits would prefer everybody to believe, content advisories, content warnings, trigger warnings, or whatever other term might fit the bill are just notes that the work is going to include some content that may stimulate a trauma response in a minority of the audience. Specifically, people suffering from post-traumatic stress may find that the content reminds them of the traumatizing event in such a way that it produces flashbacks or other panicked reactions.
Technically, anything can be somebody’s trigger, because any sense impression could be directly associated with the event that traumatized the person, in the same way that many people find foods comforting by their smells triggering memories of family dinners or similar events. But it’s obviously impossible to list every reference, nuance, and image in a work in a summary that doesn’t dwarf the original work, so it’s generally kept to common and well-known categories.
I don’t want to tell anybody how to interact with their audience, but if you write and are fine with soldiers, other victims of violence, and other traumatized folks being unable to finish your writing, you’re not going to be left with much of an audience. Most of them can get through a trigger, but they need to know that it’s coming.
I hear you, in the back, asking “isn’t this just coddling them?”
On the one hand, they’re injured, and will only heal with rest and treatment, though the injury is behavioral and chemical, rather than mechanical. If the same people had broken legs, nobody would think twice about offering them actual help up a flight of stairs, and the person merely warning them about the staircase would be a jerk, not accused of coddling. And on the other hand, a content warning is equivalent to a weather report, a note about conditions that people might want to consider. And as much as we might joke about TV meteorologists patronizingly reminding you to take your umbrella, I don’t think that any of us actually feel coddled just to hear about potential rainstorms while we’re checking e-mail.
I’ll return to this idea, later, but the talk about “coddling” makes it worth pointing out that PTSD isn’t the only issue that a reader might need to deal with. There are videos that you might not want to play with children in the room. There are books that you might not want your colleagues to know that you’re reading. A reader might prioritize certain topics over certain other topics. A reader might only have five minutes to spend before moving on to something else.
All of these (and more) are issues that might lead to certain kinds of warnings or advisories, so that the consumer can make a reasonable choice.
We could even go further, and talk about content advisories as a form of accessibility. By providing a heads-up to the audience about potential issues with the content, they can prepare to engage with it properly, in the same way that the morning news storm warning helps you plan your day in a way that doesn’t make you completely miserable.
Broader Content Advisories
The audience, then, can choose what to do with that additional information. Most people will ignore it, of course, because nothing in it applies to them. Most of the remainder will take a moment to prepare for encountering the provocation that they recognize, not much different from someone hearing that it’s going to rain and grabbing an umbrella on their way out. And the few that are left will want to avoid the work. But also, some people who don’t need the warning might find a reason to care.
If that’s confusing, consider a content advisory that just about everybody is probably familiar with, as a part of e-mail subject lines and article titles: NSFW, a warning that the contents may not be safe/suitable to expose in a professional environment. Most people aren’t in a position where that warning matters. Of the remainder, most need to prepare, by changing environments or checking to make sure nobody else will see. And the rest will just move on or delete. However, a percentage of the first group is going to say something along the lines of “I’m not in the mood for that” and move on, thankful that they were given the option to skip it.
As another analogous example of a content advisory that everybody understands and rarely finds worthy of comment, if you read my blog by going through the index, you’ll notice that every post has an approximate word count listed, after the summary, such as this post, which contains roughly 2400 words. That’s an advisory that serves a similar purpose: Based on how much time a reader has available when they see the post, they can choose to read it, save it for later when they can give it more attention, or skip it. Some might also have the time to read that many words, but not the interest in reading that many words from someone like me.
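Mechanically, that kind of word-count advisory is trivial to generate. Here is a minimal sketch in Python; the function names and the rounding granularity are my own assumptions, not anything the blog actually uses.

```python
# Hypothetical sketch: produce a rounded word-count advisory for a post.
# Rounding to the nearest 100 words is an assumed granularity.

def approximate_word_count(text: str, nearest: int = 100) -> int:
    """Count whitespace-separated words, rounded to the nearest multiple."""
    count = len(text.split())
    return round(count / nearest) * nearest

def advisory(text: str) -> str:
    """Format the count as a reader-facing note."""
    return f"roughly {approximate_word_count(text)} words"
```

A post of 2,437 words would be reported as "roughly 2400 words," which is all the precision a reader needs to decide whether to read now or save it for later.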
And a final example has long been found in textbooks and certain kinds of manuals: the shaded box that translates to something along the lines of "most people probably won't care about this deeper topic, but we'd feel guilty if we left it out, because some of you will need to know it." Computer Science textbooks will often use this to fit in a mathematical derivation of something. Product manuals might use them for troubleshooting advice.
In the Free Culture Book Club, I make a third use of the content advisory, which dovetails to some extent with the other two: Setting expectations to defuse arguments over the work itself. That is, at the same time that I want a traumatized person to know that “this story contains non-consensual touching,” I also don’t want to hear from the prissy types who want to object to the mere presence of profanities, sex, people of color, or whatever. I think of those advisories somewhat like a carnival ride sign, distinct from the “prepare yourself” angle.
You must be at least this mature to engage with this work.
In this context, we might say that maturity largely consists of accepting that a lot—but sure, definitely not all—of what people lump into a category of “morality” is completely arbitrary, and often chosen more to reinforce long-standing oppression than to improve society. In plain English, I’m not interested in a discussion about a story where one participant is just angry that (picking a random demographic…) bisexual people exist. Comparative legitimacy of reaction aside, it’s a similar situation, where both should know what they’re getting into, so that it doesn’t provoke a disproportionate or dangerous response.
My least-favorite offerings of the content advisory genre—entertaining as they might be—are what have been appearing recently on streaming services, to be played before "controversial" movies and television episodes, without just acknowledging that the video is bigoted in some way. They tend to look something like the following, at least to my eye.
The following portrays certain people or cultures in a bad light. They were wrong to do that for reasons that we can’t be bothered to discuss, but because this is still highly lucrative content, we prefer to put the onus of education and understanding on our customers. You go create a “better future together,” while we stay here and count our money.
You can find something like it at almost any streaming service, though this one happens to be most directly patterned on Disney’s, since a monopoly that size could easily produce a spectacular series about the harm that media depictions have done to society and the opportunities they’ve missed, and make it a recommendation after every suspect title as well as using its marketing muscle to make it the “must-see” show on the service.
Regardless, you can see the smarmy attitude bleed through. The people who wrote it want you to know that the writers “were and still are wrong.” But they aren’t interested in detailing why, only that the wrongness shouldn’t be held against anybody profiting from the material today. There isn’t even any indication that they wouldn’t produce precisely the same thing today, given that the same studios frequently marginalize the representation of minority groups. And for all the praise of education and open discussion, they’re refusing to engage with either.
It reminds me of high school language classes, where students might conjugate a verb (or a thousand) differently, based on the subject of the sentence.
| Pronoun (sing.) | Pronoun (pl.) | Verb Form |
| --- | --- | --- |
| He or She | They | "were and are wrong" |
| You | You | "should educate and create a better future" |
| I | We | "freely profit from this wrongness" |
In many cases, it’s hard to interpret these fake advisories as saying anything other than “those of you traumatized by these pervasive stereotypes should watch this, and talk amongst your friends about your trauma, so they can treat you as an exhibit.” I can only hope that, one day, Disney will add disclaimers to the existing disclaimers, pointing out that asking people of color and those of gender and sexual minorities to educate the rest of us on bigotry “was wrong then and is wrong now.”
My Final Result
After a fair amount of editing to make it readable, and in the spirit of actually empowering readers to prepare to engage with the comics, rather than simply disclaiming responsibility for some female characters being drawn with or without brassieres or the occasional unpleasant death scene, this was the advisory that I wrote.
These materials were created a lifetime ago, when discourse often dismissed the voices of the vulnerable, so many are ugly reminders of the dangers of doing so. We believe censorship leads to forgetting the work done to improve and work still remaining, so books are presented in their entirety.
Ideally, someone would review this book to provide specific content warnings. But that is labor-intensive, so the best we can currently provide is: It might include/portray racism, sexism/misogyny, ableism, classism, body-shaming, homophobia, religious bigotry, graphic violence/abuse including wartime combat and blood, horror, death, self-harm, retribution, glorification of authority, and attacks on authority.
If you are concerned about topics that may trigger your anxiety, we invite you to ask about the book and concerns in the forum. We cannot guarantee a community member has your sensitivities, but we will not tolerate any attack on you for your concerns.
Yes, it hurt to not separate clauses with “that,” a choice that I make in most other writing. To me, the change makes it slightly harder to read, but including them bloated the word count to a point where I wouldn’t expect anybody to try reading it.
My personal target was 150 words, which I missed by a rounding error. However, it covers most of what I wanted. It explains why the stories are problematic and the reason to preserve and even highlight them. It also embraces the responsibility to improve, rather than disclaim responsibility for the content. Continuing, it admits failure in providing detail, but provides broad classes of issues that might hit too close to home for some readers. And it invites group discussion with the actual community, instead of telling the reader that any pain inflicted on them is their problem. At least, that’s what I tried to cram in there.
One important idea that I couldn’t fit into my (totally arbitrary, I should be clear) word-budget was a distinction in the last paragraph between avoiding material that might remind the reader of trauma and bracing for that reminder, but I also suspect that the readers can figure out which they need for themselves. I also wonder if the examples might be too specific or verbose, better replaced with something shorter and more general, such as “nearly any kind of violence, gore, politics, or representation.”
Of course, in an ideal situation—for example, in a multi-billion dollar media monopoly, or even as a system where we had thought to do this from the start—each work would have a custom advisory. Given infinite resources, we could say “this book contains a story that portrays the Japanese as inhuman creatures” or “that book contains the threat of an eye being pierced.” And that would remove the need for a blanket statement.
In any case, the team at the Museum hasn’t decided where and how best to integrate that advisory. However, I’ve also decided to release it under the same CC-BY-SA 4.0 license as the rest of the blog. Feel free to use it and adapt it to your needs, provided that you abide by the terms of the license.
I may consider creating a GitHub or GitLab repository for this—with maybe similar blocks of text to be added later—where people can contribute ideas and help trim down the editing. However, considering how long it took me to post this at all…maybe don't hold your breath waiting.
Tags: harm ethics safety