Three and a Half Thoughts on Twitter

A lot of ink has been spilled about Elon’s takeover of Twitter and his “Friday Morning Massacre,” and I’m unlikely to offer any insight that hasn’t already been made elsewhere. There are also a number of thoughts I’m not going to publish here (yet), because I work in the industry, and I actually like my job right now.

I. Advertisers (and Apple) Want Content Moderation

All major social media platforms are supported by advertising. As has been repeated over and over, that means the actual customers of social media platforms are the advertisers, not the users actually posting their content. Advertisers really like content moderation, because it means there is a much lower risk that an ad for Tide appears next to hard-core pornography, or that the launch of a new Ford appears next to an anti-Semitic screed.

The other huge force in social media moderation policy is Apple. The iOS App Store has implicit standards for content moderation, and major apps have changed their policies under Apple’s threat to de-list them from the App Store unless they made progress. Apple worries about both nudity (see Tumblr) and hate speech (see Parler).

In practice, this means that most mainstream social media sites end up having pretty close to the same content standards. Where you see variation, it’s mostly around nudity and pornography, with Facebook and Instagram taking a fairly hard-line approach and Twitter trying to do a complex, age-gated dance. (Long-term, Twitter’s strategy is unlikely to work, mostly because of Apple’s policies and the difficulty of enforcing against CSAM when you allow adult sexual content.)

There’s a sense that there is a “global optimum” around content moderation policies; the big sites all have policies that are not much different from each other. But once you think about who the true guardians of the policies are, the sameness seems almost inevitable.

II. Handel’s Law of the Inevitability of PR Fires

First, in order to have a social media platform with both users and advertisers, there needs to be content moderation. (See Thought One, directly above.) Now, the real problem with content moderation is that the policy can’t just be “Take down bad stuff.” Instead, someone has to very carefully define what the bad stuff is, so that every time a moderator sees a piece of content, the moderator (no matter their personal or cultural norms) will make the exact same decision about it. Hopefully the moderator will also make the same decision about closely related content.¹

Second, this means that you will be writing very detailed and careful definitions of what is and is not allowed. In writing these, you will have to make a lot of judgment calls about details, and there is rarely a principled way to make them. Unfortunately, reasonable people can (and will) disagree about these judgment calls. To be clear, these are decisions that really could go either way; this is not a question like whether to allow suicide promotion onto your site.

Finally, eventually there will be a piece of content that falls on the wrong side of one of those lines, and it will be posted by a high-profile source. The high-profile author will complain about it, and because they are high-profile, they will get media attention.

That’s why there is always the potential for a PR fire.

N.B. I think this was originally formulated with JJ around Fall 2017, in Berlin. I’m not sure I’ve ever published this anywhere, so it may be someone else’s law now.

III. There Are No Engineering Problems Left

One of the real problems for Musk is that he thinks the problems at Twitter are amenable to engineering solutions. Many of the problems at Tesla and SpaceX are engineering problems: we understand the underlying physics and materials science. The challenge is to design a solution within the triangle of time, cost, and quality. (Here “quality” can encompass things like weight and strength, and “cost” covers both fixed and recurring costs.)

But, there aren’t significant engineering problems left at Twitter. All of the problems are policy problems, and for many of them, we don’t even really understand the full shape of the problem, much less have an underlying theory of what to do. Instead, you need to make decisions that will make a number of people angry and upset.

My suspicion is that this is the reason that Zuck has stepped back from the day-to-day workings of Facebook and Instagram and is instead focusing on the AR/VR work. AR/VR has serious engineering problems that need to be solved before the hardware reaches an acceptable price and quality level. I think Zuck finds those engineering issues much more fun than debating the nuances of moderation policies.

And a Half: The Swimming Pool Model of Social Media Exit

Bad content on social media is a bit like pissing in a swimming pool. We all know there is a certain amount of it happening, and we can accept that. However, when it starts to get worse, at least two things happen. First, it becomes a bit harder to tell when someone has taken a crap in the pool. Second, it takes a higher and higher tolerance for urine in the pool for people to willingly get in. Neither of these is a great outcome.

  1. One thing I’ll say from work experience: the Cohen’s kappa for content moderators is not as good as you would hope. It’s definitely greater than zero, but it’s very rarely greater than about 0.85.
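
For anyone unfamiliar with the metric: Cohen’s kappa measures how often two raters make the same call, corrected for the agreement you would expect from chance alone, i.e. kappa = (observed agreement - chance agreement) / (1 - chance agreement). Below is a minimal illustrative sketch in Python; the moderation labels are invented for the example and are not real data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled the same way.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: probability both raters would pick the same label
    # if each labeled independently at their own observed rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[label] / n) * (counts_b[label] / n)
              for label in counts_a.keys() | counts_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical decisions from two moderators on ten posts.
mod_1 = ["keep", "keep", "remove", "keep", "remove",
         "keep", "keep", "remove", "keep", "keep"]
mod_2 = ["keep", "remove", "remove", "keep", "remove",
         "keep", "keep", "keep", "keep", "keep"]
print(round(cohens_kappa(mod_1, mod_2), 2))  # 0.52, despite 8/10 raw agreement
```

In this toy example, 80% raw agreement only works out to a kappa of about 0.52, which gives a sense of why 0.85 is a fairly high bar.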
