October 2018

I guess I’ll just write this as though I have reason to believe that the people who write headlines for the New York Times read my blog.

For the record: I’m a subscriber, and I think the Times does some terrific reporting and investigative journalism — when they’re at their best, there’s no one better. That makes the unforced errors all the more disappointing.

Look at the top of today’s edition’s front page:

Top of New York Times front page for 2018/10/03.

First note the caption beneath the big color photo on the left, which says:

A migrant caravan headed north Monday from Tapachula, Mexico, where members had stopped after crossing in from Guatemala.

Now all the way over on the right, note the bold headline at the top of the rightmost column:

Trump Escalates Use of Migrants As Election Ploy

Issuing Dark Warnings

Stoking Voters’ Anxiety With Baseless Tale of Ominous Caravan

If you take the headline at face value and then look over at the photo, you would naturally conclude that the New York Times is contradicting itself on its own front page.

It turns out that the article under the headline is indeed about a baseless tale — just not one about the existence of the caravan itself, even though that’s what the headline would imply to any casual reader:

President Trump on Monday sharply intensified a Republican campaign to frame the midterm elections as a battle over immigration and race, issuing a dark and factually baseless warning that “unknown Middle Easterners” were marching toward the American border with Mexico.

[emphasis mine]

In twenty words of headline, was there no way to fit in something specific about the false claim?

How about this:

Trump Falsely Implies Terrorism Threat From Caravan

“Unknown Middle Easterners”

Stoking Voters’ Anxiety With Baseless Claim About Migrant Caravan

There: nineteen words, one fewer than the number they used for a misleading and less informative headline.

Yes, by the way, you know and I know and the New York Times knows that “Middle Easterner” doesn’t mean “terrorist”. But it’s perfectly clear what Trump is doing here and the NYT shouldn’t shy away from describing it accurately… in the headline.

(Entirely separately from the above, there’s the question of why the New York Times is running a giant color photograph of the migrants above the fold on its front page, for the second time in the past few days. These caravans have been going on since 2010; they’ve grown larger and more organized in the last couple of years, but they’re not new. As an independent news outlet, why let a politician’s talking points drive cover art choices in the first place?)

Self-censored page of 'Green Illusions', by Ozzie Zehner

A particularly insidious problem with online social media platforms is biased and overly restrictive ban patterns. When enough people report someone as violating the site’s Terms of Service, the site will usually accept the reports at face value, because there simply isn’t time to evaluate all of the source materials and apply sophisticated yet consistent judgement.

No matter how large the company, even if it’s Facebook, there will simply never be enough staff to evaluate ban requests well. These companies are profitable precisely because they maintain low staff-to-user ratios. If policing user-contributed content requires essentially arbitrary increases in staff size, that’s a losing proposition, and the companies understandably aren’t going to go there.

One possible solution is for the companies to make better use of the resource that does increase in proportion to user base — namely, users!

When user B reports user Q as violating the site’s ToS, what if the site’s next step were to randomly select one or more other users (who have also seen the same material user B saw) to sanity-check the request? User B doesn’t get to choose who they are, and user B would be anonymous to them: the checkers wouldn’t know who made the ban request, only what the basis for the request is, that is, what user B claimed about user Q. The site would also put its actual Terms of Service conveniently in front of the checkers, to make the process as easy as possible.

Now, some percentage of the checkers would ignore the request and just not bother. That’s okay: if that percentage is high, that tells you something right there. If user Q is really violating the site’s ToS in some offensive way, there ought to be at least a few other people besides user B who think so, and some of them would respond when asked and support B’s claim. The converse case, in which user Q is controversial but not actually violating the ToS, doesn’t need to be handled symmetrically, because the default is not to ban: freedom of speech implies a bias toward permitting speech when the case for suppressing it is unconvincing. In practice, though, if Q is controversial in that way, some of the checkers would be motivated to respond anyway, because they recognize the situation and want to preserve Q’s ability to speak.
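To make the flow concrete, here is a minimal sketch in Python of how it might work. Everything in it is a hypothetical assumption of mine: the names, the number of checkers, the threshold, and the idea of modeling each checker as a callable are illustrative choices, not any real platform’s API.

    import random

    # Hypothetical sketch of the crowd-checked review flow described
    # above.  All names and numbers here are illustrative assumptions.

    TOS_TEXT = "...the site's actual Terms of Service would go here..."
    NUM_CHECKERS = 5          # how many random peers to consult (illustrative)
    SUPPORT_THRESHOLD = 0.5   # fraction of responders who must agree (illustrative)

    def review_ban_request(claim, material, viewers, ask):
        """Ask randomly chosen peers to sanity-check an anonymous ban
        request.  `viewers` are users who saw the same material,
        already excluding the reporter and the reported user.
        `ask(checker, claim, material, tos)` returns "violates",
        "fine", or None if the checker never responds."""
        checkers = random.sample(viewers, min(NUM_CHECKERS, len(viewers)))
        verdicts = [v for v in (ask(c, claim, material, TOS_TEXT)
                                for c in checkers) if v is not None]
        # The default is not to ban: silence or a split vote leaves Q alone.
        if not verdicts:
            return "no-action"
        support = sum(1 for v in verdicts if v == "violates") / len(verdicts)
        return "ban" if support > SUPPORT_THRESHOLD else "no-action"

    # Toy usage: of five sampled checkers, two never respond, two of the
    # three responders support the claim, so the request is upheld.
    responses = {"u1": "violates", "u2": None, "u3": "fine",
                 "u4": "violates", "u5": None}
    print(review_ban_request("spam", "some post",
                             list(responses), lambda c, *_: responses[c]))

Note that the non-responders simply drop out of the denominator; only among users who actually answered does the majority matter, which is the "if that percentage is high, that tells you something" point above.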

The system scales very naturally. If there aren’t enough other people who have read Q’s post available to confirm or reject the ban, then it is also not very urgent to evaluate the ban in the first place — not many people are seeing the material anyway. ToS violations matter most when they are being widely circulated, and that’s exactly when there will be lots of users available to confirm them.

If user B issues too many ban requests that are not supported by a majority of randomly selected peers, then the site could gradually downgrade the priority of user B’s ban requests generally. In other words, a site can use crowd-sourced checking both to evaluate a specific ban request and to sort people who request bans by their reliability. The best scores would belong to those who are conservative about reporting and who only do so when (say) they see an actual threat of violence or some other unambiguous violation of the ToS. The worst scores would belong to those who issue ban requests against any speech they don’t like. Users don’t necessarily need to be told what their score is; only the site needs to know.
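One simple way the site might track such a score is an exponential moving average over peer-check outcomes. Again, this is just a sketch under assumptions of my own: the class name, the decay constant, and the priority mapping are all invented for illustration.

    # Hypothetical reporter-reliability score: an exponential moving
    # average over crowd-check outcomes.  Names and constants are
    # illustrative assumptions, not a real system.

    class ReporterRecord:
        def __init__(self):
            self.score = 1.0   # new reporters start fully trusted

        def update(self, supported, alpha=0.2):
            """Fold in one crowd-check outcome: 1.0 means the peers
            upheld the request, 0.0 means they rejected it."""
            outcome = 1.0 if supported else 0.0
            self.score = (1 - alpha) * self.score + alpha * outcome

        def queue_priority(self):
            """Requests from low-scoring reporters sink toward the back
            of the review queue; near zero, the site might stop
            accepting their requests entirely."""
            return self.score

    # A reporter whose requests keep getting rejected by peers:
    r = ReporterRecord()
    for _ in range(10):
        r.update(supported=False)
    print(round(r.score, 2))   # ~0.11: this reporter's requests now rank low

The moving average means the penalty is gradual and recoverable: a string of peer-supported requests pulls the score back up, so one bad call doesn’t permanently mark a reporter.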

(Of course, this whole mechanism depends on surveillance — on centralized tracking of who reads what. But let’s face it, that ship sailed long ago. While personally I’m not on Facebook, for that reason among many, lots of other people are. If they’re going to be surveilled, they should at least get some of the benefits!)

Perhaps users who consistently issue illegitimate ban requests should eventually be blocked from issuing further ban requests at all. This does not censor them nor interfere with their access to the site. They can still read and post all they want. The site is just informing them that the site doesn’t trust their judgement anymore when it comes to ban requests.

The main thing is (as I’ve written elsewhere) that right now there’s no cost for issuing unjustified ban requests. Users can do it as often as they want. For anyone seeking to no-platform someone else, it’s all upside and no downside. What is needed is to introduce some downside risk for attempts to silence.

Other ideas:

  • If material has already survived a rejected ban request, or has been reported by someone with a poor ban-reliability score, the site should scrutinize further ban requests against it more carefully, since those requests are more likely to be unjustified too.

  • A lifetime (or per-year) limit on how many ban requests someone can issue.

  • Make ban requests publicly visible by default, with opt-out anonymity (that is, opt-in to be identified) for the requester.

Do you have other (hopefully better) ideas? I’d love to hear them in the comments.

If you think over-eager banning isn’t a real problem yet, remember that we have incomplete information as to how bad the problem actually is (though there is some evidence out there). By definition, you mostly don’t know what material you’ve been prevented from seeing.

New York City Subway, 14th Street Union Square platform curvature

More and more of the political left (which is where I sit, at least by American standards) seems to be abandoning the idea of freedom of speech as an inherent good, let alone as the essential liberty on which all other liberties depend.

Recently, someone I know and respect called repeatedly for Donald Trump to be banned from Twitter. He’s not alone. A lot of people want this, and they don’t want just Trump banned, they want many speakers banned from many popular platforms.

This is a worrying trend. The left may be about to finally gain some measure of political power in the United States, depending on the results of the November election. Yet right at that moment we are limiting our ability to have necessary debates and to even hear what people say. I’ll focus on one particular example in this post, but the problem is a general one. This narrowing would be bad under any circumstances, but it becomes worse when attached to power.

I’m not talking here about state censorship. A few people call for that too, but most people still understand why the state needs to be especially constrained in its ability to interfere with speech. I’m talking about no-platforming and campaigns for blanket shunning: that is, urging private-sector platforms to ban certain speakers, and shaming other people and organizations into ostracizing those speakers as individuals under all circumstances, even circumstances that are unrelated to the allegedly objectionable speech.

Are there narrow, consistent criteria we could use to decide when it’s appropriate to advocate non-state suppression of speech?

I think there are, and I think we’d be better off if we stuck to those criteria instead of the increasingly broad and subjective criteria many are using right now, most of which are based on empathy for those who are hurt by harmful speech. Certainly, speech can be harmful: the argument for freedom of speech has never been that there is no such thing as harmful speech, but rather that suppressing speech almost always leads to worse harm down the road. There are two reasons why “someone felt deeply hurt” is not a good test: one, it treats the speech and the speaker inconsistently by looking to others’ reactions as a guide (reactions which will vary from listener to listener), and two, sometimes useful speech may also be hurtful to some people — these two things are not contradictory, much as we might wish otherwise.

We need a better test.

Last year I read a good post by Valerie Aurora entitled The Intolerable Speech Rule: the Paradox of Tolerance for tech companies. The post links to a presentation she gives that’s worth watching. It’s thoughtful, aware of the tradeoffs involved in any kind of permitted-speech policy, and careful to distinguish between private actors (such as social media platforms) and the state.

Here is Valerie Aurora’s formula, phrased as a guideline for online platforms:

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

It’s a very specific, circumscribed rule. If I ran an online service, I’d try to follow it — but as conservatively as possible, because:

What about someone who tries, sincerely and non-threateningly, to discuss what is and is not a human right in the first place?

A friend of mine, Nina Paley, has been repeatedly no-platformed for doing exactly this. Nina is blunt and direct, because she has strong feelings on the issue she’s speaking about. But she threatens no one, and never tries to silence or dehumanize. She’s happy to engage with opposing views, and argues her own in good faith.

I’ll state Nina’s view only briefly here — if you want to know more about it, you’re better off getting it from her, and most of what I’m saying here is not about the substance of her view. Put simply, Nina doesn’t accept the argument that transgender women are women. Nina would like women’s-only spaces to be for women who were born women (or, as Nina resolutely calls them, “women”). Some people call this “Trans-Exclusionary Radical Feminism” and thus refer to Nina as a “TERF”. Nina prefers the term “gender-critical radical feminist”. At the very least, if you use the term “TERF”, be clear, as Nina always is, that the exclusion is from the set of human females, not from humanity itself. Half of humanity is already excluded from being female (so it’s clearly not dehumanizing). Nina’s argument is that if that half is making masculinity a toxic place to be, then the solution is to fix masculinity so people stop fleeing it.

I won’t go into detail about the substance of her argument; you should get it from Nina, not me. I’m sure you can come up with counter-arguments, too. I have done so with Nina, starting with the obvious: “Many trans people consistently report that they always felt that their body was the wrong sex — and they start expressing this when they’re young children, so it’s not just a retconned memory. There’s something real going on here.” Nina has interesting and probing responses to this, and you can ask her about them if you want; I was glad I did, because it led to an in-depth conversation.

But this post is not about the substance of Nina’s argument. It’s about freedom of speech: How can someone even have this conversation with Nina, or observe her having it with others, if platforms deny her the ability to speak?

For expressing this view, Nina was briefly banned by Facebook. Apparently, a bunch of people who disagree with her got together and reported her to Facebook as though she were spamming or in some way violating the site’s terms of service. That’s a straight-up dishonest tactic. That’s no-platforming.

Nina is a frequent and well-known speaker about art and copyright restrictions, but now is sometimes disinvited from speaking gigs because of her gender-critical radical feminism, even when that’s not the topic of the speech. She’s had a showing of her film canceled (her films are not about gender-critical radical feminism either). When a friend of hers tried to post screenshots of a Facebook thread showing the venue’s statement about the cancellation, plus the usual tons of debate in followup comments, those screenshots mysteriously vanished from imgur.com. So the person reposted the screenshots, and again they disappeared — again with no explanation or notice.

What the heck? Is someone working at imgur a secret censor?

I wanted to know more, so I asked Nina’s friend exactly what had happened and got this reply (you can skip the blow-by-blow if you want, but it’s worth reading to get a feel for what the experience of being no-platformed is like):

Timeline is this:

(1) Argument happens on the Arcadia cafe page. People are calling
for no-platforming, etc. It gets to hundreds of comments.

(2) Juicy drama of this sort often gets removed, so around 8:15 PM,
I decided to just take a bunch of screenshots. I have these on
my computer.

(3) Sure enough, later that evening (10 PM?) Arcadia removes the
event from their page, and with it all the comments.

(4) Nina asks me if I have screenshots, I tell her I do, and that
while they’re completely unedited (so non-anonymized or pasted
together) I can put it somewhere, I suggest imgur in an album,
which will be viewable if you know the exact URL but not
browsable from my name or anything (like all my other images,
same way).

(5) The next morning (day after the event) I put the images up in
the first album “Arcadia No-platforming.” Imgur interface is not
so friendly, to keep the images in order I have to upload them
one at a time. There are 64 images.

(6) The URL got shared on Facebook, I see some people viewed the
album. Next morning, I awake to find… all the images are GONE.
Completely gone from my account (not just taken out of the
album). The album is left, but it’s an empty shell, nothing in
it. I have NO notifications, no email, no nothing, just the
images are gone. I have the album still open from the previous
night with the image showing (cached in my browser) but if I try
to open, yep, it’s the usual standard “this image has been
removed or is no longer available” thing.

So I’m just… CURIOUS.

(7) I upload the images again (all 64 of them, again one at a time).
I put in a new album “Arcadia No-platforming Is Back.” I decided
that hey, let’s save this link to the completed new album to the
Internet Wayback Machine. I do this, and confirm that the images
are backed up over there (so they’re on the public internet now
in a place that isn’t imgur).

(8) Nina also takes the images from the new album, and puts on her
blog. So they’re available in a second place, that isn’t imgur.

(9) Overnight that night, the images are removed AGAIN. Once again,
the album is left as an empty shell, and all the images are
completely gone from my account. None of my other images (some
of which are waaaaayyyyyyyyyy more “offensive” than these
screenshots I might add, and which have been linked,
individually, on twitter by me) are disturbed at all. Just the
Arcadia facebook screenshots.

So yeah. CURIOUS.

(10) I get mad, and make the single image that just has the “stop
trying to censor this, the images are [elsewhere]” redirect
text on it. I upload that into both albums. Both albums have
been steadily getting views.

(11) That next night, someone removes the redirect image! Just wtf.
Again, it’s gone from my account, no notifications of any sort
to me AT ALL.

(12) I upload the single redirect image again, again put it in both
albums.

(13) Since yesterday, I check the albums periodically but whoever it
is has given up, the redirect image has stayed in there. Both
albums are getting views, still.

That’s about it. It’s just curious to me, because… I’ve never had
any images removed from my account before, and I have plenty of stuff
that anyone who can’t deal with “penis is male” would be far more
offended by.

I suspect that someone involved in the facebook comments thread got
upset and complained to imgur that their “personal data” was being
shared, or something.

Thing is, it was a public facebook page, public comments, open to the
entire world. Also, if I was officially violating a TOS, I’d expect
to get some sort of notification about it or a slap on the wrist or
some warning or something.

But yes, I suspect someone involved in the whole thing didn’t want
their comments put on display in a less than favorable light
(somewhere else that was linking to my album, since I didn’t post my
album anywhere myself) and sent a complaint, or something. But…
dunno.

Either way, both albums keep getting views, to that single image.
Just… weeeeird.

That’s what no-platforming looks like. At its best, which is still pretty bad, the platform will at least admit to the censorship and describe how the decision was made. At its worst, as appears to be the case with imgur.com, it looks the way censorship regimes usually look: information disappears, and there’s no explanation nor even acknowledgement that it happened. Everyone please move along; nothing to see here.

Ostracism is not an answer either.

I mentioned that Nina has had speaking gigs and showings of her films canceled. Perhaps you’re thinking “Hey, that’s different. That’s not no-platforming. That’s just someone not wanting to be associated with Nina’s views. People have the right to disassociate themselves — in fact, isn’t that what ‘freedom of association’ is all about?”

Sure, in some literal sense, that’s true. But it’s best to use this “freedom to ostracize” sparingly. Most disagreements do not need to rise to the level of refusing to be seen with someone at all. There is no need for people to assume that when you engage someone in an unrelated discussion or presentation, you also endorse everything else that person believes.

Worse, there is a dangerous feedback loop here. The less often venues present people whose views diverge from their own, the more we start to assume that when a venue does present someone, it tacitly endorses everything that person thinks. The eventual result of this process is monoculture and an arms race of virtue-signaling, which is exactly what’s happening in certain quarters of the political left.

Here’s the working principle I would use (and I’d appreciate constructive feedback on it in the comments section):

If you already thought a person is worth presenting — or engaging in discussion with, or showing the artwork of, etc — then do so, unless that person has some unrelated public stance that clearly and unambiguously advocates violence or violates the “Intolerable Speech Rule” (that’s the Valerie Aurora test given earlier).

Nina Paley is justly famous for her articulate and persuasive arguments against copyright restrictions. She’s also justly famous as a filmmaker. If you’re looking for a speaker on the topic of copyright, or if you’re a venue that shows art films, you don’t need any special excuse to choose Nina Paley — she’s already on the short list.

So, given that, don’t not choose her just because she has other views that you might disagree with. As long as those views don’t qualify as “intolerable speech”, which they certainly do not, you’re not responsible for them. You’re not inviting her to be your CEO or the chair of your board of directors or something — those would create a meaningful, leadership-related association between your organization and Nina, and people could reasonably assume an implicit endorsement of, or at least lack of objection to, her views. In the absence of such a connection people shouldn’t make those assumptions, and you are free to make that clear.

To shun Nina’s contributions and works out of fear — that is, fear of being tainted by association with something Nina thinks, of being punished by the mob because you failed to shun Nina — is to make it that much harder for others to openly tolerate dissenting views. It’s passing the buck.

It also causes people to think Nina’s views are something other than what they are. All over the Internet you can find people calling her “transphobic”. This is pure libel: she is not, never has been, and such an attitude would be foreign to her nature. Nothing she has actually written or said would support the conclusion, either. But people believe it anyway, because they’ve seen other people saying it about Nina, and because they’ve seen venues that, besieged by the lie, believe it too and cancel appearances based on it.

When a venue cancels an appearance by Nina in response to false cries of “transphobia” or “hate speech” (more on that later), or a platform bans her for the same reason, it becomes party to the libel. It’s now part of the problem. Other people see the action and assume there must be some truth to the accusation — after all, why else would the post or the event have been canceled?

Please don’t contribute to this kind of mess, not with Nina or anyone else. Exercising “freedom of association” is not a free pass to slowly corrode someone’s reputation through inaction and invitations canceled or foregone. If you admire someone’s work, support her in that work.

Privilege, platforms, and using misrepresentation to silence.

One response to my concerns might be “Look, this is all easy for you to say, from your position of privilege as a white, straight, cis-gendered male citizen of the United States.”

I’m the first to admit my comfortable position. I’ve got it easy, and wish I could share that privilege with everyone. If I were transgender, if I didn’t have my identity constantly being reinforced and encouraged by the culture around me, I can see that I might be genuinely hurt by Nina’s position. I’m not actually sure I would be, and in fact there are transgender people who aren’t hurt and who speak out in support of Nina’s position, but I’ll certainly grant the possibility that I might be hurt.

However, the possibility of hurt feelings is not a reason to ban speech or ostracize the speaker. There is inevitably going to be disagreement about things that people take personally — for example, the question of whether others regarding you and treating you as the gender of your choice is a human right or not. The disagreements that matter are, by definition, the ones people care about. If we prohibit or shun speech that touches anything people are deeply invested in, we’ll all be left discussing the latest trends in shopping-mall interior decoration.

More importantly, once speech starts being restricted, it’s not the privileged who pay the price. As my friend Jeff Ubois put it: “It may not be possible to think clearly about inclusion or freedom of association without freedom of expression. But freedom of expression is what some advocates for vulnerable people want to limit.”

Look again at Valerie Aurora’s formula (by the way, I don’t know whether Valerie herself would agree with any of this — these are my interpretations of her formula, not necessarily her interpretations):

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

Nina does none of the above, unless you think that declining to treat another person in the way that person wants to be treated is inherently a human rights violation. I do not. One might choose to treat certain people in the ways that they prefer, but someone else who does not make the same choice is not thereby guilty of violence or dehumanization. I can think of myself however I wish to think of myself, but I can’t dictate how others think of me, even if I am hurt when they don’t see me as I see myself.

Separate from the issue of ownership of identity, there’s also a fundamental issue of honesty here:

When people band together to get someone no-platformed, there’s usually fraud involved. The complainants have to falsely claim a violation of the platform’s terms of service, knowing that the site’s overworked staff won’t actually have time to look deeply into the matter and make a reasoned decision. When people demand that a venue cancel an event on the grounds that someone who is clearly not transphobic is transphobic, that’s a misrepresentation.

The no-platformers are not seeking honest debate; they’re seeking to remove a voice. It’s silencing.

Social media platforms, at least, could help solve this problem by improving their ban systems. Right now there is no cost to someone who fraudulently requests that another person be banned, or who even makes repeated ban requests against many targets. For the no-platformers, it’s all upside and no downside. Until the platforms introduce some downside risk to those who would silence others, some penalty for bad-faith ban requests, the censorship will continue. Yes, this would require the platforms to make some judgement calls, but after all, those companies are already exercising judgement when they ban — they’re just doing it poorly.

The dangers of speech are not imaginary, of course. As much as I want to be a free-speech absolutist, even I can agree that some restrictions are necessary. Actual threats of violence, for example, justify restriction even by the state.

But private-sector venues and online platforms make their own terms, and they should try to live up to the free-speech principles they almost always claim to support. That includes measures to prevent coordinated no-platforming attacks from users bent on substituting their own speech code for the site’s terms of service. If it’s not a threat and it’s not seeking to endanger anyone through dehumanization, then let it stand. Real-world venues should err on the side of liberality and diversity. (And no, that doesn’t mean inviting Steve Bannon to headline your festival, but the reason not to invite him is because he’s a poor exponent of the ideas he claims to champion. A brief proximity to power is no reason to put a sloppy thinker on your short list in the first place.)

That friend who posted the screenshots also wrote: “… ‘hate speech’ codes only ever serve to protect the powerful”. I think that’s correct in the long run. Speech codes may give temporary comfort to some, but in the end systems of censorship will inevitably be turned against the weak by the strong.

My friend Smiljana, when we were discussing Nina’s no-platforming, said:

“We talk about identity so we don’t have to talk about class.”

So true.