The default category for posts on this site; the equivalent of “Uncategorized” in a fresh WordPress installation :-).

% So, why are we not using \appdxsection here?
% Sit down, my child, and you shall hear a tale.  A tale of terrible
% danger and of great deeds by heroes who fought that danger but were
% ultimately defeated.  This is not a story with a happy ending.  Your
% father and I debated for a long time about whether you were old
% enough to hear it.  We eventually decided you were still too young,
% but you sneaked into my computer and are now reading it anyway, you
% little rascal!  Fine.  If you're old enough to break into the liquor
% cabinet, you're old enough to get drunk, as the saying goes.  (No
% one ever actually said this.  I just made it up.  But I'll bet that
% by this time next year it's trending on Twitter.)
% Anyway, here's the deal:
% If we just use "\appdxsection{Foo}\label{appdx:foo}" here, then the
% first appendix's name will be something like "Appendix G".  In other
% words, the section numbering that we've been using so far will
% continue right on in to the appendix letters.  If the last
% non-appendix section was Section 6, then the first appendix will get
% named for the 7th letter of the alphabet.
% That sucks, but we all know what the solution is, right?  Just do:
%   \setcounter{section}{0}
%   (And actually, you can even combine that with another command,
%   "\renewcommand{\thesection}{\Alph{section}}", so that subsections
%   within the appendix get lovely appendix-y names like "A.1", "A.2",
%   "A.2.1", etc.)
% Ah, but there's a problem:
% Now if you say "Section \ref{appdx:foo}" anywhere in the document,
% the ref's *link* will actually point to the page where -- you
% guessed it -- Section 1 is!  So while the reference's text would
% look correct (saying "Appendix A" or whatever), if you click on it
% it mysteriously jumps to Section 1, and if you hover over it, in
% Evince or in any other PDF reader that supports preview popups, you
% see a popup showing Section 1's header.
% I don't have a solution for this.  Or rather, I finally decided to
% stop shaving the LaTeX yak and just do it manually, with a regular
% unnumbered section that has "Appendix A:" in its title explicitly.
% Sometimes the dragon wins.
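% (For the record, a minimal sketch of that manual workaround -- the
% section title below is illustrative, not the one actually used:
%
%   \section*{Appendix A: Foo}
%   \addcontentsline{toc}{section}{Appendix A: Foo}
%
% The \addcontentsline keeps the unnumbered section in the table of
% contents, since \section* omits it by default.)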

I decided to try out this lossless text-compression demonstration site by Fabrice Bellard. It uses GPT-2 natural language generation and prediction to achieve compression. As sample text, I used the first paragraph of Donald Trump’s recent rally speech in Tulsa, Oklahoma. (I figured if anything can compress well using predictive machine learning, surely Trump’s speech patterns can.)

Here’s the compressor site, with most of the input and all of the output showing:

compression page with both input and output displayed

The output looks like a short string of Chinese characters because the compressed text is represented as a series of Unicode characters (encoding 15 bits of information per character — which makes the displayed compression ratio, 804/49, a bit misleading, since the characters on the bottom carry about twice as much information as the characters on the top: 402/49 would be more accurate, and still quite impressive).
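A quick back-of-envelope check of those ratios (the 804 input characters, 49 output characters, and 15 bits per output character come from the site; treating the input as 8-bit text is my assumption):

```python
# Compare the character-count ratio to the bit-for-bit ratio.
input_chars, output_chars = 804, 49
bits_per_in, bits_per_out = 8, 15

naive_ratio = input_chars / output_chars  # counts characters, not bits
fair_ratio = (input_chars * bits_per_in) / (output_chars * bits_per_out)

print(f"naive: {naive_ratio:.1f}x")   # ~16.4x, misleading
print(f"fair:  {fair_ratio:.2f}x")    # ~8.75x, actual information
```

Rounding each output character to exactly two input characters' worth, as above, gives 402/49 ≈ 8.2×; counting exact bits gives ~8.75×. Same ballpark either way.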

Anyway, I naturally thought “Hmm! What would happen if I were to paste this presumably random Chinese output into Google Translate?”

I am a prisoner, and I am in a state of mind.

“I am a prisoner, and I am in a state of mind.”

Aren’t we all, Internet? Aren’t we all?

Have you noticed how Trump consistently says that “we can’t let the cure be worse than the problem”? (emphasis mine)

The usual stock phrase ends with the word “disease”. But Trump avoids the stock phrase, probably because he doesn’t want someone quoting it back at him sarcastically at the peak of the COVID-19 death toll. So in order to avoid reminding his listeners that it is, in fact, literally a disease we’re dealing with here, he twists a common saying.

Since Trump’s use of language is so frequently odd anyway, journalists rarely call out his misdirections or try to explain them. But even worse, they often cover for him. There was a particularly dramatic example of this recently:

On The Daily podcast with Michael Barbaro, New York Times journalist Maggie Haberman played audio of Trump saying “I don’t want the cure to be worse than the problem itself” (he always phrases it this way — he never says “disease” in that phrase) and then she did a really interesting thing. She repeated it back for the audience, but with the phrase corrected to its standard form:

“— in his words, the cure can’t be worse than the disease.”

(Here’s a transcript.)

Haberman wasn’t adding any information by rephrasing the President. She wasn’t summarizing a longer or more complex thing Trump said. She wasn’t providing needed context that the listener might not have. She just repeated Trump, with one important fix — and called her fixed version “his words”.

What is going on? It’s not a simple accident. The day before, Michael Barbaro himself did the same thing. He played audio of Trump using the same odd phrasing on a different occasion, and then Barbaro followed it up by similarly fixing the President’s words, albeit with “illness” instead of “disease”. (transcript here)

It’s as though the journalists know something is wrong, and instinctively want to fix it, so they generously clean up after the President, instead of simply pointing out how the President consistently mis-phrases a traditional saying. (Foreign journalists have noticed this tendency of American reporters to edit the President and thus mask what he’s actually saying.)

I’m not suggesting that reporters should indulge in speculation about the President’s motivations in behaving like this, even when those motivations are pretty clear. Instead, I’m suggesting that journalists should point out when something odd is going on — help the audience see patterns. As reporters, they’ve heard Trump use this strange phrasing multiple times; they know full well what is going on. But any given audience member might not have heard all those instances, and thus might not spot the pattern.

Instead of unconsciously correcting Trump, and thus normalizing him, just report on him and help people be aware of patterns. Listeners can come to their own conclusions about what the patterns mean, but no one is in a better position than journalists who cover Trump professionally to point out the patterns in the first place.

Don’t cover for.

Just cover.

(Note: See related Twitter threads here and here.)

Update (2019-11-25): Audrey Eschright has made a link roundup of “pieces I’ve been reading on the topic of modern free and open source software practices, licensing, and ethical concerns.” Thanks, Audrey! (Thanks also to Sumana Harihareswara, whose tweet alerted me to this fine development.)

Update (2019-10-23): Christie Koehler has written a great piece on this same topic: Open Source Licenses and the Ethical Use of Software. It’s much more in-depth than my treatment below; I highly recommend Christie’s post if you’re looking for a thorough examination of this trend.

I just wrote this in an email, and then realized it was basically already a blog post, so here it is. (Disclaimer: in this post, as on this blog generally, I’m speaking only for myself and not for my company or our clients.)

There’s been a lot of talk recently about creating software licenses that include an ethical-use-only clause. Here’s one example among many. There has even been talk about modifying some existing free software / open source software licenses to include such clauses. If I stopped to dig up source links for everything I’d never get this post done, but if you’re active in this field you’ve probably been seeing these conversations too. Feel free to supply links in the comments.

According to the current definition of free and open source software, such licenses would no longer be FOSS. Some people react to that by saying that maybe we need to update the definition of FOSS then, but that’s backwards — you can’t change a thing by changing what labels you call it by. The current definition of FOSS would still exist, and would still mean exactly what it means, whether one calls it “FOSS” or “broccoli” or “gezornenplatz”.

But even ignoring the nominalist arguments, I think these ethics-scoped licenses are, sadly, an unworkable idea on substantive grounds.

Aditya Mukerjee explained why very eloquently in this tweet thread, and you might want to read that first. I would add:

In practice, these kinds of clauses are time bombs that people either don’t hear ticking, in which case they get an unpleasant surprise later, or do hear ticking, in which case they avoid using any software under that license.

The conversations I’ve seen around these licenses seem to start from the position that all (ahem) reasonable people agree about what is ethical. But in fact there are serious and deep disagreements about what is ethical — even among people who would never have expected that they might disagree with each other, there are usually latent disagreements lurking. Here are a couple of examples, just to show how easy it is for this to happen:

1) Some people believe that copyright infringement is immoral. They think that copying without authorization, or at least doing so at scale, harms artists and other creators, and is thus unethical. Other people believe that putting restrictions on copying is inherently immoral — that no one should have a monopoly on the distribution of culture and information. (Note that this is wholly independent of attribution, of course — that’s a separate concern, and both sides here generally agree that misattribution is unethical because it is simply a type of fraud.)

So what happens when someone puts out a license with a clause saying that one may not use this software as part of a system that performs unauthorized copying? Sure, the license will mean what it means and will be variably enforceable depending on the jurisdiction. But what I’m getting at is that there is no consensus at all, especially among the kinds of people likely to be pondering these questions in the first place, about whether the restriction would be ethical.

This example, far from being contrived, actually touches the proposed license referred to earlier. That license bases its “do no harm” clause on the Universal Declaration of Human Rights, in which see clause 27(2) — a clause that I do not agree is ethical and that, depending on how it is interpreted, may be in fundamental contradiction with free software licensing.

Next example…

2) Many vegetarians and vegans feel that killing animals for meat — and doing medical testing on animals, etc — is immoral. Most of those people live surrounded by meat-eaters, so they often don’t bring this up in conversation unless asked about it. But it’s only a matter of time before someone releases a license that prohibits the software from being used for any purpose that harms animals.

Oh wait, that already happened.

(To be fair, it looks like maybe that was really a click-through download EULA rather than the underlying software license, at least based on this archived page. It’s a little hard to tell — this was all around 2008, and the license is no longer easy to find on the Net. Which I think is likely to be the fate of most ethics-scoped licenses in the long run.)

Formally speaking, these kinds of ethical-use-only clauses violate both the Free Software Definition and the Open Source Definition. In the FSD, they prevent the software from being used “for any purpose”. In the OSD, they constitute a “field of use” restriction.

Now, you can make any license you want, and if you hire a good lawyer to do the drafting it may even be enforceable in some circumstances. But there is much less consensus around the world about what is “ethical” than many people wish. If this practice were normalized, we would quickly have software licenses that prohibit the software from being used in a system that encourages people to change or abandon their religion, or from being used to educate women, etc.

“Fine”, I hear you say. “I don’t have to use their software, then. But people who agree with my ethics will be free to use the software I release under licenses that enforce those ethics.” Except that no one will: the software won’t be adopted, except maybe by your friends. Anyone seriously thinking of using that software in production will run away as fast as they can from a license clause that opens them up to liability based on some judge’s interpretation of what constitutes a violation of someone else’s ethical guidelines. These licenses may look great on the runway, but they’ll never fly.

I think the FSD and the OSD (which are essentially the same idea expressed in different words) got it right the first time. Free software licenses accomplish some wonderful things, both for individual freedom and for non-monopolistic collaboration built around free-to-fork code. However, FOSS licenses can never provide a generally enforceable framework for ethical behavior. Attempts to make them do the latter not only fail (because the software won’t be widely adopted with non-FOSS license terms anyway) but also reduce the licenses’ effectiveness at doing what they were originally designed to do.

Portrait of Elizabeth Warren.

I’ve been “All In For Warren” for a while now. I expect a lot more people to join us after tonight’s debate :-), but just in case you’re still on the fence, here are four brief arguments Why Warren:

  • She’s making the other Democratic candidates better. She’s offering so much vision that the others are picking it up. The longer she stays in the race, the better the eventual nominee will be. (I think it will be her anyway, so this item is more of an insurance-policy argument.)
  • She has the right enemies. Seriously, ask yourself: can you name one enemy of Joe Biden’s? No, you can’t. When Joe Biden walks into a room, his goal is for everyone in that room to like him. That is not what we need in our next President. Elizabeth Warren has the enemies you’d hope she would have.
  • She understands what is needed, and she’s proposing to actually do it. Most candidates understand what is needed, but they don’t dare propose to actually do it, because they can’t afford to scare off the big-dollar donors. Elizabeth Warren decided not to pursue big-dollar donors from the beginning. That’s freed her up to offer up a spot-on diagnosis of how scaled-up capitalism has captured the state and made its values the state’s values, and she’s saying what needs to be done about that. She doesn’t mind offending the people who pushed us into unsustainable inequalities of wealth, power, and dignity.
  • If she’s campaigning for President, then we’ll probably have a better Senate too. The best presidential campaigns have coattails. Elizabeth Warren’s will be particularly long, because she’s offering so much for other candidates to grab on to.

Want to help? Come on in, the water’s fine!

Visual demonstration of Simpson's Paradox (adapted from https://en.wikipedia.org/wiki/File:Simpson%27s_paradox_continuous.svg)

Do any news organizations have a Numeracy Editor?

For fifteen years, the New York Times had a Public Editor, whose job was to visibly uphold journalistic ethics. The Public Editor would publicly discuss errors, biases, or gaps in the paper’s coverage. (Some other news organizations continue to have a public editor position, though I think it’s not widespread.)

I’d like to propose something narrower: a Numeracy Editor. The Numeracy Editor’s job would be to help reporters and columnists use numerical and statistical reasoning well.

I’ve been pondering this idea for a while, and finally decided to write about it after reading Vatsal G. Thakkar’s excellent NYT Op-Ed Bring Back the Stick Shift a couple of weeks ago. It’s a good piece, but at one point it veers into unexpected non sequitur in an attempt to use statistics to support its argument:

Backup cameras, mandatory on all new cars as of last year, are intended to prevent accidents. Between 2008 and 2011, the percentage of new cars sold with backup cameras doubled, but the backup fatality rate declined by less than a third while backup injuries dropped only 8 percent.

The more you read that, the less it means. For a three-year period, the percentage of new cars sold with backup cameras doubled from whatever it was before — without knowing what it was before, this doesn’t tell us anything: the result of doubling a minuscule percentage would still be a minuscule percentage, for example. Meanwhile, during that same three-year period, fatalities due to backups declined by some amount (less than a third) from whatever the rate was before — again, we don’t know. So does that decline represent a greater decline in backup fatalities than should be expected from whatever percentage of cars on the road newly have backup cameras? Or a smaller decline? There is no way to say. Also, we don’t know what percentage of cars on the road are new cars, which is highly relevant here.
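To make the doubling point concrete — “the percentage doubled” depends entirely on the unreported baseline (all figures below are hypothetical; the Op-Ed gives none):

```python
# Doubling a tiny baseline is still tiny; doubling a large one is huge.
for baseline in (0.02, 0.15, 0.40):
    doubled = baseline * 2
    print(f"{baseline:.0%} of new cars -> {doubled:.0%} of new cars")
```

Without the baseline, “doubled” is compatible with backup cameras being a rarity or a near-majority, which lead to very different expectations about fatality rates.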

If the author was trying to say that fatalities should have declined more, this paragraph does not support that case, but it doesn’t support any other case either. It throws some statistics into the air, as if to see how the wind catches them, but they don’t connect to each other and they have no bearing on the question at hand. As my friend Tom put it, it’s just a “number casserole”.

I certainly don’t mean to pick on Thakkar — again, I liked the piece — or on the New York Times. This sort of thing happens in many publications; you can see it all the time, in the regular reporting just as much as in opinion editorials.

But given that this was the New York Times Op-Ed page — a forum that presumably takes quality control and editorial standards seriously — it’s worth asking: how did such a problematic paragraph make it through the filters? I think the answer is that there is no editor whose reputation and self-respect are on the line when numerical clunkers slip through. A few grammatical or spelling errors and someone’s job is in danger, but even glaring errors of statistical reasoning are currently costless.

I get that journalists and their editors tend to have backgrounds in language, political science, history, and other fields that don’t emphasize math. And that’s fine: this isn’t an “everyone should learn more math” argument. There are only a finite number of days in anyone’s life, there isn’t time to learn everything, and people make the choices they make for reasons. That’s exactly why a Numeracy Editor is needed: it would be her job to own this problem, and along the way help journalists learn the math they need. The writers would start to be more careful just knowing that someone is watching. A Numeracy Editor would have caught the problem in that Op-Ed right away, and once spotted, it’s easy to explain; the conversation with the author can take place before publication, as with any other kind of editing. Many errors of numerical or statistical reasoning are easy to understand once they’re pointed out (although there are also subtler cases, such as Simpson’s Paradox, that occur in real-life, policy-relevant situations and need to be watched for).
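Simpson’s Paradox is exactly the kind of subtle trap a Numeracy Editor would watch for. Here’s the classic illustration, using the well-known kidney-stone treatment data (Charig et al., 1986): treatment A has the better success rate within each subgroup, yet treatment B looks better overall, because the treatments were applied to differently-sized subgroups.

```python
# (successes, trials) per subgroup, per treatment.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, trials):
    return successes / trials

# Treatment A wins within every subgroup...
for group, t in data.items():
    winner = "A" if rate(*t["A"]) > rate(*t["B"]) else "B"
    print(f"{group}: treatment {winner} wins")

# ...but aggregate the subgroups and B wins overall.
totals = {
    name: (sum(data[g][name][0] for g in data),
           sum(data[g][name][1] for g in data))
    for name in ("A", "B")
}
overall = "A" if rate(*totals["A"]) > rate(*totals["B"]) else "B"
print(f"overall: treatment {overall} wins")
```

A reporter who sees only the aggregate numbers would draw exactly the wrong conclusion — which is why these cases need someone whose job is to look.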

Unlike Public Editor, Numeracy Editor need not be a public-facing role. The main point is to help writers and other editors use math appropriately and to prevent mistakes. If the editor also wants to conduct a public discussion about using numbers and graphs in journalism, that would be a great public service too, but it’s a bonus. The role could do a lot of good purely behind the scenes.

Numeracy Editor should be an easier position to hire for than the broader role of Public Editor has been, because it doesn’t require nearly as much journalistic experience (the Numeracy Editor isn’t making hard judgement calls about how much anonymous sourcing is acceptable in a story, for example) and because the advice it provides would be less controversial.

Anyway, I don’t run a newspaper; all I have is this blog. I’d love to hear from anyone who works in or near journalism what they think of this idea.

(You can respond in a comment, or in this Twitter thread, or in this Identi.ca thread.)

I guess I’ll just write this as though I have reason to believe that the people who write headlines for the New York Times read my blog.

For the record: I’m a subscriber, and I think the Times does some terrific reporting and investigative journalism — when they’re at their best, there’s no one better. That makes the unforced errors all the more disappointing.

Look at the top of today’s edition’s front page:

Top of New York Times front page for 2018/10/03.

First note the caption beneath the big color photo on the left, which says:

A migrant caravan headed north Monday from Tapachula, Mexico, where members had stopped after crossing in from Guatemala.

Now all the way over on the right, note the bold headline at the top of the rightmost column:

Trump Escalates Use of Migrants As Election Ploy

Issuing Dark Warnings

Stoking Voters’ Anxiety With Baseless Tale of Ominous Caravan

If you take the headline at face value, and then look over at the photo, you would naturally come to the conclusion that the New York Times is contradicting itself on its own front page.

It turns out that the article under the headline is indeed about a baseless tale — just not one about the existence of the caravan itself, even though that’s what the headline would imply to any casual reader:

President Trump on Monday sharply intensified a Republican campaign to frame the midterm elections as a battle over immigration and race, issuing a dark and factually baseless warning that “unknown Middle Easterners” were marching toward the American border with Mexico.

[emphasis mine]

In twenty words of headline, there wasn’t some way to fit something specific about the false claim in?

How about this:

Trump Falsely Implies Terrorism Threat From Caravan

“Unknown Middle Easterners”

Stoking Voters’ Anxiety With Baseless Claim About Migrant Caravan

There, did it in 19 words, one fewer than the number they used for a misleading and less informative headline.

Yes, by the way, you know and I know and the New York Times knows that “Middle Easterner” doesn’t mean “terrorist”. But it’s perfectly clear what Trump is doing here and the NYT shouldn’t shy away from describing it accurately… in the headline.

(Entirely separately from the above, there’s the question of why the New York Times is running a giant color photograph of the migrants above the fold on its front page, for the second time in the past few days. These caravans have been going on since 2010; they’re larger and more organized the last couple of years, but they’re not new. As an independent news outlet, why let a politician’s talking points drive cover art choices in the first place?)

Self-censored page of 'Green Illusions', by Ozzie Zehner
image credit

A particularly insidious problem with online social media platforms is biased and overly-restrictive ban patterns. When enough people report someone as violating the site’s Terms Of Service, the site will usually accept the reports at face value, because there simply isn’t time to evaluate all of the source materials and apply sophisticated yet consistent judgement.

No matter how large the company, even if it’s Facebook, there will simply never be enough staff to evaluate ban requests well. The whole way these companies are profitable is by maintaining low staff-to-user ratios. If policing user-contributed content requires essentially arbitrary increases in staff size, that’s a losing proposition, and the companies understandably aren’t going to go there.

One possible solution is for the companies to make better use of the resource that does increase in proportion to user base — namely, users!

When user B reports user Q as violating the site’s ToS, what if the site’s next step were to randomly select one or more other users (who have also seen the same material user B saw) to sanity-check the request? User B doesn’t get to choose who they are, and user B would be anonymous to them — the others wouldn’t know who made the ban request, only what the basis for the request is, that is, what user B claimed about user Q. The site would also put their actual Terms of Service conveniently in front of the checkers, to make the process as easy as possible.

Now, some percentage of the checkers would ignore the request and just not bother. That’s okay, though: if that percentage is high, that tells you something right there. If user Q is really violating the site’s ToS in some offensive way, there ought to be at least a few other people besides user B who think so, and some of them would respond when asked and support B’s claim. The converse case, in which user Q is perhaps controversial but is not violating the ToS, does not necessarily need to be symmetrically addressed here because the default is not to ban: freedom of speech implies a bias toward permitting speech when the case for suppressing it is not convincing. However, in practice, if Q is controversial in that way then some of the checkers would be motivated to respond because they realize the situation and want to preserve Q’s ability to speak.

The system scales very naturally. If there aren’t enough other people who have read Q’s post available to confirm or reject the ban, then it is also not very urgent to evaluate the ban in the first place — not many people are seeing the material anyway. ToS violations matter most when they are being widely circulated, and that’s exactly when there will be lots of users available to confirm them.

If user B issues too many ban requests that are not supported by a majority of randomly-selected peers, then the site could gradually downgrade the priority of user B’s ban requests generally. In other words, a site can use crowd-sourced checking both to evaluate a specific ban request and to generally sort people who request bans in terms of their reliability. The best scores would belong to those who are conservative about reporting and who only do so when (say) they see an actual threat of violence or some other unambiguous violation of the ToS. The worst scores would belong to those who issue ban requests against any speech they don’t like. Users don’t necessarily need to be told what their score is; only the site needs to know that.
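A rough sketch of the crowd-checking and reliability-scoring mechanism described above. All names, thresholds, and score mechanics here are invented for illustration; the idea deliberately leaves them open:

```python
import random

def pick_checkers(readers, reporter, k=5):
    """Randomly select up to k users who saw the same material,
    excluding the reporter (who stays anonymous to the checkers)."""
    pool = [u for u in readers if u != reporter]
    return random.sample(pool, min(k, len(pool)))

def report_upheld(votes):
    """Uphold a ban request only if a majority of the checkers who
    bothered to respond agree; the default is not to ban, so
    non-responses and ties count against the request."""
    responded = [v for v in votes if v is not None]
    return bool(responded) and sum(responded) > len(responded) / 2

def update_reliability(score, upheld, step=0.1):
    """Nudge a reporter's reliability score up when a request is
    upheld and down when it isn't, clamped to [0, 1]."""
    score += step if upheld else -step
    return max(0.0, min(1.0, score))
```

A site could then weight incoming ban requests by the requester’s score — or, per the idea above, simply stop acting on requests from reporters whose scores fall too low.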

(Of course, this whole mechanism depends on surveillance — on centralized tracking of who reads what. But let’s face it, that ship sailed long ago. While personally I’m not on Facebook, for that reason among many, lots of other people are. If they’re going to be surveilled, they should at least get some of the benefits!)

Perhaps users who consistently issue illegitimate ban requests should eventually be blocked from issuing further ban requests at all. This does not censor them nor interfere with their access to the site. They can still read and post all they want. The site is just informing them that the site doesn’t trust their judgement anymore when it comes to ban requests.

The main thing is (as I’ve written elsewhere) that right now there’s no cost for issuing unjustified ban requests. Users can do it as often as they want. For anyone seeking to no-platform someone else, it’s all upside and no downside. What is needed is to introduce some downside risk for attempts to silence.

Other ideas:

  • A site should look more carefully at ban requests targeting material that has already survived a rejected ban request, or that has been reported by someone with a poor ban-reliability score, since those requests are more likely to be unjustified too.

  • A lifetime (or per-year) limit on how many ban requests someone can issue.

  • Make ban requests publicly visible by default, with opt-out anonymity (that is, opt-in to be identified) for the requester.

Do you have other (hopefully better) ideas? I’d love to hear them in the comments.

If you think over-eager banning isn’t a real problem yet, remember that we have incomplete information as to how bad the problem actually is (though there is some evidence out there). By definition, you mostly don’t know what material you’ve been prevented from seeing.

New York City Subway, 14th Street Union Square platform curvature
image credit

(Update: After this article was first published, this happened, and then later this, which I think demonstrates my point. With “allies” like these, who needs oppressors? I also wrote a followup post, Thinking Creatively About Ban Requests and Content Policing, that discusses some structural solutions online services could try.)

More and more of the political left — which is where I sit, at least by American standards — seems to be abandoning the idea of freedom of speech as an inherent good, let alone as the essential liberty on which all other liberties depend.

Recently, someone I know and respect called repeatedly for Donald Trump to be banned from Twitter. He’s not alone. A lot of people want this, and they don’t want just Trump banned, they want many speakers banned from many popular platforms.

This is a worrying trend. The left may be about to finally gain some measure of political power in the United States, depending on the results of the November election. Yet right at that moment we are narrowing our ability to have necessary debates and to even hear what people say. I’ll focus on one particular example in this post, but the problem is a general one. This narrowing would be bad under any circumstances, but it becomes worse when attached to power.

I’m not talking here about state censorship. A few people call for that too, but most people still understand why the state needs to be especially constrained in its ability to interfere with speech. I’m talking about no-platforming and campaigns for blanket shunning: that is, urging private-sector platforms to ban certain speakers, and shaming other people and organizations into ostracizing those speakers as individuals under all circumstances, even circumstances that are unrelated to the allegedly objectionable speech.

Are there narrow, consistent criteria we could use to decide when it’s appropriate to advocate non-state suppression of speech?

I think there are, and I think we’d be better off if we stuck to those criteria instead of the increasingly broad and subjective criteria many are using right now, most of which are based on empathy for those who are hurt by harmful speech. Certainly, speech can be harmful: the argument for freedom of speech has never been that there is no such thing as harmful speech, but rather that suppressing speech almost always leads to worse harm down the road. There are two reasons why “someone felt deeply hurt” is not a good test: one, it treats the speech and the speaker inconsistently by looking to others’ reactions as a guide (reactions which will vary from listener to listener), and two, sometimes useful speech may also be hurtful to some people — these two things are not contradictory, much as we might wish otherwise.

We need a better test.

Last year I read a good post by Valerie Aurora entitled The Intolerable Speech Rule: the Paradox of Tolerance for tech companies. The post links to a presentation she gives that’s worth watching. It’s thoughtful, aware of the tradeoffs involved in any kind of permitted-speech policy, and careful to distinguish between private actors (such as social media platforms) and the state.

Here is Valerie Aurora’s formula, phrased as a guideline for online platforms:

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

It’s a very specific, circumscribed rule. If I ran an online service, I’d try to follow it — but as conservatively as possible, because:

What about someone who tries, sincerely and non-threateningly, to discuss what is and is not a human right in the first place?

A friend of mine, Nina Paley, has been repeatedly no-platformed for doing exactly this. Nina is blunt and direct, because she has strong feelings on the issue she’s speaking about. But she threatens no one, and never tries to silence or dehumanize. She’s happy to engage with opposing views, and argues her own in good faith.

I’ll state Nina’s view only briefly here — if you want to know more about it, you’re better off getting it from her, and most of what I’m saying here is not about the substance of her view. Put simply, Nina doesn’t accept the argument that transgender women are women. Nina would like women’s-only spaces to be for women who were born women (or, as Nina resolutely calls them, “women”). Some people call this “Trans-Exclusionary Radical Feminism” and thus refer to Nina as a “TERF”. Nina prefers the term “gender-critical radical feminist”. At the very least, if you use the term “TERF”, be clear, as Nina always is, that the exclusion is from the set of human females, not from humanity itself. Half of humanity is already excluded from being female (so it’s clearly not dehumanizing). Nina’s argument is that if that half is making masculinity a toxic place to be, then the solution is to fix masculinity so people stop fleeing it.

I won’t go into detail about the substance of her argument; you should get it from Nina, not me. I’m sure you can come up with counter-arguments, too. I have done so with Nina, starting with the obvious: “Many trans people consistently report that they always felt that their body was the wrong sex — and they start expressing this when they’re young children, so it’s not just a retconned memory. There’s something real going on here.” Nina has interesting and probing responses to this, and you can ask her about them if you want; I was glad I did, because it led to an in-depth conversation.

But this post is not about the substance of Nina’s argument. It’s about freedom of speech: How can someone even have this conversation with Nina, or observe her having it with others, if platforms deny her the ability to speak?

For expressing this view, Nina was briefly banned by Facebook. Apparently, a bunch of people who disagree with her got together and reported her to Facebook as though she were spamming or in some way violating the site’s terms of service. That’s a straight-up dishonest tactic. That’s no-platforming.

Nina is a frequent and well-known speaker about art and copyright restrictions, but is now sometimes disinvited from speaking gigs because of her gender-critical radical feminism, even when that’s not the topic of the speech. She’s had a showing of her film canceled (her films are not about gender-critical radical feminism either). When a friend of hers tried to post screenshots of a Facebook thread showing the venue’s statement about the cancellation, plus the usual tons of debate in follow-up comments, those screenshots mysteriously vanished from imgur.com. So the person reposted the screenshots, and again they disappeared — again with no explanation or notice.

What the heck? Is someone working at imgur a secret censor?

I wanted to know more, so I asked Nina’s friend exactly what had happened and got this reply (you can skip the blow-by-blow if you want, but it’s worth reading it to feel what the experience of being no-platformed is like):

Timeline is this:

(1) Argument happens on the Arcadia cafe page. People are calling
for no-platforming, etc. It gets to hundreds of comments.

(2) Juicy drama of this sort often gets removed, so around 8:15 PM,
I decided to just take a bunch of screenshots. I have these on
my computer.

(3) Sure enough, later that evening (10 PM?) Arcadia removes the
event from their page, and with it all the comments.

(4) Nina asks me if I have screenshots, I tell her I do, and that
while they’re completely unedited (so non-anonymized or pasted
together) I can put it somewhere, I suggest imgur in an album,
which will be viewable if you know the exact URL but not
browsable from my name or anything (like all my other images,
same way).

(5) The next morning (day after the event) I put the images up in
the first album “Arcadia No-platforming.” Imgur interface is not
so friendly, to keep the images in order I have to upload them
one at a time. There are 64 images.

(6) The URL got shared on Facebook, I see some people viewed the
album. Next morning, I awake to find… all the images are GONE.
Completely gone from my account (not just taken out of the
album). The album is left, but it’s an empty shell, nothing in
it. I have NO notifications, no email, no nothing, just the
images are gone. I have the album still open from the previous
night with the image showing (cached in my browser) but if I try
to open, yep, it’s the usual standard “this image has been
removed or is no longer available” thing.

So I’m just… CURIOUS.

(7) I upload the images again (all 64 of them, again one at a time).
I put in a new album “Arcadia No-platforming Is Back.” I decided
that hey, let’s save this link to the completed new album to the
Internet Wayback Machine. I do this, and confirm that the images
are backed up over there (so they’re on the public internet now
in a place that isn’t imgur).

(8) Nina also takes the images from the new album, and puts on her
blog. So they’re available in a second place, that isn’t imgur.

(9) Overnight that night, the images are removed AGAIN. Once again,
the album is left as an empty shell, and all the images are
completely gone from my account. None of my other images (some
of which are waaaaayyyyyyyyyy more “offensive” than these
screenshots I might add, and which have been linked,
individually, on twitter by me) are disturbed at all. Just the
Arcadia facebook screenshots.

So yeah. CURIOUS.

(10) I get mad, and make the single image that just has the “stop
trying to censor this, the images are [elsewhere]” redirect
text on it. I upload that into both albums. Both albums have
been steadily getting views.

(11) That next night, someone removes the redirect image! Just wtf.
Again, it’s gone from my account, no notifications of any sort
to me AT ALL.

(12) I upload the single redirect image again, again put it in both
albums.

(13) Since yesterday, I check the albums periodically but whoever it
is has given up, the redirect image has stayed in there. Both
albums are getting views, still.

That’s about it. It’s just curious to me, because… I’ve never had
any images removed from my account before, and I have plenty of stuff
that anyone who can’t deal with “penis is male” would be far more
offended by.

I suspect that someone involved in the facebook comments thread got
upset and complained to imgur that their “personal data” was being
shared, or something.

Thing is, it was a public facebook page, public comments, open to the
entire world. Also, if I was officially violating a TOS, I’d expect
to get some sort of notification about it or a slap on the wrist or
some warning or something.

But yes, I suspect someone involved in the whole thing didn’t want
their comments put on display in a less than favorable light
(somewhere else that was linking to my album, since I didn’t post my
album anywhere myself) and sent a complaint, or something. But…

Either way, both albums keep getting views, to that single image.
Just… weeeeird.

That’s what no-platforming looks like. At its best, which is still pretty bad, the platform will at least admit to the censorship and describe how the decision was made. At its worst, as appears to be the case with imgur.com, it looks the way censorship regimes usually look: information disappears, with no explanation, nor even an acknowledgement that it happened. Everyone please move along; nothing to see here.

Ostracism is not an answer either.

I mentioned that Nina has had speaking gigs and showings of her films canceled. Perhaps you’re thinking “Hey, that’s different. That’s not no-platforming. That’s just someone not wanting to be associated with Nina’s views. People have the right to disassociate themselves — in fact, isn’t that what ‘freedom of association’ is all about?”

Sure, in some literal sense, that’s true. But it’s best to use this “freedom to ostracize” sparingly. Most disagreements do not need to rise to the level of refusing to be seen with someone at all. There is no need for people to assume that when you engage someone in an unrelated discussion or presentation, you also endorse everything else that person believes.

Worse, there is a dangerous feedback loop here. The less often venues present people whose views diverge from the venue’s, the more we start to think that when a venue does present someone the venue tacitly endorses everything that person thinks. The eventual result of this process is monoculture and an arms race of virtue-signaling, which is exactly what’s happening in certain quarters of the political left.

Here’s the working principle I would use (and I’d appreciate constructive feedback on it in the comments section):

If you already thought a person is worth presenting — or engaging in discussion with, or showing the artwork of, etc — then do so, unless that person has some unrelated public stance that clearly and unambiguously advocates violence or violates the “Intolerable Speech Rule” (that’s the Valerie Aurora test given earlier).

Nina Paley is justly famous for her articulate and persuasive arguments against copyright restrictions. She’s also justly famous as a filmmaker. If you’re looking for a speaker on the topic of copyright, or if you’re a venue that shows art films, you don’t need any special excuse to choose Nina Paley — she’s already on the short list.

So, given that, don’t rule her out just because she has other views that you might disagree with. As long as those views don’t qualify as “intolerable speech”, which they certainly do not, you’re not responsible for them. You’re not inviting her to be your CEO or the chair of your board of directors or something — those would create a meaningful, leadership-related association between your organization and Nina, and people could reasonably assume an implicit endorsement of, or at least lack of objection to, her views. In the absence of such a connection people shouldn’t make those assumptions, and you are free to make that clear.

To shun Nina’s contributions and works out of fear — that is, fear of being tainted by association with something Nina thinks, of being punished by the mob because you failed to shun Nina — is to make it that much harder for others to openly tolerate dissenting views. It’s passing the buck.

It also causes people to think Nina’s views are something other than what they are. All over the Internet you can find people calling her “transphobic”. This is pure libel: she is not, never has been, and such an attitude would be foreign to her nature. Nothing she has actually written or said would support the conclusion, either. But people believe it anyway, because they’ve seen other people saying it about Nina, and because they’ve seen venues that, besieged by the lie, believe it too and cancel appearances based on it.

When a venue cancels an appearance by Nina in response to false cries of “transphobia” or “hate speech” (more on that later), or a platform bans her for the same reason, it becomes party to the libel. It’s now part of the problem. Other people see the action and assume there must be some truth to the accusation — after all, why else would the post or the event have been canceled?

Please don’t contribute to this kind of mess, not with Nina or anyone else. Exercising “freedom of association” is not a free pass to slowly corrode someone’s reputation through inaction and invitations canceled or foregone. If you admire someone’s work, support her in that work.

Privilege, platforms, and using misrepresentation to silence.

One response to my concerns might be “Look, this is all easy for you to say, from your position of privilege as a white, straight, cis-gendered male citizen of the United States.”

I’m the first to admit my comfortable position. I’ve got it easy, and wish I could share that privilege with everyone. If I were transgender, if I didn’t have my identity constantly being reinforced and encouraged by the culture around me, I can see that I might be genuinely hurt by Nina’s position — I’m not actually sure I would be, and in fact there are transgender people who aren’t hurt and who speak out in support of Nina’s position, but I’ll certainly grant the possibility that I might be hurt.

However, the possibility of hurt feelings is not a reason to ban speech or ostracize the speaker. There is inevitably going to be disagreement about things that people take personally — for example, whether it is a human right to have others regard and treat you as the gender of your choice. The disagreements that matter are, by definition, the ones people care about. If we prohibit or shun speech that touches anything people are deeply invested in, we’ll all be left discussing the latest trends in shopping-mall interior decoration.

More importantly, once speech starts being restricted, it’s not the privileged who pay the price. As my friend Jeff Ubois put it: “It may not be possible to think clearly about inclusion or freedom of association without freedom of expression. But freedom of expression is what some advocates for vulnerable people want to limit.”

Look again at Valerie Aurora’s formula (by the way, I don’t know whether Valerie herself would agree with any of this — these are my interpretations of her formula, not necessarily her interpretations):

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

Nina does none of the above, unless you think that declining to treat another person in the way that person wants to be treated is inherently a human rights violation. I do not. One might choose to treat certain people in the ways that they prefer, but someone else who does not make the same choice is not thereby guilty of violence or dehumanization. I can think of myself however I wish to think of myself, but I can’t dictate how others think of me, even if I am hurt when they don’t see me as I see myself.

Separate from the issue of ownership of identity, there’s also a fundamental issue of honesty here:

When people band together to get someone no-platformed, there’s usually fraud involved. The complainants have to falsely claim a violation of the platform’s terms of service, knowing that the site’s overworked staff won’t actually have time to look deeply into the matter and make a reasoned decision. When people demand that a venue cancel an event on the grounds that someone who is clearly not transphobic is transphobic, that’s a misrepresentation.

The no-platformers are not seeking honest debate; they’re seeking to remove a voice. It’s silencing.

Social media platforms, at least, could help solve this problem by improving their ban systems. Right now there is no cost to someone who fraudulently requests that another person be banned, or who even makes repeated ban requests against many targets. For the no-platformers, it’s all upside and no downside. Until the platforms introduce some downside risk to those who would silence others, some penalty for bad-faith ban requests, the censorship will continue. Yes, this would require the platforms to make some judgement calls, but, after all, those companies are already exercising judgement when they ban — they’re just doing it poorly.

The dangers of speech are not imaginary, of course. As much as I want to be a free-speech absolutist, even I can agree that some restrictions are necessary. Actual threats of violence, for example, justify restriction even by the state.

But private-sector venues and online platforms make their own terms, and they should try to live up to the free-speech principles they almost always claim to support. That includes measures to prevent coordinated no-platforming attacks from users bent on substituting their own speech code for the site’s terms of service. If it’s not a threat and it’s not seeking to endanger anyone through dehumanization, then let it stand. Real-world venues should err on the side of liberality and diversity. (And no, that doesn’t mean inviting Steve Bannon to headline your festival, but the reason not to invite him is that he’s a poor exponent of the ideas he claims to champion. A brief proximity to power is no reason to put a sloppy thinker on your short list in the first place.)

That friend who posted the screenshots also wrote: “… ‘hate speech’ codes only ever serve to protect the powerful”. I think that’s correct in the long run. Speech codes may give temporary comfort to some, but in the end systems of censorship will inevitably be turned against the weak by the strong.

My friend Smiljana, when we were discussing Nina’s no-platforming, said:

“We talk about identity so we don’t have to talk about class.”

So true.