Author: Karl Fogel

Thanks to user lamayonnaise in this Reddit thread, I was able to solve the problem described below, which I encountered when upgrading a Debian GNU/Linux box from old stable (9.x, a.k.a. “stretch”) to new stable (10.0, a.k.a. “buster”). I’ve also seen this when upgrading from ‘stable’ to ‘testing’ — presumably the solution below would work there too.

Here’s what the problem looks like — full transcript, out of consideration for search engine indexes:

root# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree       
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
 guile-2.2-libs : Depends: libtinfo6 (>= 6) but it is not installed
 libedit2 : Depends: libtinfo6 (>= 6) but it is not installed
 libllvm7 : Depends: libtinfo6 (>= 6) but it is not installed
 libncurses6 : Depends: libtinfo6 (= 6.1+20181013-2) but it is not installed
 libreadline7 : Depends: libtinfo6 (>= 6) but it is not installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
root# 

Hmmm, that doesn’t look good. I tried following the advice given there, but it didn’t work:

root# apt --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  guile-2.2-libs libncurses6 libpython3.7-minimal libsasl2-modules libzstd1
  mariadb-common python3.7-minimal
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libtinfo6
The following NEW packages will be installed:
  libtinfo6
0 upgraded, 1 newly installed, 0 to remove and 1326 not upgraded.
47 not fully installed or removed.
Need to get 0 B/325 kB of archives.
After this operation, 534 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
apt-listchanges: Can't set locale; make sure $LC_* and $LANG are correct!
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up libpam0g:amd64 (1.3.1-5) ...
locale: Cannot set LC_ALL to default locale: No such file or directory
Checking for services that may need to be restarted...awk: error while loading shared libraries: libtinfo.so.6: cannot open shared object file: No such file or directory
Checking init scripts...
awk: error while loading shared libraries: libtinfo.so.6: cannot open shared object file: No such file or directory
dpkg: error processing package libpam0g:amd64 (--configure):
 subprocess installed post-installation script returned error exit status 127
Errors were encountered while processing:
 libpam0g:amd64
E: Sub-process /usr/bin/dpkg returned an error code (1)
root# 

Okay, hmmm, what about trying the same but with apt-get instead of apt? Let’s see:

root# apt-get --fix-broken install
Reading package lists... Done
Building dependency tree       
Reading state information... Done
Correcting dependencies... Done
The following packages were automatically installed and are no longer required:
  guile-2.2-libs libncurses6 libpython3.7-minimal libsasl2-modules libzstd1
  mariadb-common python3.7-minimal
Use 'sudo apt autoremove' to remove them.
The following additional packages will be installed:
  libtinfo6
The following NEW packages will be installed:
  libtinfo6
0 upgraded, 1 newly installed, 0 to remove and 1326 not upgraded.
47 not fully installed or removed.
Need to get 0 B/325 kB of archives.
After this operation, 534 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
apt-listchanges: Can't set locale; make sure $LC_* and $LANG are correct!
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
locale: Cannot set LC_ALL to default locale: No such file or directory
Setting up libpam0g:amd64 (1.3.1-5) ...
locale: Cannot set LC_ALL to default locale: No such file or directory
Checking for services that may need to be restarted...awk: error while loading shared libraries: libtinfo.so.6: cannot open shared object file: No such file or directory
Checking init scripts...
awk: error while loading shared libraries: libtinfo.so.6: cannot open shared object file: No such file or directory
dpkg: error processing package libpam0g:amd64 (--configure):
 subprocess installed post-installation script returned error exit status 127
Errors were encountered while processing:
 libpam0g:amd64
E: Sub-process /usr/bin/dpkg returned an error code (1)

Nope.

All right, then. Let’s do it manually:

I’m not sure it was necessary, but at this point I made sure the locale was in order: I checked that the line “en_US.UTF-8 UTF-8” was present and uncommented in /etc/locale.gen, ran locale-gen as root, logged out and back in, and confirmed the locale with locale -a.
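That locale dance can be sketched as follows. To keep the sketch self-contained it edits a scratch copy of the file; on the real system you would run the same sed command (as root) against /etc/locale.gen itself, then run locale-gen:

```shell
# Scratch copy standing in for /etc/locale.gen:
cat > locale.gen.sample <<'EOF'
# en_US.UTF-8 UTF-8
# fr_FR.UTF-8 UTF-8
EOF

# Uncomment the en_US.UTF-8 line.
sed -i 's/^# *\(en_US\.UTF-8 UTF-8\)/\1/' locale.gen.sample
grep '^en_US\.UTF-8 UTF-8' locale.gen.sample

# Then, on the real system (as root):
#   locale-gen
# followed by logging out and back in, and confirming with:
#   locale -a
```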

Again, that locale dance may not have been necessary. What was necessary were the next steps:

Visit the Debian package pages for libtinfo6 and libpam0g, download the amd64 versions (using the sha256sum command to check each downloaded file against the SHA256 fingerprint listed at the bottom of its Debian package page), then install them manually:
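The checksum check works like this. The sketch below uses a stand-in file so it runs anywhere, and computes the sum itself just to show the file format; in reality you paste the fingerprint published on the package page (and the download URL in the comment is illustrative — get the real one from the page):

```shell
# In reality, first fetch the package, e.g. (illustrative URL):
#   wget http://deb.debian.org/debian/pool/main/n/ncurses/libtinfo6_6.1+20181013-2_amd64.deb
# Stand-in file so this sketch is self-contained:
echo "stand-in package contents" > libtinfo6_6.1+20181013-2_amd64.deb

# A checksum file line is "<sha256>  <filename>"; normally you paste the
# fingerprint from the Debian package page rather than computing it locally.
sha256sum libtinfo6_6.1+20181013-2_amd64.deb > SHA256SUMS

sha256sum -c SHA256SUMS   # reports "libtinfo6_6.1+20181013-2_amd64.deb: OK"
```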

root# dpkg -i libtinfo6_6.1+20181013-2_amd64.deb
root# dpkg -i libpam0g_1.3.1-5_amd64.deb

Those commands succeeded, and I confirmed that the packages were now installed:

root# apt-get install libtinfo6
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libtinfo6 is already the newest version (6.1+20181013-2).
libtinfo6 set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 1325 not upgraded.
root# apt-get install libpam0g
Reading package lists... Done
Building dependency tree       
Reading state information... Done
libpam0g is already the newest version (1.3.1-5).
0 upgraded, 0 newly installed, 0 to remove and 1325 not upgraded.
root# 

Now the box was in working order again, and I could finish the dist-upgrade:

root# apt-get dist-upgrade
[...zillions of lines of package names omitted...]
1325 upgraded, 390 newly installed, 19 to remove and 0 not upgraded.
Need to get 41.7 MB/1,155 MB of archives.
After this operation, 1,054 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
[...zillions of lines of success omitted...]

Portrait of Elizabeth Warren.

I’ve been “All In For Warren” for a while now. I expect a lot more people to join us after tonight’s debate :-), but just in case you’re still on the fence, here are four brief arguments Why Warren:

  • She’s making the other Democratic candidates better. She’s offering so much vision that the others are picking it up. The longer she stays in the race, the better the eventual nominee will be. (I think it will be her anyway, so this item is more of an insurance-policy argument.)
  • She has the right enemies. Seriously, ask yourself: can you name one enemy of Joe Biden’s? No, you can’t. When Joe Biden walks into a room, his goal is for everyone in that room to like him. That is not what we need in our next President. Elizabeth Warren has the enemies you’d hope she would have.
  • She understands what is needed, and she’s proposing to actually do it. Most candidates understand what is needed, but they don’t dare propose to actually do it, because they can’t afford to scare off the big-dollar donors. Elizabeth Warren decided not to pursue big-dollar donors from the beginning. That’s freed her up to offer up a spot-on diagnosis of how scaled-up capitalism has captured the state and made its values the state’s values, and she’s saying what needs to be done about that. She doesn’t mind offending the people who pushed us into unsustainable inequalities of wealth, power, and dignity.
  • If she’s campaigning for President, then we’ll probably have a better Senate too. The best presidential campaigns have coattails. Elizabeth Warren’s will be particularly long, because she’s offering so much for other candidates to grab on to.

Want to help? Come on in, the water’s fine!

Visual demonstration of Simpson's Paradox (adapted from https://en.wikipedia.org/wiki/File:Simpson%27s_paradox_continuous.svg)

Do any news organizations have a Numeracy Editor?

For fifteen years, the New York Times had a Public Editor, whose job was to visibly uphold journalistic ethics. The Public Editor would publicly discuss errors, biases, or gaps in the paper’s coverage. (Some other news organizations continue to have a public editor position, though I think it’s not widespread.)

I’d like to propose something narrower: a Numeracy Editor. The Numeracy Editor’s job would be to help reporters and columnists use numerical and statistical reasoning well.

I’ve been pondering this idea for a while, and finally decided to write about it after reading Vatsal G. Thakkar’s excellent NYT Op-Ed Bring Back the Stick Shift a couple of weeks ago. It’s a good piece, but at one point it veers into an unexpected non sequitur in an attempt to use statistics to support its argument:

Backup cameras, mandatory on all new cars as of last year, are intended to prevent accidents. Between 2008 and 2011, the percentage of new cars sold with backup cameras doubled, but the backup fatality rate declined by less than a third while backup injuries dropped only 8 percent.

The more you read that, the less it means. For a three-year period, the percentage of new cars sold with backup cameras doubled from whatever it was before — without knowing what it was before, this doesn’t tell us anything: doubling a minuscule percentage still yields a minuscule percentage, for example. Meanwhile, during that same three-year period, fatalities due to backups declined by some amount (less than a third) from whatever the rate was before — again, we don’t know. So does that decline represent a greater decline in backup fatalities than should be expected from whatever percentage of cars on the road newly have backup cameras? Or a smaller decline? There is no way to say. Also, we don’t know what percentage of cars driving on the road are new cars, which is highly relevant here.

If the author was trying to say that fatalities should have declined more, this paragraph does not support that case, but it doesn’t support any other case either. It throws some statistics into the air, as if to see how the wind catches them, but they don’t connect to each other and they have no bearing on the question at hand. As my friend Tom put it, it’s just a “number casserole”.

I certainly don’t mean to pick on Thakkar — again, I liked the piece — or on the New York Times. This sort of thing happens in many publications; you can see it all the time, in regular reporting just as much as in opinion editorials.

But given that this was the New York Times Op-Ed page — a forum that presumably takes quality control and editorial standards seriously — it’s worth asking: how did such a problematic paragraph make it through the filters? I think the answer is that there is no editor whose reputation and self-respect are on the line when numerical clunkers slip through. A few grammatical or spelling errors and someone’s job is in danger, but even glaring errors of statistical reasoning are currently costless.

I get that journalists and their editors tend to have backgrounds in language, political science, history, and other fields that don’t emphasize math. And that’s fine: this isn’t an “everyone should learn more math” argument. There are only a finite number of days in anyone’s life, there isn’t time to learn everything, and people make the choices they make for reasons. That’s exactly why a Numeracy Editor is needed: it would be her job to own this problem, and along the way help journalists learn the math they need. The writers would start to be more careful just knowing that someone is watching. A Numeracy Editor would have caught the problem in that Op-Ed right away, and once spotted, it’s easy to explain; the conversation with the author can take place before publication, as with any other kind of editing. Many errors of numerical or statistical reasoning are easy to understand once they’re pointed out (although there are also subtler cases, such as Simpson’s Paradox, that occur in real-life, policy-relevant situations and need to be watched for).

Unlike Public Editor, Numeracy Editor need not be a public-facing role. The main point is to help writers and other editors use math appropriately and to prevent mistakes. If the editor also wants to conduct a public discussion about using numbers and graphs in journalism, that would be a great public service too, but it’s a bonus. The role could do a lot of good purely behind the scenes.

Numeracy Editor should be an easier position to hire for than the broader role of Public Editor has been, because it doesn’t require nearly as much journalistic experience (the Numeracy Editor isn’t making hard judgement calls about how much anonymous sourcing is acceptable in a story, for example) and because the advice it provides would be less controversial.

Anyway, I don’t run a newspaper; all I have is this blog. I’d love to hear from anyone who works in or near journalism what they think of this idea.

(You can respond in a comment, or in this Twitter thread, or in this Identi.ca thread.)

I guess I’ll just write this as though I have reason to believe that the people who write headlines for the New York Times read my blog.

For the record: I’m a subscriber, and I think the Times does some terrific reporting and investigative journalism — when they’re at their best, there’s no one better. That makes the unforced errors all the more disappointing.

Look at the top of today’s edition’s front page:

Top of New York Times front page for 2018/10/03.

First note the caption beneath the big color photo on the left, which says:

A migrant caravan headed north Monday from Tapachula, Mexico, where members had stopped after crossing in from Guatemala.

Now all the way over on the right, note the bold headline at the top of the rightmost column:

Trump Escalates Use of Migrants As Election Ploy

Issuing Dark Warnings

Stoking Voters’ Anxiety With Baseless Tale of Ominous Caravan

If you take the headline at face value, and then look over at the photo, you would naturally come to the conclusion that the New York Times is contradicting itself on its own front page.

It turns out that the article under the headline is indeed about a baseless tale — just not one about the existence of the caravan itself, even though that’s what the headline would imply to any casual reader:

President Trump on Monday sharply intensified a Republican campaign to frame the midterm elections as a battle over immigration and race, issuing a dark and factually baseless warning that “unknown Middle Easterners” were marching toward the American border with Mexico.

[emphasis mine]

In twenty words of headline, was there really no way to fit in something specific about the false claim?

How about this:

Trump Falsely Implies Terrorism Threat From Caravan

“Unknown Middle Easterners”

Stoking Voters’ Anxiety With Baseless Claim About Migrant Caravan

There, did it in 19 words, one fewer than the number they used for a misleading and less informative headline.

Yes, by the way, you know and I know and the New York Times knows that “Middle Easterner” doesn’t mean “terrorist”. But it’s perfectly clear what Trump is doing here and the NYT shouldn’t shy away from describing it accurately… in the headline.

(Entirely separately from the above, there’s the question of why the New York Times is running a giant color photograph of the migrants above the fold on its front page, for the second time in the past few days. These caravans have been going on since 2010; they’re larger and more organized the last couple of years, but they’re not new. As an independent news outlet, why let a politician’s talking points drive cover art choices in the first place?)

Self-censored page of 'Green Illusions', by Ozzie Zehner
image credit

A particularly insidious problem with online social media platforms is biased and overly restrictive ban patterns. When enough people report someone as violating the site’s Terms of Service, the site will usually accept the reports at face value, because there simply isn’t time to evaluate all of the source materials and apply sophisticated yet consistent judgement.

No matter how large the company, even if it’s Facebook, there will simply never be enough staff to evaluate ban requests well. The whole way these companies are profitable is by maintaining low staff-to-user ratios. If policing user-contributed content requires essentially arbitrary increases in staff size, that’s a losing proposition, and the companies understandably aren’t going to go there.

One possible solution is for the companies to make better use of the resource that does increase in proportion to user base — namely, users!

When user B reports user Q as violating the site’s ToS, what if the site’s next step were to randomly select one or more other users (who have also seen the same material user B saw) to sanity-check the request? User B doesn’t get to choose who they are, and user B would be anonymous to them — the others wouldn’t know who made the ban request, only what the basis for the request is, that is, what user B claimed about user Q. The site would also put their actual Terms of Service conveniently in front of the checkers, to make the process as easy as possible.

Now, some percentage of the checkers would ignore the request and just not bother. That’s okay, though: if that percentage is high, that tells you something right there. If user Q is really violating the site’s ToS in some offensive way, there ought to be at least a few other people besides user B who think so, and some of them would respond when asked and support B’s claim. The converse case, in which user Q is perhaps controversial but is not violating the ToS, does not necessarily need to be symmetrically addressed here because the default is not to ban: freedom of speech implies a bias toward permitting speech when the case for suppressing it is not convincing. However, in practice, if Q is controversial in that way then some of the checkers would be motivated to respond because they realize the situation and want to preserve Q’s ability to speak.

The system scales very naturally. If there aren’t enough other people who have read Q’s post available to confirm or reject the ban, then it is also not very urgent to evaluate the ban in the first place — not many people are seeing the material anyway. ToS violations matter most when they are being widely circulated, and that’s exactly when there will be lots of users available to confirm them.

If user B issues too many ban requests that are not supported by a majority of randomly-selected peers, then the site could gradually downgrade the priority of user B’s ban requests generally. In other words, a site can use crowd-sourced checking both to evaluate a specific ban request and to generally sort people who request bans in terms of their reliability. The best scores would belong to those who are conservative about reporting and who only do so when (say) they see an actual threat of violence or some other unambiguous violation of the ToS. The worst scores would belong to those who issue ban requests against any speech they don’t like. Users don’t necessarily need to be told what their score is; only the site needs to know that.

(Of course, this whole mechanism depends on surveillance — on centralized tracking of who reads what. But let’s face it, that ship sailed long ago. While personally I’m not on Facebook, for that reason among many, lots of other people are. If they’re going to be surveilled, they should at least get some of the benefits!)

Perhaps users who consistently issue illegitimate ban requests should eventually be blocked from issuing further ban requests at all. This does not censor them nor interfere with their access to the site. They can still read and post all they want. The site is just informing them that the site doesn’t trust their judgement anymore when it comes to ban requests.

The main thing is (as I’ve written elsewhere) that right now there’s no cost for issuing unjustified ban requests. Users can do it as often as they want. For anyone seeking to no-platform someone else, it’s all upside and no downside. What is needed is to introduce some downside risk for attempts to silence.

Other ideas:

  • A site should apply extra scrutiny to further ban requests against material that has already survived a rejected ban request, or that was reported by someone with a poor ban-reliability score, because those requests are more likely to be unjustified as well.

  • A lifetime (or per-year) limit on how many ban requests someone can issue.

  • Make ban requests publicly visible by default, with opt-out anonymity (that is, opt-in to be identified) for the requester.

Do you have other (hopefully better) ideas? I’d love to hear them in the comments.

If you think over-eager banning isn’t a real problem yet, remember that we have incomplete information as to how bad the problem actually is (though there is some evidence out there). By definition, you mostly don’t know what material you’ve been prevented from seeing.

New York City Subway, 14th Street Union Square platform curvature
image credit

(Update: A few months after this article, this happened, which I think demonstrates my point. With “allies” like these, who needs oppressors? I also wrote a followup post, Thinking Creatively About Ban Requests and Content Policing, that discusses some structural solutions online services could try.)

More and more of the political left — which is where I sit, at least by American standards — seems to be abandoning the idea of freedom of speech as an inherent good, let alone as the essential liberty on which all other liberties depend.

Recently, someone I know and respect called repeatedly for Donald Trump to be banned from Twitter. He’s not alone. A lot of people want this, and they don’t want just Trump banned, they want many speakers banned from many popular platforms.

This is a worrying trend. The left may be about to finally gain some measure of political power in the United States, depending on the results of the November election. Yet right at that moment we are limiting our ability to have necessary debates and to even hear what people say. I’ll focus on one particular example in this post, but the problem is a general one. This narrowing would be bad under any circumstances, but it becomes worse when attached to power.

I’m not talking here about state censorship. A few people call for that too, but most people still understand why the state needs to be especially constrained in its ability to interfere with speech. I’m talking about no-platforming and campaigns for blanket shunning: that is, urging private-sector platforms to ban certain speakers, and shaming other people and organizations into ostracizing those speakers as individuals under all circumstances, even circumstances that are unrelated to the allegedly objectionable speech.

Are there narrow, consistent criteria we could use to decide when it’s appropriate to advocate non-state suppression of speech?

I think there are, and I think we’d be better off if we stuck to those criteria instead of the increasingly broad and subjective criteria many are using right now, most of which are based on empathy for those who are hurt by harmful speech. Certainly, speech can be harmful: the argument for freedom of speech has never been that there is no such thing as harmful speech, but rather that suppressing speech almost always leads to worse harm down the road. There are two reasons why “someone felt deeply hurt” is not a good test: one, it treats the speech and the speaker inconsistently by looking to others’ reactions as a guide (reactions which will vary from listener to listener), and two, sometimes useful speech may also be hurtful to some people — these two things are not contradictory, much as we might wish otherwise.

We need a better test.

Last year I read a good post by Valerie Aurora entitled The Intolerable Speech Rule: the Paradox of Tolerance for tech companies. The post links to a presentation she gives that’s worth watching. It’s thoughtful, aware of the tradeoffs involved in any kind of permitted-speech policy, and careful to distinguish between private actors (such as social media platforms) and the state.

Here is Valerie Aurora’s formula, phrased as a guideline for online platforms:

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

It’s a very specific, circumscribed rule. If I ran an online service, I’d try to follow it — but as conservatively as possible, because:

What about someone who tries, sincerely and non-threateningly, to discuss what is and is not a human right in the first place?

A friend of mine, Nina Paley, has been repeatedly no-platformed for doing exactly this. Nina is blunt and direct, because she has strong feelings on the issue she’s speaking about. But she threatens no one, and never tries to silence or dehumanize. She’s happy to engage with opposing views, and argues her own in good faith.

I’ll state Nina’s view only briefly here — if you want to know more about it, you’re better off getting it from her, and most of what I’m saying here is not about the substance of her view. Put simply, Nina doesn’t accept the argument that transgender women are women. Nina would like women’s-only spaces to be for women who were born women (or, as Nina resolutely calls them, “women”). Some people call this “Trans-Exclusionary Radical Feminism” and thus refer to Nina as a “TERF”. Nina prefers the term “gender-critical radical feminist”. At the very least, if you use the term “TERF”, be clear, as Nina always is, that the exclusion is from the set of human females, not from humanity itself. Half of humanity is already excluded from being female (so it’s clearly not dehumanizing). Nina’s argument is that if that half is making masculinity a toxic place to be, then the solution is to fix masculinity so people stop fleeing it.

I won’t go into detail about the substance of her argument; you should get it from Nina, not me. I’m sure you can come up with counter-arguments, too. I have done so with Nina, starting with the obvious: “Many trans people consistently report that they always felt that their body was the wrong sex — and they start expressing this when they’re young children, so it’s not just a retconned memory. There’s something real going on here.” Nina has interesting and probing responses to this, and you can ask her about them if you want; I was glad I did, because it led to an in-depth conversation.

But this post is not about the substance of Nina’s argument. It’s about freedom of speech: How can someone even have this conversation with Nina, or observe her having it with others, if platforms deny her the ability to speak?

For expressing this view, Nina was briefly banned by Facebook. Apparently, a bunch of people who disagree with her got together and reported her to Facebook as though she were spamming or in some way violating the site’s terms of service. That’s a straight-up dishonest tactic. That’s no-platforming.

Nina is a frequent and well-known speaker about art and copyright restrictions, but now is sometimes disinvited from speaking gigs because of her gender-critical radical feminism, even when that’s not the topic of the speech. She’s had a showing of her film canceled (her films are not about gender-critical radical feminism either). When a friend of hers tried to post screenshots of a Facebook thread showing the venue’s statement about the cancellation, plus the usual tons of debate in followup comments, those screenshots mysteriously vanished from imgur.com. So the person reposted the screenshots, and again they disappeared — again with no explanation or notice.

What the heck? Is someone working at imgur a secret censor?

I wanted to know more, so I asked Nina’s friend exactly what had happened and got this reply (you can skip the blow-by-blow if you want, but it’s worth reading it to feel what the experience of being no-platformed is like):

Timeline is this:

(1) Argument happens on the Arcadia cafe page. People are calling
for no-platforming, etc. It gets to hundreds of comments.

(2) Juicy drama of this sort often gets removed, so around 8:15 PM,
I decided to just take a bunch of screenshots. I have these on
my computer.

(3) Sure enough, later that evening (10 PM?) Arcadia removes the
event from their page, and with it all the comments.

(4) Nina asks me if I have screenshots, I tell her I do, and that
while they’re completely unedited (so non-anonymized or pasted
together) I can put it somewhere, I suggest imgur in an album,
which will be viewable if you know the exact URL but not
browsable from my name or anything (like all my other images,
same way).

(5) The next morning (day after the event) I put the images up in
the first album “Arcadia No-platforming.” Imgur interface is not
so friendly, to keep the images in order I have to upload them
one at a time. There are 64 images.

(6) The URL got shared on Facebook, I see some people viewed the
album. Next morning, I awake to find… all the images are GONE.
Completely gone from my account (not just taken out of the
album). The album is left, but it’s an empty shell, nothing in
it. I have NO notifications, no email, no nothing, just the
images are gone. I have the album still open from the previous
night with the image showing (cached in my browser) but if I try
to open, yep, it’s the usual standard “this image has been
removed or is no longer available” thing.

So I’m just… CURIOUS.

(7) I upload the images again (all 64 of them, again one at a time).
I put in a new album “Arcadia No-platforming Is Back.” I decided
that hey, let’s save this link to the completed new album to the
Internet Wayback Machine. I do this, and confirm that the images
are backed up over there (so they’re on the public internet now
in a place that isn’t imgur).

(8) Nina also takes the images from the new album, and puts on her
blog. So they’re available in a second place, that isn’t imgur.

(9) Overnight that night, the images are removed AGAIN. Once again,
the album is left as an empty shell, and all the images are
completely gone from my account. None of my other images (some
of which are waaaaayyyyyyyyyy more “offensive” than these
screenshots I might add, and which have been linked,
individually, on twitter by me) are disturbed at all. Just the
Arcadia facebook screenshots.

So yeah. CURIOUS.

(10) I get mad, and make the single image that just has the “stop
trying to censor this, the images are [elsewhere]” redirect
text on it. I upload that into both albums. Both albums have
been steadily getting views.

(11) That next night, someone removes the redirect image! Just wtf.
Again, it’s gone from my account, no notifications of any sort
to me AT ALL.

(12) I upload the single redirect image again, again put it in both
albums.

(13) Since yesterday, I check the albums periodically but whoever it
is has given up, the redirect image has stayed in there. Both
albums are getting views, still.

That’s about it. It’s just curious to me, because… I’ve never had
any images removed from my account before, and I have plenty of stuff
that anyone who can’t deal with “penis is male” would be far more
offended by.

I suspect that someone involved in the facebook comments thread got
upset and complained to imgur that their “personal data” was being
shared, or something.

Thing is, it was a public facebook page, public comments, open to the
entire world. Also, if I was officially violating a TOS, I’d expect
to get some sort of notification about it or a slap on the wrist or
some warning or something.

But yes, I suspect someone involved in the whole thing didn’t want
their comments put on display in a less than favorable light
(somewhere else that was linking to my album, since I didn’t post my
album anywhere myself) and sent a complaint, or something. But…
dunno.

Either way, both albums keep getting views, to that single image.
Just… weeeeird.

That’s what no-platforming looks like. At its best, which is still pretty bad, the platform will at least admit to the censorship and describe how the decision was made. At its worst, as appears to be the case with imgur.com, it looks the way censorship regimes usually look: information disappears, and there’s no explanation nor even acknowledgement that it happened. Everyone please move along; nothing to see here.

Ostracism is not an answer either.

I mentioned that Nina has had speaking gigs and showings of her films canceled. Perhaps you’re thinking “Hey, that’s different. That’s not no-platforming. That’s just someone not wanting to be associated with Nina’s views. People have the right to disassociate themselves — in fact, isn’t that what ‘freedom of association’ is all about?”

Sure, in some literal sense, that’s true. But it’s best to use this “freedom to ostracize” sparingly. Most disagreements do not need to escalate to the point of refusing to be seen with someone at all. There is no reason to assume that when you engage someone in an unrelated discussion or presentation, you also endorse everything else that person believes.

Worse, there is a dangerous feedback loop here. The less often venues present people whose views diverge from the venue’s, the more we start to think that when a venue does present someone the venue tacitly endorses everything that person thinks. The eventual result of this process is monoculture and an arms race of virtue-signaling, which is exactly what’s happening in certain quarters of the political left.

Here’s the working principle I would use (and I’d appreciate constructive feedback on it in the comments section):

If you already thought a person is worth presenting — or engaging in discussion with, or showing the artwork of, etc — then do so, unless that person has some unrelated public stance that clearly and unambiguously advocates violence or violates the “Intolerable Speech Rule” (that’s the Valerie Aurora test given earlier).

Nina Paley is justly famous for her articulate and persuasive arguments against copyright restrictions. She’s also justly famous as a filmmaker. If you’re looking for a speaker on the topic of copyright, or if you’re a venue that shows art films, you don’t need any special excuse to choose Nina Paley — she’s already on the short list.

So, given that, don’t not choose her just because she has other views that you might disagree with. As long as those views don’t qualify as “intolerable speech”, which they certainly do not, you’re not responsible for them. You’re not inviting her to be your CEO or the chair of your board of directors or something — those would create a meaningful, leadership-related association between your organization and Nina, and people could reasonably assume an implicit endorsement of, or at least lack of objection to, her views. In the absence of such a connection people shouldn’t make those assumptions, and you are free to make that clear.

To shun Nina’s contributions and works out of fear — that is, fear of being tainted by association with something Nina thinks, of being punished by the mob because you failed to shun Nina — is to make it that much harder for others to openly tolerate dissenting views. It’s passing the buck.

It also causes people to think Nina’s views are something other than what they are. All over the Internet you can find people calling her “transphobic”. This is pure libel: she is not, never has been, and such an attitude would be foreign to her nature. Nothing she has actually written or said would support the conclusion, either. But people believe it anyway, because they’ve seen other people saying it about Nina, and because they’ve seen venues that, besieged by the lie, believe it too and cancel appearances based on it.

When a venue cancels an appearance by Nina in response to false cries of “transphobia” or “hate speech” (more on that later), or a platform bans her for the same reason, it becomes party to the libel. It’s now part of the problem. Other people see the action and assume there must be some truth to the accusation — after all, why else would the post or the event have been canceled?

Please don’t contribute to this kind of mess, not with Nina or anyone else. Exercising “freedom of association” is not a free pass to slowly corrode someone’s reputation through inaction and invitations canceled or foregone. If you admire someone’s work, support her in that work.

Privilege, platforms, and using misrepresentation to silence.

One response to my concerns might be “Look, this is all easy for you to say, from your position of privilege as a white, straight, cis-gendered male citizen of the United States.”

I’m the first to admit my comfortable position. I’ve got it easy, and wish I could share that privilege with everyone. If I were transgender, if I didn’t have my identity constantly being reinforced and encouraged by the culture around me, I can see that I might be genuinely hurt by Nina’s position — I’m not actually sure I would be, and in fact there are transgender people who aren’t hurt & who speak out in support of Nina’s position, but I’ll certainly grant the possibility that I might be hurt.

However, the possibility of hurt feelings is not a reason to ban speech or ostracize the speaker. There is inevitably going to be disagreement about things that people take personally — for example, the question of whether others regarding you and treating you as the gender of your choice is a human right or not. The disagreements that matter are, by definition, the ones people care about. If we prohibit or shun speech that touches anything people are deeply invested in, we’ll all be left discussing the latest trends in shopping-mall interior decoration.

More importantly, once speech starts being restricted, it’s not the privileged who pay the price. As my friend Jeff Ubois put it: “It may not be possible to think clearly about inclusion or freedom of association without freedom of expression. But freedom of expression is what some advocates for vulnerable people want to limit.”

Look again at Valerie Aurora’s formula (by the way, I don’t know whether Valerie herself would agree with any of this — these are my interpretations of her formula, not necessarily her interpretations):

If the content or the client is:

  • Advocating for the removal of human rights
  • From people based on an aspect of their identity
  • In the context of systemic oppression primarily harming that group
  • In a way that overall increases the danger to that group

Then don’t allow them to use your products.

Nina does none of the above, unless you think that declining to treat another person in the way that person wants to be treated is inherently a human rights violation. I do not. One might choose to treat certain people in the ways that they prefer, but someone else who does not make the same choice is not thereby guilty of violence or dehumanization. I can think of myself however I wish to think of myself, but I can’t dictate how others think of me, even if I am hurt when they don’t see me as I see myself.

Separate from the issue of ownership of identity, there’s also a fundamental issue of honesty here:

When people band together to get someone no-platformed, there’s usually fraud involved. The complainants have to falsely claim a violation of the platform’s terms of service, knowing that the site’s overworked staff won’t actually have time to look deeply into the matter and make a reasoned decision. When people demand that a venue cancel an event on the grounds that someone who is clearly not transphobic is transphobic, that’s a misrepresentation.

The no-platformers are not seeking honest debate; they’re seeking to remove a voice. It’s silencing.

Social media platforms, at least, could help solve this problem by improving their ban systems. Right now there is no cost to someone who fraudulently requests that another person be banned, or who even makes repeated ban requests against many targets. For the no-platformers, it’s all upside and no downside. Until the platforms introduce some downside risk to those who would silence others, some penalty for bad-faith ban requests, the censorship will continue. Yes, this would require the platforms to make some judgement calls, but after all those companies are already exercising judgement when they ban — they’re just doing it poorly.

The dangers of speech are not imaginary, of course. As much as I want to be a free-speech absolutist, even I can agree that some restrictions are necessary. Actual threats of violence, for example, justify restriction even by the state.

But private-sector venues and online platforms make their own terms, and they should try to live up to the free-speech principles they almost always claim to support. That includes measures to prevent coordinated no-platforming attacks from users bent on substituting their own speech code for the site’s terms of service. If it’s not a threat and it’s not seeking to endanger anyone through dehumanization, then let it stand. Real-world venues should err on the side of liberality and diversity. (And no, that doesn’t mean inviting Steve Bannon to headline your festival, but the reason not to invite him is because he’s a poor exponent of the ideas he claims to champion. A brief proximity to power is no reason to put a sloppy thinker on your short list in the first place.)

That friend who posted the screenshots also wrote: “… ‘hate speech’ codes only ever serve to protect the powerful”. I think that’s correct in the long run. Speech codes may give temporary comfort to some, but in the end systems of censorship will inevitably be turned against the weak by the strong.

My friend Smiljana, when we were discussing Nina’s no-platforming, said:

“We talk about identity so we don’t have to talk about class.”

So true.

I’m using the Display Posts Shortcode to have a post that just shows a listing of all the posts on rants.org. This is so I can generate a function that provides interactive completion on the names of all my posts, in the “Completable Web Pages (CWP)” system of my .emacs.
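The shortcode itself is a one-liner. Something like the following — I’m going from memory of the plugin’s syntax here, where posts_per_page="-1" is its convention for “no limit”, so treat this as a sketch rather than gospel:

```
[display-posts posts_per_page="-1"]
```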

Here are the posts:

Here’s what “normalization” means:

Trump praises this bizarre offer from Putin in which U.S. gets to question the 12 Russians the U.S. DoJ accuses of engaging in election subterfuge, in exchange for Russia questioning William Browder and former ambassador Michael McFaul. News outlets go nuts about the proposal, bringing on talking heads to discuss what a crazy idea this is. The anchors and talking heads use inflammatory language about “handing over” Browder or McFaul, or letting Russia “interrogate” the subjects, etc.

But none of these commentators or news anchors (so far as I’ve seen) mentions the obvious: that Browder and McFaul are private citizens, so there is no legal principle under which the U.S. executive branch could compel them to show up for the questioning anyway.

Do you see what’s happening here? Trump talks about this swap as though it’s a thing he could do. He knows it’ll never happen. But as people talk about what a terrible idea the swap is, they unwittingly accept the premise that citizens are property for all-powerful rulers to use as pawns in the first place! They push back on Trump’s specific proposal, but they first grant his assumptions & worldview in order to do it.

That’s normalization.


I don’t usually blog about work stuff here, but some things are worth making an exception for.

James Vasile and I wrote a new report, Open Source Archetypes: A Framework for Purposeful Open Source, for Mozilla. We’ve just co-published it with them, and — not to put too fine a point on it — we’re pleased as punch.

The report offers a high-level typology of open source projects, enumerating the various kinds of projects and what situations each kind is suited for. We’re planning to do a version 2.0 with Mozilla, after feedback rolls in, so any thoughts folks have about v1.0 will be used to improve v2.0.

Many thanks to Mozilla, and in particular to Patrick Finch, for commissioning the report and arranging many informative staff interviews, and then for excellent editing and feedback on the early drafts.

Link fest:


I’m mystified by some Firefox browser privacy policies, and I wonder if anyone can help me understand them better.

I hadn’t been following browser HTTP referrer policy closely. I knew that referrals were sent, and that had always vaguely puzzled me from a privacy perspective, but I assumed that Smart People Were Working On It, and that there were probably reasons why things are the way they are. After reading this post on the Mozilla Security Blog yesterday, I suddenly wished I’d been following things more closely. The post is meant to tell us about how Firefox is getting better about privacy. But after reading it, I feel worse about privacy than I did before reading it. Here’s a summary of what the post says:

When you follow a link on web page X to go to web page Y, your browser sends Y’s server an indication that you were referred to Y by X. (This information is sent in the “HTTP Referer” [sic] header, for those keeping score at home; yes, it is probably the most famous misspelling in all of Web standards.) The referral information typically includes the entire URL of the page you’re coming from, that is, the site address and path of X. For example, for this post the site address is “www.rants.org” and the path is “/2018/02/a-mystery-firefox-and-user-privacy/“.
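To make the mechanics concrete, here’s a small Python sketch of what the browser is doing on your behalf when it sends that header (the URLs are just illustrative):

```python
import urllib.request

# When you follow a link on page X to page Y, the request to Y carries
# X's full URL in the (famously misspelled) "Referer" header.
referring_page = "https://www.rants.org/2018/02/a-mystery-firefox-and-user-privacy/"

req = urllib.request.Request("https://example.com/")
req.add_header("Referer", referring_page)

# The destination server can now read where the visitor came from,
# including the path, not just the site address:
print(req.get_header("Referer"))
```

Run that and you’ll see the full referring URL, path and all — which is exactly what the destination server receives.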

Okay, pausing for a moment to ask the obvious first question:

Can I turn this off in my browser settings? Because maybe I consider that information private and don’t want to tell one web site what other web site I’m coming from.

Answer: not unless you have a Ph.D. in Firefox Studies. At least, in the “Preferences → Privacy Settings” menu of Firefox 52.5, there is no identifiable option for controlling this. You can do it via about:config, by setting network.http.sendRefererHeader to 0 instead of the default 2, but that way of setting preferences won’t fly for the majority of users. There really should be a way to do it from Firefox’s normal preferences dashboard.
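(For what it’s worth, if you’d rather not click through about:config by hand, Firefox also reads preference overrides from a user.js file in your profile directory. A sketch, same pref and same value as above:

```
// user.js, in your Firefox profile directory.
// 0 = never send the Referer header; the default is 2.
user_pref("network.http.sendRefererHeader", 0);
```

Still not exactly discoverable by the majority of users, of course.)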

Continuing with the post:

As of Firefox 59, when you’re browsing in Private Mode, Firefox will not send the path portion of the referrer information.

Well, uh, okay, that’s an improvement, I guess. But then why even send the origin site at all, even without the path? Shouldn’t “Private” mean private? In Private Browsing Mode, I would expect no referral information to be sent at all. Then, to make matters worse, a bit later the post says:

In Firefox Regular and Private Browsing Mode, if a site specifically sets a more restrictive or more liberal Referrer Policy than the browser default, the browser will honor the websites [sic] request since the site author is intentionally changing the value.
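For context, the mechanism the post is referring to is the standard Referrer Policy: a site can declare one via an HTTP response header, for example:

```
Referrer-Policy: no-referrer
```

or with an equivalent <meta name="referrer" content="no-referrer"> tag in its HTML.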

Now I’m even more confused. Why would the site author get to decide what the value should be? At all, I mean, but especially in Private Browsing Mode! I thought the whole point of Private Browsing Mode was that the browser user would decide that. Browser users are often in an adversarial relationship with site authors. The browser should take the user’s side in that relationship, every time.

I must be missing something here. Education welcome. (The answer might be somewhere under this post, but I haven’t found it yet.)