My response to ITAPS’s comments on the Federal Source Code Policy is posted here.
My article Dissecting The Myth That Open Source Software Is Not Commercial is now up at the IEEE Software Blog. (Comments over there, please, not here.)
Many thanks to editor Stefano Zacchiroli for editing, and for suggesting an article in the first place.
If you encountered this error when trying to clone the Redis repository from GitHub recently, there is a solution. The error looks like this:
$ git clone https://github.com/antirez/redis
Cloning into 'redis'...
remote: Counting objects: 42713, done.
remote: Compressing objects: 100% (33/33), done.
remote: Total 42713 (delta 15), reused 0 (delta 0), pack-reused 42680
Receiving objects: 100% (42713/42713), 19.29 MiB | 6.81 MiB/s, done.
error: object 1f9ef1b6556b375d56767fd78bf06c7d90e9abea: \
  zeroPaddedFilemode: contains zero-padded file modes
fatal: Error in object
fatal: index-pack failed
$
The problem is that your ~/.gitconfig file probably has this setting:
[transfer]
	fsckObjects = true
…and/or perhaps these settings:
[fetch]
	fsckObjects = true
[receive]
	fsckObjects = true
Solution: set the value(s) to false while you clone Redis, then set them back to true afterwards.
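If you'd rather not edit ~/.gitconfig twice, git's `-c` option can override the relevant settings for a single invocation, leaving your config file untouched (a sketch; the keys are the same ones shown above):

```shell
# Instead of flipping the settings off in ~/.gitconfig and flipping
# them back afterwards, override them for this one clone only:
git -c transfer.fsckObjects=false \
    -c fetch.fsckObjects=false \
    clone https://github.com/antirez/redis
```

This way the fsck protections stay enabled for every other repository you work with.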
Pro non-tip: you might think that running
$ git config --global fsck.zeroPaddedFilemode ignore
so as to get
[fsck]
	zeroPaddedFilemode = ignore
in your .gitconfig would solve this problem in a nice targeted way, but it won’t, so don’t bother. See here for some discussion about that.
(This post is part of my SOLVED as a Service series, in which I post solutions to technical problems with open source software that I use. The point is the next time I encounter the same problem and do an Internet search, my own post will come up; this has now actually happened several times. If these posts help others, that’s a bonus.)
Wow. I had no idea this could happen!
(Rest of this post is by Michael Albaugh, except for the parts that quote me.)
From: Michael Albaugh
Subject: Re: Wait, what? Can speakers pick up radio by themselves?
To: Karl Fogel
Cc: The Usual Suspects
Date: Fri, 11 Dec 2015 10:03:22 -0800
Disclaimer: It has been quite a while since I had to deal with this stuff for pay, and my amateur license expired so long ago they recycled my call.
On Dec 11, 2015, at 9:13 AM, Karl Fogel wrote:
This is happening, this is literally happening right now:
I have plugged my phone headset (which doubles as my desk headphones) into my computer speakers. This is a standard pair of small standalone computer speakers, one of which plugs into the computer’s sound port with a standard 2.5mm connector, and the other speaker connects to the first. The first speaker also has a headset jack and a volume control on the front.
It presumably also has a power supply. That is, these are amplified speakers.
With my headset plugged into that speaker’s jack, and the speaker volume turned all the way
down, I can hear a radio station playing in the headset, faintly and with some staticky fuzz, but clearly. I don’t know which station it is, but sometimes the pop music stops and an announcer comes on (I can’t quite hear what he is saying, though I might be able to catch it next time he comes on).
This is not surprising. What you have is some consumer-grade cables (i.e. not particularly designed to reduce the reception of stray signals at all cost, or any cost) plugged into a device with some non-linear components (inherently, such as transistors and diodes, or unintentionally, such as inductors with other than air cores) and including a means to amplify the result. That is, you have a crystal radio hiding in your amplifier.
See also “Why do I get the local radio station on my fillings?”
However, if I turn the volume knob on the speaker up at all, then the station fades out and I get silence.
Or, you have shifted the sum of the intended input and the signal that is being “detected” out of the range of the non-linearity.
If I unplug the speakers from the computer, then I don’t hear the station anymore.
Here I am leaning more on speculation, but perhaps the speakers are sensing the (lack of) DC bias on their input and shutting down the output.
So my… computer is acting like a radio?
Actually, I suspect that your speakers are. You should immediately rush out and buy various models of Bose, Harman Kardon, and Beats by Apple speakers and repeat the experiment. 🙂
Why? And why is it only audible when the speaker’s volume is turned all the way down?
In related news, perhaps you missed the hack that was in the news a short while back. If you have your Siri, Google, or Cortana “assistant” enabled to work without pressing anything, and you have a wired handsfree headset plugged into your phone, then someone can inject audio into your phone and say “Siri, post all my photos to Instagram” or “Siri, find goat porn”.
In older news, back when phones were always wired, heavy enough to be a murder weapon, radio stations that didn’t want their “personalities” to have to drive out to a shack in the marshes would lease lines from the phone company, running from their handsomely appointed studios to that shack. These lines would run through one or more phone company facilities. In one such facility (cough — [[redacted]] — cough) some of the workers had connected a speaker across the line as it went through, so they could have music in their workplace. One day, a worker experienced one of those WTF moments, and verbalized the feeling. Of course, every speaker is a microphone, and the exclamation was sent out over the air, causing a fair bit of consternation, agitated phone calls, and denials from the on-air host. Not to mention a mad scramble to disconnect that speaker and look innocent.
Welcome to the future, here’s your whoopee cushion.
Update 2015-12-03: I just found out from a response tweet from @jacobian that the user flogging is apparently a requirement of the PCI standards, and thus many online services are essentially forced into it. Would love feedback or further information from anyone familiar with how PCI standards get baked.
Calling all designers of online systems that do user authentication… Wait, that could be shorter:
Calling all designers of online systems:
Please stop locking out users after three failed login attempts.
That security measure is left over from the days of Unix consoles that were just dumb terminals connected to a server somewhere else in the building. It makes less and less sense in the modern era. These days, large distributed botnets are engaged in constant automated login attempts against all publicly reachable online services of any size, using guessed username/password combinations, on the principle that only a tiny fraction of the attempts need to succeed for the effort to be worthwhile. The result is that users with strong passwords but human-readable usernames are penalized for being the target of failed hacking attempts.
It happened to me recently:
From: Karl Fogel
To: Mailing List Of Various Techie Friends
Subject: Speaking of passwords

I just found out from a rep that the reason Wells Fargo Bank kept
resetting my (incredibly secure) online access password, thus forcing
me to do a password reset dance about twice a month, is that online
accounts get automatically locked after three failed login attempts.

Since my username was "karlfogel" -- it's changed to something less
guessable now -- some jerk with a botnet was causing Wells Fargo to
lock me out on a regular basis, presumably by trying a username
generated from my real name and passwords that were various
combinations of my birthday, relatives' names, etc.  The same is
probably happening to thousands of other customers.  After all, the
hackers only need a tiny number of successes.

I wonder if Wells Fargo has really thought carefully about the
usefulness of a 3-failures lockout policy in the modern era of
distributed attacks against your entire user base.  This was not a
topic I felt it profitable to take up with the phone rep, though.
*cough cough*.
Every time you force your users to do a password reset dance, which usually involves some kind of email confirmation step, you are decreasing their security. First, because if a user is forced to change her password frequently, she is likely to start making passwords that are easier to remember, because why invest in memorizing a hard password if one is just going to have to reset it soon anyway? Second, and more importantly, because you are giving hackers the power to lock someone out of their own online account, which creates two vulnerabilities: one, now the hacker has an additional attack surface (the user’s email account), and two, your user support staff also becomes an attack surface because the hacker can now call up and impersonate the legitimate user, saying “Help, I’m locked out of my account” — a fact that the support rep can easily confirm, and which will lend credibility to the hacker’s attempt at social engineering.
Just as a general principle, it’s usually not good to allow attackers to change the behavior of the system for legitimate users. When you allow that, you give the hackers more material to work with, and they will always be more imaginative than your programmers or your support reps, because once they sense that they have a good target, they can spend all day thinking about how to approach it.
It’s fine to have a delay between login attempts. Maybe it’s even okay to increase the delay somewhat when there are a suspicious number of failed login attempts for a given user (although I’m not sure about that, and it is a minor violation of the general principle above).
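The delay-instead-of-lockout idea can be sketched in a few lines. This is a hypothetical illustration (all names and numbers are invented), showing a per-user delay that grows with consecutive failures but is capped, so an attacker can never impose more than a small fixed wait on the legitimate user:

```python
from collections import defaultdict

# Hypothetical sketch: throttle failed logins with a growing, capped
# delay instead of locking the account. Names/numbers are illustrative.

FAILURES = defaultdict(int)  # username -> consecutive failed attempts
BASE_DELAY = 0.5             # seconds
MAX_DELAY = 8.0              # cap, so attackers can't impose long waits

def required_delay(username):
    """Seconds to wait before processing this user's next login attempt."""
    n = FAILURES[username]
    if n == 0:
        return 0.0
    return min(BASE_DELAY * (2 ** n), MAX_DELAY)

def record_attempt(username, success):
    """Update the failure counter after a login attempt."""
    if success:
        FAILURES.pop(username, None)  # reset on successful login
    else:
        FAILURES[username] += 1
```

The cap is the important part: in keeping with the principle above, a botnet hammering someone’s username should change the system’s behavior for that user as little as possible.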
If you want to help users who have weak passwords, have your security team run login-guessing attempts itself (without the rate-limiting), or even run cracking attempts against the password database itself, and then follow up with the users whose passwords fail the test. You can just let them know the next time they log in, or if you want to provide especially deluxe service, follow up via an automated phone call or something. Don’t let them know by email, though: it’s not a great idea to send cleartext email across the Internet telling someone their password is insecure.
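Both checks can be sketched simply. This is an illustration only, with invented names; a real deployment would test against the site’s actual password hashing scheme (bcrypt, scrypt, argon2) rather than bare SHA-256, and against a much larger list of common passwords:

```python
import hashlib

# Hypothetical sketch of the two audits described above.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def weak_at_login(cleartext_password):
    """Check at login time -- the one moment the server sees cleartext."""
    return cleartext_password in COMMON_PASSWORDS

def audit_hashes(user_hashes, hash_fn):
    """Offline audit: try common passwords against the stored hashes."""
    flagged = []
    for username, stored_hash in user_hashes.items():
        if any(hash_fn(guess) == stored_hash for guess in COMMON_PASSWORDS):
            flagged.append(username)
    return flagged

def sha256_hex(s):
    """Stand-in hash function for the sketch."""
    return hashlib.sha256(s.encode()).hexdigest()
```

The offline audit is exactly what attackers will do with a stolen password database, which is why it is worth doing to yourself first.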
But don’t treat failed login attempts as special events that need some kind of reaction. They are more like spam: inevitable, ubiquitous, and best handled in ways that have no effect on the target. It’s not your users’ fault that people are trying to hack into their accounts, so don’t punish them for it.
Addendum: One of my friends on that mailing list followed up with this story:
Anthem Blue Cross, in order to let you make online payments, redirects you to a random payment processing company with a scammy-sounding domain name, which tells you the following:

1. You need to make a new account with us because we're not tied into Anthem's database. Because, you know, I can personally take credit cards -- but that's apparently beyond the capability of the largest health insurance provider in the country.

2. By the way, you might already have an account with us from some other place, so you'll have to log in with that account instead. No, we can't tell you whether that's the case or not.

3. You must choose a password between 5 and 8 characters.

I'm not kidding. 5-8 characters. I make my payments over the phone.
Update 2015-12-01: How could I have forgotten to mention that there’s a donor match going on right now? If you become one of the next 50 new Conservancy supporters, a donor is matching up to $6000! Please help Conservancy get every dollar they can from this generous donor.
Few organizations are as effective per dollar as the Software Freedom Conservancy.
The list of what they’ve done in 2015 alone is impressive — and that’s before you consider how small a staff they do it with.
You’ll notice that link was actually to their 2015 fundraiser page, which explains why they need to raise money now, and exactly what they plan to do with it. (Did I mention high marks for transparency?)
Today, for Giving Tuesday, I became a Conservancy Supporter again, and hope you’ll consider doing the same. The free software movement doesn’t run on good will. It runs on dedicated people giving their all, and those who do it full-time need support from everyone who understands why this movement is important.
I’ll keep this short, because the very best thing you can do right now is go watch this 18-minute video of Nina Paley giving a talk at TEDxMaastricht about exactly why she is a copyright abolitionist and how copyright abolition starts at home, especially for artists. It is by far the best, most eloquent explanation I’ve seen yet of the harm copyright causes to artists and audiences and how liberation is possible:
If you’re one of the “copy-curious” — people who feel something is wrong with the current copyright system, but who worry about abandoning it wholesale because “how will artists make a living” and other similar questions the intellectual monopoly industry wants circling around in your head — then this talk is for you.
It’s less than 20 minutes. You will be mesmerized. And, like Nina’s audience at the talk, you will come out of it truly understanding the copyright abolition position and why an artist of Nina Paley’s caliber holds it.
Link to it: questioncopyright.org/copyright_is_brain_damage.
Please share widely!
I got a treeware letter recently from Experian explaining how one of their servers had been hacked and how my private data (name, address, Social Security number, phone number, birth date, etc) was likely obtained by criminal resellers. The letter was a little more euphemistic than that, but that’s basically what Experian was admitting. To make up for this incident, they were offering me a free two-year membership in their “ProtectMyID elite credit monitoring and identity theft resolution services”.
Now, one might, in these circumstances, ask oneself “Why would I want to enroll in an identity protection service offered by the very company that just admitted they compromised my identity when their server got hacked?”
Fortunately, their own FAQ addresses this question forthrightly:
Q: Since Experian was compromised; can it effectively offer credit monitoring?
A: Absolutely. This was an isolated incident of one server and one client’s data. The consumer credit bureau was not accessed in this incident and no other clients’ data was involved.
Well, that makes the decision easy. I don’t blame them for getting hacked — that could happen to anyone. But no way am I trusting my private data to people who use a semicolon where they should use a comma!
On a private mailing list, a friend recently asked this:
Playing devil’s advocate here: what privacy are you trying to protect? Is it very important to you that websites not know what sort of products you’re interested in (and if so, why)? Or is it that you simply find targeted ads annoying?
I ask as someone who spent four years trying to help websites show less annoying ads.
Below is my response (after someone else on the list said “Sorely tempted to exfiltrate the hell out of this. Can we have it on a web page please?”):
I think Eben Moglen’s observation that privacy is really an ecological concept, not a transactional one, is the best answer to this. Thinking of privacy primarily in terms of the relationship between the user and various commercial third-parties misses the point. This post gives the relevant passage from Eben (it’s not long, and there’s a link to his full talk):
He has also pointed out that these days it’s an explicit goal of the U.S. government to have and maintain the social graph of everyone. That is, all the relationships, to the highest degree of accuracy and resolution possible. So the information Google and other online services collect is now potential data for that graph. It’s already both subpoena’d at some times and surreptitiously exfiltrated at others (though Google has done admirable work trying to prevent the second; how successful that has been, we can’t know, but it probably has had some limiting effect).
My point is: all that data we’re collecting, once it exists, is valuable to more parties than the ones who originally collected it. And by the Ashley Madison Principle, there’s no such thing as a confidential dataset. There are only datasets that have not yet been involuntarily shared, and those which have been. There is no guarantee you will be able to tell which category your particular dataset falls into.
So when you ask “Is it very important to you that websites not know what sort of products you’re interested in?”, you’re framing an ecological question in a transactional way. This unintentionally transforms the question from the one we should care about to the one collectors of large-scale data would prefer we ask :-).
I realize, of course, that there is a tradeoff here. Google really can improve the quality of ads — quality as seen not just from the advertiser’s point of view, but even from the user’s point of view — by tracking and analyzing everything everyone does. The benefits are near-term and (for Google and the advertisers) centralized; the costs are long-term and decentralized. But that doesn’t mean the costs aren’t significant. It’s very similar to the economics of a lot of environmental pollution, actually, which is partly why “ecological” is such a good word here. I think in some ways it’s almost the definition of an ecosystem to say it is a system from which short-term, easily measurable benefits can be extracted for particular members at long-term, hard-to-measure (but real) costs for all members. Privacy turns out to be such a system.
Does that help?
Update Nov 2015: Many thanks to Twitter engineer Eitan Adler for grabbing this one by the horns and steering it skillfully and persistently through the support team. My friend’s problem is now solved.
Note: If you’re from Twitter Inc., please contact me. If you work at Twitter and you know how to fix the problem described in this post (or even if you don’t work at Twitter but you know how to fix it) please feel free to contact me privately about this. It should be pretty easy to prove my friend’s identity in whatever way is needed. I’m kfogel on Keybase.
A friend of mine has a Twitter “Verified Account”. This means he’s a well-known enough public figure (which he is) for Twitter to have verified his identity. His Twitter page has a little blue checkmark, which indicates that Twitter is vouching that this person is who you think he is.
The only problem is, his account got hacked.
Not hacked directly. Instead, the hackers used social engineering to dupe his email provider into giving them control of my friend’s email account. Then, on his Twitter account, they pretended to be him, claimed to have lost his password, and went through Twitter’s mailback-confirmation dance to have a password reset link emailed to them. That password reset link, of course, went to the hacked email account, so then they had his Twitter account too.
My friend is a normal computer user, but is not otherwise particularly technical, and he asked me for help getting back control of his account.
My first thought was that Twitter, since it provides verified accounts in the first place, would also provide some special means of recovering such accounts. After all, they’re vouching for the identity. The sorts of public figures who get verified accounts are also more likely targets for getting hacked, so it would make sense for Twitter to have some recovery mechanism that is specific to verified accounts, some kind of recovery red carpet.
But if so, I haven’t found it yet. As far as I can tell, once someone gets control of the email address associated with a Twitter account, they effectively can take over that Twitter account and there is no way to get it back, even for “verified” accounts. (No, my friend had not set up any phone-number-based confirmation, just his email address.)
Here’s the only account recovery screen I can get to; I haven’t found any path for holders of verified accounts, other than this path (click to enlarge):
(I’m not mentioning my friend’s name here because I don’t want to out this effort to the hackers.)