Gosh, more sunny Portland. And a super stimulating day. And secrets! And, unfortunately, not enough work on SULACO BLUE. On we go.
1.0 Sharing Heartbleed
As an indication of what the general public is thinking about, I got a semi-urgent voicemail from my father-in-law this morning that warranted a callback. It wasn't about Heartbleed, but it was about Windows XP being EOLed. So there's that, I guess. (I explained that he needn't worry - his laptop is on Windows 7, I think).
Anyway: Heartbleed. It affects a piece of more-or-less backbone infrastructure powering much of the internet, and it's genuinely an infrastructural problem: a structural weakness that comes from, frankly, using tools that just aren't up to the job. There's no excuse *these days* for the type of bug that caused Heartbleed (which, quickly, if you're not familiar, is a weakness in the software that a lot of the internet uses to provide secure, encrypted communication - a weakness that allows that secure information to leak out in unpredictable ways that can be taken advantage of by unscrupulous people).
(Why's it called Heartbleed? Because it's reliant upon a "heartbeat" feature built into the software that provides those secure encryption services we rely on).
There's a thread I want to pull on here. First was the quick agreement that the *branding* of the Heartbleed bug was pretty impressive - better than a lot of startups manage these days. For the community and audience it was intended for, the Heartbleed site did a good job of communicating the problem and what to do about it. I know that seems like a weird thing to say, but here I'm using "branding" to mean the whole package: the manner in which what needed to be communicated was communicated. As a whole, the team did a good job.
And then you keep pulling on that thread, and what came out today was a steady slew of disclosures and emails from services that had patched, or fixed, the Heartbleed vulnerability. Those emails need to do a bunch of jobs: communicate that everything's OK, explain what the problem was and, if you need to do anything (like change a password), say what that is and how to do it.
I thought the If This Then That (IFTTT) email did a pretty good job - which I'd love to point to on their blog, but it looks like while they've sent out an email, they haven't updated their blog yet. Now, IFTTT isn't a particularly mass consumer-facing company; they're definitely in the enthusiast area. But let's look at their copy. It:
- clearly states the problem: "A major vulnerability in the technology that powers encryption across much of the internet was discovered this week."
The only real pieces of jargon here are the words "vulnerability" and "encryption" - but again, I think those are appropriate to the audience that IFTTT has. I'd probably replace "vulnerability" with "weakness" and "encryption" with "secure communication".
- says how IFTTT responded: "Like many other teams, we took immediate action to patch the vulnerability in our infrastructure."
I'd probably simplify here to "Like many other teams, we took immediate action to fix the weakness."
- and the result: "IFTTT is no longer vulnerable."
- and then what you need to do: "Though we have no evidence of malicious behavior, we've taken the extra precaution of logging you out of IFTTT on the web and mobile. We encourage you to change your password not only on IFTTT, but everywhere, as many of the services you love were affected."
They also append a way to get in touch with IFTTT - an email address - at the bottom of their message.
All in all, a good example of timely communication about a serious issue.
Of course, all this is well and good. But we have managed code these days, and we continue to see silly bugs like Heartbleed that, technologically speaking, are beneath us: they should not be happening. And yet they still do. And they don't happen exclusively to closed-source products: the many eyes that supposedly make all bugs shallow *still* didn't catch Heartbleed, even though in retrospect it looks obvious.
But this is what it feels like to live with brittle infrastructure. A vulnerability gets disclosed and the internet community, such as it is, races to fix it, everywhere. I'm trying to think of analogies and it's a bit like, but not quite, a massive car recall, only for *roads* and not for cars. Or God, imagine if someone came up with a "concrete" vulnerability and suddenly we all had to go around patching our *buildings*. It's that far down the stack: a component distributed throughout the infrastructure is what needs fixing. And I'm not sure how we deal with this. I remember ages ago reading about crack CERT teams, and now it feels like, even more than before, you kind of need a Global Frequency-esque zero-day patch team, able to fly around the world at a moment's notice to fix the doddering internet. Only, how in practice do you co-ordinate concentrated-in-effort-but-distributed-in-effect work like this? There's no universal back door. Hilariously, I saw a wag on Twitter raise the possibility of using Heartbleed itself as a remote-code-execution-privilege-escalation-beachhead: using the vulnerability to patch itself. But perhaps that's what a zero-day Global Frequency white-hat team would do. We break into other people's computers: to fix them.
And then from that thread: zero-day existential threats to humanity. Thinking about them in terms of software patching and vulnerability disclosures: is there a group of people who're concerned with such things? How might they act? A group of people - shadowy, and no doubt well-funded - whose responsibility is to act against new systemic, zero-day critical bugs in human civilization. Not so much protecting us, Global Frequency-style, against the *secret* threats in our lives, the detritus left over by overly enthusiastic government-funded TLAs, but more the bottom-up protection. The ten-year-old in Mumbai who's just discovered that pushing this-bit-here leaves civilization hanging by its fingertips off an existential cliff-edge. And these zero-days that I'm thinking about aren't the "I've just invented a new thing" kind but rather the systemic-vulnerability kind: the "hey, it turns out that evolution is pretty bad at making sure security holes are patched" kind.
Something I think I'm going to keep ruminating on.
That's it for today! As ever, love the notes and love the senders-of-notes.