Lessons From the Signal Leak
I’ve found a lot of the coverage of the Trump administration’s accidental leak of Signal messages to The Atlantic frustrating. To read most of the coverage, the major mistake was Mike Waltz’s inclusion of Jeffrey Goldberg in the conversation, and the big questions are about the impact of this particular leak. I have two alternative takeaways, one about the general security attitude, and one about Signal itself.
First the broad one.
Two of the things to consider when evaluating the security of a system are the capability and motivation of the assumed attacker. Most of us are worried about relatively unsophisticated adversaries that don’t actually care that much about us in particular. We want to guard against the cyber-criminal out to get a credit card number, or a telco that wants to sell our info to advertisers. If we’re harder to hack than the next guy, they’ll just move on.
It’s clearly a different case when, for example, law enforcement gets interested in you in specific: the adversary (“The Law”) is now motivated to expend significant, directed effort, and can bring in reasonably sophisticated resources, such as the FBI, to break into your device and messages. If this is your adversary, your job is much harder, and their rate of success goes up substantially.
But we’re talking about people like the Vice President, the Secretary of Defense, and the Director of National Intelligence: people who would be at the top of any US adversary’s “to bug” list. The motivation is extreme. And, particularly with Russia and China, we’re talking about highly sophisticated attackers.
And so it is a reasonable assumption that any commodity device, like those running Signal, owned by these individuals has been compromised, and that every conversation they have on it is being scooped up by Beijing and Moscow. For the same reason, it’s a reasonable assumption that their personal laptops, cars, and homes have all been bugged.
This is why the government has separate systems and physical locations to hold this kind of conversation. An isolated, stripped-down, purpose-built system would be much easier to secure than even a minimal Android or iOS device.
We know about this particular case because Mr. Waltz accidentally included Mr. Goldberg in the conversation, leaking the whole thing to The Atlantic. But this was a minor snafu in the grand scheme of things. The big mistake, made by all of the people in the group, was having the conversation on a commodity platform to begin with. And while we know about this particular conversation, we don’t know how many others these individuals have broadcast to America’s adversaries.
This was dumb – potentially criminally dumb – behavior by officials who should have known better, and should disqualify all of these individuals from handling classified information in the future.
Beyond this, I think there is a lesson to be learned from the accidental inclusion of Mr. Goldberg in this conversation, and it’s not that Mr. Waltz is an idiot (even if he may be): it’s about user interfaces and Signal’s security model. I don’t know whether Mr. Waltz included the wrong Jeffrey Goldberg in the conversation, or just fat-fingered his contacts list, but either way, Signal couldn’t have warned him that what he was doing was dumb, because Signal doesn’t have any notion of an organization or its security boundaries: people are just people.
Indeed, if this group had used Slack for the conversation, or if they were jointly editing a Google Doc, the system would almost certainly have been locked down to prevent the accidental inclusion of any individual outside of the organization. Adding a person from The Atlantic would have at least triggered a warning that this was a bad idea, and Mr. Waltz would almost certainly not have made the error.
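To make the idea concrete, here is a minimal sketch of the kind of organizational-boundary check that tools like Slack enforce and Signal does not. All of the names here (`ALLOWED_DOMAINS`, `add_participant`, the example addresses) are hypothetical illustrations, not real Signal or Slack APIs:

```python
# Hypothetical sketch of an organizational boundary check for a group chat.
# An admin-configured allowlist of domains stands in for the "organization";
# anyone outside it triggers a warning instead of being silently added.

ALLOWED_DOMAINS = {"who.eop.gov", "defense.gov", "state.gov"}  # illustrative allowlist


def is_external(address: str) -> bool:
    """Return True if the address's domain falls outside the organization."""
    domain = address.rsplit("@", 1)[-1].lower()
    return domain not in ALLOWED_DOMAINS


def add_participant(group: list[str], address: str, confirm_external: bool = False) -> bool:
    """Add a participant, refusing external addresses unless explicitly confirmed."""
    if is_external(address) and not confirm_external:
        print(f"WARNING: {address} is outside the organization; not added.")
        return False
    group.append(address)
    return True


chat: list[str] = []
add_participant(chat, "official@defense.gov")        # inside the org: added silently
add_participant(chat, "reporter@theatlantic.com")    # outside the org: blocked with a warning
```

The point is not the ten lines of code, which are trivial, but that the check requires the system to *have* a notion of an organization in the first place; Signal's contact model has no such concept to hang a warning on.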
I’m a fan of (and a donor to) Signal, but the lack of these organizational boundaries is a good argument against its organizational use. And I suspect that for Signal, this is just fine: that’s not the use case they’re targeting.