Self-destructing messages don’t protect against the recipient – that was never the point

Posted on Oct 16, 2016 by Rick Falkvinge

This week, Signal finally introduced self-destructing messages. Regrettably, many seem to miss the point of what they’re for. The point of a self-destructing message is not to protect against the recipient; it’s to protect the message from being read by somebody other than the recipient much later, if the device is lost, seized, or otherwise compromised.

Signal has long been the go-to secure messaging app for privacy activists – for long enough that I used to recommend it back when it was two apps, TextSecure and RedPhone, before they merged into one app under the name Signal. The one missing feature has been self-destructing messages, which is why I used Telegram in the most sensitive of environments, despite Telegram’s encryption being significantly weaker and not entirely best practice.

But as of last week, Signal finally added self-destructing messages. Unfortunately, most people seem to be missing their immense value, and even the Signal pages talk of “data hygiene” and a way to “keep message history tidy”, as if the self-destruct were mostly about not cluttering your phone’s memory with old messages.

The point of self-destructing messages is to prevent somebody other than the recipient from reading them, in case the device is compromised by an adversary later – possibly much later. The intended recipient is not the adversary.

The point was never a Snapchat-like “you can see this once and never again” mechanism. After all, you’re sending a message to somebody and intending them to read it. Do you want to prevent them from re-experiencing reading it? That’s… esoteric, and it doesn’t have much security value anyway: the message is in the recipient’s memory, and you have no way of changing their memory.

In other words, self-destructing messages are not intended to protect against the recipient reading them more than once. Such a notion has little value. You want the recipient to read the message, after all; that’s the whole point of sending it in the first place.

Take the following scenario, for example: Alice sends something personally sensitive but completely legal to Bob’s phone. Bob comes under suspicion of a crime, and police seize his phone. The messages in Bob’s phone become part of the court case and therefore the public record, now readable by anyone. Bob is later completely acquitted of the crime, having been the wrong person charged. Despite everybody in law enforcement doing what they were supposed to, Alice’s sensitive messages have now become public knowledge. (And even if the messages are excluded from the public court case, law enforcement officers will still have read them to determine this fact.)

This is the scenario that self-destructing messages prevent. This is why you should be using them.

And of course, in less… hospitable regimes, it may be illegal just to discuss the world we live in. Self-destruct comes in handy there, too. Legal isn’t always ethical, and ethical isn’t always legal.

Privacy remains your own responsibility.


1 Comment

  1. Antimon555

    Sorry for long comment, but I feel that this needs to be discussed, or at least thought about.

    While not about self-destructing messages in particular, there are a few things about Signal I have questions about, regarding its security, or rather the security of what’s surrounding it.

    Google being Google, and both Google and Apple being American companies, how can we trust the operating systems themselves? Android is open source, but how many clever, code-studying people have actually sat down and verified thoroughly that it doesn’t read raw voice data on its way into the Signal app and send it to Google, and thus to the NSA? I know that you, Rick, have been writing about the “OK, Google” function; I’d imagine that would be a good place to hide such sneaky code… Or for that matter, has anyone even read Google’s and Apple’s EULAs completely, to make sure they don’t do so with the user’s “permission”?

    While I’ve heard many good and very few bad things regarding privacy about Signal and its developers, I can’t help but wonder: Why make an end-to-end encryption app dependent on Google services, and throw out people trying to remove that dependency? See Wikipedia’s article on Signal_(software) > Limitations > Android specific. After all, requiring users who want privacy to agree to Google’s terms and conditions is contradictory.
    Also, requiring verification by telephone number seems suspicious. Sure, it can be a security benefit to know that you are being called or texted by the person you think, but going from authentication to identification is usually a sign that a service isn’t good for privacy.

    These questions may in part be explained by the app being designed to be easy to use – which is great for getting people to encrypt in the first place – but to me they raise warning flags.
