Privacy Theater – How Privacy Companies Market and What Actually Protects You

Posted on Dec 12, 2018 by Derek Zimmer

Bruce Schneier famously popularized the concept of “security theater” in his 2003 book Beyond Fear.

The concept of security theater describes how nations engage in “feel good” activities that appear to improve the security of their citizens when, in practice, they do little or nothing to actually improve public safety.

An apt example is the TSA in the United States, formed as a knee-jerk reaction shortly after the September 11, 2001 terrorist attacks. The agency was a colossal government effort to create a national body to standardize security and more rigorously train staff to improve airline (and other transport) safety.

In practice, in order to prevent a terrorist from getting a weapon onto a plane and killing hundreds of people, you now have hundreds of people standing in line in an insecure area of the airport. This was exploited in the 2016 Brussels airport bombing in Belgium, which caused mass casualties among people queued in the airport’s unsecured departure hall.

This highlights the dynamic world of security. What can nations do to actually protect us, and what is reasonable for them to do? Which practices are a waste of resources that would do more good if spent elsewhere?

Security Theater meets Privacy Theater

This concept applies broadly to privacy as well. There are things we can do to reasonably protect our privacy from intruders, but which steps actually make a difference, and which steps just make us feel good while ultimately failing to prevent our information from leaking to whoever wants to listen?

When a government comes up with a new cyber policy, who can we trust to give us an accurate run-down of how these policies will impact our society?

Which technologies enable us to maintain our private lives? Attorney-client privilege, doctor-patient confidentiality, whistle-blower protections, media source protection, private conversations, and more all rely on these technologies being rock solid.

Governmental Assurances and Standards Bodies

One way that these technologies are policed is by government agencies and international standards bodies: the NSA, ANSSI, NIST, the IETF, the IEEE, and many, many others. I mention the NSA first because it highlights a fundamental problem with the assurances we get from groups like these. Their motivations are directly in conflict with security and privacy in general, so there is no reason to blindly trust that what they want is what is best for your privacy. Governments are known to conduct security research in order to hoard the vulnerabilities they discover as cyberweapons, rather than close the security holes and improve the internet as a whole.

Further, “data” is the new oil. So when a standards body made up of companies like Facebook, Mindshare, and Google sets security and privacy standards, it often steers away from tech that is “too strong,” as that would prevent its members’ own data businesses from vacuuming up all of that sweet data money.

This situation isn’t helped by the fact that computer security is unbelievably complicated, and that it is further complicated by the way that computers read code. When a programmer writes code, they create their application in a language that people understand, called the source code. This code is then ran through a special application called a compiler that converts this “human language” code into “computer language” code. This process is so complicated that it is almost impossible to fully work backwards and go from compiled code back to source code.

Companies and organizations hide their code by providing only the finished, already-compiled code, which means it can’t be fully checked for vulnerabilities or to confirm that its security is properly implemented. You just have to “trust them,” in a world where we have repeatedly demonstrated that you shouldn’t blindly trust these people with your privacy.

This takes us to the issue of open source. Open source software has its original source code available for anyone to read, and anyone can compile it into machine code themselves. This is crucially important because it allows a process of public peer review, where people can actually look at your code and verify that it isn’t farming your data and isn’t poorly engineered.
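
Public source code also enables a stronger check, usually called reproducible builds: if the build process is deterministic, anyone can compile the published code and confirm the result is byte-for-byte identical to the binary the vendor ships. Here is a minimal sketch of the comparison step, with placeholder file names:

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder file names: a binary you compiled from the published
# source, and the binary the vendor actually ships to customers.
ours = sha256_of("app-built-from-source.bin")
theirs = sha256_of("app-as-shipped.bin")

if ours == theirs:
    print("Shipped binary matches the public source code.")
else:
    print("Mismatch: the shipped binary may not come from the code you read.")
    sys.exit(1)
```

In practice this only works when the toolchain is deterministic; projects such as Tor Browser and Bitcoin Core invest real effort in reproducible builds for exactly this reason.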

Complexity

Even with open source software, security is a constantly moving target: you have to keep up with the latest technologies and decide when it is time to deprecate old features and adopt new ones.

An example of this issue is the TLS 1.3 standard. Transport Layer Security (TLS) is the worldwide standard that governs how the cryptography protecting web traffic is performed. In the last few years there has been a hard push to strengthen that cryptography and to make sure it is widely deployed. The Let’s Encrypt project has brought us to an era where secure sites are more common than insecure ones, and by default those sites tend to use strong security without users or administrators having to do much configuration. With all of this progress, however, shortcuts for resuming a secure session were retained: session tickets and 0-RTT “early data,” which work by saving session keys on the client and server. These features needlessly create security and privacy challenges, and they are overwhelmingly used by web sites and web apps. We did a fantastic job of improving the standards, and then walked all of that progress back.
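
If you operate a server and want to opt out of those shortcuts, most TLS stacks let you disable session tickets, at the cost of a full handshake on every reconnect. A sketch using Python’s ssl module (the certificate paths are placeholders):

```python
import ssl

# Server-side TLS context: require modern TLS, then opt out of the
# session-resumption shortcuts so every connection does a full handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# TLS 1.2 and below: don't hand out session tickets (RFC 5077).
ctx.options |= ssl.OP_NO_TICKET

# TLS 1.3: tickets are a separate mechanism; stop issuing them entirely.
# (num_tickets requires Python 3.8+.)
ctx.num_tickets = 0

# Placeholder paths: load your real certificate and private key here.
ctx.load_cert_chain("server.crt", "server.key")
```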

Security and privacy standards that have shortcuts are not real advancements.

Timing is Everything

Knowing when it is time to stop using a standard is as important as knowing when it is time to adopt a new one. In an ideal world, we would have bulletproof security and privacy and wouldn’t need to stay on top of the constant changes. Any company worth its salt has people who can evaluate new technologies and decide when it is time to jump.

Real-world examples are the SHA-1 and SHA-3 hash standards. SHA stands for Secure Hash Algorithm, an international standard maintained by NIST. There are currently three hashing standards adopted by NIST: SHA-1, SHA-2, and SHA-3.

Hashing takes some kind of input and generates a hash from that data: a fixed-length string of letters and numbers that looks random. That string is effectively unique to whatever you put into the function to create the hash.

There’s some features of a hash that make them interesting for security.

1. Every hash should be different. When two different pieces of data produce the same hash result, that is called a collision, and it means the algorithm is broken; MD5, which used to be a standard, is a famous example. If collisions are infeasible, you know that the data you have is exactly what was intended to be sent to you, and not something else that gives you the same result.

2. Putting in the same data should give you the same hash result. This allows you to verify that the data you have is what the sender intends it to be.

These two factors combine to give you a powerful tool for checking for authenticity and integrity. You can see if the data has been tampered with or somehow corrupted or changed.
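
Both properties are easy to demonstrate for yourself. A quick sketch with Python’s hashlib, using SHA-256:

```python
import hashlib

def sha256_hex(message: str) -> str:
    """Return the SHA-256 hash of a string, as hex."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

# Same input, same hash: this is what lets you verify integrity.
assert sha256_hex("attack at dawn") == sha256_hex("attack at dawn")

# Different input, different hash: even a one-word change
# produces a completely unrelated-looking result.
print(sha256_hex("attack at dawn"))
print(sha256_hex("attack at dusk"))
```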

Recently, a research team from Google and CWI Amsterdam found a practical SHA-1 collision, meaning that with enough number crunching, you can create two different pieces of data that have the same hash. This is crucial because it means that SHA-1 is no longer as safe as it once was. Generating a collision still takes tremendous computing power, an estimated six figures’ worth of cloud computing time, but it can be done.
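
You can check the break yourself: the researchers published two different PDFs (available from shattered.io) that hash to the same SHA-1 value but, as expected, to different SHA-256 values. A sketch, assuming you have downloaded both files:

```python
import hashlib

def file_digest(path: str, algorithm: str) -> str:
    """Hash a file with the named algorithm, reading in chunks."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Two different PDFs crafted by the researchers to collide under SHA-1.
a, b = "shattered-1.pdf", "shattered-2.pdf"

print(file_digest(a, "sha1") == file_digest(b, "sha1"))      # True: a collision
print(file_digest(a, "sha256") == file_digest(b, "sha256"))  # False: SHA-2 holds
```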

That leaves us with SHA-2 and SHA-3. SHA-2 has no known collisions, and SHA-3 is newly adopted, chosen through a NIST contest to pick a new algorithm. So do you want to adopt the newest algorithm, or do you want to go with the tried-and-true SHA-2, which has no known problems? Or do you want to keep using SHA-1? After all, your data isn’t worth millions.

There’s a lot of things to consider. How hard is it to change algorithms? How badly do outsiders want your data? Is battery drain a concern? Performance?

In the VPN world, there’s a lot of talk about a new, simple protocol being developed called WireGuard. It is shiny, new, and fast, with strong encryption. Its code is very small and easy to review, making it an attractive option.

When is the right time to offer WireGuard to your customers? Probably not while the developers still say that you shouldn’t.

So What is Marketing and What is Real Privacy?

“Military Grade Encryption,” “Surf Anonymously,” “Audited,” “Strongest Privacy,” “No-Logs.” Which of these terms mean something and which ones are fluff terms trying to drag in more customers?

When you’re looking at any service, there’s a few points that you can take seriously and a few things that aren’t as significant.

“Military Grade Encryption, 256-bit encryption, 4096-bit encryption, unbreakable, strong, AES-256 encryption that never skips leg day.”

These are all marketing fluff terms that alone are largely meaningless. Encryption depends on properly implemented software, combined with strong algorithms, delivered to the customer in a secure way.

More important signs of actual safety are things like an open source client, secure development practices, and regular security review by independent experts.

If your privacy service does not make its source code available, that is a cause for concern. (Private Internet Access has committed to opening up the source code for all of our applications next year, and some of our apps are available right now.) If your privacy service asks for permissions that it does not appear to need, that is another red flag. If your privacy service is based in a jurisdiction where it has to log by law, that is a concern.

We are Based in <Privacy Country>

Another thing many people overlook is that simply creating a corporation overseas usually doesn’t unburden you from the legal environment of wherever the owners and stakeholders in the company reside. If five Americans create a company in Panama, that company is still beholden to the United States; if this were not the case, everyone would start companies in the Caymans. The US has rules about foreign-controlled corporations, and they apply to a lot of things other than taxes. Virtually every major nation has rules like these to prevent all kinds of shady business practices. So if a company claims to be in the British Virgin Islands, the Caymans, Jamaica, the Seychelles, or Panama, but they won’t tell you where any of their actual employees reside… it’s marketing.

Security Badges

All a security badge usually tells you is that the security team from the company in question (“security team” is generous; this is almost always automated, and no real person looks at anything) did a sweep of the systems from the outside and found no obvious problems. They do not look into the actual setup or settings of the services they certify, and it is largely a rubber stamp that will only catch the most glaring and obvious issues.

These badges are the e-commerce equivalent of “The virus scanner says everything is fine!”, and virus scanners rarely catch any real viruses anyway. These automated sweeps are not even close to a guarantee of safety.

Audits

There are two types of audits going around in the privacy world: security audits of servers and source code to verify that they are safe (a normal security practice and a good idea!), and audits to determine how user data is handled. A “no-logs” audit is problematic for a bunch of reasons, the most obvious being that an independent auditor can only look at a service as a snapshot in time. They cannot verify whether, before or after the audit, things were changed to enable logging, telemetry, or analytics. This means a dishonest service can disable its logging, bring in auditors, get a good review, and simply turn the logs back on.

Another way to find out if a VPN is really logging is to search the web for “<VPN Provider> DMCA notice” to see if users are getting copyright notices forwarded to them by their VPN provider. This tells you whether your provider is logging user information in a way that can identify individuals. If you see one or two, it could be user error: someone did something dumb and forgot to turn their VPN on. If you see a huge number of complaints, especially if the notices are coming from the VPN provider and not the users’ internet providers, then that VPN is logging.

A checklist for your service:

Do you know who runs the service? Or anything about the staff? If they claim to be in an exotic location, can you Google Maps their offices or find evidence that their employees live there?

How long has the company been around?

Is their software open source? Have you seen their GitHub? (You don’t need to be able to code to at least verify that some code is out there.)

Do they have security personnel on staff? Does their code get security review? Has the company ever had a security or privacy incident in the past?

Do they claim that they don’t retain or share data? Do their users agree? Is there any other evidence affirming or denying this?