Deep fakes: how immutable blockchain-based life logs could combat them, and the implications for privacy

Posted on Jan 19, 2019 by Glyn Moody

The idea of deep fakes – AI-assisted fake videos – first entered the mainstream around a year ago. After an initial burst of interest, people stopped searching for the term, although the technology behind the idea certainly hasn’t gone away. A couple of weeks ago, a video was circulating that appeared to show President Trump sticking his tongue out and licking his lips during his address to the nation. An editor at Q13, the Seattle Fox affiliate that aired the doctored footage, was later fired over the incident. The manipulation is not particularly sophisticated – it also oversaturated the video’s colors, giving the president’s skin and hair an orange hue. However, it’s a useful reminder that manipulating video is now easy, and potentially brings with it risks – not least for privacy.

Many of those implications were explored last year in a paper by two academics, Robert Chesney and Danielle Keats Citron. As the title indicates, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security” is wide-ranging, and includes a discussion of privacy. One obvious threat is that deep fake videos showing compromising behavior might be created for the purpose of blackmail:

Blackmailers might use deep fakes to extract something of value from people, even those who might normally have little or nothing to fear in this regard, who quite reasonably doubt their ability to debunk the fakes persuasively, or who fear in any event that any debunking would fail to reach far and fast enough to prevent or undo the initial damage.

It doesn’t matter how well someone protects details about their personal life. Deep fake technology is not limited by the facts, and so can simply create invented incidents apparently involving the victim. As AI technology advances, and hardware prices fall, so it will become more difficult to disprove convincing deep fake videos, especially for ordinary people of limited means and technical ability.

However, the general public is unlikely to be a major target of deep fakes simply because the potential damage to their reputation is limited, reducing the value that might be extracted from victims. That highlights the real problem: that deep fakes will be used against high-profile individuals – politicians and other public figures. The paper lists some plausible possibilities:

Fake videos could feature public officials taking bribes, displaying racism, or engaging in adultery.

Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.

Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.

These and other threats are fairly obvious. Much less clear is how society might counter them. To their credit, the academics spend many pages exploring technological, legal, regulatory, military (sic) and market solutions. The last of these is probably of most interest to readers of this blog. As the researchers point out, in a world where producing deep fakes is quick and easy, at-risk individuals will need a way to counter the diffusion of such videos by being able to demonstrate credibly their real location, words, and deeds at a given moment:

We predict the development of a profitable new service: immutable life logs or authentication trails that make it possible for a victim of a deep fake to produce a certified alibi credibly proving that he or she did not do or say the thing depicted.

From a technical perspective, such services will be made possible by advances in a variety of technologies including wearable tech; encryption; remote sensing; data compression, transmission, and storage; and blockchain-based record-keeping. That last element will be particularly important, for a vendor hoping to provide such services could not succeed without earning a strong reputation for the immutability and comprehensiveness of its data; the service otherwise would not have value.
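
To make that last point concrete, here is a minimal sketch in Python of how such blockchain-backed record-keeping might work – the field names and structure are illustrative assumptions, not anything from the paper. Each log entry commits to the hash of the previous one, so a provider only needs to publish the latest digest (for example, in a blockchain transaction) to make the whole recorded history tamper-evident:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash marking the start of the chain

def entry_hash(prev_hash: str, payload: dict) -> str:
    """Digest an entry together with its predecessor's hash, so that
    changing any past entry changes every later hash in the chain."""
    record = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

class LifeLog:
    """Append-only, hash-chained log. Anchoring only the current head
    externally (e.g. in a blockchain transaction) makes the entire
    history up to that point tamper-evident."""

    def __init__(self):
        self.entries = []   # list of (digest, payload) pairs
        self.head = GENESIS

    def append(self, payload: dict) -> str:
        payload = dict(payload, timestamp=time.time())
        self.head = entry_hash(self.head, payload)
        self.entries.append((self.head, payload))
        return self.head  # the digest a provider would anchor on-chain

# Usage: record periodic sensor snapshots, then anchor the head digest.
log = LifeLog()
log.append({"gps": [51.5074, -0.1278], "frame": "sha256-of-video-frame-1"})
anchor = log.append({"gps": [51.5075, -0.1279], "frame": "sha256-of-video-frame-2"})
print("digest to publish on-chain:", anchor)
```

The key property is that the on-chain anchor provides a public, third-party timestamp: once it is published, no earlier entry can be silently rewritten without changing the anchored digest.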

Such services on their own will not be enough. It is critically important to rebut a deep fake video quickly, with certified life logs that prove the victim was elsewhere at the time. Left too long, the lie will take root, and no amount of evidence will undo it. The academics suggest that life-log companies will need to work closely with social media companies to ensure quick and effective dissemination of the digital alibi.
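
The flip side is verification. Continuing the sketch above (and reusing its entry_hash helper and GENESIS constant), a verifier who knows the digest anchored on-chain at a given time can recompute the chain over the disclosed entries and confirm that the claimed locations and timestamps were fixed before that anchor was published:

```python
def verify_alibi(anchored_digest: str, entries: list) -> bool:
    """Recompute the hash chain over the disclosed entries and check
    that it ends at the digest anchored on-chain. A match shows the
    entries existed before the anchor's public timestamp."""
    head = GENESIS
    for stored_digest, payload in entries:
        head = entry_hash(head, payload)
        if head != stored_digest:
            return False  # an entry was altered after the fact
    return head == anchored_digest

# A victim discloses log.entries; anyone can check them against the
# anchor, whose timestamp is fixed by the blockchain itself.
assert verify_alibi(anchor, log.entries)
```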

This leads to a rather odd situation where politicians and public figures might find themselves obliged to carry out constant surveillance of themselves in order to have authentication trails for any point in time. Furthermore, they will need to be ready to provide those possibly intimate life logs to social media services for the latter to spread them as widely as possible. In other words, people occupying positions of power will end up having even less privacy than they do now.

Security is naturally a concern. Large quantities of video data would have to be recorded and stored indefinitely, which would require specialized facilities. Such data would also be highly attractive to criminals and foreign governments, since it could provide important insights into public figures, potential material for blackmail, and even classified information. Keeping so much data safe would be an enormous challenge.

Another issue, especially in the EU, is the impact these life-log services would have on the privacy of others – family, friends, colleagues – whose own lives would be recorded, at least in part, whether they wanted that or not. It’s hard to see how that could be compliant with the GDPR.

If these problems mean that life-log systems – however interesting in theory – are simply impractical, what are the alternatives? The academic paper’s analysis doesn’t offer much hope for quick or easy solutions of any kind. That’s a serious concern, because a world routinely flooded with embarrassing or intimate deep fakes would blur any sense of what is private and what is public. The risk is that privacy would become some quaint, old-fashioned concept with no real meaning in this AI-powered world.

Featured image from MyNorthwest.