Markpainting: how to fight deceptive AI-manipulated images with a sprinkle of digital cleverness

Posted on Jun 14, 2021 by Glyn Moody

One way of looking at privacy is in terms of controlling personal data. That means stopping others from accessing existing private information, preventing them from collecting new personal data, and not allowing the creation of false information about a person. The first of these can be thought of as “classical” privacy – making sure that personal data is only available to people or companies that have permission. Collecting new data involves things like facial recognition, which allows governments or companies to gain often valuable information about a person by tracking where they are or what they are doing.

False personal information has existed in the past in the form of rumors, slander and libel, but it is becoming a more serious problem today because computers make it hard to tell false personal data from the real thing. Perhaps the most dramatic example of this is the new class of deepfake videos, which use artificial intelligence to place someone else’s face onto pre-existing footage. These are still relatively crude, and can often be spotted by direct inspection.

However, that’s not the case for computer-modified images, which require far less processing power than videos, and where techniques have evolved considerably over the years. A particularly important kind of modification is known as “inpainting” – filling in missing or excised parts of a picture. Originally an analog technique used to repair damaged paintings, it is now a routine option in programs like Photoshop. One of the reasons that digital inpainting is so good today is that AI techniques such as artificial neural networks are commonly applied. These allow a program to use other parts of an image to fill in – or replace – particular sections. For example, it is now relatively straightforward to remove a particular person from a photo, replacing them with a suitable background.
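To make that concrete, here is a minimal sketch of what calling such a neural inpainter looks like in practice. The names `InpaintingNet`, `load_pretrained` and `load_photo` are hypothetical placeholders standing in for whatever inpainting library and image-loading code is actually used; only the tensor operations are standard PyTorch.

```python
import torch

# Hypothetical pre-trained inpainting network (placeholder name, not a real
# library): given an image and a binary mask marking the "hole", it returns
# a plausible completion of the missing region.
inpainter = InpaintingNet.load_pretrained("generic-inpainter")

image = load_photo("street_scene.png")    # (1, 3, H, W) tensor with values in [0, 1]
mask = torch.zeros_like(image[:, :1])     # (1, 1, H, W); 1 marks the region to remove
mask[:, :, 100:220, 80:160] = 1.0         # e.g. the area occupied by a person

# The network fills the masked area using both the local texture around the
# hole and the global content of the rest of the picture.
completed = inpainter(image * (1 - mask), mask)
```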

This is clearly problematic for privacy, since it allows the meaning of visual images to be changed radically – either by removing or adding elements. The ready availability on the Internet of images of people makes it impossible to stop this kind of manipulation. But at the very least, it would be good to have tools that allow it to be controlled in certain circumstances – for example, when photos of important or potentially sensitive events are concerned. A new paper describes a technique that offers some protection in this regard. Along the way, it shows how the artificial intelligence techniques routinely used to subvert images can themselves be subverted. Here’s the key idea behind what the researchers call “markpainting”, by analogy with “inpainting”:

Inpainting is a complex task, with neural networks trained to manipulate images of arbitrary size and with arbitrary patches. Furthermore, modern inpainters can fill irregular holes. As they are trying to be semantically aware and display both local and global consistency, they need to understand the global scenery [that is, of the whole image] well. That in turn makes them dependent not only on the area around the patch, but on the whole image. Imagine trying to fill in a hole around the squirrel eye depicted in Figure 4 [shown in the paper]. Here, local information (shown in pink) would suggest that it has to be filled with fur. Global information (shown in orange) on the other hand, should tell the inpainter that the picture features a squirrel in a particular pose and that an eye should be located there. As illustrated in the gradient visualization in Figure 4, gradients focus on both the area around the eye and the rest of the image. This dependency on global information makes inpainting both complex and prone to manipulation. The markpainter does not need to concentrate their perturbation around the patch area but can scatter it all over the image.

The “perturbation” mentioned there is specially generated data that is spread around the image in such a way as to trick the AI system into incorporating it into the patch, even though it actually has nothing to do with the missing part. Because it is distributed across the image, it does not cause any major change visible to the naked eye; it is only picked up by the inpainting program because of the way the latter draws on global information. It is thus possible to flag up otherwise invisible changes to an image, for example by causing a warning mark to appear in the modified version. Even though there are a number of widely-used inpainting techniques and programs, the same perturbation data can be used against all of them to shape the extra elements they generate. This means that a single image carrying markpainting perturbations will reveal additions or deletions made with a wide range of inpainting tools.
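The paper’s actual procedure is more involved (it optimizes against several inpainters at once to get that transferability), but the core mechanism can be sketched as a standard gradient-based perturbation loop. Everything below is an illustrative assumption rather than the authors’ code: `inpainter` is assumed to be any differentiable inpainting model with the calling convention used above, and the step count, learning rate and perturbation budget `epsilon` are made-up values.

```python
import torch
import torch.nn.functional as F

def markpaint(image, mask, target, inpainter, steps=300, epsilon=8 / 255, lr=1e-2):
    """Sketch of a markpainting-style optimization: perturb the image
    *outside* the hole so that an inpainter filling the hole reproduces
    `target` (for example, a visible warning mark)."""
    delta = torch.zeros_like(image, requires_grad=True)   # the hidden perturbation
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        # The perturbation is confined to the visible region of the image.
        perturbed = (image + delta * (1 - mask)).clamp(0, 1)
        # What a manipulator's inpainter would produce for the masked region.
        filled = inpainter(perturbed * (1 - mask), mask)

        # Push the inpainted hole towards the target watermark.
        loss = F.mse_loss(filled * mask, target * mask)
        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():
            # Keep the perturbation small so it stays imperceptible.
            delta.clamp_(-epsilon, epsilon)

    return (image + delta * (1 - mask)).clamp(0, 1)
```

In this sketch, a defender would run the optimization once before publishing an image; any later attempt to inpaint the protected region with a sufficiently similar tool would then reproduce the warning mark instead of a clean fill.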

Markpainting protection is not perfect. It might be possible to build inpainting tools using novel AI techniques that are not misled by the perturbations hidden in the image. Similarly, it would be possible to write a specialized tool designed to work around the protection. But the real problem today is that image manipulation through inpainting has become almost trivially easy to carry out thanks to widely-available software tools. At the very least, the proposed markpainting technique restores some of the difficulty of carrying out such manipulations. It also offers the hope that no matter how sophisticated AI techniques become, and no matter how convincing their modified images seem to the human eye, there will be countervailing AI methods that can help to reveal their use. It means that the fight to preserve privacy is not hopeless, however bleak the situation might seem.

Featured image by Alexander Lesnitsky.