Image-Scaling Attacks in Machine Learning

Erwin Quiring, David Klein, Daniel Arp, Martin Johns and Konrad Rieck
Technische Universität Braunschweig

Introduction

Machine learning has made remarkable progress in recent years, yet its success has been overshadowed by various attacks that can thwart its correct operation. While a large body of research has studied attacks against learning algorithms, vulnerabilities in the preprocessing for machine learning have received little attention so far.

Image-scaling attacks allow an adversary to manipulate images unnoticeably, so that they change their content after downscaling. Such attacks are a considerable threat, as scaling is an omnipresent pre-processing step in computer vision. Moreover, these attacks are agnostic to the learning model, features, and training data, and thus affect any learning-based system operating on images.

Image-scaling attack example

Take, for instance, the example above. The adversary takes an arbitrary source image, here a do-not-enter sign, and a no-parking sign as the target image. The attack generates an attack image A by slightly changing the source image, so that A still looks like the source image. However, if this attack image is downscaled later, we obtain an output image that looks like the target image. This output image is then passed to the machine learning system. So while we see the source image, the ML system obtains the target image. This enables a variety of attacks that we discuss below.

All in all, scaling attacks have a severe impact on the security of ML, and are simple to realize in practice with common libraries like TensorFlow. Our work provides the first comprehensive analysis of these attacks, including a root-cause analysis and effective defenses.
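To illustrate how little is needed, the following toy sketch embeds a target image into a source image for nearest-neighbor downscaling with OpenCV. It is a deliberately simplified illustration, not the actual attack algorithm, which is formulated as a quadratic optimization problem and also covers bilinear and bicubic kernels; the file names and sizes are placeholders.

# Toy sketch: for nearest-neighbor downscaling, it suffices to overwrite the
# few source pixels that the resize operation actually samples.
import cv2
import numpy as np

def nearest_scaling_attack(source, target):
    """source: HxW grayscale array, target: hxw grayscale array with H > h, W > w.
    Returns an attack image that looks like `source` but turns into `target`
    when downscaled with cv2.INTER_NEAREST."""
    H, W = source.shape
    h, w = target.shape
    # Source indices sampled by OpenCV's nearest-neighbor resize:
    # floor(x * 1/(dst/src)), clipped to the image border.
    ys = np.minimum(np.floor(np.arange(h) * (1.0 / (h / H))).astype(int), H - 1)
    xs = np.minimum(np.floor(np.arange(w) * (1.0 / (w / W))).astype(int), W - 1)
    attack = source.copy()
    attack[np.ix_(ys, xs)] = target   # modify only the sampled pixels
    return attack

# Hypothetical file names, e.g. a 448x448 source and a 64x64 target image.
source = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
attack = nearest_scaling_attack(source, target)
downscaled = cv2.resize(attack, (target.shape[1], target.shape[0]),
                        interpolation=cv2.INTER_NEAREST)
# `attack` is visually close to `source`, while `downscaled` shows `target`.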

Examples

Some examples of image-scaling attacks. Click on each image for further information.

What happens with this image after downscaling?

Can we rely on machine learning to detect objectionable content?

Can we trust the training data used for self-driving cars?

Details

Applications

Image-scaling attacks are of particular concern in all security-related applications where images are processed. After downscaling, the attacker can present the system with an arbitrary, unexpected output image.

In the context of machine learning, the attacks can be used to poison training data as well as to mislead classifiers during prediction. The latter accomplishes the same goal as adversarial examples, but under a considerably different threat model: image-scaling attacks are model-independent and require no knowledge of the learning model, features, or training data. They even remain effective if neural networks were robust against adversarial examples, as the downscaling can create a perfect image of the target class.

Root-cause

Our analysis reveals that scaling attacks are possible because many implementations do not consider all pixels of the source image equally when computing its scaled version. The adversary therefore only needs to modify the small set of pixels with high weights for downscaling and can leave the rest of the image untouched.

Consider the figure here, which depicts a one-dimensional scaling operation. A window is moved over the source signal s, and each pixel within the window is multiplied by the kernel weight at its position. The first pixel of the output image is computed from the third and fourth pixel of s, while the second output pixel is estimated only from the seventh pixel of s.

Only pixels close to the kernel's center receive a high weight, whereas all other pixels play a limited role in scaling. Because the step width exceeds the window width, some pixels are even ignored: only three out of nine pixels are considered for computing the scaled output. Consequently, the adversary only needs to modify the pixels with high weights to control the scaling result and can leave the rest of the image untouched. This strategy achieves both goals of the attack: modifying the few considered pixels produces the targeted output image after downscaling, while the attack image still visually matches the source image. The attack's success thus depends on the sparsity of pixels with high weight.
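This effect is easy to reproduce. The following small experiment is a sketch under the assumption that OpenCV's bilinear scaling is used without antialiasing; the nine-pixel signal and the 9-to-3 scaling are arbitrary choices and do not correspond exactly to the figure.

# Root-cause sketch: with a large scaling step and a narrow kernel, only a few
# source pixels determine the output.
import cv2
import numpy as np

src = np.arange(1, 10, dtype=np.float32).reshape(1, -1)   # 9-pixel source signal s
out = cv2.resize(src, (3, 1), interpolation=cv2.INTER_LINEAR)
print("scaled output:       ", out.ravel())

# Modify a pixel that receives zero weight for every output pixel:
# the scaled output stays exactly the same.
src_mod = src.copy()
src_mod[0, 0] = 255.0
out_mod = cv2.resize(src_mod, (3, 1), interpolation=cv2.INTER_LINEAR)
print("after changing s[0]: ", out_mod.ravel())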

More information

If you want to find out more about image-scaling attacks, our publicly available USENIX paper presents the attack in detail, its underlying root cause, and possible defenses.

Read The Paper

Below you can find more information about related work as well as the code to create your own attack examples or to test our defenses.

Publications

In the following, we present all relevant publications on the topic of image-scaling attacks and defenses.

This is the first paper about image-scaling attacks. The authors present the attack algorithm in detail and demonstrate with several examples that various scaling algorithms are vulnerable. Furthermore, they present a method to derive the scaling parameters of remote black-box systems (i.e., the algorithm and the target image size). This allows an attacker to perform image-scaling attacks without detailed knowledge of the target system.

Read The Paper

This work is the first comprehensive analysis of image-scaling attacks. Our paper addresses the following points:

Root-cause

We conduct the first in-depth analysis of image-scaling attacks and identify their root cause in theory and in practical implementations. Our work explains why image-scaling attacks are possible and allows developers to quickly check whether a scaling algorithm is vulnerable to these attacks.

Prevention defenses

We introduce defenses that prevent attacks from the very beginning. To this end, we derive requirements for secure scaling and use them to validate the robustness of existing algorithms. In addition, we propose two defenses that can be easily integrated into existing machine-learning workflows.

Evaluation

We empirically analyze the scaling algorithms of popular imaging libraries (OpenCV, Pillow, and TensorFlow) under attack. We demonstrate the effectiveness of our defenses against adversaries of different strengths (non-adaptive and adaptive attackers).

Read The Paper

This work extends our examination of image-scaling attacks and provides the first analysis of the combination of data poisoning and image-scaling attacks. All in all, the following key points are addressed:

Data poisoning

We provide the first analysis of data poisoning attacks combined with image-scaling attacks, considering backdoor attacks and clean-label poisoning attacks. Our results show that an adversary can conceal the trigger of a backdoor as well as the overlays of clean-label poisoning more effectively than before.

Detection defenses

We evaluate current detection methods for image-scaling attacks and show that they fail in the poisoning scenario.

Read The Paper

Code

The implementation is available in the following GitHub repository.

If you're using our code, please cite our USENIX paper. You may use the following BibTeX entry:

@INPROCEEDINGS{QuiKleArp+20,
  author = {Erwin Quiring and David Klein and Daniel Arp and Martin Johns and Konrad Rieck},
  title = {Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning},
  booktitle = {Proc. of USENIX Security Symposium},
  year = {2020},
}    

FAQ

Some questions you might have:

How can I protect my system against image-scaling attacks?

Based on our theoretical and empirical results, you have two options: either use a robust scaling algorithm or apply one of our image reconstruction methods. Both options prevent the attack without changing the workflow.

Robust scaling algorithms

Based on our root-cause analysis, we identify a few secure scaling implementations that withstand image-scaling attacks. First, you may use area scaling, which is available in many imaging libraries. Second, you can use Pillow's scaling algorithms (except Pillow's nearest scaling).
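As a minimal sketch of these two options (the file name and target size are placeholders):

# Minimal sketch of robust scaling choices discussed above.
import cv2
from PIL import Image

target_size = (224, 224)   # (width, height), a placeholder

# Option 1: OpenCV's area scaling averages over all pixels of each block.
img = cv2.imread("input.png")
scaled_area = cv2.resize(img, target_size, interpolation=cv2.INTER_AREA)

# Option 2: Pillow's bilinear/bicubic filters use a kernel that adapts to the
# scaling factor; avoid nearest scaling, which remains vulnerable.
scaled_pillow = Image.open("input.png").resize(target_size, Image.BICUBIC)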

Image reconstruction

We introduce a simple median-based filter that reconstructs the pixels manipulated by an image-scaling attack. The filter can easily be placed in front of any scaling algorithm and does not change the API of machine-learning pipelines. Compared to robust scaling algorithms, the filter has the advantage that it repairs the attack image, so that it again obtains the prediction of its actual source image.
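To make the idea concrete, here is a simplified sketch of the median-based reconstruction. It is not our reference implementation (see the repository linked below); the mask of high-weight pixels and the window size are assumptions for illustration.

# Simplified sketch: pixels that dominate the downscaling result are replaced
# by the median of their local neighborhood before the image is scaled.
import numpy as np

def median_reconstruct(img, high_weight_mask, window=2):
    """img: HxW or HxWxC array; high_weight_mask: boolean HxW array marking the
    pixels that dominate the downscaling result (derived from the scaling kernel).
    Each marked pixel is replaced by the median of its local window."""
    out = img.copy()
    h, w = img.shape[:2]
    for y, x in zip(*np.nonzero(high_weight_mask)):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        block = img[y0:y1, x0:x1].reshape(-1, *img.shape[2:])
        out[y, x] = np.median(block, axis=0)
    return out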

If the run-time overhead of a defense is an important criterion, we also examine a random filter as defense, which trades some visual quality for a lower runtime.

For more information, please look at our USENIX 2020 paper.

Are other media, such as audio or video, also vulnerable to scaling attacks?

Scaling attacks are possible whenever downsampling takes place. Thus, other media signals, such as audio or video, can also be vulnerable. In the context of audio, a low-pass filter is often applied that should prevent an audio-scaling attack. If no filter is applied and not all samples are processed equally (see the root cause of scaling attacks), an attack is likely possible. Whether audio or video systems are vulnerable to scaling attacks in practice is an interesting question for future work.
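As a sketch of this analogy (an illustration of the root cause, not an evaluated attack on a real audio system), naive subsampling uses only every q-th sample, whereas filtered decimation, e.g. with SciPy, lets all samples influence the output:

# Naive subsampling ignores most samples (same root cause as image scaling),
# while decimation applies a low-pass filter before downsampling.
import numpy as np
from scipy.signal import decimate

rng = np.random.default_rng(0)
signal = rng.standard_normal(8000)   # arbitrary toy signal
q = 4                                # downsampling factor

naive = signal[::q]                  # only every q-th sample is used
filtered = decimate(signal, q)       # low-pass filter + downsampling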

Is TensorFlow 2.0 still vulnerable to image-scaling attacks?

The short answer: yes, it is.

In our USENIX and DLS papers, we evaluated TensorFlow 1.13. In the meantime, TensorFlow 2.0 has been released. A quick analysis shows that image-scaling attacks are still possible with the default parameters. In particular, version 2.0 has introduced a new parameter, antialias:

antialias=False

This is the default value in tf.image.resize. In this case, the resize operation corresponds to the resize operation from TensorFlow 1.13 / 1.14. As a result, nearest, bilinear, and bicubic scaling remain vulnerable to image-scaling attacks.

antialias=True

In this case, TensorFlow scales images similarly to Pillow. Thus, bilinear and bicubic scaling are robust against scaling attacks. However, nearest scaling is still vulnerable, as the antialias parameter has no effect here. Note that setting this parameter to True changes the scaling output and may affect your neural network's performance.
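A minimal sketch of both settings with TensorFlow 2.x (the file name and target size are placeholders):

# Minimal sketch of the two settings discussed above.
import tensorflow as tf

image = tf.io.decode_image(tf.io.read_file("input.png"), channels=3)
image = tf.cast(image, tf.float32)

# Default: antialias=False behaves like TensorFlow 1.13/1.14, so nearest,
# bilinear, and bicubic scaling remain vulnerable.
vulnerable = tf.image.resize(image, [224, 224], method="bilinear",
                             antialias=False)

# antialias=True widens the kernel (similar to Pillow) and hardens bilinear
# and bicubic scaling; the flag has no effect on nearest scaling.
robust = tf.image.resize(image, [224, 224], method="bilinear", antialias=True)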

Contact

Erwin Quiring

Institute of System Security

David Klein

Institute for Application Security

Daniel Arp

Institute of System Security

Martin Johns

Institute for Application Security

Konrad Rieck

Institute of System Security