TSFool: Crafting Highly-Imperceptible Adversarial Time Series through Multi-Objective Gray-Box Attack

Abstract

Recent years have witnessed the success of recurrent neural network (RNN) models in time series classification (TSC). However, neural networks (NNs) are vulnerable to adversarial samples, which can lead to real-life adversarial attacks that undermine AI technologies. To date, most existing attacks target feed-forward NNs and image recognition tasks, and they do not perform well on RNN-based TSC. This is due to the cyclical computation of RNNs, which prevents direct model differentiation. In addition, the high visual sensitivity of time series data to perturbations poses challenges to the conventional local objective optimization of adversarial samples. In this paper, we propose a gray-box method called TSFool to efficiently craft highly-imperceptible adversarial time series for RNN-based TSC. We introduce a novel global optimization objective, the “Camouflage Coefficient,” to capture the imperceptibility of adversarial samples from the perspective of class distribution. Based on this, we reformulate the adversarial attack as a multi-objective optimization problem to enhance the perturbation quality. Furthermore, to speed up this optimization process, we also propose a representation model for RNNs that captures deeply embedded vulnerable samples whose features deviate from the latent manifold. Experiments on 11 UCR and UEA datasets show that TSFool significantly outperforms five white-box and black-box benchmark methods in terms of effectiveness and imperceptibility, as validated by real-world human studies.
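To give a concrete feel for the multi-objective trade-off described above, here is a minimal PyTorch sketch. It is not the TSFool algorithm itself: TSFool is gray-box and avoids differentiating through the RNN, whereas this sketch uses plain gradients for brevity. The `camouflage_term` below is a hypothetical stand-in for the paper's Camouflage Coefficient, and all model, function, and hyperparameter names are illustrative assumptions.

```python
# Illustrative sketch only -- NOT the authors' TSFool algorithm.
# The "camouflage" term is a hypothetical reading of imperceptibility
# "from the perspective of class distribution"; weights are assumptions.
import torch
import torch.nn as nn

class TinyRNNClassifier(nn.Module):
    """Toy LSTM classifier standing in for an RNN-based TSC model."""
    def __init__(self, n_features=1, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])            # logits: (batch, n_classes)

def camouflage_term(x_adv, centroid_src, centroid_tgt):
    """Hypothetical class-distribution proxy: the adversarial series is
    'better camouflaged' when it lies closer to the target-class centroid
    than to its source-class centroid."""
    d_tgt = torch.norm(x_adv - centroid_tgt)
    d_src = torch.norm(x_adv - centroid_src)
    return d_tgt / (d_src + 1e-8)

def multi_objective_attack(model, x, y_src, centroid_src, centroid_tgt,
                           steps=200, lr=1e-2, w_norm=1.0, w_cam=0.5):
    """Gradient loop trading off (1) misclassification, (2) perturbation
    size, and (3) the camouflage proxy above (a weighted-sum
    scalarization of the multi-objective problem)."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv = x + delta
        logits = model(x_adv)
        loss = (-ce(logits, y_src)                # push away from true class
                + w_norm * delta.norm()           # keep perturbation small
                + w_cam * camouflage_term(x_adv, centroid_src, centroid_tgt))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach()

# Toy usage on random data (shapes only; no real dataset involved).
model = TinyRNNClassifier()
x = torch.randn(1, 50, 1)                         # one univariate series
y = torch.tensor([0])
c_src, c_tgt = torch.randn(1, 50, 1), torch.randn(1, 50, 1)
x_adv = multi_objective_attack(model, x, y, c_src, c_tgt)
print("prediction after attack:", model(x_adv).argmax(dim=1).item())
```

The weighted-sum scalarization here is just one standard way to handle a multi-objective problem; the paper's actual formulation, and its manifold-based identification of vulnerable samples, are not reproduced in this sketch.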

Publication
arXiv preprint
Yanyun Wang
Research Assistant

My research interests include adversarial attacks, robust machine learning, and trustworthy AI.