Discrete adversarial attacks and submodular optimization with applications to text classification

Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros Dimakis, Inderjit Dhillon, Michael Witbrock

Abstract: Adversarial examples are carefully constructed modifications to an input that completely change a classifier's output yet are imperceptible to humans. Despite the success of such attacks on continuous data (such as image and audio samples), generating adversarial examples for discrete structures such as text has proven significantly more challenging. In this paper we formulate the attack on discrete inputs as the optimization of a set function. We prove that this set function is submodular for some popular neural network text classifiers under a simplifying assumption, which guarantees a (1 − 1/e) approximation factor for attacks that use the greedy algorithm. We also show how to use the gradient of the attacked classifier to guide the greedy search. Empirical studies with our proposed optimization scheme show significantly improved attack effectiveness and efficiency over various baselines on three different text classification tasks. In addition, we use a joint sentence and word paraphrasing technique to maintain the original semantics and syntax of the text, validated by a human-subject evaluation of the quality and semantic coherence of our generated adversarial text.
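
To make the greedy guarantee concrete: a set function F is submodular if F(S ∪ {v}) − F(S) ≥ F(T ∪ {v}) − F(T) whenever S ⊆ T, and for monotone submodular F the classic greedy algorithm achieves a (1 − 1/e) approximation to the best budget-constrained solution. The Python sketch below illustrates this greedy loop in the attack setting; it is not the paper's implementation, and the names (greedy_attack, the stand-in score F, the toy weights) are illustrative assumptions.

    # Minimal sketch (not the authors' code) of greedy maximization of a
    # monotone submodular set function F over candidate attack positions.
    # In the paper's setting, F would score the classifier's loss after
    # perturbing the chosen positions; here F is a hypothetical stand-in.

    def greedy_attack(candidates, F, budget):
        """Pick up to `budget` elements, each step adding the element with
        the largest marginal gain in F. For monotone submodular F this
        achieves a (1 - 1/e) approximation."""
        chosen = set()
        for _ in range(budget):
            base = F(chosen)
            best, best_gain = None, 0.0
            for c in candidates:
                if c in chosen:
                    continue
                gain = F(chosen | {c}) - base  # marginal gain of adding c
                if gain > best_gain:
                    best, best_gain = c, gain
            if best is None:  # no candidate with positive marginal gain
                break
            chosen.add(best)
        return chosen

    # Toy usage with a modular (hence submodular) scoring function:
    weights = {0: 0.5, 1: 0.1, 2: 0.9, 3: 0.3}
    F = lambda S: sum(weights[i] for i in S)
    print(greedy_attack(weights.keys(), F, budget=2))  # expected: {0, 2}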

Download: pdf

Citation

  • Discrete adversarial attacks and submodular optimization with applications to text classification (pdf, software)
    Q. Lei, L. Wu, P.-Y. Chen, A. Dimakis, I. Dhillon, M. Witbrock.
    In The Conference on Systems and Machine Learning (SysML), April 2019. (Oral)

    Bibtex:

        @inproceedings{lei2019discrete,
          title     = {Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification},
          author    = {Lei, Qi and Wu, Lingfei and Chen, Pin-Yu and Dimakis, Alexandros and Dhillon, Inderjit and Witbrock, Michael},
          booktitle = {The Conference on Systems and Machine Learning (SysML)},
          month     = {April},
          year      = {2019}
        }