Adversarial Attacks and Defences

Published: 09 Oct 2015 Category: deep_learning

Papers

Intriguing properties of neural networks

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images

Explaining and Harnessing Adversarial Examples
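
This paper introduces the fast gradient sign method (FGSM), the canonical one-step attack. A minimal PyTorch sketch, assuming a classifier `model`, integer labels `y`, pixels in [0, 1], and an L-infinity budget `eps`:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One signed-gradient step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    # Move every pixel by +/- eps in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```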

Distributional Smoothing with Virtual Adversarial Training
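
Virtual adversarial training perturbs each input in the direction that most changes the model's own output distribution, so it needs no labels. A sketch of the VAT loss, assuming NCHW inputs (the `xi`/`eps` values and the single power-iteration step are assumptions):

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """KL(p(y|x) || p(y|x + r)) for the most KL-increasing direction r,
    found by power iteration from a random start."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        d.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                      reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()
    r_adv = eps * d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p,
                    reduction="batchmean")
```

The VAT term is added to the ordinary classification loss as a regularizer.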

Confusing Deep Convolution Networks by Relabelling

Exploring the Space of Adversarial Images

Learning with a Strong Adversary

Adversarial examples in the physical world

DeepFool: a simple and accurate method to fool deep neural networks
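
DeepFool repeatedly linearizes the classifier around the current point and takes the smallest step that crosses the nearest decision boundary, yielding near-minimal perturbations. A PyTorch sketch for a single image (batch size 1 and a small `num_classes` are assumptions):

```python
import torch

def deepfool(model, x, num_classes=10, overshoot=0.02, max_iter=50):
    """Step toward the nearest boundary of the linearized model until the
    predicted label flips."""
    x_adv = x.clone().detach()
    orig = model(x_adv).argmax(1).item()
    for _ in range(max_iter):
        x_adv.requires_grad_(True)
        logits = model(x_adv)[0]
        if logits.argmax().item() != orig:
            break
        grads = torch.stack([
            torch.autograd.grad(logits[k], x_adv, retain_graph=True)[0]
            for k in range(num_classes)])
        w = grads - grads[orig]                # boundary normals
        f = (logits - logits[orig]).detach()   # logit gaps
        norms = w.flatten(1).norm(dim=1) + 1e-8
        dist = f.abs() / norms
        dist[orig] = float("inf")              # ignore the current class
        k = dist.argmin()
        # Smallest step (plus a little overshoot) crossing the k-th boundary.
        r = (f[k].abs() / norms[k] ** 2) * w[k]
        x_adv = (x_adv + (1 + overshoot) * r).detach()
    return x_adv
```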

Adversarial Autoencoders

Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization

(Deep Learning’s Deep Flaws)’s Deep Flaws (By Zachary Chase Lipton)

Deep Learning Adversarial Examples – Clarifying Misconceptions

Adversarial Machines: Fooling A.Is (and turn everyone into a Manga)

How to trick a neural network into thinking a panda is a vulture

Assessing Threat of Adversarial Examples on Deep Neural Networks

Safety Verification of Deep Neural Networks

Adversarial Machine Learning at Scale

Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks

https://arxiv.org/abs/1704.01155
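
Feature squeezing detects adversarial inputs by comparing the model's prediction on the raw image against its predictions on "squeezed" copies (reduced bit depth, local smoothing): clean images barely change, while adversarial ones often disagree sharply. A sketch for NCHW tensors in [0, 1] (the squeezer settings and detection threshold are assumptions):

```python
import torch
import torch.nn.functional as F
from scipy.ndimage import median_filter

def reduce_bit_depth(x, bits=4):
    """Quantize pixels in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def median_smooth(x, size=2):
    """Median filter over the spatial dims only."""
    out = median_filter(x.detach().cpu().numpy(), size=(1, 1, size, size))
    return torch.from_numpy(out)

def squeeze_score(model, x):
    """Largest L1 gap between predictions on raw and squeezed inputs;
    scores above a tuned threshold flag x as adversarial."""
    p = F.softmax(model(x), dim=1)
    gaps = [(p - F.softmax(model(s), dim=1)).abs().sum(dim=1)
            for s in (reduce_bit_depth(x), median_smooth(x))]
    return torch.stack(gaps).max(dim=0).values
```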

Parseval Networks: Improving Robustness to Adversarial Examples
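
Parseval networks keep each layer's weight matrix close to a Parseval tight frame (rows roughly orthonormal), holding every layer's Lipschitz constant near 1 so small input perturbations cannot grow through the network. A sketch of the retraction applied after each optimizer step (flattening conv kernels to 2-D matrices is an assumption):

```python
import torch

def parseval_tighten(weight, beta=3e-4):
    """One retraction step: W <- (1 + beta) W - beta (W W^T) W."""
    with torch.no_grad():
        W = weight.flatten(1)   # view conv kernels as (out, in*k*k) matrices
        W.copy_((1 + beta) * W - beta * (W @ W.t()) @ W)
```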

Towards Deep Learning Models Resistant to Adversarial Attacks
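
Madry et al. cast robustness as a min-max problem and use projected gradient descent (PGD) as the inner maximizer; training on the resulting examples is the defense. A sketch of the attack (step sizes and the [0, 1] range are assumptions); adversarial training simply replaces each clean batch with `pgd(model, x, y)`:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated FGSM with a random start, projected onto the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the L-infinity ball around x, then to valid pixels.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv
```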

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

One pixel attack for fooling deep neural networks
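
The one-pixel attack is fully black-box: it searches a single pixel's coordinates and color with differential evolution, needing only the model's class probabilities. A SciPy sketch (the `predict_proba` callable and HWC images in [0, 1] are assumptions):

```python
import numpy as np
from scipy.optimize import differential_evolution

def one_pixel_attack(predict_proba, image, true_label, max_iter=75):
    """Minimize the true-class probability over (row, col, r, g, b)."""
    h, w, _ = image.shape

    def apply(z):
        row, col, r, g, b = z
        out = image.copy()
        out[int(row), int(col)] = (r, g, b)
        return out

    def objective(z):
        return predict_proba(apply(z))[true_label]

    bounds = [(0, h - 1), (0, w - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds, maxiter=max_iter,
                                    popsize=10, seed=0)
    return apply(result.x)
```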

Enhanced Attacks on Defensively Distilled Deep Neural Networks

https://arxiv.org/abs/1711.05934

Adversarial Attacks Beyond the Image Space

https://arxiv.org/abs/1711.07183

On the Robustness of Semantic Segmentation Models to Adversarial Attacks

https://arxiv.org/abs/1711.09856

Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser

https://arxiv.org/abs/1712.02976

A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

https://arxiv.org/abs/1712.02779
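
The point here is that no pixel-level noise is needed: a small rotation plus a shift of a few pixels already fools many CNNs. A brute-force sketch over a coarse grid of transforms (batch size 1, an integer label `y`, and the search ranges are assumptions):

```python
import torchvision.transforms.functional as TF

def spatial_attack(model, x, y, max_angle=30, max_shift=3):
    """Return the first rotation/translation of x that is misclassified."""
    for angle in range(-max_angle, max_angle + 1, 5):
        for dx in range(-max_shift, max_shift + 1):
            for dy in range(-max_shift, max_shift + 1):
                x_t = TF.affine(x, angle=float(angle), translate=[dx, dy],
                                scale=1.0, shear=[0.0])
                if model(x_t).argmax(1).item() != y:
                    return x_t
    return None   # model is robust on this grid
```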

Training Ensembles to Detect Adversarial Examples

https://arxiv.org/abs/1712.04006

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
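
The boundary attack introduced here uses only the model's final decision, with no scores or gradients: start from any point that is already misclassified and take small steps along the decision boundary toward the original image. A simplified NumPy sketch with fixed step sizes (the paper adapts `delta` and `eps` online; `is_adversarial` is an assumed decision oracle):

```python
import numpy as np

def boundary_attack(is_adversarial, x, x_start, steps=2000,
                    delta=0.05, eps=0.05):
    """Walk along the boundary toward x, keeping every accepted point
    adversarial while shrinking its distance to x."""
    adv = x_start.copy()           # must already be adversarial
    for _ in range(steps):
        d = np.linalg.norm(x - adv)
        # Random orthogonal-ish step, projected back to the sphere around x.
        noise = np.random.normal(size=x.shape)
        noise *= delta * d / (np.linalg.norm(noise) + 1e-12)
        cand = adv + noise
        cand = x + (cand - x) * d / (np.linalg.norm(cand - x) + 1e-12)
        # Contract toward the original image.
        cand = cand + eps * (x - cand)
        if is_adversarial(cand):
            adv = cand
    return adv
```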

Where Classification Fails, Interpretation Rises

Query-Efficient Black-box Adversarial Examples

https://arxiv.org/abs/1712.07113
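
The idea is to estimate the gradient from a limited number of black-box queries using natural evolution strategies (NES) with antithetic Gaussian samples, then run a standard iterative attack on the estimate. A sketch of the estimator (`loss_fn` is an assumed query function returning a scalar loss):

```python
import torch

def nes_gradient(loss_fn, x, sigma=1e-3, n_samples=50):
    """Antithetic finite-difference estimate of grad loss_fn(x),
    costing 2 * n_samples queries."""
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        u = torch.randn_like(x)
        grad += (loss_fn(x + sigma * u) - loss_fn(x - sigma * u)) * u
    return grad / (2 * sigma * n_samples)
```

The estimate then stands in for the true gradient in a PGD-style loop.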

Adversarial Examples: Attacks and Defenses for Deep Learning

Wolf in Sheep’s Clothing - The Downscaling Attack Against Deep Learning Applications

https://arxiv.org/abs/1712.07805

Note on Attacking Object Detectors with Adversarial Stickers

Awesome Adversarial Examples for Deep Learning

https://github.com/chbrian/awesome-adversarial-examples-dl

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

https://arxiv.org/abs/1712.05526
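
The attack poisons a small fraction of the training set: stamp a fixed trigger pattern onto some images and relabel them as the target class, so the trained model fires on the trigger while clean accuracy is largely unaffected. A toy NumPy sketch (trigger shape, position, and poisoning rate are assumptions):

```python
import numpy as np

def poison(images, labels, target_label, rate=0.05, seed=0):
    """Stamp a 4x4 white square on `rate` of the images and relabel them."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), int(rate * len(images)), replace=False)
    images[idx, -4:, -4:] = 1.0      # trigger in the bottom-right corner
    labels[idx] = target_label
    return images, labels
```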

Exploring the Space of Black-box Attacks on Deep Neural Networks

https://arxiv.org/abs/1712.09491

Adversarial Patch

https://arxiv.org/abs/1712.09665
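
An adversarial patch is a universal, visible perturbation: it is optimized over many images and random placements so that pasting it anywhere pushes the classifier toward a chosen target class. A simplified PyTorch sketch (the paper additionally optimizes over rotations and scales; `loader`, `target`, and the sizes here are assumptions):

```python
import torch
import torch.nn.functional as F

def train_patch(model, loader, target, size=50, steps=1000, lr=1.0):
    """Optimize one patch to maximize p(target) at random locations."""
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _, (x, _) in zip(range(steps), loader):
        # Paste the patch at a random position in every image of the batch.
        i = torch.randint(0, x.shape[2] - size, (1,)).item()
        j = torch.randint(0, x.shape[3] - size, (1,)).item()
        x_p = x.clone()
        x_p[:, :, i:i + size, j:j + size] = patch.clamp(0, 1)
        y = torch.full((x.shape[0],), target, dtype=torch.long)
        loss = F.cross_entropy(model(x_p), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```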

Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

https://arxiv.org/abs/1801.00553

Spatially transformed adversarial examples

https://arxiv.org/abs/1801.02612
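
Instead of adding noise, stAdv optimizes a per-pixel flow field: every adversarial pixel is sampled from a slightly displaced location in the original image, so the change is geometric rather than additive. A sketch built on `grid_sample` (the paper regularizes the flow with a total-variation term; a plain L2 penalty is used here for brevity):

```python
import torch
import torch.nn.functional as F

def st_adv(model, x, y, steps=200, lr=0.01, tau=0.05):
    """Untargeted attack that learns a small sampling-flow field."""
    n, _, h, w = x.shape
    # Identity sampling grid in [-1, 1], shape (N, H, W, 2), (x, y) order.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    flow = torch.zeros_like(base, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        x_adv = F.grid_sample(x, base + flow, align_corners=True)
        # Raise the true-class loss while keeping the flow small.
        loss = -F.cross_entropy(model(x_adv), y) + tau * flow.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.grid_sample(x, base + flow, align_corners=True).detach()
```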

Generating adversarial examples with adversarial networks

Adversarial Spheres

LaVAN: Localized and Visible Adversarial Noise

Adversarial Examples that Fool both Human and Computer Vision

On the Suitability of Lp-norms for Creating and Preventing Adversarial Examples

https://arxiv.org/abs/1802.09653

Protecting JPEG Images Against Adversarial Attacks

Sparse Adversarial Perturbations for Videos

https://arxiv.org/abs/1803.02536

DeepDefense: Training Deep Neural Networks with Improved Robustness

Improving Transferability of Adversarial Examples with Input Diversity

Adversarial Attacks and Defences Competition

Semantic Adversarial Examples

https://arxiv.org/abs/1804.00499

Generating Natural Adversarial Examples

An ADMM-Based Universal Framework for Adversarial Attacks on Deep Neural Networks

  • intro: Northeastern University & MIT-IBM Watson AI Lab & IBM Research AI
  • keywords: Deep Neural Networks; Adversarial Attacks; ADMM (Alternating Direction Method of Multipliers)
  • arxiv: https://arxiv.org/abs/1804.03193

On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

https://arxiv.org/abs/1804.03286

VectorDefense: Vectorization as a Defense to Adversarial Examples

On the Limitation of MagNet Defense against L1-based Adversarial Examples

https://arxiv.org/abs/1805.00310