Nemesis ’18

1st Workshop on
Recent Advances in
Adversarial Machine Learning

co-located with ECML/PKDD 2018

Friday, September 14, 2018

Dublin, Ireland


Adversarial attacks on Machine Learning systems have become an indisputable threat. Attackers can compromise the training of Machine Learning models by injecting malicious data into the training set (so-called poisoning attacks), or they can craft adversarial samples that exploit the blind spots of Machine Learning models at test time (so-called evasion attacks). Adversarial attacks have been demonstrated in a number of application domains, including malware detection, spam filtering, visual recognition, speech-to-text conversion and natural language understanding. Devising comprehensive defences against poisoning and evasion attacks by adaptive adversaries is still an open challenge. Gaining a better understanding of the threat posed by adversarial attacks, and developing more effective defence systems and methods, is therefore paramount for the adoption of Machine Learning systems in security-critical real-world applications.
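Evasion attacks of the kind described above are often illustrated with gradient-based perturbations such as the fast gradient sign method. The following minimal numpy sketch (an illustrative toy example on a hand-picked linear classifier, not any specific system discussed at the workshop) shows how a small, sign-based perturbation of the input can flip a model's decision at test time:

```python
import numpy as np

# Toy linear classifier: predicts class 1 if w @ x + b > 0.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# Clean input, classified as class 1 (score = 0.8).
x = np.array([0.9, 0.2, 0.4])

# FGSM-style evasion: step each feature against the sign of the
# gradient of the score w.r.t. the input. For a linear model that
# gradient is simply w.
eps = 0.5
x_adv = x - eps * np.sign(w)  # score drops to -0.95, decision flips

print(predict(x), predict(x_adv))  # 1 0
```

The perturbation budget `eps` bounds the per-feature change; for image classifiers the same idea is applied to pixel values, where the change can be visually imperceptible while still flipping the prediction.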

The Nemesis ’18 tutorial and workshop aim to bring together researchers and practitioners to discuss recent advances in the rapidly evolving field of Adversarial Machine Learning. Particular emphasis will be on:

  • Reviewing both theoretical and practical aspects of Adversarial Machine Learning;
  • Sharing experience from Adversarial Machine Learning in various business applications, including (but not limited to): malware detection, spam filtering, visual recognition, speech-to-text conversion and natural language understanding;
  • Discussing adversarial attacks both from a Machine Learning and Security/Privacy perspective;
  • Gaining hands-on experience with the latest tools for researchers and developers working on Adversarial Machine Learning;
  • Identifying strategic areas for future research in Adversarial Machine Learning, with a clear focus on how that will advance the security of real-world Machine Learning applications against various adversarial threats.


Workshop chair

Program committee chairs

Program committee

  • Naveed Akhtar, University of Western Australia
  • Pin-Yu Chen, IBM Research
  • David Evans, University of Virginia
  • Alhussein Fawzi, DeepMind
  • Kathrin Grosse, CISPA, Saarland Informatics Campus
  • Tianyu Gu, Uber ATG
  • Aleksander Madry, MIT
  • Jan Hendrik Metzen, Bosch Center for AI
  • Luis Muñoz-González, Imperial College London
  • Florian Tramèr, Stanford University
  • Valentina Zantedeschi, Jean Monnet University
  • Xiangyu Zhang, Purdue University

Call for Papers

There is an exploding body of literature on Adversarial Machine Learning; however, several key questions remain unanswered:

  • What is the reason for the existence of adversarial examples and their transferability between different Machine Learning models?
  • How can the space of adversarial examples be characterized, in particular, relative to the data manifold and learned representations of the data?
  • Are there provable limitations of the robustness guarantees that adversarial defences can provide, in particular in the case of white-box attacks or adaptive adversaries?
  • How strong is the adversarial threat for data modalities other than images, e.g., text or speech?
  • How to design defences that address threats from combinations of poisoning and evasion attacks?


Important Dates

  • Paper submission deadline: Monday, July 16, 2018 (extended from Monday, July 2, 2018)
  • Notification of acceptance: Monday, Aug 6, 2018 (extended from Monday, July 23, 2018)
  • Camera-ready version due: Monday, Sep 3, 2018 (extended from Monday, Aug 27, 2018)
  • Workshop date: Friday, Sep 14, 2018

Topics of Interest

The workshop will solicit contributions including (but not limited to) the following topics:

Theory of adversarial machine learning

  • Space of adversarial examples
  • Transferability
  • Learning theory
  • Data privacy
  • Metrics of adversarial robustness

Adversarial attacks

  • Data poisoning
  • Evasion
  • Model theft
  • Attacks for different data modalities, in particular text / natural language understanding
  • Attacks by adaptive adversaries

Adversarial defences

  • Defences against data poisoning
  • Defences against evasion
  • Defences against model theft
  • Model hardening
  • Input data preprocessing
  • Robust model architectures
  • Defences against adaptive adversaries

Applications and demonstrations

  • Real-world examples and use cases of adversarial threats and defences against those

Submission Format


The workshop invites two types of submissions: full research papers and extended abstracts. Accepted full research papers will be published by Springer in the workshop proceedings. Extended abstracts are meant to cover preliminary research ideas and results. Submissions will be evaluated on the basis of significance, originality, technical quality and clarity. Only work that has not been previously published and is not currently under review elsewhere will be considered.

Papers must be written in English and formatted according to the Springer LNCS guidelines. Author instructions, style files and the copyright form can be downloaded here. Full research papers must be up to ten pages long (excluding references); extended abstracts must be up to six pages long (excluding references). To be considered, papers must be submitted before the deadline (see the Important Dates section). Electronic submissions will be handled via EasyChair. Submissions should include the authors’ names and affiliations, as the review process is single-blind. For each accepted paper, at least one author must attend the workshop and present the paper.



Tutorial

The objective of the tutorial is to provide a comprehensive introduction to Adversarial Machine Learning. The first part gives a general overview of the field and formalizes the main threat vectors, each exemplified with specific attacks and defences. The second part provides hands-on experience with the recently released Adversarial Robustness Toolbox, an open-source Python library containing state-of-the-art adversarial attacks, defences and metrics for assessing the robustness of Machine Learning models. The tutorial targets both newcomers to Adversarial Machine Learning and experienced researchers and developers who are familiar with similar tools and with workflows for developing, testing and deploying defences against adversarial attacks.
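One of the threat vectors formalized in the tutorial is data poisoning via label flipping, which also appears in the workshop program. The following minimal numpy sketch (an illustrative toy example, not taken from the tutorial material or the Adversarial Robustness Toolbox) shows how flipping a single training label near the decision boundary changes a 1-nearest-neighbour classifier's prediction:

```python
import numpy as np

# Tiny 1-D training set: class 0 on the left, class 1 on the right.
X_train = np.array([0.0, 1.0, 2.0, 4.0, 5.0, 6.0])
y_train = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(X_train, y_train, x):
    """1-nearest-neighbour prediction for a scalar input x."""
    return y_train[np.argmin(np.abs(X_train - x))]

x_test = 3.8  # true class: 1, nearest training point is 4.0
clean_pred = knn_predict(X_train, y_train, x_test)  # → 1

# Label-flipping poisoning: the attacker flips the label of one
# training point close to the decision boundary.
y_pois = y_train.copy()
y_pois[3] = 0  # point 4.0 now carries the wrong label
pois_pred = knn_predict(X_train, y_pois, x_test)    # → 0

print(clean_pred, pois_pred)  # 1 0
```

Because nearest-neighbour predictions depend directly on individual training labels, a single well-placed flip suffices here; label-sanitization defences such as the one presented at the workshop aim to detect and correct such suspicious labels before training.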

Tutorial slides


Time Program Presenter
09:00–09:05 Welcome to the tutorial Organizers
09:05–10:40 Overview of Adversarial Machine Learning (Part 1) Ian Molloy
10:40–11:00 Coffee break
11:00–12:00 Overview of Adversarial Machine Learning (Part 2) Mathieu Sinn
12:00–13:00 Hands-on session with the Adversarial Robustness Toolbox Irina Nicolae
13:00–14:00 Lunch
14:00–14:05 Welcome to the workshop Organizers
14:05–15:05 Keynote presentation Battista Biggio, Pattern Recognition and Applications (PRA) Lab
15:10–15:40 “Label Sanitization against Label Flipping Poisoning Attacks” [PDF] Andrea Paudice, Luis Muñoz-González and Emil C. Lupu
15:40–16:00 Coffee break
16:00–16:30 “Limitations of the Lipschitz Constant as a Defense Against Adversarial Examples” [PDF] Todd Huster, Cho-Yu Jason Chiang and Ritu Chadha
16:30–17:00 “Detecting Potential Local Adversarial Examples for Human-Interpretable Defense” [PDF] Xavier Renard, Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala and Marcin Detyniecki
17:00–17:30 “Understanding Adversarial Space through the Lens of Attribution” [PDF] Mayank Singh, Nupur Kumari, Abhishek Sinha and Balaji Krishnamurthy
17:30–17:35 Closing remarks Organizers

Venue and Registration


This workshop is co-located with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2018).

For information about the venue, please visit the ECML/PKDD 2018 website.

All participants need to register. Information about registration and fees can be found here.