
Deep Scanner

A tale of adversarial attacks & out-of-distribution detection stories in the activation space

Overview

Most deep learning models are built under ideal conditions and rely on the assumption that test and production data are drawn from the same distribution as the training data. However, most real-world settings do not follow this pattern: test data can differ from the training data because of adversarial perturbations, new classes, generated content, noise, or other distribution shifts. These shifts can lead a model to classify unknown types (classes that never appear during training) as known classes with high confidence. Additionally, adversarial perturbations in the input can cause a sample to be misclassified. In this project, we discuss group-based and individual subset scanning methods from the anomalous pattern detection domain and how they can be applied to off-the-shelf deep learning models.
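
Below is a minimal, hypothetical sketch of the individual subset scanning idea described above, not the project's actual implementation: it turns one sample's layer activations into empirical p-values against a set of clean background activations, then scores the most anomalous subset of nodes with a Berk-Jones non-parametric scan statistic, using the linear-time subset scanning (LTSS) property so only sorted prefix subsets need to be evaluated. The function names and the choice of scan statistic are illustrative assumptions.

```python
import numpy as np

def empirical_pvalues(test_activations, background_activations):
    """One-sided empirical p-values per node: the fraction of clean background
    activations that are at least as large as the test sample's activation."""
    n_bg = background_activations.shape[0]
    exceed = (background_activations >= test_activations).sum(axis=0)
    return (exceed + 1.0) / (n_bg + 1.0)

def berk_jones(n_alpha, n, alpha):
    """Berk-Jones scan statistic: n * KL(n_alpha/n || alpha), Bernoulli KL divergence."""
    obs = n_alpha / n
    if obs <= alpha:  # no excess of small p-values, so no anomaly signal
        return 0.0
    score = obs * np.log(obs / alpha)
    if obs < 1.0:
        score += (1.0 - obs) * np.log((1.0 - obs) / (1.0 - alpha))
    return n * score

def individual_subset_score(pvalues, alpha_max=0.5):
    """Scan over subsets of activation nodes for a single sample. By the LTSS
    property, the highest-scoring subset at a threshold alpha is the set of
    nodes whose p-values fall at or below alpha, so it suffices to sort the
    p-values and sweep candidate thresholds."""
    sorted_p = np.sort(pvalues)
    best = 0.0
    for k, alpha in enumerate(sorted_p, start=1):
        if alpha > alpha_max:
            break
        # subset = the k nodes with the smallest p-values, all of them <= alpha
        best = max(best, berk_jones(k, k, alpha))
    return best  # larger score => activations look more anomalous (adversarial / OOD)

# Toy usage with random data standing in for real layer activations.
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 128))      # clean activations: 500 samples x 128 nodes
clean_test = rng.normal(size=128)
shifted_test = rng.normal(loc=1.5, size=128)  # a sample with systematically shifted activations

p_clean = empirical_pvalues(clean_test, background)
p_shift = empirical_pvalues(shifted_test, background)
print(individual_subset_score(p_clean), individual_subset_score(p_shift))
# The shifted sample should receive a noticeably higher anomalousness score.
```

A group-based variant would scan jointly over subsets of samples and nodes rather than over a single sample's nodes, which is how the group setting mentioned above differs from the individual one.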


Invited talks

  • (English) Foundational Robustness of Foundation Models Tutorial @ NeurIPS 2022 - Panel Discussion - 🗓 December 2022.
  • (Español) PyData Panamá - Invited talk: Outlier detection in deep learning models - 🗓 November 2022.
  • (English) ODSC West 2022 - Invited talk: A Tale of Adversarial Attacks & Out-of-Distribution Detection Stories in the Activation Space - November 2022.
  • (English) LatinX in CV Workshop at ECCV 2022 - Keynote: Towards novelty characterization of creative processes in the activation space of generative models - October 2022.
  • (Español) SciPy Latinamerica 2022 - Keynote: Towards more robust and fair machine learning models - September 2022.
  • (English) AdvML Frontiers @ ICML 2022 - Keynote: A tale of adversarial attacks & out-of-distribution detection stories in the activation space - July 2022.
  • (English) New Frontiers Workshop Series on Generative AI (WEITA) - Invited talk: Towards novelty characterization of creative processes via pattern detection in the activation space of generative models - July 2022.
  • (English) IEEE Women in Engineering International Leadership Conference 2022 - Towards Fairness & Robustness in Machine Learning for Dermatology - June 2022.
  • (English) 3rd MICCAI Workshop on Domain Adaptation and Representation Transfer (DART) - Keynote: Towards Fairness & Robustness in Machine Learning for Dermatology.
  • (Español) FemIT 2021 - Keynote: Once upon a time, in a galaxy far, far away: adversarial attacks and outlier detection in deep learning systems.
  • (English) LatinX in AI (LXAI) Research at ICML 2021 - Keynote: A tale of adversarial attacks & out-of-distribution detection stories.
  • (English) Data Science For Social Good (DSSGx UK) 2021 - Invited talk: Towards Fairness & Robustness in Machine Learning for Dermatology.
  • (English) TrustML Series - A tale of adversarial attacks & out-of-distribution detection stories, 2021.