IBM at AAAI 2023

  • Washington, D.C., USA and virtual
This event has ended.

About

IBM is proud to sponsor AAAI 2023. We invite all attendees to visit us during the event in booth 119 at the Walter E. Washington Convention Center in Washington, D.C.

We look forward to meeting you at the event and telling you more about our latest work and career opportunities at IBM Research. Our team will be presenting a series of workshops, papers and demos related to a broad range of AI topics such as foundation models, trustworthy AI, natural language processing and understanding, knowledge and reasoning, AI automation, human-centered AI, and federated learning.

Visit us in the Exhibit Hall at Booth 119. View the booth demo schedule here: https://ibm.biz/AAAI23_BoothDemos

For presentation times of workshops, demos, papers, and tutorials, see the agenda section below. (Note: all times are displayed in your local time.)

Why attend

Join conversations on machine learning best practices, attend education tutorials, and participate in workshops. Meet with IBM recruiting and hiring managers about future job opportunities or 2023 summer internships.

Explore all current IBM Research job openings.

We look forward to meeting and seeing you in Washington, D.C.!

Stay connected with us for career opportunities: https://ibm.biz/connectwithus

Agenda

  • The goal of this tutorial is to elucidate the novel connections between algorithmic fairness and the rich literature on adversarial machine learning. Compared to other tutorials on AI fairness, this tutorial will emphasize the connection between recent advances in fair learning and the adversarial robustness literature. The presented techniques will cover a complete fairness pipeline: auditing ML models for fairness violations, post-processing them to rapidly alleviate bias, and re-training or fine-tuning models to achieve algorithmic fairness. Each topic will be presented in the context of adversarial ML, specifically: (i) connections between fair similarity metrics for individual fairness and the adversarial attack radius, (ii) auditing as an adversarial attack, (iii) fair learning as adversarial training, and (iv) distributionally robust optimization for group fairness. We will conclude with (v) a summary of the most recent advances in adversarial ML and their potential applications in algorithmic fairness.

    The tutorial is designed for a broad audience, including researchers, students, developers, and industry practitioners. Basic knowledge of machine learning and deep learning is preferred but not required. All topics will be supported with relevant background and demonstrations on a variety of real-world data use cases using Python libraries for fair machine learning; a minimal illustrative sketch of the auditing idea appears after the speaker list below.

    Mikhail Yurochkin (IBM); Yuekai Sun; Pin-Yu Chen (IBM)
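
    As one illustration of idea (ii), here is a minimal, self-contained sketch, not taken from the tutorial materials, of auditing a classifier for individual-fairness violations by treating a flip of the sensitive attribute as a zero-cost perturbation under a (highly simplified) fair metric. The data, model, and metric are all illustrative assumptions:

        # Auditing as an adversarial attack (illustrative sketch, assumed setup):
        # a change in the sensitive attribute has distance zero under the fair
        # metric, so we search for prediction flips under that perturbation.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Synthetic data: column 0 is a binary sensitive attribute, columns 1-3
        # are ordinary features; the label deliberately leaks the sensitive bit.
        n = 2000
        sensitive = rng.integers(0, 2, size=n)
        features = rng.normal(size=(n, 3))
        y = (features[:, 0] + 0.8 * sensitive + 0.1 * rng.normal(size=n) > 0.5).astype(int)
        X = np.column_stack([sensitive, features])

        model = LogisticRegression().fit(X, y)

        # Adversarial audit: flip the sensitive attribute (zero fair distance)
        # and count how often the model's prediction changes.
        X_adv = X.copy()
        X_adv[:, 0] = 1 - X_adv[:, 0]
        flip_rate = np.mean(model.predict(X) != model.predict(X_adv))
        print(f"Individual-fairness violation rate found by the audit: {flip_rate:.2%}")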

  • The AI4BPM Bridge at AAAI 2023 brings together academics and industry professionals working at the intersection of artificial intelligence and business process management under the same roof. The event will include invited talks, poster sessions, tutorials, student outreach, meet and mingle opportunities, hands-on system demonstrations, and much more!

    Tathagata Chakraborti (IBM); Vatche Isahagian (IBM); Andrea Marrella; Chiara Di Francescomarino; Jung koo Kang (IBM); Yara Rizk (IBM)

  • Asset Health and Monitoring is an emerging AI application area that aims to deliver efficient AI-powered solutions to industrial problems such as anomaly detection and failure pattern analysis. In this lab-based tutorial, we present a web-based time series anomaly detection tool: a new scikit-learn compatible toolkit specialized for time series anomaly detection. The key focus of our tutorial includes the design and development of an anomaly detection pipeline, a zero-configuration interface for automated discovery of an anomaly detection pipeline for any given dataset (univariate or multivariate), a set of five frequently used workflows empirically derived from past experience, and a scalable technique for efficient pipeline execution. We have extensively tested the deployed anomaly detection services using multiple datasets with varying time-series characteristics. A minimal pipeline sketch follows the speaker line below.

    Dhaval Patel (IBM)
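
    The toolkit's own API is not described here, so the following minimal sketch uses plain scikit-learn to illustrate the general shape of such a pipeline: a univariate series is cut into sliding windows and each window is scored by an IsolationForest. All names, data, and parameters are illustrative assumptions, not the tutorial's code:

        # Sketch of a windowed time series anomaly detection pipeline
        # (assumed setup; not the toolkit presented in the tutorial).
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)

        # Univariate series with an injected anomaly around t = 500.
        t = np.arange(1000)
        series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)
        series[500:505] += 3.0

        # Turn the series into overlapping fixed-length windows, a common first
        # step in time series anomaly detection pipelines.
        window = 20
        windows = np.lib.stride_tricks.sliding_window_view(series, window)

        pipeline = make_pipeline(StandardScaler(), IsolationForest(random_state=0))
        pipeline.fit(windows)
        scores = pipeline.decision_function(windows)  # lower = more anomalous

        print("Most anomalous window starts at t =", int(np.argmin(scores)))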

  • We propose a natural language (NL) paradigm and platform for constructing business automation rules, incorporating a constrained natural language (CNL): a domain-specific, highly consumable language for validating and reviewing synthesized code. Our approach utilizes LLMs to translate business rules described in NL into CNL for human review, which can then be transpiled into the business automation code of the rule engine. To address challenges in NL-to-CNL translation, we utilize several techniques such as constrained decoding, fine-tuning, and prompt engineering. A minimal sketch of grammar-constrained decoding follows the speaker list below.

    Michael Desmond (IBM); Vatche Isahagian (IBM); Vinod Muthusamy (IBM); Evelyn Duesterwald (IBM)
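
    The demo's actual grammar, models, and decoding code are not shown here; the following self-contained sketch illustrates the general idea of constrained decoding with a toy CNL grammar and a random stand-in for the LLM's next-token scores. Every name, token, and rule in it is an illustrative assumption:

        # Grammar-constrained decoding (toy sketch, assumed grammar and vocab):
        # at each step only tokens allowed by the CNL grammar can be emitted,
        # so the output is reviewable rule text such as
        # "if amount greater_than 1000 then escalate".
        import numpy as np

        VOCAB = ["if", "then", "amount", "priority", "greater_than", "less_than",
                 "1000", "5", "escalate", "approve"]

        # A tiny state machine standing in for the CNL grammar: state -> allowed tokens.
        GRAMMAR = {
            "start":    {"if"},
            "if":       {"amount", "priority"},
            "field":    {"greater_than", "less_than"},
            "operator": {"1000", "5"},
            "value":    {"then"},
            "then":     {"escalate", "approve"},
        }
        NEXT_STATE = {"if": "if", "amount": "field", "priority": "field",
                      "greater_than": "operator", "less_than": "operator",
                      "1000": "value", "5": "value", "then": "then",
                      "escalate": "end", "approve": "end"}

        def fake_llm_logits(prefix):
            # Stand-in for an LLM conditioned on the NL rule; random scores here.
            rng = np.random.default_rng(len(prefix))
            return rng.normal(size=len(VOCAB))

        def constrained_decode():
            state, tokens = "start", []
            while state != "end":
                logits = fake_llm_logits(tokens)
                allowed = GRAMMAR[state]
                # Mask out every token the grammar does not allow in this state.
                masked = [l if tok in allowed else -np.inf for tok, l in zip(VOCAB, logits)]
                tok = VOCAB[int(np.argmax(masked))]
                tokens.append(tok)
                state = NEXT_STATE[tok]
            return " ".join(tokens)

        print(constrained_decode())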
