Data and AI Security
As organizations move to the hybrid cloud, they must protect sensitive data and comply with regulations while still taking advantage of AI. We’re designing systems to monitor and protect data, and building trust in AI through robust evaluation, certification, and hardening against attacks.
Our work
An open-source toolkit for debugging AI models of all data types
Technical note by Kevin Eykholt and Taesung Lee
- Adversarial Robustness and Privacy
- AI Testing
- Data and AI Security
Simplifying cloud security policies with AI
Technical note by Julian Stephen and Shriti Priya
- Cloud Security
- Data and AI Security
- Security
Protect yourself from bad neighbors in the cloud
Technical note by Michael V. Le, Salman Ahmed, Dan Williams, and Hani Jamjoom
- Data and AI Security
- Security
AI diffusion models can be tricked into generating manipulated images
News by Kim Martineau
- AI
- AI Testing
- Data and AI Security
- Foundation Models
- Generative AI
- Security
Saška Mojsilović wants to channel AI for good. She may also make you rethink sour cabbage
News by Kim Martineau
- AI Transparency
- Data and AI Security
- Explainable AI
- Fairness, Accountability, Transparency
- Machine Learning
What is synthetic data?
Explainer by Kim Martineau
- AI
- Data and AI Security
- Machine Learning
See more of our work on Data and AI Security
Publications
- Ehud Aharoni, Nir Drucker, et al. CCS 2023.
- Daniel Gibert, Giulio Zizzo, et al. CCS 2023.
- Pradip Bose, Jennifer Dworak, et al. MICRO 2023.
- Ehud Aharoni, Moran Baruch, et al. ESORICS 2023.
- Leonard Dervishi, Wenbiao Li, et al. ISMB 2023.
- Ehud Aharoni, Allon Adir, et al. PETS 2023.
IBM Solution: IBM Cloud Pak for Data
Our research is regularly incorporated into new security features for IBM Cloud Pak for Data.
Tools + code
ART: Adversarial Robustness Toolbox
A Python library for machine learning security that enables developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
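A minimal sketch of how the toolbox is typically used, assuming a scikit-learn logistic regression on synthetic data; the dataset, model, and eps value are placeholders chosen only to keep the example self-contained and runnable.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder data and model, standing in for a real application.
X, y = make_classification(n_samples=500, n_features=20, n_informative=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the trained model so ART attacks and defenses can query it.
classifier = SklearnClassifier(model=model)

# Craft evasion examples with the Fast Gradient Method.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print(f"accuracy on clean data:       {model.score(X, y):.2f}")
print(f"accuracy on adversarial data: {model.score(X_adv, y):.2f}")
```

The drop in accuracy on the perturbed inputs is the signal such an evaluation is after; the same wrapper plugs into the library's poisoning, extraction, and inference attacks and its defenses.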
View project →

AI Privacy 360
Tools to support the assessment of privacy risks of AI-based solutions and to help those solutions adhere to relevant privacy requirements. Tradeoffs between privacy, accuracy, and performance can be explored at different stages of the machine learning lifecycle.
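One concrete example of the kind of risk such an assessment looks at is re-identification through quasi-identifiers. The sketch below is plain Python rather than the AI Privacy 360 tooling itself, and the toy records and column names are invented for illustration; it computes the k-anonymity of a small table.

```python
from collections import Counter

# Toy records and quasi-identifier columns, invented for this illustration.
records = [
    {"age": 34, "zip": "10001", "diagnosis": "flu"},
    {"age": 34, "zip": "10001", "diagnosis": "asthma"},
    {"age": 51, "zip": "10002", "diagnosis": "flu"},
    {"age": 51, "zip": "10002", "diagnosis": "diabetes"},
    {"age": 29, "zip": "10003", "diagnosis": "flu"},
]
quasi_identifiers = ("age", "zip")

def k_anonymity(rows, qi_columns):
    """Smallest group size when rows are grouped by their quasi-identifier values."""
    groups = Counter(tuple(row[c] for c in qi_columns) for row in rows)
    return min(groups.values())

k = k_anonymity(records, quasi_identifiers)
print(f"dataset is {k}-anonymous over {quasi_identifiers}")
# Here k == 1: the last record is unique on (age, zip), so that person is re-identifiable.
```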
View project →

Diffprivlib: The IBM Differential Privacy Library
A general-purpose library for experimenting with, investigating, and developing applications in differential privacy.
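A minimal sketch of the library's scikit-learn-style interface, using the bundled Iris dataset; the privacy budget of epsilon = 1.0 and the Laplace-mechanism parameters are illustrative choices, not recommendations.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

from diffprivlib.models import GaussianNB
from diffprivlib.mechanisms import Laplace

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a naive Bayes classifier under epsilon-differential privacy.
# Bounds are passed explicitly so no privacy budget is spent inferring them from the data.
clf = GaussianNB(epsilon=1.0, bounds=(X.min(axis=0), X.max(axis=0)))
clf.fit(X_train, y_train)
print("test accuracy at epsilon=1.0:", clf.score(X_test, y_test))

# The underlying mechanisms are also exposed directly, e.g. adding Laplace noise
# to a single count query with sensitivity 1.
mech = Laplace(epsilon=0.5, sensitivity=1)
print("noisy count:", mech.randomise(42))
```

Sweeping epsilon and recording accuracy is the usual way to explore the privacy-utility tradeoff with models like this one.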
View project →

IBM Federated Learning - Community Edition
A Python framework for federated learning in an enterprise environment.
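To show the coordination pattern such a framework implements at enterprise scale, here is a minimal federated-averaging (FedAvg) sketch in plain NumPy; it illustrates the concept only and does not use the IBM Federated Learning APIs.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A party refines the global linear-regression weights on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """The aggregator combines party updates, weighted by local dataset size."""
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, dtype=float))

# Three parties with private data that never leaves them (synthetic for this sketch).
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])
parties = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    parties.append((X, X @ true_w + rng.normal(scale=0.1, size=100)))

# Each round: broadcast the global model, collect local updates, average them.
global_w = np.zeros(3)
for _ in range(20):
    updates = [local_update(global_w, X, y) for X, y in parties]
    global_w = federated_average(updates, [len(y) for _, y in parties])

print("recovered weights:", np.round(global_w, 2))  # close to [2.0, -1.0, 0.5]
```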
View project →

HELayers – Community Edition
SDKs for computing on encrypted data without decrypting it, delivered as a Docker container. They provide C++ and Python APIs and include Jupyter Notebook and VS Code IDEs with demonstrations, tutorials, and documentation for AI/ML and encrypted-search applications. Linux, Intel, macOS, and s390x platforms are supported.
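As a conceptual illustration of computing on encrypted data, the sketch below implements textbook Paillier encryption with tiny, insecure parameters so two ciphertexts can be added without decrypting the inputs; it is not the HELayers API and is nowhere near production-grade cryptography.

```python
import math
import random

# Toy primes; real homomorphic-encryption keys are vastly larger.
p, q = 61, 53
n, n_sq = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)   # Carmichael function of n
mu = pow(lam, -1, n)           # precomputed decryption constant (generator g = n + 1)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (1 + m * n) * pow(r, n, n_sq) % n_sq   # g^m * r^n mod n^2

def decrypt(c):
    return (pow(c, lam, n_sq) - 1) // n * mu % n

def add_encrypted(c1, c2):
    """Multiplying Paillier ciphertexts adds the underlying plaintexts (mod n)."""
    return c1 * c2 % n_sq

c = add_encrypted(encrypt(12), encrypt(30))
print("decrypted sum:", decrypt(c))   # 42, computed without decrypting the inputs
```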
View project →