On the Scalability of Certified Adversarial Robustness with Generated Data

Altstidl TR, Dobre D, Kosmala A, Eskofier B, Gidel G, Schwinn L (2024)


Publication Language: English

Publication Type: Conference contribution, Original article

Publication year: 2024

Series: Advances in Neural Information Processing Systems

Book Volume: 37

Pages Range: 102255-102278

Conference Proceedings Title: Advances in Neural Information Processing Systems 37

Event location: Vancouver, BC, Canada

URI: https://proceedings.neurips.cc/paper_files/paper/2024/file/b96ce7d38339874a8704e8895f743284-Paper-Conference.pdf

Open Access Link: https://proceedings.neurips.cc/paper_files/paper/2024/file/b96ce7d38339874a8704e8895f743284-Paper-Conference.pdf

Abstract

Certified defenses against adversarial attacks offer formal guarantees on the robustness of a model, making them more reliable than empirical methods such as adversarial training, whose effectiveness is often later reduced by unseen attacks. Still, the limited certified robustness that is currently achievable has been a bottleneck for their practical adoption. Gowal et al. and Wang et al. have shown that generating additional training data using state-of-the-art diffusion models can considerably improve the robustness of adversarial training. In this work, we demonstrate that a similar approach can substantially improve deterministic certified defenses, but we also reveal notable differences in the scaling behavior between certified and empirical methods. In addition, we provide a list of recommendations for scaling the robustness of certified training approaches. Our approach achieves state-of-the-art deterministic robustness certificates on CIFAR-10 for the ℓ2 (ε = 36/255) and ℓ∞ (ε = 8/255) threat models, outperforming the previous results in both cases. Furthermore, we report similar improvements for CIFAR-100.


How to cite

APA:

Altstidl, T.R., Dobre, D., Kosmala, A., Eskofier, B., Gidel, G., & Schwinn, L. (2024). On the Scalability of Certified Adversarial Robustness with Generated Data. In Advances in Neural Information Processing Systems 37 (pp. 102255-102278). Vancouver, BC, Canada.

MLA:

Altstidl, Thomas Robert, et al. "On the Scalability of Certified Adversarial Robustness with Generated Data." Advances in Neural Information Processing Systems 37, Vancouver, BC, Canada, 2024, pp. 102255-102278.
