ICLR 2022
Workshop paper

Robust Randomized Smoothing via Two Cost-Effective Approaches


Randomized smoothing has recently attracted attention in the field of adversarial robustness as a way to provide provable robustness guarantees for smoothed neural network classifiers. However, existing works show that vanilla randomized smoothing usually does not provide good robustness performance and often requires (re)training techniques on the base classifier in order to boost the robustness of the resulting smoothed classifier. In this work, we propose two cost-effective approaches to boost the robustness of randomized smoothing while preserving its standard performance. In the first approach, we propose a new robust training method, AdvMacer, which combines adversarial training with maximization of the robustness certificate for randomized smoothing. We show that AdvMacer improves the robustness performance of randomized smoothing classifiers compared to SOTA baselines. The second approach introduces a post-processing method named EsbRS, which greatly improves the robustness certificate based on model ensembles. We explore aspects of model ensembles that have not been studied by prior works and propose a mixed design strategy to further improve the robustness of the ensemble.
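For context, vanilla randomized smoothing classifies an input by taking a majority vote of the base classifier's predictions under Gaussian input noise. The sketch below illustrates this prediction rule only; the function names, the toy base classifier, and the parameter values are illustrative assumptions, not part of the paper's method.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=0):
    """Estimate the smoothed classifier g(x) = argmax_c P(f(x + eps) = c),
    eps ~ N(0, sigma^2 I), by a Monte Carlo majority vote over noisy copies."""
    rng = np.random.default_rng(seed)
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    # The winning class's vote share also determines the certified L2 radius
    # (sigma * Phi^{-1}(p_A) in the standard analysis), which the paper's
    # methods aim to enlarge.
    return max(counts, key=counts.get)

# Hypothetical toy base classifier: thresholds the first coordinate.
f = lambda z: int(z[0] > 0)
print(smoothed_predict(f, np.array([1.0, 0.0])))  # prints 1: noise rarely flips x[0] below 0
```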