| Rank | Method | Standard accuracy | Robust accuracy | Extra data | Architecture | Venue |
|---|---|---|---|---|---|---|
| 1 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness (ensemble of networks, each pruned to 95% sparsity) | 96.56% | 92.78% | × | WideResNet-18-2 | NeurIPS 2021 |
| 2 | NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections | 96.73% | 92.78% | × | WideResNet-28-4 | arXiv, Feb 2022 |
| 3 | Defending Against Image Corruptions Through Adversarial Augmentations (uses extra data indirectly via super-resolution and autoencoder networks pre-trained on other datasets) | 94.93% | 92.17% | ✓ | ResNet-50 | arXiv, Apr 2021 |
| 4 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness (trained with AugMix and pruned to 95% sparsity) | 96.66% | 90.94% | × | WideResNet-18-2 | NeurIPS 2021 |
| 5 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness (ensemble of binary-weight networks, each pruned to 95% sparsity) | 95.09% | 90.15% | × | WideResNet-18-2 | NeurIPS 2021 |
| 6 | On the Effectiveness of Adversarial Training Against Common Corruptions (trained with RLAT and AugMix) | 94.75% | 89.60% | × | ResNet-18 | arXiv, Mar 2021 |
| 7 | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | 95.83% | 89.09% | × | ResNeXt29_32x4d | ICLR 2020 |
| 8 | PRIME: A Few Primitives Can Boost Robustness to Common Corruptions | 93.06% | 89.05% | × | ResNet-18 | arXiv, Dec 2021 |
| 9 | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | 95.08% | 88.82% | × | WideResNet-40-2 | ICLR 2020 |
| 10 | On the Effectiveness of Adversarial Training Against Common Corruptions (trained with RLAT and AugMix, without the JSD term) | 94.77% | 88.53% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 11 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness (binary-weight network trained with AugMix and pruned to 95% sparsity) | 94.87% | 88.32% | × | WideResNet-18-2 | NeurIPS 2021 |
| 12 | Fixing Data Augmentation to Improve Adversarial Robustness (trained for \(\ell_2\) robustness with \(\varepsilon = 0.5\)) | 95.74% | 88.23% | ✓ | WideResNet-70-16 | arXiv, Mar 2021 |
| 13 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples (trained for \(\ell_2\) robustness with \(\varepsilon = 0.5\)) | 94.74% | 87.68% | ✓ | WideResNet-70-16 | arXiv, Oct 2020 |
| 14 | On the Effectiveness of Adversarial Training Against Common Corruptions (trained with AugMix, without the JSD term) | 94.97% | 86.60% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 15 | On the Effectiveness of Adversarial Training Against Common Corruptions (trained with 50% Gaussian noise per batch; note that Gaussian noise is one of the CIFAR-10-C corruptions) | 93.24% | 85.04% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 16 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples (trained for \(\ell_2\) robustness with \(\varepsilon = 0.5\)) | 90.90% | 84.90% | × | WideResNet-70-16 | arXiv, Oct 2020 |
| 17 | On the Effectiveness of Adversarial Training Against Common Corruptions (trained with RLAT) | 93.10% | 84.10% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 18 | Fixing Data Augmentation to Improve Adversarial Robustness (trained for \(\ell_{\infty}\) robustness with \(\varepsilon = 8/255\)) | 92.23% | 82.82% | ✓ | WideResNet-70-16 | arXiv, Mar 2021 |
| 19 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples (trained for \(\ell_{\infty}\) robustness with \(\varepsilon = 8/255\)) | 91.10% | 81.84% | ✓ | WideResNet-70-16 | arXiv, Oct 2020 |
| 20 | Efficient and Effective Augmentation Strategy for Adversarial Training | 88.71% | 80.12% | × | WideResNet-34-10 | NeurIPS 2022 |
| 21 | Scaling Adversarial Training to Large Perturbation Bounds | 85.32% | 76.78% | × | WideResNet-34-10 | ECCV 2022 |
| 22 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples (trained for \(\ell_{\infty}\) robustness with \(\varepsilon = 8/255\)) | 85.29% | 76.37% | × | WideResNet-70-16 | arXiv, Oct 2020 |
| 23 | Standardly trained model | 94.78% | 73.46% | × | WideResNet-28-10 | N/A |
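For context, the robust-accuracy column on this kind of CIFAR-10-C leaderboard is conventionally the mean classification accuracy taken over all corruption types and all five severity levels. A minimal sketch of that averaging (the function name, variable names, and toy numbers below are illustrative, not from any of the cited codebases):

```python
import numpy as np

def corruption_robust_accuracy(accs):
    """Mean accuracy over corruption types and severity levels.

    accs maps a corruption name to a list of per-severity accuracies
    (severities 1..5), each in [0, 1]. The result averages first
    within each corruption, then across corruptions.
    """
    per_corruption = [np.mean(severity_accs) for severity_accs in accs.values()]
    return float(np.mean(per_corruption))

# Toy example with two corruption types (CIFAR-10-C has 15):
accs = {
    "gaussian_noise": [0.90, 0.85, 0.80, 0.75, 0.70],
    "fog":            [0.95, 0.93, 0.91, 0.89, 0.87],
}
print(corruption_robust_accuracy(accs))
```

Because every corruption here has the same number of severities, this is equivalent to a flat average over all per-severity accuracies.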