| Rank | Method | Standard accuracy | AutoAttack robust accuracy | Best known robust accuracy | AA eval. potentially unreliable | Extra data | Architecture | Venue |
|---|---|---|---|---|---|---|---|---|
| 1 | Better Diffusion Models Further Improve Adversarial Training | 75.22% | 42.67% | 42.66% | × | × | WideResNet-70-16 | ICML 2023 |
| 2 | MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification | 75.13% | 44.78% | 42.25% | × | ☑ | MeanSparse WideResNet-70-16 | arXiv, Jun 2024 |
| 3 | MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers | 83.08% | 41.91% | 41.80% | × | ☑ | ResNet-152 + WideResNet-70-16 | TMLR, Aug 2024 |
| 4 | Decoupled Kullback-Leibler Divergence Loss | 73.85% | 39.18% | 39.18% | × | × | WideResNet-28-10 | NeurIPS 2024 |
| 5 | Better Diffusion Models Further Improve Adversarial Training | 72.58% | 38.83% | 38.77% | × | × | WideResNet-28-10 | ICML 2023 |
| 6 | Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing | 85.21% | 38.72% | 38.72% | × | ☑ | ResNet-152 + WideResNet-70-16 + mixing network | SIMODS 2024 |
| 7 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 69.15% | 36.88% | 36.88% | × | ☑ | WideResNet-70-16 | arXiv, Oct 2020 |
| 8 | Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing | 80.18% | 35.15% | 35.15% | × | ☑ | ResNet-152 + WideResNet-70-16 + mixing network | SIMODS 2024 |
| 9 | A Light Recipe to Train Robust Vision Transformers | 70.76% | 35.08% | 35.08% | × | ☑ | XCiT-L12 | arXiv, Sep 2022 |
| 10 | Fixing Data Augmentation to Improve Adversarial Robustness | 63.56% | 34.64% | 34.64% | × | × | WideResNet-70-16 | arXiv, Mar 2021 |
| 11 | A Light Recipe to Train Robust Vision Transformers | 69.21% | 34.21% | 34.21% | × | ☑ | XCiT-M12 | arXiv, Sep 2022 |
| 12 | Robustness and Accuracy Could Be Reconcilable by (Proper) Definition | 65.56% | 33.05% | 33.05% | × | × | WideResNet-70-16 | ICML 2022 |
| 13 | Decoupled Kullback-Leibler Divergence Loss | 65.93% | 32.52% | 32.52% | × | × | WideResNet-34-10 | NeurIPS 2024 |
| 14 | A Light Recipe to Train Robust Vision Transformers | 67.34% | 32.19% | 32.19% | × | ☑ | XCiT-S12 | arXiv, Sep 2022 |
| 15 | Fixing Data Augmentation to Improve Adversarial Robustness | 62.41% | 32.06% | 32.06% | × | × | WideResNet-28-10 | arXiv, Mar 2021 |
| 16 | Decoupled Kullback-Leibler Divergence Loss | 65.76% | 31.91% | 31.91% | × | × | WideResNet-34-10 | NeurIPS 2024 |
| 17 | LAS-AT: Adversarial Training with Learnable Attack Strategy | 67.31% | 31.91% | 31.91% | × | × | WideResNet-34-20 | arXiv, Mar 2022 |
| 18 | Efficient and Effective Augmentation Strategy for Adversarial Training | 68.75% | 31.85% | 31.85% | × | × | WideResNet-34-10 | NeurIPS 2022 |
| 19 | Learnable Boundary Guided Adversarial Training | 62.99% | 31.20% | 31.20% | × | × | WideResNet-34-10 | ICCV 2021 |
| 20 | Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? | 65.93% | 31.15% | 31.15% | × | × | WideResNet-34-10 | ICLR 2022 |
| 21 | Data Filtering for Efficient Adversarial Training | 64.32% | 31.13% | 31.13% | × | × | WideResNet-34-10 | Pattern Recognition 2024 |
| 22 | Robustness and Accuracy Could Be Reconcilable by (Proper) Definition | 63.66% | 31.08% | 31.08% | × | × | WideResNet-28-10 | ICML 2022 |
| 23 | LAS-AT: Adversarial Training with Learnable Attack Strategy | 64.89% | 30.77% | 30.77% | × | × | WideResNet-34-10 | arXiv, Mar 2022 |
| 24 | LTD: Low Temperature Distillation for Robust Adversarial Training | 64.07% | 30.59% | 30.59% | × | × | WideResNet-34-10 | arXiv, Nov 2021 |
| 25 | Scaling Adversarial Training to Large Perturbation Bounds | 65.73% | 30.35% | 30.35% | × | × | WideResNet-34-10 | ECCV 2022 |
| 26 | Learnable Boundary Guided Adversarial Training | 62.55% | 30.20% | 30.20% | Unknown | × | WideResNet-34-20 | ICCV 2021 |
| 27 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 60.86% | 30.03% | 30.03% | × | × | WideResNet-70-16 | arXiv, Oct 2020 |
| 28 | Learnable Boundary Guided Adversarial Training | 60.64% | 29.33% | 29.33% | Unknown | × | WideResNet-34-10 | ICCV 2021 |
| 29 | Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off | 61.50% | 28.88% | 28.88% | × | × | PreActResNet-18 | OpenReview, Jun 2021 |
| 30 | Adversarial Weight Perturbation Helps Robust Generalization | 60.38% | 28.86% | 28.86% | × | × | WideResNet-34-10 | NeurIPS 2020 |
| 31 | Fixing Data Augmentation to Improve Adversarial Robustness | 56.87% | 28.50% | 28.50% | × | × | PreActResNet-18 | arXiv, Mar 2021 |
| 32 | Using Pre-Training Can Improve Model Robustness and Uncertainty | 59.23% | 28.42% | 28.42% | Unknown | ☑ | WideResNet-28-10 | ICML 2019 |
| 33 | Efficient and Effective Augmentation Strategy for Adversarial Training | 65.45% | 27.67% | 27.67% | × | × | ResNet-18 | NeurIPS 2022 |
| 34 | Learnable Boundary Guided Adversarial Training | 70.25% | 27.16% | 27.16% | × | × | WideResNet-34-10 | ICCV 2021 |
| 35 | Scaling Adversarial Training to Large Perturbation Bounds | 62.02% | 27.14% | 27.14% | × | × | PreActResNet-18 | ECCV 2022 |
| 36 | Efficient Robust Training via Backward Smoothing | 62.15% | 26.94% | 26.94% | Unknown | × | WideResNet-34-10 | arXiv, Oct 2020 |
| 37 | Improving Adversarial Robustness Through Progressive Hardening | 62.82% | 24.57% | 24.57% | Unknown | × | WideResNet-34-10 | arXiv, Mar 2020 |
| 38 | Overfitting in adversarially robust deep learning | 53.83% | 18.95% | 18.95% | Unknown | × | PreActResNet-18 | ICML 2020 |