| Rank | Method | Standard accuracy | AutoAttack robust accuracy | Best known robust accuracy | AA eval. potentially unreliable | Extra data | Architecture | Venue |
|---|---|---|---|---|---|---|---|---|
| 1 | MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness | 78.62% | 59.68% | 59.68% | × | × | Swin-L | arXiv, Dec 2023 |
| 2 | A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking | 78.92% | 59.56% | 59.56% | × | × | Swin-L | arXiv, Feb 2023 |
| 3 | MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification. Adds the MeanSparse operator to the adversarially trained model Liu2023Comprehensive_Swin-L; the 58.92% best known robust accuracy comes from APGD (both versions) with BPDA. | 78.80% | 62.12% | 58.92% | × | × | MeanSparse Swin-L | arXiv, Jun 2024 |
| 4 | MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers. Uses an ensemble of networks; the accurate base classifier was pre-trained on ImageNet-21k. The 58.50% best known robust accuracy comes from the original evaluation (Adaptive AutoAttack). | 81.48% | 58.62% | 58.50% | × | ✓ | ConvNeXtV2-L + Swin-L | TMLR, Aug 2024 |
| 5 | A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking | 78.02% | 58.48% | 58.48% | × | × | ConvNeXt-L | arXiv, Feb 2023 |
| 6 | MeanSparse: Post-Training Robustness Enhancement Through Mean-Centered Feature Sparsification. Adds the MeanSparse operator to the adversarially trained model Liu2023Comprehensive_ConvNeXt-L; the 58.22% best known robust accuracy comes from APGD (both versions) with BPDA. | 77.92% | 59.64% | 58.22% | × | × | MeanSparse ConvNeXt-L | arXiv, Jun 2024 |
| 7 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 77.00% | 57.70% | 57.70% | × | × | ConvNeXt-L + ConvStem | NeurIPS 2023 |
| 8 | A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking | 76.16% | 56.16% | 56.16% | × | × | Swin-B | arXiv, Feb 2023 |
| 9 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 75.90% | 56.14% | 56.14% | × | × | ConvNeXt-B + ConvStem | NeurIPS 2023 |
| 10 | MIMIR: Masked Image Modeling for Mutual Information-based Adversarial Robustness | 76.62% | 55.90% | 55.90% | × | × | Swin-B | arXiv, Dec 2023 |
| 11 | A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking | 76.02% | 55.82% | 55.82% | × | × | ConvNeXt-B | arXiv, Feb 2023 |
| 12 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 76.30% | 54.66% | 54.66% | × | × | ViT-B + ConvStem | NeurIPS 2023 |
| 13 | Characterizing Model Robustness via Natural Input Gradients | 79.36% | 53.82% | 53.82% | × | × | Swin-L | arXiv, Sep 2024 |
| 14 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 74.10% | 52.42% | 52.42% | × | × | ConvNeXt-S + ConvStem | NeurIPS 2023 |
| 15 | Characterizing Model Robustness via Natural Input Gradients | 77.76% | 51.56% | 51.56% | × | × | Swin-B | arXiv, Sep 2024 |
| 16 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 72.72% | 49.46% | 49.46% | × | × | ConvNeXt-T + ConvStem | NeurIPS 2023 |
| 17 | Robust Principles: Architectural Design Principles for Adversarially Robust CNNs | 73.44% | 48.94% | 48.94% | × | × | RaWideResNet-101-2 | BMVC 2023 |
| 18 | Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threat Models | 72.56% | 48.08% | 48.08% | × | × | ViT-S + ConvStem | NeurIPS 2023 |
| 19 | A Light Recipe to Train Robust Vision Transformers | 73.76% | 47.60% | 47.60% | × | × | XCiT-L12 | arXiv, Sep 2022 |
| 20 | A Light Recipe to Train Robust Vision Transformers | 74.04% | 45.24% | 45.24% | × | × | XCiT-M12 | arXiv, Sep 2022 |
| 21 | A Light Recipe to Train Robust Vision Transformers | 72.34% | 41.78% | 41.78% | × | × | XCiT-S12 | arXiv, Sep 2022 |
| 22 | Data filtering for efficient adversarial training | 68.76% | 40.60% | 40.60% | × | × | WideResNet-50-2 | Pattern Recognition 2024 |
| 23 | When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture | 74.66% | 38.30% | 38.30% | × | × | Swin-B | NeurIPS 2022 |
| 24 | Do Adversarially Robust ImageNet Models Transfer Better? | 68.46% | 38.14% | 38.14% | × | × | WideResNet-50-2 | NeurIPS 2020 |
| 25 | Do Adversarially Robust ImageNet Models Transfer Better? | 64.02% | 34.96% | 34.96% | × | × | ResNet-50 | NeurIPS 2020 |
| 26 | When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture | 68.38% | 34.40% | 34.40% | × | × | ViT-B | NeurIPS 2022 |
| 27 | Robustness library | 62.56% | 29.22% | 29.22% | × | × | ResNet-50 | GitHub, Oct 2019 |
| 28 | Fast is better than free: Revisiting adversarial training. Focuses on fast adversarial training. | 55.62% | 26.24% | 26.24% | × | × | ResNet-50 | ICLR 2020 |
| 29 | Do Adversarially Robust ImageNet Models Transfer Better? | 52.92% | 25.32% | 25.32% | × | × | ResNet-18 | NeurIPS 2020 |
| 30 | Standardly trained model | 76.52% | 0.0% | 0.0% | × | × | ResNet-50 | N/A |
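
The AutoAttack column can be spot-checked for any entry that ships in the RobustBench model zoo. The snippet below is a minimal sketch, assuming this is the ImageNet L∞ (ε = 4/255) leaderboard, that the `robustbench` package is installed, and that a local ImageNet validation split is available; the model id `Liu2023Comprehensive_Swin-L` is taken from the note in row 3, while the sample count and data path are illustrative placeholders.

```python
# Minimal sketch (not the leaderboard's official pipeline): load one zoo entry
# and evaluate it with standard AutoAttack via robustbench.
# Assumptions: ImageNet L-inf leaderboard at eps = 4/255; n_examples and
# data_dir below are illustrative, not the official evaluation settings.
import torch
from robustbench.utils import load_model
from robustbench.eval import benchmark

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Downloads the checkpoint from the RobustBench model zoo on first use.
model = load_model(
    model_name="Liu2023Comprehensive_Swin-L",  # id from the row-3 note
    dataset="imagenet",
    threat_model="Linf",
).to(device).eval()

# Clean accuracy plus standard AutoAttack (APGD-CE, APGD-T, FAB-T, Square).
clean_acc, robust_acc = benchmark(
    model,
    n_examples=256,               # small subset for a quick sanity check
    dataset="imagenet",
    threat_model="Linf",
    eps=4 / 255,
    device=device,
    data_dir="/path/to/imagenet", # ImageNet val images in ImageFolder layout
)
print(f"clean: {clean_acc:.2%}  robust (AutoAttack): {robust_acc:.2%}")
```

Because this runs on a small illustrative subset rather than the fixed evaluation set used for the leaderboard, the printed numbers should be expected to differ from the table by a few points.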