| Rank | Method | Clean accuracy | Corruptions accuracy | Extra data | Architecture | Venue |
|------|--------|----------------|----------------------|------------|--------------|-------|
| 1 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness | 96.56% | 92.78% | × | WideResNet-18-2 | NeurIPS 2021 |
| 2 | NoisyMix: Boosting Robustness by Combining Data Augmentations, Stability Training, and Noise Injections | 96.73% | 92.78% | × | WideResNet-28-4 | arXiv, Feb 2022 |
| 3 | Defending Against Image Corruptions Through Adversarial Augmentations | 94.93% | 92.17% | ☑ | ResNet-50 | arXiv, Apr 2021 |
| 4 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness | 96.66% | 90.94% | × | WideResNet-18-2 | NeurIPS 2021 |
| 5 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness | 95.09% | 90.15% | × | WideResNet-18-2 | NeurIPS 2021 |
| 6 | On the effectiveness of adversarial training against common corruptions | 94.75% | 89.60% | × | ResNet-18 | arXiv, Mar 2021 |
| 7 | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | 95.83% | 89.09% | × | ResNeXt29_32x4d | ICLR 2020 |
| 8 | PRIME: A Few Primitives Can Boost Robustness to Common Corruptions | 93.06% | 89.05% | × | ResNet-18 | arXiv, Dec 2021 |
| 9 | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | 95.08% | 88.82% | × | WideResNet-40-2 | ICLR 2020 |
| 10 | On the effectiveness of adversarial training against common corruptions | 94.77% | 88.53% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 11 | A Winning Hand: Compressing Deep Networks Can Improve Out-Of-Distribution Robustness | 94.87% | 88.32% | × | WideResNet-18-2 | NeurIPS 2021 |
| 12 | Fixing Data Augmentation to Improve Adversarial Robustness | 95.74% | 88.23% | ☑ | WideResNet-70-16 | arXiv, Mar 2021 |
| 13 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 94.74% | 87.68% | ☑ | WideResNet-70-16 | arXiv, Oct 2020 |
| 14 | On the effectiveness of adversarial training against common corruptions | 94.97% | 86.60% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 15 | On the effectiveness of adversarial training against common corruptions | 93.24% | 85.04% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 16 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 90.90% | 84.90% | × | WideResNet-70-16 | arXiv, Oct 2020 |
| 17 | On the effectiveness of adversarial training against common corruptions | 93.10% | 84.10% | × | PreActResNet-18 | arXiv, Mar 2021 |
| 18 | Fixing Data Augmentation to Improve Adversarial Robustness | 92.23% | 82.82% | ☑ | WideResNet-70-16 | arXiv, Mar 2021 |
| 19 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 91.10% | 81.84% | ☑ | WideResNet-70-16 | arXiv, Oct 2020 |
| 20 | Efficient and Effective Augmentation Strategy for Adversarial Training | 88.71% | 80.12% | × | WideResNet-34-10 | NeurIPS 2022 |
| 21 | Scaling Adversarial Training to Large Perturbation Bounds | 85.32% | 76.78% | × | WideResNet-34-10 | ECCV 2022 |
| 22 | Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples | 85.29% | 76.37% | × | WideResNet-70-16 | arXiv, Oct 2020 |
| 23 | Standardly trained model | 94.78% | 73.46% | × | WideResNet-28-10 | N/A |