Table 2. Spearman's rank correlation coefficients on CIFAR-10 between GREAT Score, RobustBench (evaluated on the test set), and AutoAttack (evaluated on generated samples).
| | Uncalibrated | Calibrated |
|---|---|---|
| GREAT Score vs. RobustBench Correlation | 0.6618 | 0.8971 |
| GREAT Score vs. AutoAttack Correlation | 0.3690 | 0.6941 |
| RobustBench vs. AutoAttack Correlation | 0.7296 | 0.7296 |
We compare the model rankings on CIFAR-10 produced by GREAT Score (evaluated with generated samples), RobustBench (evaluated with AutoAttack on the test set), and AutoAttack (evaluated with AutoAttack on generated samples).
Table 2 presents their pairwise rank correlations (a higher value means more aligned rankings) for both the uncalibrated and calibrated versions of GREAT Score.
We note an innate discrepancy in this comparison: the Spearman's rank correlation coefficient between RobustBench and AutoAttack is well below 1 (0.7296), which means AutoAttack itself gives inconsistent model rankings when evaluated on different data samples. In addition, GREAT Score measures the classification margin, while AutoAttack measures accuracy under a fixed perturbation budget ε, so AutoAttack's ranking changes with the choice of ε. For example, comparing the AutoAttack rankings at ε=0.3 and ε=0.7 on 10,000 CIFAR-10 test images, the Spearman's correlation is only 0.9485. Therefore, we argue that GREAT Score and AutoAttack are complementary evaluation metrics and need not match perfectly.
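To make the comparison concrete, the minimal sketch below computes Spearman's rank correlation between two per-model robustness evaluations using `scipy.stats.spearmanr`; the score arrays here are hypothetical placeholders, not the values underlying Table 2.

```python
from scipy.stats import spearmanr

# Hypothetical per-model robustness scores from two evaluation methods,
# aligned by model (index i corresponds to the same model in both lists).
great_scores = [0.42, 0.31, 0.55, 0.28, 0.47]    # e.g., GREAT Score margins
autoattack_acc = [0.61, 0.52, 0.66, 0.49, 0.58]  # e.g., robust accuracy at fixed eps

# spearmanr ranks each array internally, so raw scores can be passed directly;
# rho near 1 means the two methods rank the models similarly.
rho, p_value = spearmanr(great_scores, autoattack_acc)
print(f"Spearman's rho = {rho:.4f} (p = {p_value:.4f})")
```

The same computation, applied to rankings from AutoAttack at two different ε values, yields the 0.9485 correlation cited above.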
Despite this discrepancy, the uncalibrated correlation between GREAT Score and RobustBench (0.6618) is already comparable to that between RobustBench and AutoAttack (0.7296). With calibration, the rank correlations of GREAT Score with RobustBench and with AutoAttack improve significantly, to 0.8971 and 0.6941, respectively.