Trustworthy-ML-Lab / Label-free-CBM

A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled concept data

the results for cifar10 and cifar100 #3

Closed Adasunnylily closed 11 months ago

Adasunnylily commented 11 months ago

Hi, I used the commands in train_command.txt to reproduce the results, but the numbers I got are far higher than those reported in the paper. I am wondering if there is a mistake somewhere — could you please provide the parameters you used to produce the results in the paper? Here are my results on cifar100 compared to the paper:

| method | dataset | accuracy (%) |
| -- | -- | -- |
| paper-standard-sparse | cifar100 | 58.34 |
| my-standard-sparse | cifar100 | 73.68 |
| paper-LFCBM | cifar100 | 65.13 (65-65,24) |
| my-LFCBM | cifar100 | 77.34 |

MANY THANKS!!!

tuomaso commented 11 months ago

Interesting — what training command did you use? training_commands.txt has all the commands needed to reproduce our results, assuming you haven't changed the default values in the code.

Adasunnylily commented 11 months ago

Ahh, I found my mistake: I was using ViT-B/16 as the backbone instead of RN50. With RN50 I can now reproduce the correct results. Thanks a lot!