kittykg closed this issue 2 years ago
The class-level attribute aggregation process (which reduces the total number of attributes used from 312 to 112) is described in the paper. Further code and data processing instructions can be found in the README: https://github.com/yewsiang/ConceptBottleneck/blob/master/CUB/README.md. In particular, CUB/generate_new_data.py is another file that you should consider running, besides CUB/data_processing.py. Hope that helps!
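For intuition, here is a minimal sketch of how class-level attribute aggregation could work in principle: majority-vote the instance-level binary attribute labels within each class, then drop attributes that are rarely active across classes. The array shapes, the synthetic inputs, and the min_class_count threshold are illustrative assumptions, not the repository's actual code; see the paper and CUB/data_processing.py for the real procedure.

```python
import numpy as np

# Hypothetical stand-in data: instance-level binary attribute labels
# (n_images x 312) and a class id per image; CUB-200-2011 has 200 classes.
n_images, n_classes, n_attrs = 1000, 200, 312
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=(n_images, n_attrs))
classes = np.arange(n_images) % n_classes  # guarantee every class has images

# Step 1: majority vote within each class -> class-level attribute matrix.
class_level = np.zeros((n_classes, n_attrs), dtype=int)
for c in range(n_classes):
    votes = labels[classes == c]
    class_level[c] = (votes.mean(axis=0) >= 0.5).astype(int)

# Step 2: keep only attributes active in at least min_class_count classes
# (an assumed threshold; in the paper, filtering yields 112 attributes).
min_class_count = 10
keep = np.where(class_level.sum(axis=0) >= min_class_count)[0]
filtered = class_level[:, keep]
print(filtered.shape)  # (200, number of surviving attributes)
```

With this kind of filtering, every image of a given class shares the same (class-level) concept vector, which is why the processed data ends up with fewer attribute entries than the raw 312.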
Yeah, I checked the appendix; I missed it when reading the paper. I found the 112 attributes in the code now, thanks a lot! :)
Hi guys, I'm looking at the code and I'm curious why n_attribute is set to 112 for the Inception model. The CUB_processed data uploaded on Codalab only contains 112 elements in attribute_label. However, the CUB-200-2011 dataset contains 312 attributes, and looking at CUB/data_processing.py, there seems to be no attribute selection process. Could you please explain the difference between the number of attributes used by the Inception model and the number in the dataset? Thanks a lot!