Jingkang50 / OpenOOD

Benchmarking Generalized Out-of-Distribution Detection
MIT License

CIDER in this benchmark isn't the same as in the original paper? #164

Closed Afleve closed 1 year ago

Afleve commented 1 year ago

I found that the dataloader for CIDER lacks the TwoCropTransform. With `data = torch.cat([data[0], data[1]], dim=0).cuda()`, a batch of shape `[batchSize, 3, 32, 32]` yields `data.shape = [6, 32, 32]`. In theory, the loader should provide two views of each batch, i.e. `[2, batchSize, 3, 32, 32]`, so the concatenation gives `data.shape = [2 * batchSize, 3, 32, 32]`. Is this an error in the code?
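A minimal sketch of the two cases described above (the tensor contents are dummy; only the shapes matter):

```python
import torch

batch_size = 4

# Without TwoCropTransform: the loader yields a single tensor per batch.
data = torch.randn(batch_size, 3, 32, 32)
# data[0] and data[1] are individual images of shape [3, 32, 32],
# so concatenating them along dim=0 gives [6, 32, 32].
wrong = torch.cat([data[0], data[1]], dim=0)
print(wrong.shape)  # torch.Size([6, 32, 32])

# With TwoCropTransform: the loader yields a list of two augmented views,
# each of shape [batch_size, 3, 32, 32].
data = [torch.randn(batch_size, 3, 32, 32), torch.randn(batch_size, 3, 32, 32)]
right = torch.cat([data[0], data[1]], dim=0)
print(right.shape)  # torch.Size([8, 3, 32, 32]), i.e. [2 * batch_size, 3, 32, 32]
```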

zjysteven commented 1 year ago

Good catch! We did borrow the TwoCropTransform here in `cider_preprocessor`, but somehow the training wasn't using it (it was still using the `base_preprocessor`, which returns one view of the image). https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/openood/preprocessors/cider_preprocessor.py#L54-L60

We will fix this and update CIDER's results when possible. Meanwhile, I was kind of confused why the training didn't throw an error (the input and output shapes should mismatch in the current implementation)...
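For reference, TwoCropTransform in SupCon-style code is roughly the following wrapper; this is a sketch of the idea, not necessarily the exact class in OpenOOD:

```python
class TwoCropTransform:
    """Apply the same transform twice and return both augmented views."""

    def __init__(self, transform):
        self.transform = transform

    def __call__(self, x):
        # Two independently augmented views of the same image.
        return [self.transform(x), self.transform(x)]
```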

Afleve commented 1 year ago

> Good catch! We did borrow the TwoCropTransform here in `cider_preprocessor`, but somehow the training wasn't using it (it was still using the `base_preprocessor`, which returns one view of the image).
>
> https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/openood/preprocessors/cider_preprocessor.py#L54-L60
>
> We will fix this and update CIDER's results when possible. Meanwhile, I was kind of confused why the training didn't throw an error (the input and output shapes should mismatch in the current implementation)...

I found that you don't seem to be using this TwoCropTransform, but rather `self.transform_image = preprocessor` and `self.transform_aux_image = data_aux_preprocessor` in the `ImglistDataset` class. I saw that there are `data` and `data_aux` in the batch, so perhaps it should be written like this: `data = torch.cat([batch['data'], batch['data_aux']], dim=0).cuda()`

zjysteven commented 1 year ago

@Onenormler Yes, but I don't think `data_aux_preprocessor` is configured to support two image crops either. The simplest fix would be to 1) wrap the transform in `cider_preprocessor.py` with TwoCropTransform and 2) specify `cider_preprocessor` during CIDER training.
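A rough sketch of what step 1 could look like; the import path and the transform choices below are assumptions for illustration, not the actual `cider_preprocessor.py` code:

```python
import torchvision.transforms as tvt

# Assumed location of TwoCropTransform inside OpenOOD.
from openood.preprocessors.utils import TwoCropTransform

# Illustrative CIFAR-style augmentation pipeline wrapped for two crops.
base_transform = tvt.Compose([
    tvt.RandomResizedCrop(32, scale=(0.2, 1.0)),
    tvt.RandomHorizontalFlip(),
    tvt.ToTensor(),
])
two_crop_transform = TwoCropTransform(base_transform)

# Each call returns [view1, view2]; after the default DataLoader collation this
# becomes a list of two [batchSize, 3, 32, 32] tensors, so
# torch.cat([data[0], data[1]], dim=0) yields [2 * batchSize, 3, 32, 32].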

Afleve commented 1 year ago

> @Onenormler Yes, but I don't think `data_aux_preprocessor` is configured to support two image crops either. The simplest fix would be to 1) wrap the transform in `cider_preprocessor.py` with TwoCropTransform and 2) specify `cider_preprocessor` during CIDER training.

Honestly, the preprocessor in CIDER's config is `base`, which could be changed to a CIDER-specific preprocessor. Currently, `transform_image` and `transform_aux_image` in the base preprocessor are the same transform: the dataset does `sample['data'] = self.transform_image(image)` and `sample['data_aux'] = self.transform_aux_image(image)` and then returns `sample`. Since this sample becomes the batch, it is effectively equivalent to TwoCropTransform, but as mentioned above, the batch data access would need to be modified accordingly. Thanks for the reply.
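A simplified sketch of the `ImglistDataset` pattern being described (the real class handles much more; the names below mirror the discussion, not the full implementation):

```python
# Simplified sketch: the same image is passed through two transforms, producing
# 'data' and 'data_aux', which together play the same role as the two views
# returned by TwoCropTransform.
class ImglistDatasetSketch:
    def __init__(self, preprocessor, data_aux_preprocessor):
        self.transform_image = preprocessor
        self.transform_aux_image = data_aux_preprocessor

    def make_sample(self, image):
        sample = {}
        sample['data'] = self.transform_image(image)
        sample['data_aux'] = self.transform_aux_image(image)
        # ... labels and other fields omitted ...
        return sample
```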

zjysteven commented 1 year ago

Hi @Onenormler, after careful investigation, it turns out that our implementation is actually correct.

So we did use TwoCropTransform here
https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/openood/preprocessors/utils.py#L33-L37
and our experiment scripts indeed used the cider preprocessor here
https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/scripts/ood/cider/cifar10_train_cider.sh#L4-L10

You were probably getting the wrong data shape because you didn't specify --preprocessor.name cider.

Afleve commented 1 year ago

> Hi @Onenormler, after careful investigation, it turns out that our implementation is actually correct.
>
> So we did use TwoCropTransform here
>
> https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/openood/preprocessors/utils.py#L33-L37
>
> and our experiment scripts indeed used the cider preprocessor here
>
> https://github.com/Jingkang50/OpenOOD/blob/5155ea67e104aae605a8d80eda426715676fcd73/scripts/ood/cider/cifar10_train_cider.sh#L4-L10
>
> You were probably getting the wrong data shape because you didn't specify --preprocessor.name cider.

Yes, I found it. Thanks for your reply.

alvinmingsf commented 1 year ago

Just FYI, I also conducted some preliminary ablation studies on the TwoCrop augmentation and found that CIDER's performance is generally better with this augmentation on CIFAR.

zjysteven commented 1 year ago

@alvinmingwisc Thank you Yifei for the comments! Yes, I would think so. Let us know if you find anything missing in the CIDER implementation.