acaglayan / CNN_randRNN

When CNNs Meet Random RNNs: Towards Multi-Level Analysis for RGB-D Object and Scene Recognition (CVIU 2022)
MIT License

About accuracy on Washington RGB-D dataset #6

JingyiXu404 opened this issue 1 year ago

JingyiXu404 commented 1 year ago

Hi, thanks for your great work. I ran

sh run_steps.sh step="FIX_EXTRACTION"
python main_steps.py
sh run_steps.sh step="FIX_RECURSIVE_NN"
python main_steps.py

and got Fusion result: 79.08% (121/153), which does not match the result in Table 1 of your paper. I have two questions:

  1. Could you tell me how to modify and run the code to get the result in Table 1?
  2. Is the result in Table 1 the top-1 average accuracy, or something else?

Thank you so much

JingyiXu404 commented 1 year ago

And when I run

python main.py --run-mode 2
python main.py --run-mode 3

with the ResNet101 backbone, the fusion results are 79.08% (121/153) and 77.12% (118/153), respectively.

acaglayan commented 1 year ago

Hi, thank you for your interest in this work. As stated in the paper, Washington RGB-D has 10 train/test splits (hence 10 trained models), and each split has about 7000 test images (not 153). The table results are the averages over the 10 splits. Please make sure that you are using the WRGBD evaluation benchmark with the provided train/test splits.
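
For clarity, here is a minimal sketch of the protocol described above: top-1 accuracy is computed on each of the 10 splits (each with its own trained model) and then averaged to get the Table 1 number. The function names are illustrative, not the repo's actual API.

# Illustrative sketch of the WRGB-D evaluation protocol; function names are hypothetical.
def split_accuracy(correct, total):
    # Top-1 accuracy (%) on one train/test split (each split has about 7000 test images).
    return 100.0 * correct / total

def table1_average(per_split_accuracies):
    # Table 1 reports the mean over the 10 per-split accuracies.
    return sum(per_split_accuracies) / len(per_split_accuracies)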

JingyiXu404 commented 1 year ago


Thank you so much, I removed --debug and it worked. I have another question: why is shuffle=False used for the training dataset?

training_set = WashingtonDataset(params, phase='train', loader=custom_loader, transform=data_form)
train_loader = torch.utils.data.DataLoader(training_set, params.batch_size, shuffle=False)

Thank you so much

acaglayan commented 1 year ago

why is shuffle=False used for the training dataset?

Glad it helped. shuffle=False applies when the trained model is used as a feature extractor in extract_cnn_features.py; in that case, there is no need to shuffle the data.
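
As a rough illustration of that point (not the repo's exact code; dataset and backbone below are placeholders for the WashingtonDataset and the trained CNN), shuffle=False keeps the loader in dataset order, so extracted feature row i can be matched back to sample i and its label:

import torch

# Illustrative sketch only: extracting features with a frozen CNN.
loader = torch.utils.data.DataLoader(dataset, batch_size=64, shuffle=False)  # dataset is a placeholder

features, labels = [], []
backbone.eval()  # backbone is a placeholder for the trained CNN
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images).cpu())  # batches arrive in dataset order
        labels.append(targets)

features = torch.cat(features)  # row i corresponds to dataset[i] because shuffle=False
labels = torch.cat(labels)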