Open JingyiXu404 opened 1 year ago
And when I run
python main.py --run-mode 2
python main.py --run-mode 3
under the ResNet101 backbone, the fusion results are Fusion result: 79.08% (121/153) and Fusion result: 77.12% (118/153), respectively.
Hi, thank you for your interest in this work. As stated in the paper, Washington RGB-D has 10 train/test splits (hence 10 trained models), and each split has about 7,000 test images (not 153). The table results are the averages over the 10 splits. Please make sure that you are using the WRGBD evaluation benchmark with the provided train/test splits.
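To illustrate the point above: the number in the paper's table is the mean over the 10 per-split accuracies, not the score of any single split. A minimal sketch, using made-up per-split values (these are not the paper's actual numbers):

```python
# Hypothetical per-split accuracies (%) for the 10 Washington RGB-D
# train/test splits; the values are invented for illustration only.
split_accuracies = [94.1, 93.8, 94.5, 93.9, 94.2, 94.0, 93.7, 94.3, 94.1, 93.9]

# The reported figure is the average over all 10 splits.
mean_acc = sum(split_accuracies) / len(split_accuracies)
print(f"Average over 10 splits: {mean_acc:.2f}%")  # → 94.05%
```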
Thank you so much, I removed --debug and it worked. I have another question: why is shuffle = False used for the training dataset?
training_set = WashingtonDataset(params, phase='train', loader=custom_loader, transform=data_form)
train_loader = torch.utils.data.DataLoader(training_set, params.batch_size, shuffle=False)
Glad it helped. shuffle = False is for the case where the trained model is used as a feature extractor in extract_cnn_features.py; in that case, there is no need to shuffle the data.
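A torch-free sketch of the difference (this mimics only the index ordering of a DataLoader, not the repo's code): with shuffle = False the loader visits samples in dataset order, so the i-th extracted feature stays aligned with the i-th sample, whereas shuffle = True permutes the visiting order on each pass.

```python
import random

def batch_indices(n, batch_size, shuffle=False, seed=0):
    """Mimic a DataLoader's index order: sequential, or permuted when shuffling."""
    idx = list(range(n))
    if shuffle:
        random.Random(seed).shuffle(idx)
    return [idx[i:i + batch_size] for i in range(0, n, batch_size)]

# Sequential order: features extracted in this order can be saved and
# matched back to samples/labels by position alone.
assert batch_indices(6, 2) == [[0, 1], [2, 3], [4, 5]]

# Shuffled order: every sample is still visited exactly once, which is
# fine for SGD training but would scramble a saved feature/label pairing.
print(batch_indices(6, 2, shuffle=True))
```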
Hi, thanks for your good work. I ran the commands and got
Fusion result: 79.08% (121/153)
which is not equal to the result in Table 1 of your paper. I have two questions here. Thank you so much.