QingyangZhang / QMF

Quality-aware multimodal fusion (ICML 2023)
MIT License

Code for NYUD2 #11

Open Mannix-D opened 4 days ago

Mannix-D commented 4 days ago

Is the code for the noise experiments on NYUD2 the same as for the text-image datasets? I ran the noise experiments on the NYUD2 dataset; however, the accuracy I get is lower than the one reported in the paper.

QingyangZhang commented 3 days ago

Hi, for the NYUD2 experiments we add noise to the RGB and Depth modalities randomly and independently. That is, we apply `torchvision.transforms.RandomApply([add_noise], p=0.5)` to each modality separately. Thus we expect 25% of samples with both modalities corrupted, 25% with only RGB corrupted, 25% with only Depth corrupted, and 25% clean. Due to my current busy schedule, I will upload the test code to GitHub later. Sorry for any inconvenience.
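
For reference, here is a minimal sketch of that corruption scheme. `AddGaussianNoise`, its `std` value, and `corrupt_sample` are hypothetical stand-ins for the repo's actual `add_noise` transform and test pipeline:

```python
import torch
from torchvision import transforms

class AddGaussianNoise:
    """Hypothetical noise transform; the repo's actual add_noise may differ."""
    def __init__(self, std=0.5):
        self.std = std

    def __call__(self, x):
        return x + torch.randn_like(x) * self.std

add_noise = AddGaussianNoise(std=0.5)

# Corrupt a modality with probability 0.5; applied independently per modality,
# so the four patterns (both / RGB only / Depth only / clean) each occur
# for roughly 25% of samples.
maybe_corrupt = transforms.RandomApply([add_noise], p=0.5)

def corrupt_sample(rgb, depth):
    # Two independent coin flips: one per modality.
    return maybe_corrupt(rgb), maybe_corrupt(depth)
```

Note that `RandomApply` draws a fresh coin flip on each call, so applying the same instance to both modalities still corrupts them independently.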

Mannix-D commented 8 hours ago

> Thus we expect 25% of samples with both modalities corrupted, 25% with only RGB corrupted, 25% with only Depth corrupted, and 25% clean.

Thank you for your response. I would greatly appreciate it if you could share your code on GitHub.