Open Mannix-D opened 4 days ago
Hi, we add noise to the RGB and Depth modalities randomly and independently for the NYUD2 experiments. That is, we apply torchvision.transforms.RandomApply(add_noise, p=0.5) to each modality individually. Thus we should expect 25% of samples with both modalities corrupted, 25% with only the RGB modality corrupted, 25% with only the Depth modality corrupted, and 25% clean. Due to my current busy schedule, I will upload the test code to GitHub later. Sorry for any inconvenience.
Thank you for your response. I would greatly appreciate it if you could share your code on GitHub.
Is the code for the noise experiments on NYUD2 the same as for the text-image datasets? I ran the noise experiments on the NYUD2 dataset, but the accuracy I get is lower than the paper's.