yikaiw / CEN

[TPAMI 2023, NeurIPS 2020] Code release for "Deep Multimodal Fusion by Channel Exchanging"
MIT License

How can I get the "train" and "val" datasets #14

Closed PatrickWilliams44 closed 1 year ago

PatrickWilliams44 commented 2 years ago

It's a pleasure to see your paper "Deep Multimodal Fusion by Channel Exchanging", and I have downloaded the corresponding code from GitHub, but how can I get the "train" and "val" datasets? Looking forward to your early reply! Thank you!

yikaiw commented 2 years ago

Hi, thanks for your interest.

Segmentation dataset: https://drive.google.com/drive/folders/1mXmOXVsd5l9-gYHk92Wpn6AcKAbE0m3X
Image translation dataset: https://github.com/alexsax/taskonomy-sample-model-1

Both the segmentation and image translation code provide train and val splits: https://github.com/yikaiw/CEN/tree/master/semantic_segmentation/data/nyudv2 and https://github.com/yikaiw/CEN/tree/master/image2image_translation/data.
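
For reference, here is a minimal sketch of how the split files could be paired with the dataset folders. It is not the repository's own loader: the file-naming pattern (`<id>.png` under `rgb/`, `depth/`, and `masks/`) and the helper names are assumptions for illustration only.

```python
# Sketch only: pair each sample id listed in train.txt / val.txt with its
# RGB, depth, and mask files. File-name patterns are assumed, not taken
# from the CEN codebase.
import os

def read_split(split_file):
    """Return the list of sample ids listed in train.txt or val.txt."""
    with open(split_file) as f:
        return [line.strip() for line in f if line.strip()]

def build_samples(data_root, split_file):
    """Build a list of per-sample file paths from a split file."""
    samples = []
    for sample_id in read_split(split_file):
        samples.append({
            "rgb":   os.path.join(data_root, "rgb",   sample_id + ".png"),
            "depth": os.path.join(data_root, "depth", sample_id + ".png"),
            "mask":  os.path.join(data_root, "masks", sample_id + ".png"),
        })
    return samples

# Example usage (paths are placeholders):
# train_samples = build_samples("/path/to/nyudv2", "data/nyudv2/train.txt")
# val_samples   = build_samples("/path/to/nyudv2", "data/nyudv2/val.txt")
```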

PatrickWilliams44 commented 2 years ago

Dear yikaiw: Thanks for your reply! Do I need to combine the downloaded nyudv2 dataset (containing "depth", "masks" and "rgb") with the repository's nyudv2 folder (containing "train.txt" and "val.txt") in the same folder?

yikaiw commented 2 years ago

It is not necessary. You can place the dataset folder (the one containing "depth", "masks" and "rgb") at any path, as long as you modify the data path in https://github.com/yikaiw/CEN/blob/40f277ed1a377a3c81f979a6c534ae268773aa9d/semantic_segmentation/config.py#L5
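
For clarity, a hypothetical illustration of the kind of edit meant here; the actual variable name and default value at line 5 of `semantic_segmentation/config.py` may differ.

```python
# semantic_segmentation/config.py (hypothetical excerpt -- the real variable
# name may differ). Point the dataset root at wherever the folder containing
# "rgb", "depth", and "masks" was unpacked:
ROOT_DIR = '/home/user/datasets/nyudv2'  # replace with your own path
```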

PatrickWilliams44 commented 2 years ago

Thanks for your patient answer. Due to the limitations of my hardware, I can only try to run it on the CPU, so the training process loads rather slowly. I will continue to deepen my understanding of this network. Thank you again for your guidance!
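
As a side note, here is a hedged sketch of one generic way to force a PyTorch codebase onto the CPU by hiding all GPUs before the framework initializes; whether CEN's training scripts need additional flags for CPU-only runs is an assumption left unverified.

```python
# Generic PyTorch pattern, not a documented CEN option: hide all GPUs before
# importing torch so torch.cuda.is_available() returns False.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)  # -> cpu
```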