EPFL-VILAB / MultiMAE

MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022
https://multimae.epfl.ch

Request for pseudo semseg and depth for IN-1k #13

Closed: yibingwei-1 closed this issue 1 year ago

yibingwei-1 commented 1 year ago

Thanks for the great work! Could you please provide the pseudo semantic segmentation and depth labels for ImageNet-1K (IN-1k) so we can reproduce the results?

Thanks!

roman-bachmann commented 1 year ago

Hi @ywwwei!

Thank you!

We just uploaded the ImageNet-1K train and val pseudo labels for both Omnidata depth and COCO semantic segmentation. Please see the linked instructions on how to download them, and the linked section for details on which networks we used to generate them.

Hope this helps!

Best, Roman
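
Editor's note: once the pseudo-label archives are downloaded and extracted, a quick sanity check on a single sample might look like the sketch below. The directory layout (`rgb/`, `depth/`, `semseg/` subfolders sharing the same class-folder structure) and the sample id are hypothetical assumptions for illustration; follow the repository's setup instructions for the actual paths and file formats.

```python
# Minimal sketch: inspect one ImageNet sample together with its pseudo labels.
# Paths below are assumptions, not the repository's documented layout.
from pathlib import Path

import numpy as np
from PIL import Image

data_root = Path("data/imagenet/train")       # hypothetical root directory
sample = "n01440764/n01440764_10026"          # hypothetical sample id

# RGB image plus the two pseudo-labeled modalities mentioned in the reply:
# Omnidata depth and COCO-class semantic segmentation.
rgb = Image.open(data_root / "rgb" / f"{sample}.JPEG").convert("RGB")
depth = Image.open(data_root / "depth" / f"{sample}.png")
semseg = Image.open(data_root / "semseg" / f"{sample}.png")

print("RGB size:", rgb.size)
print("Depth array:", np.asarray(depth).dtype, np.asarray(depth).shape)
print("Semseg classes present:", np.unique(np.asarray(semseg)))
```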