billhhh / ShaSpec

The official code repository of the ShaSpec model from the CVPR 2023 [paper](https://arxiv.org/pdf/2307.14126) "Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling"

shared encoder on the Audiovision-MNIST dataset #9

Closed HYC01 closed 1 month ago

HYC01 commented 1 month ago

The code you provided offers a detailed implementation based on the BraTS18 dataset, which I found very helpful. However, I am particularly interested in your experimental details on the Audiovision-MNIST dataset. Specifically, I am keen to understand how to construct the shared encoder when dealing with two modalities (image and audio) that have different data dimensions.

billhhh commented 1 month ago

Thank you for your interest. Currently, we have released the BraTS version of the code and do not have plans to release one for the Audiovision-MNIST data, as BraTS has much more influence in the field. For dealing with two modalities, if one is missing, we can use the feature of the available modality to replace the missing one.
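
For reference, here is a minimal PyTorch sketch of how such a two-modality setup could look (this is not the released implementation, and all layer sizes and names are hypothetical): modality-specific projection layers first map image and audio inputs of different dimensions into a common feature space, a single shared encoder then serves both modalities, and the feature substitution described above handles a missing modality.

```python
import torch
import torch.nn as nn

class TwoModalShaSpecSketch(nn.Module):
    """Illustrative sketch of shared/specific feature modelling for two
    modalities with different input dimensions (image and audio).
    All dimensions are hypothetical placeholders."""

    def __init__(self, img_dim=784, audio_dim=512, feat_dim=128, n_classes=10):
        super().__init__()
        # Modality-specific projections map each raw input into a common
        # feature space, so one shared encoder can serve both modalities.
        self.img_proj = nn.Sequential(nn.Linear(img_dim, feat_dim), nn.ReLU())
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, feat_dim), nn.ReLU())
        # One shared encoder applied to both projected modalities.
        self.shared_enc = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # One specific encoder per modality.
        self.img_spec = nn.Linear(feat_dim, feat_dim)
        self.audio_spec = nn.Linear(feat_dim, feat_dim)
        self.head = nn.Linear(4 * feat_dim, n_classes)

    def forward(self, img=None, audio=None):
        img_f = self.img_proj(img) if img is not None else None
        aud_f = self.audio_proj(audio) if audio is not None else None
        # Missing-modality handling as described in the reply above:
        # reuse the available modality's feature in place of the missing one.
        if img_f is None:
            img_f = aud_f
        if aud_f is None:
            aud_f = img_f
        img_shared, aud_shared = self.shared_enc(img_f), self.shared_enc(aud_f)
        img_specific, aud_specific = self.img_spec(img_f), self.audio_spec(aud_f)
        fused = torch.cat([img_shared, img_specific,
                           aud_shared, aud_specific], dim=-1)
        return self.head(fused)

# Usage with only the image modality present:
model = TwoModalShaSpecSketch()
logits = model(img=torch.randn(4, 784), audio=None)
```

The key design point is that the shared encoder never sees the raw inputs directly, only the projected features, so the differing image and audio dimensions are reconciled before weight sharing takes place.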