billhhh / ShaSpec

The official code repository for the ShaSpec model from the CVPR 2023 [paper](https://arxiv.org/pdf/2307.14126) "Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling".

shared encoder on the Audiovision-MNIST dataset #9

Closed: HYC01 closed this 3 months ago

HYC01 commented 3 months ago

The code you provided offers a detailed implementation on the BraTS18 dataset, which I found very helpful. However, I am particularly interested in your experimental details on the Audiovision-MNIST dataset. Specifically, I would like to understand how to construct the shared encoder when dealing with two modalities (image and audio) that have different data dimensions.

billhhh commented 3 months ago

Thank you for your interest. Currently we have released only the BraTS version of the code and do not plan to release one for Audiovision-MNIST, as BraTS has much more influence on the field. When dealing with two modalities, if one is missing, we can use the available modality's feature to replace the missing one.
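
For readers looking for a starting point, below is a minimal PyTorch sketch (not the official implementation) of the idea in the reply: modality-specific projections map inputs of different dimensionality into a common feature space so a single shared encoder can process both, and a missing modality's shared feature is substituted with the available one. The class name `TwoModalShaSpecSketch`, the input sizes (784 for flattened MNIST images, 507 for the audio features), the feature width, and the handling of the specific branch under a missing modality are all illustrative assumptions, not taken from the repo.

```python
import torch
import torch.nn as nn


class TwoModalShaSpecSketch(nn.Module):
    """Illustrative sketch (NOT the official ShaSpec code): project image and
    audio into one feature dimension so a single shared encoder can serve both
    modalities, and replace a missing modality's shared feature with the
    available one, as described in the reply above."""

    def __init__(self, img_dim=784, audio_dim=507, feat_dim=128, n_classes=10):
        super().__init__()
        # Modality-specific projections reconcile the different input sizes
        # (dimensions here are assumed, not from the paper).
        self.img_proj = nn.Sequential(nn.Linear(img_dim, feat_dim), nn.ReLU())
        self.audio_proj = nn.Sequential(nn.Linear(audio_dim, feat_dim), nn.ReLU())
        # One shared encoder operates on the common feature space.
        self.shared_encoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        # Specific encoders keep per-modality information.
        self.img_specific = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.audio_specific = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(4 * feat_dim, n_classes)

    def forward(self, img=None, audio=None):
        assert img is not None or audio is not None, "need at least one modality"
        img_feat = self.img_proj(img) if img is not None else None
        aud_feat = self.audio_proj(audio) if audio is not None else None

        img_shared = self.shared_encoder(img_feat) if img_feat is not None else None
        aud_shared = self.shared_encoder(aud_feat) if aud_feat is not None else None

        # Missing-modality handling per the reply: reuse the available
        # shared feature in place of the missing one.
        if img_shared is None:
            img_shared = aud_shared
        if aud_shared is None:
            aud_shared = img_shared

        # Specific branch: if a modality is absent, feed the substituted
        # shared feature instead (one simple choice for this sketch).
        img_spec = self.img_specific(img_feat if img_feat is not None else img_shared)
        aud_spec = self.audio_specific(aud_feat if aud_feat is not None else aud_shared)

        fused = torch.cat([img_shared, img_spec, aud_shared, aud_spec], dim=-1)
        return self.classifier(fused)


# Usage sketch: audio missing, image available.
model = TwoModalShaSpecSketch()
img = torch.randn(4, 784)            # batch of flattened MNIST images
logits = model(img=img, audio=None)  # shape (4, 10)
```

With this substitution the fusion and classifier always receive fixed-size inputs regardless of which modality is present, which is what makes a single shared encoder workable across modalities of different dimensionality.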