Closed raghavchalapathy closed 4 years ago
Hello Raghav,
Thanks for your interest in the package. The code does not impose any limit on the number of modalities, so you can easily have 9 different modalities.
With that many modalities, though, you might want to exploit any prior knowledge you have about how they relate. You could do this by overriding the MultimodalAutoencoder methods _construct_fusion_encoder and _construct_fusion_decoder. The default implementations use a densely connected fusion network, but if you know, for example, that the modality sets 1-4 and 5-9 are largely unrelated, you could connect modalities 1-4 to one set of latent units and 5-9 to another set of latent units.
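To make the grouping idea concrete, here is a minimal NumPy sketch — not the package's actual API; the modality sizes, group split, and latent widths are all assumptions — showing how two unrelated modality groups can be wired to separate sets of latent units instead of one shared dense fusion layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Nine toy modalities, 100 samples x 16 features each (sizes are assumptions).
mods = [rng.normal(size=(100, 16)) for _ in range(9)]

# Group related modalities: 1-4 fuse into one latent block, 5-9 into another.
group_a = np.concatenate(mods[:4], axis=1)   # (100, 64)
group_b = np.concatenate(mods[4:], axis=1)   # (100, 80)

# Untrained linear projections stand in for the per-group fusion encoders;
# each group maps to its own set of latent units rather than one shared layer.
w_a = rng.normal(size=(group_a.shape[1], 8))
w_b = rng.normal(size=(group_b.shape[1], 8))
latent = np.concatenate([group_a @ w_a, group_b @ w_b], axis=1)
print(latent.shape)  # (100, 16)
```

In the real subclass you would build this structure with the package's layers inside the overridden fusion methods; the point is only that each group gets its own latent block.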
To stay on top of your modalities, you might want to use Python dictionaries instead of lists for your modalities:
data = {'mod1': x1_train, 'mod2': x2_train, 'mod3': x3_train}
data_val = {'mod1': x1_val, 'mod2': x2_val, 'mod3': x3_val}
input_shapes = {'mod1': x1_train.shape[1:], 'mod2': x2_train.shape[1:], 'mod3': x3_train.shape[1:]}
output_activations = {'mod1': 'sigmoid', 'mod2': 'relu', 'mod3': 'relu'}
...
The package supports this out of the box.
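If you do scale up to nine modalities, the dictionaries can be built programmatically so the keys stay in sync. A small sketch with made-up names and shapes (the `mod{i}` naming, sample counts, and activations are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for nine training/validation modalities.
data = {f'mod{i}': rng.normal(size=(100, 1024)) for i in range(1, 10)}
data_val = {f'mod{i}': rng.normal(size=(20, 1024)) for i in range(1, 10)}

# Derive the shape and activation dicts from the data dict itself.
input_shapes = {name: x.shape[1:] for name, x in data.items()}
output_activations = {name: 'relu' for name in data}  # pick per modality as needed

print(len(data), input_shapes['mod1'])  # 9 (1024,)
```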
Thank you for the elaborate answer. Can we change the hidden layers to CNN or GRU units? Can we accommodate a custom attention mechanism? Could you please provide an example of doing so, if one exists?
Thanks
Hello,
Thanks for the nice package. May I know how many modalities the autoencoder will support?
Let's say, for example, we have input like:
# Multimodal training data
data = [x1_train, x2_train, x3_train, x4_train, x5_train, x6_train, x7_train, x8_train, x9_train]
Does your code support this input, where x1_train through x9_train each have shape 100 x 1024?
Raghav