Oufattole opened this issue 3 weeks ago
I think we can start by creating a new input_encoder class (see https://github.com/Oufattole/meds-torch/blob/main/src/meds_torch/input_encoder/triplet_encoder.py) that adds a new modality-specific key to the batch in the forward pass.
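A minimal sketch of what such an encoder could look like, assuming a dict-style batch; the key names (`image_features`, `image_embedding`) and constructor arguments are illustrative placeholders, not existing meds-torch APIs:

```python
import torch.nn as nn


class MultimodalInputEncoder(nn.Module):
    """Sketch: wraps an existing triplet-style encoder and adds a
    modality-specific key to the batch dict in the forward pass.
    All key names below are hypothetical."""

    def __init__(self, triplet_encoder: nn.Module, image_dim: int, token_dim: int):
        super().__init__()
        self.triplet_encoder = triplet_encoder              # existing structured-data encoder
        self.image_proj = nn.Linear(image_dim, token_dim)   # project image features into token space

    def forward(self, batch: dict) -> dict:
        # Encode the structured (triplet) measurements as usual.
        batch = self.triplet_encoder(batch)
        # Add the new modality-specific key; "image_features" is an assumed input key.
        if "image_features" in batch:
            batch["image_embedding"] = self.image_proj(batch["image_features"])
        return batch
```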
Then we can make a custom multimodal supervised method (see https://github.com/Oufattole/meds-torch/blob/main/src/meds_torch/models/supervised_model.py) that consumes the new key.
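A corresponding sketch for the supervised side, again with hypothetical key names (`embedding`, `image_embedding`) and a simple mean-pool plus concatenation fusion chosen only for illustration, not as the repo's actual interface:

```python
import torch
import torch.nn as nn


class MultimodalSupervisedModel(nn.Module):
    """Sketch: run the multimodal input encoder, pool the backbone's sequence
    embedding and the image embedding, concatenate, and classify."""

    def __init__(self, input_encoder: nn.Module, backbone: nn.Module,
                 token_dim: int, num_classes: int):
        super().__init__()
        self.input_encoder = input_encoder
        self.backbone = backbone
        self.head = nn.Linear(2 * token_dim, num_classes)

    def forward(self, batch: dict) -> torch.Tensor:
        batch = self.input_encoder(batch)                        # adds "image_embedding" (see sketch above)
        seq_tokens = batch["embedding"]                          # (B, T, D) structured-data tokens (assumed key)
        seq_emb = self.backbone(seq_tokens).mean(dim=1)          # (B, D) pooled sequence embedding
        img_emb = batch["image_embedding"].mean(dim=1)           # (B, D) pooled image embedding
        return self.head(torch.cat([seq_emb, img_emb], dim=-1))  # (B, num_classes) logits
```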
Regarding which methods we should support for multimodal modeling:
This approach maintains temporal alignment while avoiding memory issues from padding sparse modalities to match dense measurement frequencies. See the JNRT padding example #6 for why naive padding is problematic.
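One way to make that concrete: keep each sparse modality in a compact tensor plus an index of which sequence position it aligns to, and scatter it onto the dense token grid only where it occurs. The tensor names and layouts below are assumptions for illustration, not the repo's actual batch format:

```python
import torch

B, T, N_images, D = 2, 128, 3, 64
seq_tokens = torch.randn(B, T, D)             # dense measurement tokens
img_emb = torch.randn(B, N_images, D)         # sparse image embeddings (compact, not padded to T)
img_pos = torch.randint(0, T, (B, N_images))  # sequence position each image aligns to

# Scatter each image embedding onto its aligned timestep. Memory for the modality
# stays O(N_images * D) instead of O(T * D) per modality under naive padding.
index = img_pos.unsqueeze(-1).expand(-1, -1, D).contiguous()
fused = seq_tokens.scatter_add(1, index, img_emb)   # (B, T, D), temporally aligned
```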