-
Recently, I have been using your metaRec model, and I want to add a multi-modal feature fusion mechanism to it. My initial idea is to embed the multi-modal features after they have been processed by fusion…
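A minimal sketch of that initial idea, assuming simple concatenation fusion followed by a learned projection (the `FusionEmbedding` module and the feature names here are hypothetical illustrations, not part of metaRec):

```python
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    """Hypothetical module: concatenate per-modality features,
    then project the fused vector into the model's embedding space."""
    def __init__(self, text_dim: int, image_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim + image_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([text_feat, image_feat], dim=-1)  # early fusion by concatenation
        return self.proj(fused)
```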
-
For multimodal modeling, we should support the following methods:
- [ ] No-fusion approach (sketched below): contrastive workflows between modalities, where a single embedding is generated for each modality (and/or te…
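As referenced in the item above, a minimal sketch of such a contrastive workflow, assuming a symmetric InfoNCE loss over paired per-modality embeddings (CLIP-style; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two modality embeddings.
    emb_a, emb_b: (batch, dim) embeddings for paired samples."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature  # pairwise cross-modal similarities
    targets = torch.arange(emb_a.size(0), device=emb_a.device)  # matches on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```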
-
Hey, thanks a lot for your work!
I want to extend this model to more modalities (audio and video, along with text and images). How difficult would that be? Also, if possible, how would that work?
-
**Describe the feature**
As we can see in recent papers, there are many 3D object detection models based on multi-modal fusion, and these models have achieved good results on benchmark datasets, such…
-
I have a multi-modal sensor fusion project where we need to train a convolutional neural network.
The data is streamed in real time using NATS JetStream from various platforms like cars and trucks a…
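For context, one way such a pipeline might bridge the stream into training batches, using the nats-py client; a sketch only, where the subject hierarchy, durable name, and raw-float32 payload format are all assumptions:

```python
import asyncio
import numpy as np
import nats

async def collect_batch(batch_size: int = 32) -> list:
    nc = await nats.connect("nats://localhost:4222")
    js = nc.jetstream()
    # Durable pull consumer on a hypothetical subject hierarchy for vehicle sensors
    sub = await js.pull_subscribe("sensors.>", durable="cnn-trainer")
    batch = []
    while len(batch) < batch_size:
        msgs = await sub.fetch(8, timeout=5)  # raises TimeoutError if the stream is idle
        for msg in msgs:
            frame = np.frombuffer(msg.data, dtype=np.float32)  # assumed payload layout
            batch.append(frame)
            await msg.ack()
    await nc.close()
    return batch  # hand this to the CNN training loop

asyncio.run(collect_batch())
```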
-
Hi, thanks for your work on the AV FGC task. I'd like to ask about some experimental details in your paper:
1. In Section 4.1 (Audio Modality) of your paper, you use logit averaging as the evaluation strategy, but in …
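For other readers: logit averaging as an evaluation strategy typically means averaging the per-segment logits of a clip before taking the argmax. A sketch of my understanding, not the authors' code:

```python
import torch

def clip_prediction(segment_logits: torch.Tensor) -> torch.Tensor:
    """Average logits over a clip's segments, then predict the clip-level class.
    segment_logits: (num_segments, num_classes)"""
    return segment_logits.mean(dim=0).argmax()
```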
-
Nice work!
But I have a question: why not directly train the modal-fusion teacher? There is no weight-loading step in huston2013_multi_train.py, so why train the single-modality baselines?
-
Hi,
Thank you for sharing your work. I’m working with your implementation and noticed that the model_late_fusion.py file is mentioned in the README, but I’m unable to locate it in the repository. C…
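In case it helps while the file is missing: late fusion is usually a decision-level combination of per-modality models, along these lines (a generic sketch, not a reconstruction of model_late_fusion.py):

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Generic late fusion: each modality has its own classifier;
    predictions are combined at the output (here: averaged class probabilities)."""
    def __init__(self, model_a: nn.Module, model_b: nn.Module):
        super().__init__()
        self.model_a, self.model_b = model_a, model_b

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        p_a = self.model_a(x_a).softmax(dim=-1)
        p_b = self.model_b(x_b).softmax(dim=-1)
        return (p_a + p_b) / 2  # decision-level (late) fusion
```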
-
Check this dataset out: 180k triplets, georeferenced, multi-band, multi-modal, multi-resolution: "SEN12MS -- A Curated Dataset of Georeferenced Multi-Spectral Sentinel-1/2 Imagery for Deep Learning an…
-
_Note: we're aware of some missing content in the output and of layout issues on tables. Please refrain from opening new issues on this topic unless you think it's different from what has already been…