IDT-ITI / MMFusion-IML

Code and trained models for our paper: K. Triaridis, V. Mezaris, "Exploring Multi-Modal Fusion for Image Manipulation Detection and Localization", Proc. 30th Int. Conf. on MultiMedia Modeling (MMM 2024), Amsterdam, NL, Jan.-Feb. 2024.

About the weight of BayerConv2D #3

Closed Kelfvin closed 8 months ago

Kelfvin commented 8 months ago

Your research results are excellent! However, I have a question that I hope you can answer.

How are the weights for BayerConv2D configured? There are learnable parameters inside this module, so why aren't they loaded from the weight file during initialization?

kostino commented 8 months ago

Hello, thank you for your interest in our paper and code! For phase 1 training, the weights are loaded here: https://github.com/IDT-ITI/MMFusion-IML/blob/608acc73bf6a7209f64a023c5d65c91f0e724091/ec_train.py#L53-L59. You need to download the pretrained weights as described in pretrained/README.md.
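As a rough illustration of the phase-1 pattern, here is a minimal sketch of initializing a learnable conv from a pretrained weight dictionary via `load_state_dict`. The `ModalExtractor` class and the `bayer_conv` key are hypothetical stand-ins, not the repository's actual names; the pretrained file is simulated in memory.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a modal extractor holding a Bayer-style conv;
# class and attribute names are illustrative, not the repo's actual ones.
class ModalExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # learnable conv playing the role of BayerConv2D
        self.bayer_conv = nn.Conv2d(3, 3, kernel_size=5, padding=2, bias=False)

    def forward(self, x):
        return self.bayer_conv(x)

# Simulate a pretrained weight file: a state dict keyed by parameter name.
pretrained = ModalExtractor()
ckpt = {"bayer_conv.weight": pretrained.bayer_conv.weight.detach().clone()}

# Initialize a fresh model from the pretrained dict; strict=False tolerates
# checkpoint keys belonging to sub-modules this extractor does not have.
model = ModalExtractor()
missing, unexpected = model.load_state_dict(ckpt, strict=False)
```

With `strict=False`, any keys that do not match are reported in `missing`/`unexpected` rather than raising, which is convenient when only part of a larger checkpoint applies to the module being initialized.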

For phase 2, the weights are loaded from the checkpoint of the modal extractor module (this is where they are saved after phase 1 training): https://github.com/IDT-ITI/MMFusion-IML/blob/608acc73bf6a7209f64a023c5d65c91f0e724091/ec_train_phase2.py#L60
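The phase-2 round trip can be sketched as saving the extractor's state after phase 1 and restoring it (the Bayer-style conv weights included) before phase 2. The class name, checkpoint key, and file path below are illustrative assumptions, not the repository's actual ones.

```python
import os
import tempfile
import torch
import torch.nn as nn

# Hypothetical extractor; names are illustrative, not the repo's actual ones.
class ModalExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.bayer_conv = nn.Conv2d(3, 3, kernel_size=5, padding=2, bias=False)

# End of phase 1: save the extractor's weights into a checkpoint file.
trained = ModalExtractor()
path = os.path.join(tempfile.mkdtemp(), "phase1_ckpt.pth")
torch.save({"modal_extractor": trained.state_dict()}, path)

# Start of phase 2: restore the extractor from that checkpoint, so the
# learnable Bayer conv parameters carry over from phase-1 training.
extractor = ModalExtractor()
state = torch.load(path, map_location="cpu")
extractor.load_state_dict(state["modal_extractor"])
```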