[Information Fusion 2024] Diff-IF: Multi-modality image fusion via diffusion model with fusion knowledge prior

Paper | Code

Yi, Xunpeng, et al. "Diff-IF: Multi-modality image fusion via diffusion model with fusion knowledge prior." Information Fusion (2024): 102450.

1. Create Environment

2. Prepare Your Dataset

You can also refer to the MFNet, RoadScene, and LLVIP datasets to prepare your data.

If you only want to test, organize your dataset according to the following structure (a small layout check is sketched after the listings below):

# Infrared and visible image fusion:
    dataset/
        your_dataset/
            test/
                Infrared/
                Visible/

# Medical image fusion:
    dataset/
        your_dataset/
            test/
                CT-PET-SPECT/
                MRI/
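
As a quick sanity check before running inference, the minimal sketch below walks the test folders and confirms that the two modalities contain matching file names. It assumes the common convention that paired images share the same file name across modality folders, and dataset/your_dataset is a placeholder path; adjust both to your own setup.

    # Sketch: verify that a test set follows the layout above.
    # Assumes paired images share file names across the two modality folders.
    from pathlib import Path

    def check_test_layout(root="dataset/your_dataset", modalities=("Infrared", "Visible")):
        folders = [Path(root) / "test" / m for m in modalities]
        names = [sorted(p.name for p in f.iterdir()) for f in folders]
        assert names[0] == names[1], "the two modality folders must contain matching filenames"
        print(f"Found {len(names[0])} aligned test pairs in {root}")

    check_test_layout()                                      # infrared and visible fusion
    check_test_layout(modalities=("CT-PET-SPECT", "MRI"))    # medical fusion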

3. Pretrained Weights

We provide pretrained weights for both infrared and visible image fusion and medical image fusion. Download the weights and place them in the weights folder.

The pretrained weight for infrared and visible image fusion is available at Google Drive | Baidu Drive (code: 82nm).

The pretrained weight for medical image fusion is available at Google Drive | Baidu Drive (code: 7u1g).

4. Testing

For infrared and visible image fusion or medical image fusion testing, you can use:

# Infrared and visible fusion
CUDA_VISIBLE_DEVICES=0 python infer_ddim.py  --config config/diff-if-ivf_val.json

# Medical image fusion
CUDA_VISIBLE_DEVICES=0 python infer_ddim.py  --config config/diff-if-mif_val.json

5. Train

Please refer to existing fusion methods to construct the fusion knowledge prior for the training set. We recommend U2Fusion, TarDAL, DDFM, MetaFusion, etc. You can also organize your own fusion knowledge prior according to your needs; we encourage researchers to do so.

Organize your fusion knowledge prior according to the following structure (a sketch for collecting the method outputs follows the listing):

    dataset/
        fusion_knowledge_prior/
            Knowledge_U2Fusion/
            Knowledge_TarDAL/
            Knowledge_DDFM/
            Knowledge_MetaFusion/
            ...
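
As a minimal sketch of how these knowledge folders can be populated, the snippet below simply copies the fused results you obtained from each method into its Knowledge_* folder. The source directories (u2fusion_results, tardal_results, ...) and the .png extension are assumptions; point them at wherever you saved each method's outputs.

    # Sketch: collect the fused outputs of existing methods into Knowledge_* folders.
    # The source result directories below are placeholders for your own paths.
    import shutil
    from pathlib import Path

    PRIOR_ROOT = Path("dataset/fusion_knowledge_prior")
    METHOD_RESULTS = {
        "Knowledge_U2Fusion": "u2fusion_results",
        "Knowledge_TarDAL": "tardal_results",
        "Knowledge_DDFM": "ddfm_results",
        "Knowledge_MetaFusion": "metafusion_results",
    }

    for folder, src in METHOD_RESULTS.items():
        dst = PRIOR_ROOT / folder
        dst.mkdir(parents=True, exist_ok=True)
        for img in Path(src).glob("*.png"):   # adjust the extension to your outputs
            shutil.copy(img, dst / img.name)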

Get the Fusion Knowledge Prior

We also encourage researchers to customize the targeted search according to their needs.

python targeted_search.py
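
Conceptually, a targeted search picks, for every image, the candidate from the Knowledge_* folders that scores best under some selection criterion and keeps it as Fusion_K. The sketch below illustrates the idea only, using grayscale entropy as a stand-in criterion; it is not the implementation in targeted_search.py, whose selection rule may differ.

    # Illustrative sketch of a targeted search (not the actual targeted_search.py):
    # keep, per image, the knowledge candidate with the highest grayscale entropy.
    import shutil
    import numpy as np
    from pathlib import Path
    from PIL import Image

    PRIOR_ROOT = Path("dataset/fusion_knowledge_prior")
    OUT_DIR = Path("Fusion_K")
    OUT_DIR.mkdir(exist_ok=True)

    def entropy(path):
        img = np.asarray(Image.open(path).convert("L"))
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist[hist > 0] / img.size
        return float(-(p * np.log2(p)).sum())

    knowledge_dirs = sorted(PRIOR_ROOT.glob("Knowledge_*"))
    for name in sorted(p.name for p in knowledge_dirs[0].iterdir()):
        candidates = [d / name for d in knowledge_dirs if (d / name).exists()]
        best = max(candidates, key=entropy)   # stand-in criterion
        shutil.copy(best, OUT_DIR / name)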

Please move your fusion knowledge (Fusion_K) into the training dataset before training the model. For infrared and visible image fusion, organize your dataset according to the following structure (the Fusion_K folder is produced by the targeted search; a quick consistency check is sketched after the listing):

    dataset/
        your_dataset/
            train/
                Fusion_K/
                Infrared/
                Visible/
            eval/
                Infrared/
                Visible/
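
Before launching training, it can help to confirm that Fusion_K, Infrared, and Visible are aligned. A minimal check, again assuming the three folders share file names and using dataset/your_dataset as a placeholder path:

    # Sketch: confirm Fusion_K / Infrared / Visible contain matching filenames.
    from pathlib import Path

    root = Path("dataset/your_dataset/train")   # placeholder dataset path
    names = [sorted(p.name for p in (root / f).iterdir())
             for f in ("Fusion_K", "Infrared", "Visible")]
    assert names[0] == names[1] == names[2], "train/ folders must contain matching filenames"
    print(f"{len(names[0])} training triplets ready")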

Training for medical image fusion works in the same way; organize the dataset as described in the "Prepare Your Dataset" section.

Train the Model

# Infrared and visible fusion
CUDA_VISIBLE_DEVICES=0 python train.py  --config config/diff-if-ivf.json

# Medical image fusion
CUDA_VISIBLE_DEVICES=0 python train.py  --config config/diff-if-mif.json

Citation

If you find our work useful for your research, please cite our paper.

@article{yi2024diff,
  title={Diff-IF: Multi-modality image fusion via diffusion model with fusion knowledge prior},
  author={Yi, Xunpeng and Tang, Linfeng and Zhang, Hao and Xu, Han and Ma, Jiayi},
  journal={Information Fusion},
  pages={102450},
  year={2024},
  publisher={Elsevier}
}

If you have any questions, please send an email to xpyi2008@163.com.