[NEW!] Checkpoint release: https://drive.google.com/drive/folders/11V1XJCORcm3XQ_AfaOEfRaKFZpjMLyVm?usp=drive_link (if the link has expired, contact me at qying@nvidia.com)
This framework is built on the source code of "Invertible Image Decolorization", which also uses an invertible network for image-to-image translation.
Across separate research projects we usually encounter repeated code such as the training loop and DDP launching. So I developed this framework, where scripts are called via routers: `opt` defines the options and `mode` selects different experiments/ablations within the same project.
Please see README_PAMI.md
To test the localizer, run `bash ./run_ISP_OSN.sh` (mode==4).
In Line 141 of `Modified_invISP.py`, set the model to the OSN network.
Specify Lines 1319-2323, which provide the tamper source and mask.
The setting file is `train_ISP_OSN.yml`. If you want to do automatic copy-move, set `inference_tamper_index: 2` and `inference_load_real_world_tamper: False`.
`using_which_model_for_test` decides which model is used for testing: `discriminator_mask` is our method, and `localizer` is OSN.
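Putting the options above together, the relevant portion of `train_ISP_OSN.yml` might look like the fragment below. The key names come from this README; the exact layout of the file is an assumption:

```yaml
# hypothetical fragment; key names taken from the instructions above
inference_tamper_index: 2                       # 2 = automatic copy-move
inference_load_real_world_tamper: False
using_which_model_for_test: discriminator_mask  # ours; use `localizer` for OSN
```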
The average F1 score will be printed in the console.
The main loop iterates over the training set, so you should manually kill the process once all the validation images have been run.
Voilà! The flow is `optimize_parameters_router` -> `get_performance_of_OSN`.
To test the baseline instead, set `test_baseline: true`, together with `task_name_customized_model: ISP_alone` and `load_customized_models: 64999` (which loads the trained baseline model from that location). With `test_restormer: true`, the model `localizer` will be Restormer. Use `mode=0`, which contains both protected image generation and tampering localization.
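Collected into one place, the baseline-testing options above might look like this. The keys are those named in this README; their grouping and comments are assumptions:

```yaml
# hypothetical fragment for baseline testing
test_baseline: true
task_name_customized_model: ISP_alone
load_customized_models: 64999   # checkpoint iteration of the trained baseline
test_restormer: true            # if true, the `localizer` model is Restormer
```

Remember to launch with `mode=0` so that protected image generation and tampering localization both run.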