We have developed Dense Normalization (DN), published in ECCV 2024, which outperforms Kernelized Instance Normalization. Please check out our latest DN here.
We have released v0, which can reproduce the experiments in our paper or be used in your own applications and studies.
Install the dependencies listed in requirements.txt (e.g. `pip install -r requirements.txt`).
A simple example is provided here, or you can jump to the next section to train a model on your own dataset. The steps below will help you train a model with the CUT framework.
Download ANHIR2019/dataset_medium/breast_1/scale-20pc/HE.jpg and ANHIR2019/dataset_medium/breast_1/scale-20pc/ER.jpg into the data/example/ folder. We would like to transfer HE (domain X) to ER (domain Y).
The whole pipeline depends heavily on the config.yaml. Please take a look at ./data/example/config.yaml first to understand what is required during the training and testing processes. You can easily train a model on your own dataset by modifying the config.yaml.
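For orientation, the sketch below assembles the configuration keys mentioned throughout this README into one file. The values are placeholders and the exact schema, indentation, and defaults should be taken from ./data/example/config.yaml, not from this sketch:

```yaml
EXPERIMENT_ROOT_PATH: "./experiments/"
EXPERIMENT_NAME: "example_CUT"
MODEL_NAME: "CUT"

TRAINING_SETTING:
  TRAIN_ROOT: "./data/example/"   # placeholder values; see the shipped example config
  TRAIN_DIR_X: "trainX"
  TRAIN_DIR_Y: "trainY"

INFERENCE_SETTING:
  TEST_X: "HE_cropped.jpg"
  TEST_DIR_X: "./data/example/HE_cropped/"
  NORMALIZATION:
    TYPE: "kin"
    PADDING: 1
    KERNEL_TYPE: "constant"
    KERNEL_SIZE: 3
```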
Crop HE.jpg and ER.jpg first, as the main contents are surrounded by large blank regions, which increase the training time and make the distribution harder to learn. See ./data/example/HE_cropped.jpg and ./data/example/ER_cropped.jpg for the cropped results. Then run:

```script
python3 crop_pipeline.py -c ./data/example/config.yaml
```
```script
python3 train.py -c ./data/example/config.yaml
```

The training results will be saved in the ./experiments/example_CUT/train/ folder. As the testing data were already cropped in the first step, we can skip the cropping step here.
```script
python3 transfer.py -c ./data/example/config.yaml --skip_cropping
```

The output will be saved in the ./experiments/example_CUT/test/HE_cropped/ folder.
The following is an example of the output file structure.

```
experiments/
└── example_CUT
    ├── test
    │   └── HE_cropped
    │       ├── combined_in_30.png
    │       ├── combined_kin_30_constant_5.png
    │       ├── combined_tin_30.png
    │       ├── in
    │       │   └── 30
    │       ├── kin
    │       │   └── 30
    │       │       └── constant_5
    │       └── tin
    │           └── 30
    └── train
```
Create a folder for your dataset under ./data/ and place a config.yaml in ./data/$your_folder/. After modifying the config.yaml in ./data/$your_folder/ for your dataset, run:

```script
python3 crop_pipeline.py -c ./data/$your_folder/config.yaml
```
Alternatively, you can run crop.py to crop each image yourself and save the patches in the corresponding folders (trainX, trainY):

```script
python3 crop.py -i ./data/$your_folder/$image_a -o ./data/$your_folder/trainX/ --thumbnail_output ./data/$your_folder/trainX/
python3 crop.py -i ./data/$your_folder/$image_b -o ./data/$your_folder/trainY/ --thumbnail_output ./data/$your_folder/trainY/
...
```

For the testing images, crop each one into its own folder:

```script
python3 crop.py -i ./data/$your_folder/$test_a -o ./data/$your_folder/$test_a/ --stride 512 --thumbnail_output ./data/$your_folder/$test_a/
python3 crop.py -i ./data/$your_folder/$test_b -o ./data/$your_folder/$test_b/ --stride 512 --thumbnail_output ./data/$your_folder/$test_b/
...
```
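Conceptually, the cropping step is sliding-window tiling: the image is cut into fixed-size patches, stepping by `--stride` pixels. The actual crop.py may differ in details (file I/O, thumbnail generation, border handling); the function below is only an illustrative numpy sketch:

```python
import numpy as np

def extract_patches(image, patch_size=512, stride=512):
    """Cut an H x W image into patch_size x patch_size tiles,
    stepping by `stride` pixels (the --stride flag above).
    Partial tiles at the right/bottom border are skipped here."""
    patches = []
    H, W = image.shape[:2]
    for top in range(0, H - patch_size + 1, stride):
        for left in range(0, W - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches
```

With stride equal to patch_size the tiles do not overlap; a smaller stride produces overlapping patches.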
Modify the TRAINING_SETTING section in ./data/$your_folder/config.yaml, especially TRAIN_DIR_X and TRAIN_DIR_Y. Then start training:

```script
python3 train.py -c ./data/$your_folder/config.yaml
```
Modify the INFERENCE_SETTING section in ./data/$your_folder/config.yaml, especially TEST_X and TEST_DIR_X. Then run:

```script
python3 transfer.py -c ./data/$your_folder/config.yaml --skip_cropping
```
To transfer multiple images, update TEST_X and TEST_DIR_X in the INFERENCE_SETTING section and execute the following script for each image:

```script
python3 transfer.py -c ./data/$your_folder/config.yaml --skip_cropping
```
Besides kernelized instance normalization (kin), thumbnail instance normalization (tin) and instance normalization (in) are also provided. You can adjust NORMALIZATION.PADDING, NORMALIZATION.KERNEL_TYPE, and NORMALIZATION.KERNEL_SIZE for inference.
```yaml
INFERENCE_SETTING:
  ...
  NORMALIZATION:
    TYPE: "kin"
    PADDING: 1
    KERNEL_TYPE: "constant" # constant or gaussian
    KERNEL_SIZE: 3
```
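As a rough intuition for KERNEL_TYPE and KERNEL_SIZE: kernelized instance normalization smooths each patch's normalization statistics with a kernel-weighted average over neighboring patches' statistics. The sketch below illustrates that smoothing on a grid of per-patch means; it is not the repository's implementation (which operates on feature maps inside the network), and the function name is hypothetical:

```python
import numpy as np

def kin_statistics(patch_means, kernel_size=3, kernel_type="constant"):
    """Smooth a 2-D grid of per-patch statistics (e.g. means) with a
    kernel_size x kernel_size weighted average over neighboring patches.
    Edge patches are handled by replicate padding."""
    pad = kernel_size // 2
    if kernel_type == "constant":
        kernel = np.ones((kernel_size, kernel_size))
    else:  # "gaussian"
        ax = np.arange(kernel_size) - pad
        g = np.exp(-(ax ** 2) / 2.0)
        kernel = np.outer(g, g)
    kernel /= kernel.sum()  # weights sum to 1
    grid = np.asarray(patch_means, dtype=float)
    padded = np.pad(grid, pad, mode="edge")
    rows, cols = grid.shape
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = (padded[i:i + kernel_size, j:j + kernel_size] * kernel).sum()
    return out
```

A larger KERNEL_SIZE averages over more neighboring patches, trading local contrast for smoother transitions between adjacent patches.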
Please provide the path of the THUMBNAIL.

```yaml
INFERENCE_SETTING:
  ...
  NORMALIZATION:
    TYPE: "tin"
    THUMBNAIL: "./data/example/testX/thumbnail.png"
```
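The idea behind thumbnail instance normalization is that every patch is normalized with the single mean/std computed from the thumbnail, so all patches share one set of statistics. The snippet below is only an illustrative sketch of that idea (the real implementation works on feature maps inside the network):

```python
import numpy as np

def tin_normalize(patch, thumbnail, eps=1e-5):
    """Normalize a patch with the mean/std computed once from the
    thumbnail, so every patch of the image shares the same statistics."""
    mu = float(np.mean(thumbnail))
    sigma = float(np.std(thumbnail))
    return (np.asarray(patch, dtype=float) - mu) / (sigma + eps)
```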
Specifying NORMALIZATION.TYPE as in is enough.

```yaml
INFERENCE_SETTING:
  ...
  NORMALIZATION:
    TYPE: "in"
```
Besides CUT, LSeSim and CycleGAN are also provided. For each experiment, you should rename EXPERIMENT_NAME to avoid overwriting previous results.
The CUT workflow has been described above.

```yaml
MODEL_NAME: "CUT"
```
Specify the model in the config.yaml.

```yaml
MODEL_NAME: "cycleGAN"
```
Please use F-LSeSim, which is subtly modified from the official implementation.

1. Prepare the dataset in ./data/, which you might have already created while training other models; duplicated work is not required.
2. Prepare the config.yaml. Please set Augment to True for L-LSeSim or False for F-LSeSim.

```yaml
MODEL_NAME: "LSeSim"
...
TRAINING_SETTING:
  Augment: True # LSeSim
```
3. Modify the paths in config.yaml so that they are relative to the ./F-LSeSim folder, from which the scripts are executed. This includes EXPERIMENT_ROOT_PATH, TRAINING_SETTING::TRAIN_ROOT, TRAINING_SETTING::TRAIN_DIR_X, TRAINING_SETTING::TRAIN_DIR_Y, INFERENCE_SETTING::TEST_X, INFERENCE_SETTING::TEST_DIR_X, and INFERENCE_SETTING::THUMBNAIL.
```yaml
# Example 1
EXPERIMENT_ROOT_PATH: "./experiments/"
# Change to
EXPERIMENT_ROOT_PATH: "../experiments/"

# Example 2
TRAINING_SETTING:
  TRAIN_ROOT: "./data/example/"
# Change to
TRAINING_SETTING:
  TRAIN_ROOT: "../data/example/"
```
4. Move to `./F-LSeSim`:

```script
cd ./F-LSeSim
```
5. Run the training script:

```script
./scripts/train_sc.sh $path_to_yaml
```

For example:

```script
./scripts/train_sc.sh ./../data/example/config.yaml
```

The trained model will be saved in ./F-LSeSim/checkpoints/$EXPERIMENT_NAME. For inference, run:

```script
./scripts/transfer_sc.sh $path_to_yaml
```

For example:

```script
./scripts/transfer_sc.sh ./../data/example/config.yaml
```

The output will be saved in ./experiments/$EXPERIMENT_NAME.
We open-source the web server used for our human evaluation study. Researchers can easily modify its config to conduct their own human evaluation studies.
Given two folders, pathA and pathB, that store the original and generated images from the same domain, the following metrics will be calculated:

```script
python3 metric_images_with_ref.py --path-A $pathA --path-B $pathB
```

If the images are stored in multiple folders, concatenate the paths with commas:

```script
python3 metric_images_with_ref.py --path-A $pathA1,$pathA2,... --path-B $pathB1,$pathB2,...
```
```script
python3 metric_whole_image_with_ref.py --image_A_path $path_to_ref_image --image_B_path $path_to_compared_image
```
Please refer to the implementations of the NIQE and PIQE calculations in this repo.

```script
python3 metric_whole_image_no_ref.py --path $image_path
```
A script is provided to visualize the relationship between the thumbnail's features and the patches' features. The visualization shows that using the same mean and variance calculated from the thumbnail for every patch is incorrect, and that patches near each other share similar features.

Please specify the image to be tested in the inference part of config.yaml. Then run:

```script
python3 appendix/proof_of_concept.py -c $path_to_config_file
```

The generated images will be saved in ./proof_of_concept/.
We thank Chao-Yuan Yeh, the CEO of aetherAI, for providing computing resources, which enabled this study to be performed, and Cheng-Kun Yang for his revision suggestions. Besides our novel kernelized instance normalization module, we use CycleGAN, Contrastive Unpaired Translation (CUT), and LSeSim as our backbones. For the CUT model, please refer to the official implementation here. This code is a simplified version revised from wilbertcaine's implementation.