This repo contains the code and configuration files to reproduce the anomaly detection results of EUG. The code is based on mmsegmentation and jsrnet.
To run this project, first install the dependencies required by jsrnet:
pip install -r requirements.txt
Then install mmsegmentation as instructed by its official documentation.
The data sets used in this project include:
NOTE: Links to the testing sets are provided, and we have made the following changes: 1) converted all .png files to .jpg images; 2) resized the images in LaF and RO to half size (W/2 × H/2).
For the testing sets, the ground truth of RA is generated by the authors; for LaF and RO, the GTs are similar to the official ones.
Modify the path of each dataset in 'mypath.py'.
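For orientation, 'mypath.py' typically looks something like the sketch below. The dataset keys, directory names, and the `Path.db_root_dir` interface are assumptions about the repo's layout; only the idea (one local root per dataset) carries over.

```python
# Hypothetical sketch of mypath.py; the actual dataset keys and
# interface depend on the repo. Replace the paths with your own.
class Path:
    @staticmethod
    def db_root_dir(dataset: str) -> str:
        roots = {
            "cityscapes": "/data/cityscapes",   # training data (assumed)
            "laf": "/data/lost_and_found",      # LaF test set
            "ra": "/data/road_anomaly",         # RA test set
            "ro": "/data/road_obstacle",        # RO test set
        }
        if dataset not in roots:
            raise NotImplementedError(f"Unknown dataset: {dataset}")
        return roots[dataset]
```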
The selection of EUG's base model is as follows:
| Methods | Base Model 1 | Base Model 2 |
| --- | --- | --- |
DeepEnsemble | PSP_s | PSP_s2 |
EUG_tiny | PSP_s | OCR_s |
EUG_base | PSP | OCR |
EUG_heter | OCR | Segformer |
The models we used are detailed below; you need to go to the official repo of [mmsegmentation]() to download the corresponding config and checkpoint files.
NOTE: PSP_s2 is trained by us.
To start training, first modify the weight and configuration file paths for the base models in the configuration file. The configuration file is located at '/exp_config/defaults.py', and the items to modify are as follows:
_C.EXPERIMENT.CONFIG_FILE1='xxx'
_C.EXPERIMENT.CONFIG_FILE2='xxx'
_C.EXPERIMENT.CHECKPOINT_FILE1='xxx'
_C.EXPERIMENT.CHECKPOINT_FILE2='xxx'
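For example, the four entries filled in might look like this. The file and checkpoint names below are illustrative only; the exact ones come from the mmsegmentation model zoo and depend on which base models you picked from the table above.

```python
# Illustrative values only; substitute the config/checkpoint files you
# downloaded from the mmsegmentation model zoo.
_C.EXPERIMENT.CONFIG_FILE1 = '/path/to/mmseg_configs/pspnet_r50-d8_512x1024_80k_cityscapes.py'
_C.EXPERIMENT.CONFIG_FILE2 = '/path/to/mmseg_configs/ocrnet_hr48_512x1024_80k_cityscapes.py'
_C.EXPERIMENT.CHECKPOINT_FILE1 = '/path/to/checkpoints/pspnet_r50-d8_cityscapes.pth'
_C.EXPERIMENT.CHECKPOINT_FILE2 = '/path/to/checkpoints/ocrnet_hr48_cityscapes.pth'
```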
# for EUG tiny
python train.py --model_name eug_tiny
# for EUG base
python train.py --model_name eug_base
# for EUG heter
python train.py --model_name eug_heter
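The three commands above differ only in the `--model_name` flag. A minimal sketch of how that flag might be parsed in 'train.py' (the dispatch shown here is an assumption; the real script also sets up datasets, models, and optimizers):

```python
# Assumed sketch of the --model_name handling in train.py.
import argparse

MODEL_CHOICES = ("eug_tiny", "eug_base", "eug_heter")

def parse_args(argv=None):
    """Parse CLI arguments; argv=None falls back to sys.argv."""
    parser = argparse.ArgumentParser(description="Train an EUG model")
    parser.add_argument("--model_name", choices=MODEL_CHOICES, required=True,
                        help="which EUG variant to train")
    return parser.parse_args(argv)
```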
Model evaluation can be performed using the pretrained checkpoints or ones trained by yourself:
NOTE: Only the fusion model weights are provided.
Download the required test dataset (the link is in the Data Preparation section).
Execute the following command to run inference:
python inference.py --ckpt_path <ckpt_path> --out_dir <out_dir> --img_dir <img_dir>
'ckpt_path' is the path to the fuse_decoder's weights, 'out_dir' is where the inference results are stored, and 'img_dir' is the directory of the test images.
NOTE: The generated color images are for visualization only; the grayscale images are the ones to use for evaluation.
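The distinction between the two outputs can be sketched as below. This is assumed post-processing, not the repo's code: per-pixel anomaly scores in [0, 1] are scaled to 8-bit grayscale for evaluation, and a crude red-blue map is used for visualization.

```python
# Assumed post-processing: grayscale score map for evaluation,
# simple heat map for visualization.
import numpy as np

def to_grayscale(scores: np.ndarray) -> np.ndarray:
    """Scale anomaly scores in [0, 1] to uint8 in [0, 255]."""
    return np.clip(scores * 255.0, 0, 255).astype(np.uint8)

def to_heatmap(scores: np.ndarray) -> np.ndarray:
    """Crude red-blue heat map: high scores red, low scores blue."""
    g = to_grayscale(scores)
    h, w = g.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    rgb[..., 0] = g          # red channel grows with the score
    rgb[..., 2] = 255 - g    # blue channel shrinks with the score
    return rgb
```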
Prepare the GT information: all GT information can be downloaded from the following links (generated by the authors; the links are provided in Section 2.3):
Update the path list in each subfolder to your local paths. Taking the RA dataset as an example, change the path list in 'path_of_GT\ra_lab\lablist.txt'.
Run the evaluation code to generate the results (AP, FPRs); change the GT path and output path in the code first.
python eval.py
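For reference, the two metrics can be computed from flattened pixel scores and binary GT labels as sketched below. This is a plain NumPy illustration of AP and FPR at 95% TPR, not the repo's 'eval.py'; it assumes at least one anomalous and one normal pixel.

```python
# Illustrative metric sketch (not the repo's eval.py): AP and FPR@95TPR
# over flattened per-pixel anomaly scores and binary labels.
import numpy as np

def average_precision(scores: np.ndarray, labels: np.ndarray) -> float:
    """AP: mean precision at each anomalous pixel, ranked by score."""
    order = np.argsort(-scores)           # highest score first
    labels = labels[order]
    tp = np.cumsum(labels)                # true positives at each rank
    precision = tp / np.arange(1, len(labels) + 1)
    return float(precision[labels == 1].mean())

def fpr_at_95_tpr(scores: np.ndarray, labels: np.ndarray) -> float:
    """False positive rate at the rank where TPR first reaches 95%."""
    order = np.argsort(-scores)
    labels = labels[order]
    tp = np.cumsum(labels)
    fp = np.cumsum(1 - labels)
    tpr = tp / tp[-1]                     # tpr is non-decreasing
    idx = np.searchsorted(tpr, 0.95)
    return float(fp[idx] / max(fp[-1], 1))
```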