This repository is the official implementation of the following paper, which focuses on defect/anomaly image generation for downstream industrial tasks:
> **Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation**<br>
> Yuxuan Duan, Yan Hong, Li Niu, Liqing Zhang<br>
> The 37th AAAI Conference on Artificial Intelligence (AAAI 2023)<br>
> https://arxiv.org/abs/2303.02389
The pretrained models `hazelnut_good.pkl` and `hazelnut_hole.pkl` are now available here for a quick trial.

## Getting Started

Install the additional dependencies and clone the repository:

```shell
pip install scipy psutil lpips tensorboard
git clone https://github.com/Ldhlwh/DFMGAN.git
cd DFMGAN
```
## Data Preparation

Put the dataset under `./data`. (If you wish to try your own datasets, organize the defect-free images, defect images and the corresponding masks in a similar way.)

Preprocess the dataset images into zip files for easy StyleGAN loading (e.g. object category *hazelnut*, defect category *hole*):
```shell
# Defect-free dataset for Stage 1
python dataset_tool.py --source ./data/hazelnut/train/good \
    --dest ./data/hazelnut_good.zip \
    --width 256 --height 256

# Defect image & mask dataset for Stage 2
python dataset_tool.py --source ./data/hazelnut/test/hole \
    --source-mask ./data/hazelnut/ground_truth/hole \
    --dest ./data/hazelnut_hole_mask.zip --width 256 --height 256
```
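Before training, it may be worth sanity-checking the produced zips. The helper below is an illustrative sketch (not part of the repo) that assumes the StyleGAN2-ADA-style layout written by `dataset_tool.py`: image files plus an optional top-level `dataset.json`:

```python
import json
import zipfile

def inspect_dataset_zip(path):
    """Count images and read metadata from a dataset zip.

    Assumes the StyleGAN2-ADA layout: image files anywhere in the
    archive, plus an optional top-level dataset.json.
    """
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
        images = [n for n in names
                  if n.lower().endswith(('.png', '.jpg', '.jpeg'))]
        meta = json.loads(zf.read('dataset.json')) if 'dataset.json' in names else None
        return len(images), meta
```

For the example above, `inspect_dataset_zip('./data/hazelnut_good.zip')` should report as many images as `./data/hazelnut/train/good` contains.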
## Stage 1: Pretraining

You may skip this stage with the pretrained `hazelnut_good.pkl` provided here.

Pretrain a StyleGAN2 model on the defect-free images, using the default configuration `auto` (e.g. object category *hazelnut*):
```shell
python train.py --data ./data/hazelnut_good.zip \
    --outdir runs/hazelnut_good \
    --gpus 2 --kimg 3000

# If training for 3000 kimg is not enough, resume the pretraining with
python train.py --data ./data/hazelnut_good.zip \
    --outdir runs/hazelnut_good \
    --gpus 2 --kimg 3000 --resume runs/hazelnut_good/path/to/the/latest/model.pkl

# You may also try different values for the following settings
# --gpus: number of GPUs to be used
```
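When resuming, you need the path of the newest snapshot in the run directory. A small helper like the following (illustrative, not part of the repo; it assumes StyleGAN2-style snapshot names such as `network-snapshot-000400.pkl`) can pick it out:

```python
import os
import re

# StyleGAN2-style snapshot file names: network-snapshot-<kimg>.pkl
SNAP_RE = re.compile(r'network-snapshot-(\d+)\.pkl$')

def latest_snapshot(run_dir):
    """Return the snapshot path with the highest kimg count, or None."""
    best, best_kimg = None, -1
    for name in os.listdir(run_dir):
        m = SNAP_RE.match(name)
        if m and int(m.group(1)) > best_kimg:
            best_kimg = int(m.group(1))
            best = os.path.join(run_dir, name)
    return best
```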
Models and logs will be saved under `./runs/hazelnut_good/*`. Choose a good model for the transfer in Stage 2. You may optionally make a copy as `./pkls/hazelnut_good.pkl` for easy loading.

## Stage 2: Transfer

You may skip this stage with the pretrained `hazelnut_hole.pkl` provided here.

Transfer the pretrained model to the defect images with the defect-aware feature manipulation process (e.g. object category *hazelnut*, defect category *hole*):
```shell
python train.py --data ./data/hazelnut_hole_mask.zip \
    --outdir runs/hazelnut_hole --resume pkls/hazelnut_good.pkl \
    --gpus 2 --kimg 400 --snap 10 --transfer res_block_match_dis

# You may also try different values for the following settings
# --gpus: number of GPUs to be used
# --lambda-ms: weight for the mode seeking loss
# --dmatch-scale: number of base/max channels of D_match
```
During training, snapshots will be saved and evaluated every `--snap` ticks (i.e. $4 \times$ `--snap` kimg). You may alter the metric list with `--metrics`.
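To estimate how many evaluations a run will produce, the cadence above works out as in this back-of-the-envelope sketch (the 4 kimg/tick figure is the StyleGAN2-ADA default and may differ if you change the tick length):

```python
def snapshot_kimgs(total_kimg, snap, kimg_per_tick=4):
    """Kimg marks at which snapshots are saved and metrics evaluated,
    assuming a fixed kimg_per_tick (4 is the StyleGAN2-ADA default)."""
    step = snap * kimg_per_tick
    return list(range(step, total_kimg + 1, step))
```

With the Stage 2 settings above (`--kimg 400 --snap 10`), this yields ten evaluation points: 40, 80, ..., 400 kimg.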
Models and logs will be saved under `./runs/hazelnut_hole/*`. You may optionally make a copy of a good model as `./pkls/hazelnut_hole.pkl` for easy loading.

## Inference

Generate 100 random defect images (e.g. object category *hazelnut*, defect category *hole*):
```shell
python generate.py --network pkls/hazelnut_hole.pkl \
    --output gen_img/hazelnut_hole

# You may also try different values for the following settings
# --seeds: specify the random seeds to be used
# --num: number of generated images (only when --seeds is unspecified)
# --gen-good: (flag) also generate the corresponding defect-free images
# --gen-mask: (flag) also generate the corresponding masks
```
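To eyeball how well a generated mask lines up with its defect image, you can composite one over the other. The sketch below is deliberately dependency-free (nested lists of RGB tuples stand in for a real image library such as Pillow); the function name and defaults are illustrative:

```python
def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5, thresh=128):
    """Blend `color` over every pixel whose mask value >= thresh.

    image: rows of (r, g, b) tuples; mask: rows of 0-255 ints,
    same height and width as image.
    """
    out = []
    for img_row, mask_row in zip(image, mask):
        row = []
        for (r, g, b), m in zip(img_row, mask_row):
            if m >= thresh:
                r = int((1 - alpha) * r + alpha * color[0])
                g = int((1 - alpha) * g + alpha * color[1])
                b = int((1 - alpha) * b + alpha * color[2])
            row.append((r, g, b))
        out.append(row)
    return out
```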
Run

```shell
python generate.py --network runs/hazelnut_hole/path/to/a/model.pkl --cmp
```

to generate triplets of defect-free image, mask and defect image like Fig. 4 of the paper, saved under the same directory as the `model.pkl` and named `cmp<kimg>.png`.
## Citation

If you find DFMGAN helpful to your research, please cite our paper:
```bibtex
@inproceedings{Duan2023DFMGAN,
  title     = {Few-Shot Defect Image Generation via Defect-Aware Feature Manipulation},
  author    = {Yuxuan Duan and Yan Hong and Li Niu and Liqing Zhang},
  booktitle = {AAAI},
  year      = {2023}
}
```