The implementation is built on the PyTorch implementation of Faster R-CNN, jwyang/faster-rcnn.pytorch.
Clone the code and create a folder
git clone https://github.com/TKKim93/DivMatch.git
cd DivMatch && mkdir data
Build the Cython modules
cd lib
sh make.sh
You can download pretrained VGG16 and ResNet-101 models from jwyang's repository. The default location in this code is './data/pretrained_model/'. Organize the project directory as follows:
DivMatch
├── cfgs
├── data
│ ├── pretrained_model
├── datasets
│ ├── clipart
│ │ ├── Annotations
│ │ ├── ImageSets
│ │ ├── JPEGImages
│ ├── clipart_CP
│ ├── clipart_CPR
│ ├── clipart_R
│ ├── comic
│ ├── comic_CP
│ ├── comic_CPR
│ ├── comic_R
│ ├── Pascal
│   ├── watercolor
│   ├── watercolor_CP
│ ├── watercolor_CPR
│ ├── watercolor_R
├── lib
├── models (save location)
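Each dataset folder (clipart, comic, watercolor, and their shifted versions) is expected to follow the PASCAL VOC layout shown for clipart above. As a minimal sketch of the pretrained-model setup, assuming the file names used in jwyang's repository (vgg16_caffe.pth and resnet101_caffe.pth):

mkdir -p data/pretrained_model
# File names are an assumption carried over from jwyang/faster-rcnn.pytorch;
# adjust them if your downloaded weights are named differently.
mv /path/to/vgg16_caffe.pth data/pretrained_model/
mv /path/to/resnet101_caffe.pth data/pretrained_model/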
Here are the simplest ways to generate the shifted domains via CycleGAN. Some of them perform unnecessary computations, so you may want to revise the I2I (image-to-image translation) code for faster training. A sketch of the corresponding code follows the three variants below.
(CP shift) Change line 177 in models/cycle_gan_model.py to
self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_idt_A + self.loss_idt_B
(R shift) Change line 177 in models/cycle_gan_model.py to
self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B
(CPR shift) Use the original CycleGAN loss.
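For orientation, here is a condensed sketch of where these variants live in backward_G of CycleGAN's models/cycle_gan_model.py. Variable names follow the official CycleGAN implementation, but line numbers drift between versions, so locate the combined generator loss rather than relying on line 177:

# Inside CycleGANModel.backward_G() (official CycleGAN implementation).
# CPR shift: the original objective, all terms combined.
self.loss_G = (self.loss_G_A + self.loss_G_B
               + self.loss_cycle_A + self.loss_cycle_B
               + self.loss_idt_A + self.loss_idt_B)
# CP shift: keep only the identity terms, which encourages the
# translator to preserve color composition:
# self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_idt_A + self.loss_idt_B
# R shift: keep only the cycle-consistency (reconstruction) terms:
# self.loss_G = self.loss_G_A + self.loss_G_B + self.loss_cycle_A + self.loss_cycle_B
self.loss_G.backward()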
Here is an example of adapting from Pascal VOC to Clipart1k:
python train.py --dataset clipart --net vgg16 --cuda
python test.py --dataset clipart --net vgg16 --cuda
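The other target domains should work the same way by swapping the dataset and backbone flags; for example, assuming watercolor and resnet101 are registered the same way as clipart and vgg16:

python train.py --dataset watercolor --net resnet101 --cuda
python test.py --dataset watercolor --net resnet101 --cuda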