pb2377 / Pytorch-Domain-Adaptation-via-Style-Consistency

MIT License
4 stars, 1 fork

The README does not specify which versions you use or which scripts are for training and testing #2

Closed yanjinwei143 closed 2 years ago

yanjinwei143 commented 2 years ago

The README does not specify which library versions you use, or which script files are for training and testing. I am a novice, so I need your help. Thank you.

pb2377 commented 2 years ago

Apologies. This is an old project from 2019 that I implemented from a BMVC paper. It’s not the cleanest as I was only using it for some short benchmarking. I will get reacquainted with it and get back to you as soon as I can.

yanjinwei143 commented 2 years ago

> Apologies. This is an old project from 2019 that I implemented from a BMVC paper. [...] I will get reacquainted with it and get back to you as soon as I can.

I feel you are doing a great job, so I look forward to it. Thank you for your reply.

pb2377 commented 2 years ago

Hi @yanjinwei143,

I've added a requirements.txt listing all the library versions.
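If you haven't used a requirements file before, a typical install (assuming pip is available in your active Python environment) is:

  pip install -r requirements.txt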

To run the full training as described in the paper, use the command

python main.py --train --target_domain clipart

or change clipart for the target domain of your choice (i.e. clipart, watercolour, comic). There are also optional arguments for changing various hyperparameters.
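For example, to target the watercolour domain instead, and (assuming the script parses its flags with argparse, which provides --help by default) to list the optional arguments:

  python main.py --train --target_domain watercolour
  python main.py --help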

There are some pre-trained weights you need to download (see the sketch after this list):

  1. pretrained VGG16 weights (https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth) and SSD weights (https://s3.amazonaws.com/amdegroot-models/ssd300_mAP_77.43_v2.pth), taken from the SSD implementation amdegroot/ssd.pytorch -- place these in a directory named 'weights'.
  2. pretrained AdaIN model weights (decoder.pth and vgg_normalized.pth), which can be found at https://github.com/naoto0804/pytorch-AdaIN -- place these in a directory named 'style-models'.
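A minimal shell sketch of that setup, assuming wget is available and you run it from the repo root (the AdaIN weights are linked from the pytorch-AdaIN README rather than hosted at a stable URL, so fetch those manually):

  mkdir -p weights style-models
  wget -P weights https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
  wget -P weights https://s3.amazonaws.com/amdegroot-models/ssd300_mAP_77.43_v2.pth
  # decoder.pth and vgg_normalized.pth: download into style-models/ via the
  # links in https://github.com/naoto0804/pytorch-AdaIN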

You'll also need to set up the directory paths for the two datasets (VOC and Clipart1k-Watercolor2k-Comic2k). Before training starts, the code will run the style-transfer preprocessing over the VOC dataset a few times, using the chosen target domain as the style source.
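As a rough sketch: both VOC and the Clipart1k-Watercolor2k-Comic2k sets use the Pascal VOC layout (Annotations/, JPEGImages/, ImageSets/), so a plausible arrangement is shown below. The root paths here are an assumption for illustration, not what the repo hard-codes, so adjust them to match the paths set in the code:

  data/
    VOCdevkit/
      VOC2007/      # Annotations/, JPEGImages/, ImageSets/
      VOC2012/
    clipart/        # same VOC-style subdirectories
    watercolor/
    comic/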

I hope this helps!

I'll update the README with this as soon as I get some more time.

zyfone commented 2 years ago

> Hi @yanjinwei143, I've added a requirements.txt for all the library versions. [...] I'll update the README with this as soon as I get some more time.

Thank you very much! I'll try your work. Thank you again.