luoxuan-cs / PAMA

An arbitrary style transfer algorithm
MIT License

How to retrain an encoder? #5

Closed · dandingbudanding closed this 2 years ago

dandingbudanding commented 2 years ago

The performance with VGG19 is nice, but I want to use a lighter encoder. How can I retrain an encoder (for example, MobileNetV3)? Thank you~

luoxuan-cs commented 2 years ago

There are two methods:

  1. Use a MobileNet pre-trained on ImageNet and freeze its parameters.
  2. Train a MobileNet encoder-decoder on an image reconstruction task and then replace PAMA's original encoder-decoder with it.

Option 1 will perform better, because pre-training the encoder with supervision helps it parse semantics. However, adapting MobileNet to an arbitrary style transfer algorithm is not that easy: you will find that the results degrade greatly if you use a MobileNet. Please check [Rethinking and Improving the Robustness of Image Style Transfer](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Rethinking_and_Improving_the_Robustness_of_Image_Style_Transfer_CVPR_2021_paper.pdf).
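For option 1, a minimal PyTorch sketch of a frozen, ImageNet-pretrained MobileNetV3 standing in for the VGG19 encoder might look like the following. It assumes a recent torchvision (the `MobileNet_V3_Large_Weights` API); the `MobileNetEncoder` class name and the tap indices are illustrative and not part of PAMA, and you would still need to pick feature levels whose channel counts and spatial resolutions match what PAMA's decoder and attention modules expect.

```python
import torch.nn as nn
from torchvision.models import mobilenet_v3_large, MobileNet_V3_Large_Weights

class MobileNetEncoder(nn.Module):
    """Option 1: frozen ImageNet-pretrained MobileNetV3 as a drop-in encoder.

    The tap indices below are hypothetical; choose layers whose resolutions
    correspond to the VGG19 features PAMA's later modules expect.
    """
    def __init__(self, tap_indices=(3, 6, 12)):
        super().__init__()
        backbone = mobilenet_v3_large(weights=MobileNet_V3_Large_Weights.IMAGENET1K_V1)
        self.features = backbone.features
        self.tap_indices = set(tap_indices)
        # Method 1: keep the supervised pre-trained weights and freeze them.
        for p in self.features.parameters():
            p.requires_grad = False
        self.features.eval()

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.features):
            x = layer(x)
            if i in self.tap_indices:
                feats.append(x)
        # Multi-scale features, analogous to the VGG relu*_1 outputs.
        return feats

# Usage sketch:
# encoder = MobileNetEncoder()
# content_feats = encoder(content_image)  # list of feature maps
```

For option 2, you would instead leave these parameters trainable, pair the encoder with a lightweight decoder, and train both on image reconstruction (e.g. a pixel loss plus a perceptual loss) before plugging them into PAMA; as noted above, the supervised, frozen option tends to preserve semantics better.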
dandingbudanding commented 2 years ago

> There are two methods:
>
> 1. Use a MobileNet pre-trained on ImageNet and freeze its parameters.
> 2. Train a MobileNet encoder-decoder on an image reconstruction task and then replace PAMA's original encoder-decoder with it.
>
> Option 1 will perform better, because pre-training the encoder with supervision helps it parse semantics. However, adapting MobileNet to an arbitrary style transfer algorithm is not that easy: you will find that the results degrade greatly if you use a MobileNet. Please check [Rethinking and Improving the Robustness of Image Style Transfer](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_Rethinking_and_Improving_the_Robustness_of_Image_Style_Transfer_CVPR_2021_paper.pdf).

Thanks for your reply~