williamyang1991 / GP-UNIT

[CVPR 2022] Unsupervised Image-to-Image Translation with Generative Prior

content_encoder.pt #2

Closed mapengsen closed 1 year ago

mapengsen commented 2 years ago

If I want to train the model on a new dataset:

  1. Does content_encoder.pt need to be retrained?
  2. How do I train content_encoder.pt on the new dataset?
williamyang1991 commented 2 years ago

Our content_encoder.pt is trained on ImageNet291 and synImageNet291, which contain many domains as well as human faces. In general, you can expect it to generalize to your new dataset, so you can directly use our pretrained content_encoder.pt.

If you want to train your own encoder, you can follow https://github.com/williamyang1991/GP-UNIT/#train-content-encoder-of-prior-distillation

If your datasets are unpaired, you can merge them into the unpaired dataset (ImageNet291) as new domains, and specify the udataset_sizes of your new domains. https://github.com/williamyang1991/GP-UNIT/blob/2ee754e118ae838acbc9107f9ec15617c8f27271/prior_distillation.py#L174-L178
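As a rough sketch of what that means in practice, assuming the linked lines keep parallel lists of unpaired data roots and per-domain image counts (the exact variable names and paths here are illustrative assumptions, not the repository's literal code), adding a new domain could look like:

```python
# Hypothetical sketch: registering an extra unpaired domain alongside
# the existing ImageNet291 data. Variable names (udatasets,
# udataset_sizes) and paths are assumptions for illustration only.
udatasets = ['./data/ImageNet291/train/']   # existing unpaired data roots
udataset_sizes = [600]                      # images used per existing domain

# Append the new domain's folder and how many of its images to sample.
udatasets.append('./data/my_new_domain/train/')
udataset_sizes.append(500)

# The two lists must stay aligned: one size entry per domain root.
assert len(udatasets) == len(udataset_sizes)
```

The key point is simply that each new domain needs both a data root and a matching entry in udataset_sizes so the distillation script knows how many images to draw from it.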

williamyang1991 commented 2 years ago

In our experiments, we found that the pretrained encoder generalized to giraffes, landscapes, and art portraits.