eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

pre-trained model weights #191

Closed LonglongaaaGo closed 3 years ago

LonglongaaaGo commented 3 years ago

Hi! Thanks for your excellent work. Could you provide the pre-trained pSp model weights for StyleGAN inversion on the AFHQ Cat and Dog datasets?

yuval-alaluf commented 3 years ago

Unfortunately, the code we used to train those models is a very old version, so you'll need to re-train them to work with the current code base. Here are the exact parameters we used for training:

{
    "batch_size": 8,
    "board_interval": 50,
    "checkpoint_path": "",
    "dataset_type": "afhq_cat_encode",
    "encoder_type": "GradualStyleEncoder",
    "exp_dir": "",
    "id_lambda": 0,
    "image_interval": 100,
    "input_nc": 3,
    "l2_lambda": 1.0,
    "l2_lambda_crop": 0,
    "label_nc": 0,
    "learn_in_w": false,
    "learning_rate": 0.0001,
    "lpips_lambda": 0.8,
    "lpips_lambda_crop": 0,
    "max_steps": 500000,
    "optim_name": "ranger",
    "output_size": 512,
    "resize_factors": null,
    "save_interval": 10000,
    "start_from_latent_avg": true,
    "stylegan_weights": "",
    "test_batch_size": 8,
    "test_workers": 8,
    "train_decoder": false,
    "val_interval": 5000,
    "w_norm_lambda": 0,
    "workers": 8
}
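For reference, these settings correspond roughly to the flags of the repo's `scripts/train.py`. This is a sketch, not a verified command: the `afhq_cat_encode` dataset type is not in the current code base and would first need to be registered in `configs/data_configs.py` and `configs/paths_config.py`, and the `exp_dir` and `stylegan_weights` paths below are placeholders.

```shell
# Hypothetical training launch matching the config above.
# Assumes 'afhq_cat_encode' has been added to configs/data_configs.py
# and that a 512x512 AFHQ-Cat StyleGAN2 checkpoint is available.
python scripts/train.py \
  --dataset_type=afhq_cat_encode \
  --exp_dir=experiments/afhq_cat_inversion \
  --encoder_type=GradualStyleEncoder \
  --output_size=512 \
  --stylegan_weights=pretrained_models/stylegan2-afhqcat.pt \
  --start_from_latent_avg \
  --batch_size=8 \
  --test_batch_size=8 \
  --workers=8 \
  --test_workers=8 \
  --learning_rate=0.0001 \
  --optim_name=ranger \
  --lpips_lambda=0.8 \
  --l2_lambda=1.0 \
  --id_lambda=0 \
  --max_steps=500000 \
  --val_interval=5000 \
  --save_interval=10000
```

Note that `id_lambda` is 0 here: the identity loss uses a face-recognition network, so it does not apply to animal faces.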

If you have any questions about training the models, feel free to ask.