eladrich / pixel2style2pixel

Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework
https://eladrich.github.io/pixel2style2pixel/
MIT License

Recommended train, test split #227

Closed · theRoodest closed this 3 years ago

theRoodest commented 3 years ago

I am currently trying to convert cartoon avatars back into their original images. I am training on 706 paired images with an 85/15 percent train/test split. I am about to add more training data, but wanted to know whether I need to maintain any particular split. In my test logs, I only see 12 unique files (24 images in total) that appear to be used for validating the training. That said, I'm only about 170k steps into the 500k max training steps. My current params and the training log are attached below for reference.
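For concreteness, here is a minimal sketch of how an 85/15 split over paired images might be produced. The directory names, and the assumption that source/target pairs share filenames, are hypothetical:

```python
import os
import random
import shutil

# Hypothetical layout: paired source/target images that share filenames.
source_dir = "avatars"       # cartoon avatars (inputs)
target_dir = "originals"     # original photos (targets)
out_root = "dataset"
train_frac = 0.85

names = sorted(os.listdir(source_dir))
random.seed(0)               # fixed seed so the split is reproducible
random.shuffle(names)

n_train = int(len(names) * train_frac)
splits = {"train": names[:n_train], "test": names[n_train:]}

# Copy each pair into train_source/train_target/test_source/test_target.
for split, split_names in splits.items():
    for kind, src_root in [("source", source_dir), ("target", target_dir)]:
        dst_root = os.path.join(out_root, f"{split}_{kind}")
        os.makedirs(dst_root, exist_ok=True)
        for name in split_names:
            shutil.copy(os.path.join(src_root, name),
                        os.path.join(dst_root, name))

print({k: len(v) for k, v in splits.items()})
```

With 706 pairs this gives 600 train / 106 test pairs; the four resulting folders can then be pointed to from `dataset_paths` in `configs/paths_config.py`.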

```bash
#!/bin/bash
#SBATCH --job-name=psp_train
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=24
#SBATCH --mem=64G
#SBATCH --time=144:59:59
#SBATCH --output=test-out-%j.txt
#SBATCH --partition=beards

. /etc/profile
module load lang/miniconda3/4.5.12
module load lib/cuda/10.1.243
export CXX=g++
source activate psp_env

python scripts/train.py \
    --dataset_type=celebs_sketch_to_face \
    --exp_dir=log_$(date +%Y-%m-%d_%H-%M-%S) \
    --workers=16 \
    --batch_size=16 \
    --test_batch_size=8 \
    --test_workers=8 \
    --val_interval=5000 \
    --save_interval=10000 \
    --encoder_type=GradualStyleEncoder \
    --start_from_latent_avg \
    --lpips_lambda=1.6 \
    --l2_lambda=1 \
    --id_lambda=0 \
    --moco_lambda=0.5 \
    --w_norm_lambda=0.005 \
    --label_nc=0 \
    --input_nc=3 \
    --output_size=256
```

test-out-43706147.txt
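As a quick sanity check on the 12-unique-files observation, it can help to count how many image pairs the test dataloader actually sees. A minimal sketch, assuming the test roots are the ones `configs/data_configs.py` resolves for your `dataset_type` (the paths below are placeholders):

```python
import os

# Placeholder paths -- substitute the test_source_root / test_target_root
# entries that configs/data_configs.py resolves for your dataset_type.
test_source_root = "/data/avatars/test_source"
test_target_root = "/data/avatars/test_target"

IMG_EXTS = {".jpg", ".jpeg", ".png"}

def count_images(root):
    # Count files with image extensions directly under `root`.
    return sum(1 for f in os.listdir(root)
               if os.path.splitext(f)[1].lower() in IMG_EXTS)

n_source = count_images(test_source_root)
n_target = count_images(test_target_root)
print(f"test pairs: {min(n_source, n_target)} "
      f"(source={n_source}, target={n_target})")
```

If this reports roughly 106 pairs (15% of 706) while far fewer unique files show up in the logs, the discrepancy is likely just the logging: as far as I can tell the coach saves only a small visualization sample per validation batch, while the full test set is still used to compute the validation loss.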

yuval-alaluf commented 3 years ago

A couple of things:

theRoodest commented 3 years ago

Alright, thank you. My school's resources are stretched, so I'll have to wait for my training to finish before running inference.py. Thanks for the response!