theRoodest closed this issue 3 years ago.
A couple of things: [...] run inference.py to see how the model does on all your images.

Alright, thank you. My school's resources are stretched, so I'll have to wait for my training to finish before running inference.py. Thanks for the response!
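For reference, an inference call along these lines is what's being discussed. The flag names follow my reading of the pSp README, and every path and the checkpoint name below are placeholders, so this is a sketch rather than the exact command:

python scripts/inference.py \
    --exp_dir=/path/to/experiment \
    --checkpoint_path=/path/to/experiment/checkpoints/best_model.pt \
    --data_path=/path/to/test_data \
    --test_batch_size=8 \
    --test_workers=8 \
    --couple_outputs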
I am currently trying to convert cartoon avatars back into their original images. I am training on 706 paired images with an 85/15 percent train/test split. I am about to add more training data but wanted to know whether I need to maintain any particular split (a rough sketch of the kind of re-split I mean is at the end of this post). In my test logs, I see only 12 unique files (24 images in total) that appear to be used for validating the training. That said, I'm only about 170k steps into the 500k maximum training steps. Current params and log are attached for reference.
#!/bin/bash
#SBATCH --job-name=psp_train
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=24
#SBATCH --mem=64G
#SBATCH --time=144:59:59
#SBATCH --output=test-out-%j.txt
#SBATCH --partition=beards

. /etc/profile
module load lang/miniconda3/4.5.12
module load lib/cuda/10.1.243
export CXX=g++

source activate psp_env

python scripts/train.py \
    --dataset_type=celebs_sketch_to_face \
    --exp_dir=log$(date +%Y-%m-%d_%H-%M-%S) \
    --workers=16 \
    --batch_size=16 \
    --test_batch_size=8 \
    --test_workers=8 \
    --val_interval=5000 \
    --save_interval=10000 \
    --encoder_type=GradualStyleEncoder \
    --start_from_latent_avg \
    --lpips_lambda=1.6 \
    --l2_lambda=1 \
    --id_lambda=0 \
    --moco_lambda=0.5 \
    --w_norm_lambda=0.005 \
    --label_nc=0 \
    --input_nc=3 \
    --output_size=256
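For completeness, a script like this gets submitted with sbatch; the filename below is just a placeholder for whatever the file above is saved as, and stdout ends up in test-out-<jobid>.txt per the --output directive:

sbatch psp_train.sbatch    # hypothetical filename for the job script above
squeue -u $USER            # confirm the job is queued or running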
test-out-43706147.txt
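On the split question above: with 706 pairs, 85/15 works out to roughly 600 training and 106 test pairs, so re-splitting after adding data is mostly a matter of keeping the two sets disjoint. Below is a minimal sketch of what I mean; the avatars/ and originals/ directory names and the assumption that paired images share a filename are mine, not from the repo.

import random
import shutil
from pathlib import Path

# Sketch only: assumes avatars/ and originals/ hold paired images with matching filenames.
random.seed(0)
avatar_dir, original_dir = Path("avatars"), Path("originals")
names = sorted(p.name for p in avatar_dir.iterdir())
random.shuffle(names)

split = int(0.85 * len(names))  # 85/15 train/test split
for subset, subset_names in (("train", names[:split]), ("test", names[split:])):
    for name in subset_names:
        for src_dir in (avatar_dir, original_dir):
            dst_dir = Path(subset) / src_dir.name   # e.g. train/avatars, test/originals
            dst_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy(src_dir / name, dst_dir / name)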