Closed doantientai closed 5 years ago
When you run generate_data.py for the first time, it will convert the TensorFlow model to a PyTorch version. After the conversion, the script tests whether the converted model gives the same outputs as the original model. Can you check the test error (line 98 of models/pggan_generator.py)?
A friendly reminder: please delete the already-converted model, pull the latest version, and re-run generate_data.py.
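For reference, the consistency check described above can be sketched as a simple distance computation between the two models' outputs on the same latent codes. This is an illustrative version only, not the actual code in models/pggan_generator.py, and the exact metric used there may differ:

```python
import numpy as np

def average_distance(outputs_a, outputs_b):
    """Mean per-sample distance between two batches of model outputs.

    A value near zero suggests a faithful conversion; a large value
    (such as the ~6e-01 reported in this thread) indicates a mismatch,
    e.g. a wrong `fused_scale` setting.
    """
    outputs_a = np.asarray(outputs_a, dtype=np.float64)
    outputs_b = np.asarray(outputs_b, dtype=np.float64)
    assert outputs_a.shape == outputs_b.shape
    # Flatten each sample, take its L2 distance, then average over the batch.
    diff = (outputs_a - outputs_b).reshape(outputs_a.shape[0], -1)
    return np.linalg.norm(diff, axis=1).mean()
```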
Hi, I'm sorry for the late response. The test error is below:
[2019-10-28 10:02:11,883][INFO] Average distance is 6.019953e-01.
Is that a lot?
I'm using the source code of InterFaceGAN I cloned a few minutes ago.
Again, thank you so much for your kind support!
Hmmm, that distance is indeed large... What is the resolution of your model? Did you use the official code for training?
I trained my model using this GitHub repo: https://github.com/tkarras/progressive_growing_of_gans, which I believe is the official one.
I chose the resolution 128x128 as a quick start. Below is the config.txt of my training:
D = {'func': 'networks.D_paper'}
D_loss = {'func': 'loss.D_wgangp_acgan'}
D_opt = {'beta1': 0.0, 'beta2': 0.99, 'epsilon': 1e-08}
EasyDict = <class 'config.EasyDict'>
G = {'func': 'networks.G_paper'}
G_loss = {'func': 'loss.G_wgan_acgan'}
G_opt = {'beta1': 0.0, 'beta2': 0.99, 'epsilon': 1e-08}
data_dir = data/trainingdata
dataset = {'tfrecord_dir': 'data/trainingdata_tfr'}
desc = pgan-128-preset-v2-1gpu-fp32
env = {'TF_CPP_MIN_LOG_LEVEL': '1'}
grid = {'size': '1080p', 'layout': 'random'}
join = <function join at 0x7fb0934b3488>
num_gpus = 1
project_dir =
random_seed = 1000
result_dir = results
sched = {'minibatch_base': 4, 'minibatch_dict': {4: 128, 8: 128, 16: 128, 32: 64, 64: 32, 128: 16, 256: 8, 512: 4}, 'G_lrate_dict': {1024: 0.0015}, 'D_lrate_dict': {1024: 0.0015}, 'max_minibatch_per_gpu': {128: 32, 256: 16, 512: 8, 1024: 4}}
tf_config = {'graph_options.place_pruned_graph': True}
train = {'func': 'train.train_progressive_gan', 'mirror_augment': True, 'total_kimg': 20000}
And I added this entry to MODEL_POOL in InterFaceGAN/models/model_settings.py:
'pggan_128': {
'tf_model_path': MODEL_DIR + '/network-snapshot-006008.pkl',
'model_path': MODEL_DIR + '/network-snapshot-006008.pth',
'gan_type': 'pggan',
'dataset_name': 'data_128',
'latent_space_dim': 512,
'resolution': 128,
'min_val': -1.0,
'max_val': 1.0,
'output_channels': 3,
'channel_order': 'RGB',
'fused_scale': False,
},
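As a side note on the min_val/max_val fields in the entry above: they describe the range of the raw generator output, which has to be mapped to uint8 pixel values before saving images. A minimal sketch of that mapping (illustrative, not the actual InterFaceGAN post-processing code):

```python
import numpy as np

def postprocess(images, min_val=-1.0, max_val=1.0):
    """Map raw generator output in [min_val, max_val] to uint8 pixels.

    Mirrors the role of the `min_val`/`max_val` settings above: values
    are rescaled to [0, 1], then to [0, 255], rounded, and clipped.
    """
    images = (np.asarray(images, dtype=np.float64) - min_val) / (max_val - min_val)
    return np.clip(images * 255.0 + 0.5, 0, 255).astype(np.uint8)
```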
I think the officially released 1024x1024 model sets fused_scale to False; however, the official training code sets fused_scale to True by default. Please try deleting your converted model, setting fused_scale to True in models/model_settings.py, and re-running generate_data.py.
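For readers unfamiliar with the setting: in PGGAN, fused_scale controls how 2x upsampling is performed in the generator's upscaling blocks, and the two variants have differently shaped weights, so loading a checkpoint with the wrong setting silently corrupts the layers. A minimal PyTorch sketch of the distinction, assuming the standard PGGAN formulation (this is not the actual InterFaceGAN implementation):

```python
import torch
import torch.nn as nn

class UpConvBlock(nn.Module):
    """Illustrative 2x-upsampling block.

    fused_scale=False: nearest-neighbor upsample, then a 3x3 conv.
    fused_scale=True:  a single 4x4 transposed conv with stride 2,
                       which fuses the upsampling into the convolution.
    """
    def __init__(self, in_ch, out_ch, fused_scale):
        super().__init__()
        if fused_scale:
            self.layer = nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1)
        else:
            self.layer = nn.Sequential(
                nn.Upsample(scale_factor=2, mode='nearest'),
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
            )

    def forward(self, x):
        return self.layer(x)
```

Both variants map an NxCx4x4 input to an NxC'x8x8 output, but their weight tensors differ in shape, which is why the converter must know which variant the checkpoint was trained with.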
More details can be found in issue #10
Yeah, that is absolutely true. I did keep fused_scale as True during my training. It's generating reasonable images now. Thank you very much!
Hi guys,
Thank you for your great work. I have a question.
I found that generate_data.py works very well with my StyleGAN models, but when I trained a PGGAN on the same dataset of images and then ran generate_data.py to generate 10k images, all the images look like this:
I tried another checkpoint, but it gave similar results:
I guess some misconfiguration between my PGGAN and the InterFaceGAN inference code is the cause, but I haven't found it yet. Could you give me some advice, please?