Closed: zhengzibing2011 closed this issue 4 years ago
Hi @zhengzibing2011 ,
Thanks for the good question. The discriminator (d_container) is compiled into two models: "d_model" and "all_model". Following the paper, the discriminator is trained only through "d_model", so I set its trainable flag to False in "all_model".
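For reference, the shared-discriminator setup described above can be sketched as follows. This is a minimal, hypothetical stand-in for d_container and the completion network (layer sizes and names are illustrative, not taken from the repository); the key point is that Keras records the trainable flag when each model is compiled:

```python
from tensorflow.keras import layers, models

# Toy discriminator standing in for d_container (shapes are illustrative).
d_container = models.Sequential([layers.Dense(1, input_shape=(8,))])

# d_model trains the discriminator directly, so it is compiled
# while d_container.trainable is still True.
d_container.trainable = True
d_model = models.Sequential([d_container])
d_model.compile(optimizer="adam", loss="binary_crossentropy")

# all_model reuses the same discriminator, but frozen: because the flag
# is False at compile time, updates through all_model only touch the
# completion network's weights.
d_container.trainable = False
completion_net = models.Sequential([layers.Dense(8, input_shape=(8,))])
all_model = models.Sequential([completion_net, d_container])
all_model.compile(optimizer="adam", loss="binary_crossentropy")

# Only the completion network's kernel and bias are trainable here.
print(len(all_model.trainable_weights))
```
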
Thanks a lot for your timely response! The code has been a great reference for me. Thank you again. I still have a question about "d_container.trainable = False" in train.py. Is its purpose to keep all_model (i.e., the combination of the completion network and the discriminator network) from being trained while the discriminator is trained for tc < n < tc + td? If my guess is right, the conditional statements ("if n < tc: ..., else: ..., if n > tc + td: ...") already seem to achieve this goal. When I comment out "d_container.trainable = False", the training-loss log appears unchanged. The training logs are attached below: the top half shows the results from the original code, and the bottom half shows the results after commenting out "d_container.trainable = False". Looking forward to your reply again. training loss log.docx
It seems strange that commenting out "d_container.trainable = False" would leave the behavior unchanged. Rather than comparing losses, I think it would be better to check whether the discriminator's weights actually change before and after training with all_model.
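One way to run that weight check, sketched with a toy discriminator and generator (names, layer sizes, and data are made up for illustration, not taken from train.py):

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
x = rng.random((16, 8)).astype("float32")
real = np.ones((16, 1), dtype="float32")

# Toy discriminator standing in for d_container.
disc = models.Sequential([layers.Dense(4, activation="relu", input_shape=(8,)),
                          layers.Dense(1, activation="sigmoid")])

# d_model: trains the discriminator directly.
d_model = models.Sequential([disc])
d_model.compile(optimizer="adam", loss="binary_crossentropy")

w0 = [w.copy() for w in disc.get_weights()]
d_model.train_on_batch(x, real)
w1 = disc.get_weights()
d_model_updates_disc = not all(np.array_equal(a, b) for a, b in zip(w0, w1))

# all_model: generator + discriminator, with the discriminator frozen
# before compiling, mirroring the pattern discussed in this thread.
disc.trainable = False
gen = models.Sequential([layers.Dense(8, input_shape=(8,))])
all_model = models.Sequential([gen, disc])
all_model.compile(optimizer="adam", loss="binary_crossentropy")

w2 = [w.copy() for w in disc.get_weights()]
all_model.train_on_batch(x, real)
w3 = disc.get_weights()
all_model_updates_disc = not all(np.array_equal(a, b) for a, b in zip(w2, w3))

print(d_model_updates_disc, all_model_updates_disc)  # expect: True False
```

If the loss log looks the same with the line commented out, this check is a more direct signal: with the freeze removed, the second flag should come out True as well, because the discriminator's weights would drift during the generator phase even when the printed losses look similar.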
Thanks a lot for your contribution. While reading the code in train.py, I found that d_container.trainable is set to False. Does this mean the discriminator does not need to be trained? However, it seems that d_model is trained through "d_loss_real = d_model.train_on_batch([inputs, points], valid)". Can you tell me the reason? Thank you very much! I am looking forward to your response.