fpshuang opened this issue 4 years ago
Quote: "GONet is trained with back-propagation in 3 steps. First, we train a DCGAN with automatically annotated positive data. Through this GAN, we estimate the generator (Gen) and discriminator (Dis) of GONet. Second, we use the auxiliary autoencoder network shown at the top part of Fig. 2 to train the InvGen using positive examples. Third, we train the final fully connected layer of GONet with a small set of positive and negative examples. We use early stopping in this last step to prevent over-fitting."
Since the Gen network's input is the latent feature z, which is the output of InvGen, how do you get suitable input for Gen if you train Gen and Dis first? Or do you simply freeze the Gen network and train InvGen afterwards, once the DCGAN training is complete?
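To make the question concrete, here is a rough TF2-style sketch of how I would expect step 1 to look if z is simply sampled from a prior during DCGAN training. All names, shapes, and hyper-parameters below are my own guesses for illustration, not taken from the paper or from your code:

```python
import tensorflow as tf

latent_dim = 100  # hypothetical size of z; I don't know the value used in the paper

# Made-up DCGAN-style Gen and Dis, just to illustrate the training order I mean
gen = tf.keras.Sequential([
    tf.keras.layers.Dense(8 * 8 * 128, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Reshape((8, 8, 128)),
    tf.keras.layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
])
dis = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu",
                           input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),  # real/fake logit
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
d_opt = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)

@tf.function
def dcgan_step(real_images):
    # In this reading of step 1, z comes from a prior, so InvGen is not needed yet
    z = tf.random.normal([tf.shape(real_images)[0], latent_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake = gen(z, training=True)
        real_logits = dis(real_images, training=True)
        fake_logits = dis(fake, training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, dis.trainable_variables),
                              dis.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, gen.trainable_variables),
                              gen.trainable_variables))
    return g_loss, d_loss
```

Is this roughly what happens in step 1, or does InvGen already play a role there?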
Looking forward to your reply.
I re-read the paper and found that you do freeze Gen while the auxiliary autoencoder is being trained, and it seems you use InvGen to recover the latent z for a given image.
Isn't this approach somewhat ambiguous?
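For concreteness, this is how I currently read step 2: Gen is kept fixed and only InvGen is updated, so that Gen(InvGen(x)) reconstructs x. Again this is only my guess in TF2 style, reusing the `gen` from the sketch in my first comment; the InvGen architecture is made up, and the paper's reconstruction loss may contain additional terms (e.g. a Dis feature residual) that I omit here:

```python
import tensorflow as tf

latent_dim = 100  # must match the z used when training the DCGAN

# InvGen: image -> z (architecture is made up for illustration)
invgen = tf.keras.Sequential([
    tf.keras.layers.Conv2D(64, 4, strides=2, padding="same", activation="relu",
                           input_shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(128, 4, strides=2, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(latent_dim),
])

gen.trainable = False            # `gen` is the generator trained in step 1 (see previous sketch)
inv_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def invgen_step(real_images):
    with tf.GradientTape() as tape:
        z_hat = invgen(real_images, training=True)
        recon = gen(z_hat, training=False)   # Gen acts as a fixed decoder
        # Pixel reconstruction only; any extra loss terms from the paper are omitted here
        loss = tf.reduce_mean(tf.abs(recon - real_images))
    grads = tape.gradient(loss, invgen.trainable_variables)
    inv_opt.apply_gradients(zip(grads, invgen.trainable_variables))
    return loss
```

If I understand it this way, freezing Gen is what makes the step well-defined: InvGen only has to learn the inverse of a fixed mapping. Please correct me if that reading is wrong.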
https://github.com/FreakieHuang/GOnet_tensorflow
This is my reimplementation of the training code in TF2. You are welcome to use it and leave comments.
Many thanks for your work.
I have read your paper, but I am quite confused by the training process.
Could you release the training code, or share more details about the training process?
Best regards.