zhaotianzi opened this issue 2 years ago
Two points:
- As training progresses, the encoder and decoder losses should decrease, because the generator is learning a good mapping.
- The critic loss should increase in magnitude. A critic loss that is large in magnitude implies that the critic (discriminator) can distinguish fake samples from real samples very well, so as the critic learns, its loss grows in magnitude.
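To make the second point concrete, here is a minimal sketch (my own illustration, not code from this repository) of a Wasserstein-style critic loss, where the critic is trained to score real samples higher than fake ones. The scores below are hypothetical numbers chosen for the example:

```python
import numpy as np

def critic_loss(real_scores, fake_scores):
    # Wasserstein-style critic objective: minimize
    # mean(critic(fake)) - mean(critic(real)).
    # A well-trained critic makes this strongly negative,
    # i.e. the loss grows in magnitude as the critic improves.
    return float(np.mean(fake_scores) - np.mean(real_scores))

# Early in training the critic barely separates real from fake samples.
early = critic_loss(real_scores=np.array([0.2, 0.1]),
                    fake_scores=np.array([0.0, 0.1]))

# Later, it separates them clearly, so the loss is larger in magnitude.
late = critic_loss(real_scores=np.array([5.0, 6.0]),
                   fake_scores=np.array([-5.0, -6.0]))

print(early, late)
```

This is why a critic loss that shrinks toward zero can be a warning sign: it suggests the critic has stopped telling real and fake samples apart.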
Hi Arun, on my own datasets the experimental results are exactly the opposite: after training, the encoder and decoder losses increased and the critic loss decreased. The model also performed badly when I used it for anomaly detection on anomalous datasets.
Encoder and decoder losses should decrease as they learn a better mapping; I observed this in the training log file. I am not sure what is wrong in your case; perhaps you are reading the encoder loss as the critic loss and vice versa? GAN training is highly unstable and requires a good amount of computing power. Try retraining.
I have used the model on my own dataset, but after a long time of training the loss is still very high. Can you tell me how to reduce it?
```
DEBUG:root:critic x loss -30.026 critic z loss 0.416 encoder loss 1265.846 decoder loss 1235.256
```