Closed tanqi724 closed 5 years ago
The adversarial loss has less effect under missing completely at random (MCAR). When the missingness comes from a missing at random (MAR) or missing not at random (MNAR) setting, the effect of the adversarial loss increases. You can try different datasets with different missingness settings.
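For anyone who wants to experiment with this, here is a minimal NumPy sketch of how the three missingness mechanisms can be simulated on a toy complete dataset. The logistic forms, the driver column, and the rates are illustrative choices for a quick experiment, not taken from the GAIN code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # toy complete data
p_miss = 0.2                    # illustrative target missing rate

# MCAR: each entry is missing independently of the data.
mask_mcar = rng.uniform(size=X.shape) > p_miss  # True = observed

# MAR: missingness depends only on an always-observed column (column 0 here).
drive = X[:, [0]]
p = 1 / (1 + np.exp(-drive))                    # larger X[:,0] -> more missing
mask_mar = rng.uniform(size=X.shape) > p * 2 * p_miss
mask_mar[:, 0] = True                           # keep the driver fully observed

# MNAR: missingness depends on the (possibly unobserved) value itself.
p_self = 1 / (1 + np.exp(-X))
mask_mnar = rng.uniform(size=X.shape) > p_self * 2 * p_miss
```

Running GAIN on data masked by each of these three mechanisms is one way to reproduce the comparison discussed above.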
Well noticed. Thank you very much for your reply!
Dear all, please bear with me for adding a question after this issue was closed, but I think my question is relevant, so I post it here. I was wondering whether Proposition 2 requires that M and X are independent? How could the theoretical analysis be adapted to the missing not at random (MNAR) mechanism? In Table 3 of the supplementary document, we can indeed see that GAIN is much better than the auto-encoder under MNAR. Does the implementation of the auto-encoder use M as an additional input, or is it a simple implementation with only X as input?
We only prove the theoretical results in the missing completely at random setting.
We use M as an additional input for the MNAR and MAR settings because we would like to capture the information in the mask vector.
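To make "M as an additional input" concrete, here is a minimal NumPy sketch. Filling the missing entries with noise and conditioning on the mask mirrors the setup described in the GAIN paper; the exact network architecture is omitted, and the shapes below are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
X = rng.normal(size=(n, d))
M = (rng.uniform(size=(n, d)) > 0.3).astype(float)  # 1 = observed, 0 = missing

# Replace missing entries with random noise before imputation.
Z = rng.uniform(size=(n, d))
X_tilde = M * X + (1 - M) * Z

# Imputer input: the noise-filled data concatenated with the mask, so the
# network can condition on which entries are actually observed.
G_input = np.concatenate([X_tilde, M], axis=1)  # shape (n, 2*d)
```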
Thanks for your prompt reply. From the results, handling MNAR and MAR is a remarkable feature of GAIN!
Thank you very much for your work. But after going through your paper, I cannot get a sense of how the adversarial loss contributes to missing data imputation. I have tried your provided example and trained the generator with only the MSE loss. It seems that the test MSE with only the MSE loss (Line 162: G_loss = MSE_train_loss) and that with MSE loss + adversarial loss (Line 162: G_loss = G_loss1 + alpha * MSE_train_loss) are quite similar. Could you kindly explain more about how the adversarial loss contributes to imputation, and maybe give some other examples? Thank you very much!
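The two generator-loss variants compared above can be sketched as follows. This is a hedged NumPy illustration of the structure of the loss (reconstruction error on observed entries, plus an adversarial term on imputed entries pushing the discriminator output D_prob toward 1); alpha=10 and the toy arrays are illustrative, not necessarily the repo defaults:

```python
import numpy as np

def generator_loss(M, X, G_sample, D_prob, alpha=10.0, use_adversarial=True):
    # Reconstruction MSE computed only on the observed entries (M == 1).
    mse_train_loss = np.sum((M * X - M * G_sample) ** 2) / np.sum(M)
    if not use_adversarial:
        return mse_train_loss                      # G_loss = MSE_train_loss
    # Adversarial term on the imputed entries (M == 0): the generator is
    # rewarded when the discriminator assigns them high "observed" probability.
    g_loss1 = -np.mean((1 - M) * np.log(D_prob + 1e-8))
    return g_loss1 + alpha * mse_train_loss        # G_loss = G_loss1 + alpha * MSE

# Toy comparison of the two variants.
M = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.array([[1.0, 2.0], [3.0, 4.0]])
G_sample = np.array([[1.1, 2.5], [2.5, 4.1]])
D_prob = np.ones_like(X)  # discriminator fully fooled -> adversarial term ~ 0

mse_only = generator_loss(M, X, G_sample, D_prob, use_adversarial=False)
full = generator_loss(M, X, G_sample, D_prob)
```

One intuition consistent with the replies above: under MCAR the observed-entry MSE already identifies the conditional distribution well, so the extra adversarial term changes little, while under MAR/MNAR the discriminator's feedback on the imputed entries carries information the MSE term alone does not.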