[Closed] yujias424 closed this issue 1 year ago
Hi, Thanks for pointing out this mistake. I uploaded the code with some typos in the first commit. In the new commit I have updated the code, and gumble_softmax is now applied in the ZINBAE model. Let me know if you have any questions. Best, Tian
Hi Tian,
I have another question regarding the X^{count} used in the MSE. As mentioned in the paper, the MSE term is the mean squared error between log(X^{count} + 1) and log(X′ + 1). I wonder whether this X^{count} is the raw count matrix itself, or the raw count matrix pre-processed with DCA's library-size + log normalization. From the code, I suppose it should be the original raw count matrix?
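For concreteness, the loss being discussed, the MSE between log(X^{count} + 1) and log(X′ + 1), can be sketched in NumPy. The function name and arrays below are illustrative stand-ins, not the repository's code:

```python
import numpy as np

def log_mse(x_count, x_recon):
    # MSE between log(X^{count} + 1) and log(X' + 1);
    # np.log1p(x) computes log(x + 1) accurately for small x.
    return np.mean((np.log1p(x_count) - np.log1p(x_recon)) ** 2)

# Hypothetical raw count matrix and reconstruction (cells x genes).
x_count = np.array([[0.0, 3.0, 10.0],
                    [1.0, 0.0, 5.0]])
x_recon = np.array([[0.2, 2.5, 9.0],
                    [0.8, 0.1, 6.0]])

loss = log_mse(x_count, x_recon)
assert loss > 0.0
assert log_mse(x_count, x_count) == 0.0  # identical inputs give zero loss
```

Note that the log transform only enters the loss here; whichever matrix plays the role of X^{count} is passed in as raw counts.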
Best, Yujia
Hi Yujia, According to lines 129 - 141, you can see that "ZI_output" is autoencoder.output * Gumbel(prob_logit), and in lines 81 - 84, autoencoder.output = mean * size_factor. So, as you can see, "ZI_output" already accounts for the size factor. I then use the MSE between log(X^{raw_count}+1) and log(X'+1) as the ZI_output loss, since X' = mean * size_factor * Gumbel(prob_logit). Tian
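A minimal NumPy sketch of the composition described above, X' = mean * size_factor * Gumbel(prob_logit). The function `gumbel_softmax_prob` and the example arrays are my own stand-ins for the repository's Keras layers, not its actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_prob(prob_logit, tau=1.0):
    # Binary Gumbel-softmax: a relaxed (differentiable) sample of the
    # "not dropped out" indicator, one value in (0, 1) per entry.
    logits = np.stack([prob_logit, -prob_logit], axis=-1)
    g = rng.gumbel(size=logits.shape)
    y = np.exp((logits + g) / tau)
    return (y / y.sum(axis=-1, keepdims=True))[..., 0]

mean = np.array([[2.0, 5.0],
                 [1.0, 3.0]])              # hypothetical ZINB mean (cells x genes)
size_factor = np.array([[1.5], [0.8]])     # hypothetical per-cell size factor
prob_logit = np.zeros_like(mean)           # dropout logits (50/50 here)

# autoencoder.output = mean * size_factor; ZI_output multiplies in the mask.
zi_output = mean * size_factor * gumbel_softmax_prob(prob_logit)

assert zi_output.shape == mean.shape
assert np.all(zi_output >= 0.0)
assert np.all(zi_output <= mean * size_factor)  # mask is in (0, 1)
```

Because the size factor is already folded into autoencoder.output, the log-MSE reconstruction loss can be taken directly against the raw counts.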
Dear Tian,
Thank you for your detailed explanation!
Best, Yujia
Hi,
Thanks for presenting this nice approach. I wonder where the gumble_softmax() function is used in ZINBAE.py. I can only find the function's definition at line 97, but I cannot find where its output is used to perform the element-wise multiplication with the reconstructed mean estimated by the ZINB model. Am I missing an important part of the code?
Looking forward to your response!
Thanks.