shrimai / Style-Transfer-Through-Back-Translation


Cross-aligned auto-encoder (CAE) results #17

Closed justkandy closed 4 years ago

justkandy commented 5 years ago

Hello! In your paper you compare the accuracies of sentences generated by your method and by CAE, and it seems that CAE outperforms your model on the gender transformation task. I tried to train CAE myself, but my results were different from those in your paper. Did you use the default parameters to train CAE? If not, what did you change? Sorry for bothering you, and any response would be greatly appreciated.

shrimai commented 5 years ago

Yes, I tried to keep the parameters of my model and the CAE model similar for comparison: `--dim_emb` was set to 300, `--dim_z` to 500, and `--max_seq_length` to 50. I did not change the classifier settings.
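For reference, a training invocation with those settings might look like the sketch below. Only the three hyperparameter flags are confirmed in this thread; the script name, the remaining flags, and all data/model paths are placeholders that depend on the particular CAE implementation used:

```shell
# Hypothetical CAE training command -- a sketch, not the exact command used.
# Confirmed in this thread: --dim_emb 300, --dim_z 500, --max_seq_length 50.
# Everything else (script name, paths, other flags) is a placeholder.
python style_transfer.py \
    --train data/gender/train \
    --dev data/gender/dev \
    --vocab tmp/gender.vocab \
    --model tmp/gender_model \
    --dim_emb 300 \
    --dim_z 500 \
    --max_seq_length 50
```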

justkandy commented 5 years ago

Thank you for the response!
At the moment, we are writing a paper comparing existing style-transfer methods. It seems you were able to train CAE quite well on gender transformation. Would it be possible for you to send the pre-trained CAE model, so we can make sure we are replicating it properly? Sorry for bothering you, and thanks for considering my request!

shrimai commented 5 years ago

Sure, could I have your email address?

justkandy commented 5 years ago

andrei.kazlouski@aalto.fi