Open RenqinCai opened 4 years ago
Thanks for sharing the code and great work!
The paper picks several example sentences and they look amazing. But after running the code, I could hardly see such good examples. Could you tell me how to obtain the examples listed in your paper? Thanks.
Couldn't agree more
Sorry for replying so late!
Yes, the two examples in Table#5 are indeed two good examples we chose, aiming to show that different modification weights W can control the obviousness of the target attribute in the generated sentences.
We also show 16 more examples and failure cases in the supplementary material (“0.2 Transfer Degree Control Cases” and “0.5 Failure Cases” in https://papers.nips.cc/paper/2019/file/8804f94e16ba5b680e239a554a08f7d2-Supplemental.zip).
a. The examples presented in the paper (i.e., Table#5 in Section#4.5) were generated under a fixed modification weight W, aiming to show that different modification weights W can control the obviousness of the target attribute in the generated sentence. In other words, each row in Table#5 corresponds to one specific, fixed value of W ∈ {1, 2, …, 6}. In Section#4.5 "Transfer Degree Control" we did show two good examples (as stated in Paragraph#2 on Page#8: "We also show two examples in the Yelp test dataset in Table 5 (more cases are shown in Supplementary Material)."), but we have shown 16 more examples and failure cases in the supplementary material ("0.2 Transfer Degree Control Cases" and "0.5 Failure Cases" in https://papers.nips.cc/paper/2019/file/8804f94e16ba5b680e239a554a08f7d2-Supplemental.zip).

b. For the outputs presented in https://github.com/Nrgeup/controllable-text-attribute-transfer/blob/master/outputs/ (i.e., the main results in Section#4.3), the modification weight W is dynamic. We detailed this in Section#3.3 (what we call the "Dynamic-weight-initialization method" in our paper), in which we dynamically try each weight in W from small to large until we get our target latent representation z. In other words, we have a weight set W = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0} and use the Dynamic-weight-initialization method (as shown in Section#3.3) to allocate the initial modification weight w_i in each trial. The reason we do this is to get better BLEU scores against the reference sentences, because the attribute of some human-written references is not obvious (we mentioned this in Paragraph#2 on Page#8: "However, the BLEU score first increases and then decreases; we argue that this is because the attribute of some human-written references is not obvious.").
Therefore, these are the results from two different settings, which we describe in detail in Section#4.3 “Sentiment and Style Transfer Results” and Section#4.5 “Transfer Degree Control”, respectively.
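To make the reply above concrete, here is a minimal, hypothetical sketch of the "try each weight from small to large until the target attribute is reached" loop described for Section#3.3. The names `grad_fn` (a gradient step on the latent code from the attribute classifier) and `classifier` (returning a confidence for the target attribute) are stand-ins, not the repo's actual API, and the stopping threshold is an assumed parameter:

```python
def transfer_with_dynamic_weight(z, grad_fn, classifier, target_label,
                                 weights=(1.0, 2.0, 3.0, 4.0, 5.0, 6.0),
                                 threshold=0.9):
    """Try each modification weight w from small to large: take a
    gradient-based step of size w on the latent code z, and return the
    first candidate whose attribute-classifier confidence for the target
    label exceeds `threshold` (falling back to the last try)."""
    candidate = list(z)
    for w in weights:
        # Modify the original latent code with step size w
        # (fast-gradient-style edit toward the target attribute).
        grads = grad_fn(z, target_label)
        candidate = [zi - w * gi for zi, gi in zip(z, grads)]
        if classifier(candidate, target_label) >= threshold:
            break  # target attribute is obvious enough; stop early
    return candidate
```

This also hints at an answer to the question of choosing among multiple weighted outputs: under this scheme the loop itself picks the first (smallest-weight) candidate that the attribute classifier accepts, rather than keeping all of them.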
Thank you for your excellent work. But the question is still there: how do you choose the final result among the 30 different results generated under the various weights?