playma opened this issue 7 years ago
Excuse me, I have some questions. If I want to train SeqGAN on real data such as natural text (poems, articles), can I still pre-train the generator? I cannot know the true distribution of the text, and I don't have the target_lstm parameters.

@playma I think target_lstm is just a model used to generate a kind of "toy corpus" for this experiment. As the paper says, this oracle model generates token sequences that serve as the real data. So if you want to pre-train on actual real data such as natural text, you don't really need the target_lstm model; you can pre-train the generator directly on your corpus.
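To make the idea concrete, here is a minimal sketch of pre-training by maximum likelihood directly on real text, with no oracle involved. This is plain Python with a count-based bigram model standing in for the LSTM generator; the toy corpus and all names (`next_token_probs`, `nll`) are hypothetical illustrations, not part of the SeqGAN codebase.

```python
from collections import Counter, defaultdict
import math

# A toy "real" corpus standing in for natural text (poems, articles).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
]

# MLE "pre-training": estimate next-token transition counts from the
# real data itself, instead of sampling training data from target_lstm.
counts = defaultdict(Counter)
for sent in corpus:
    tokens = ["<s>"] + sent + ["</s>"]
    for prev, cur in zip(tokens, tokens[1:]):
        counts[prev][cur] += 1

def next_token_probs(prev):
    """MLE conditional distribution p(next | prev)."""
    total = sum(counts[prev].values())
    return {tok: c / total for tok, c in counts[prev].items()}

def nll(sent):
    """Negative log-likelihood of a sentence under the MLE model
    (the quantity an LSTM generator would minimize during pre-training)."""
    tokens = ["<s>"] + sent + ["</s>"]
    return -sum(math.log(next_token_probs(p).get(c, 1e-9))
                for p, c in zip(tokens, tokens[1:]))
```

With a real LSTM generator the principle is the same: minimize the cross-entropy of the generator's next-token predictions against your corpus, then move on to adversarial training.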