Open dishavarshney082 opened 4 years ago
> `sample()` samples `num_samples` sentences from the generator G. CUDA memory would overflow if we sampled all `num_samples` sentences from G at once, so the `num_samples` sentences are divided into `num_batch` batches of sentences.
Does sampling from the generator mean the same thing as running inference with the pretrained MLE model? In RelGAN, are the models used for pretraining and for generating samples different?
I could not understand how you are sampling data from the generator. Specifically, how are you creating batches from `num_samples`?
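To make my question concrete, here is my current understanding of the batching described above as a minimal sketch. This is not the actual RelGAN code: `sample`, `generator_fn`, and `batch_size` are placeholder names I made up, and the toy generator stands in for the neural model `G`.

```python
# Sketch of batched sampling: num_samples is split into num_batch chunks
# so only batch_size sentences are generated per call, bounding GPU memory.
# (Hypothetical names; generator_fn stands in for G.sample() in the repo.)
import math

def sample(generator_fn, num_samples, batch_size):
    """Draw num_samples sentences by calling generator_fn in small batches.

    generator_fn(n) is assumed to return a list of n generated sentences.
    """
    num_batch = math.ceil(num_samples / batch_size)
    samples = []
    for _ in range(num_batch):
        # Last batch may be smaller if batch_size does not divide num_samples.
        n = min(batch_size, num_samples - len(samples))
        samples.extend(generator_fn(n))
    return samples

# Toy generator standing in for the trained model.
fake_G = lambda n: ["sentence"] * n
out = sample(fake_G, num_samples=10, batch_size=4)
print(len(out))  # 10
```

Is this roughly what the code does, i.e. `num_batch` forward passes of at most `batch_size` sentences each, concatenated afterwards?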