yijunzhouzoey opened this issue 3 years ago
Thanks for your brilliant work!

Currently I am running PPLM with a discriminator on a GPU, but it still takes around 5 minutes to generate 512 tokens. I wonder if there is any way to speed up inference.

Many thanks and best regards,
Yijun

Hi @EstelleZhou,

Yes, you could try to speed this up by decreasing the number of iterations per token. However, this may lead to worse results, in terms of positivity/negativity, compared to those reported in the paper.

Andrea
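To make the trade-off concrete: PPLM perturbs the model's past activations with several gradient steps before emitting each token, so generation cost grows roughly linearly with the per-token iteration count. Here is a toy sketch of that cost model; `pplm_cost` is purely illustrative and not part of the repo's API (in the official `run_pplm.py` script the relevant knob is, if I remember correctly, the `--num_iterations` flag, so please check `python run_pplm.py --help`):

```python
def pplm_cost(num_tokens: int, num_iterations: int) -> int:
    """Count forward/backward passes for a PPLM-style decode.

    Each generated token needs `num_iterations` perturbation steps
    (each roughly one forward + one backward pass) plus one final
    forward pass to sample the token itself.
    """
    passes = 0
    for _ in range(num_tokens):
        passes += num_iterations  # gradient steps on the past key/values
        passes += 1               # final forward pass to sample the token
    return passes

# Going from 3 iterations to 1 roughly halves the dominant cost:
print(pplm_cost(512, 3))  # 2048 passes
print(pplm_cost(512, 1))  # 1024 passes
```

The exact wall-clock saving depends on model size and hardware, but fewer iterations should translate almost proportionally into faster generation, at the possible cost of weaker attribute control, as noted above.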