Open fostiropoulos opened 4 years ago
Actually, the network used in the paper is much larger than the default model in this implementation.
Yes, I would have initially thought so too. The only way I can see training such a large model is on a TPU. Do you have any insight into how it could have been done?
Maybe they used TPUs or a large number of GPUs. Either way, replicating the model training from the paper will be very hard (practically impossible, really) with only a few GPUs.
I am using 4x Nvidia V100s and cannot get a batch size larger than 32 with this paper's hyperparameters when training on the top codes. I have also changed the loss to a discretized mixture of logistics, similar to the original PixelCNN++ and PixelSNAIL implementations. The authors mention a batch size of 1024, which seems unrealistic to reach. Does this implementation of PixelSNAIL use more layers than the one reported in the VQ-VAE-2 paper?
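One workaround I am considering for the batch size gap is gradient accumulation to approximate an effective batch of 1024 with micro-batches of 32. Below is a minimal sketch of that idea only; the model, dataset, optimizer, and loss are toy placeholders, not this repo's training script or the paper's prior.

```python
# Hypothetical sketch: approximating an effective batch size of 1024 on limited
# GPUs via gradient accumulation. All modules and data here are stand-ins.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy stand-ins for the prior network and the extracted top codes.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 512)).to(device)
codes = TensorDataset(
    torch.randint(0, 512, (2048, 32, 32)).float(),  # fake "top code" maps
    torch.randint(0, 512, (2048,)),                  # fake targets
)
loader = DataLoader(codes, batch_size=32, shuffle=True)  # what fits in memory

optimizer = optim.Adam(model.parameters(), lr=3e-4)
criterion = nn.CrossEntropyLoss()

accum_steps = 1024 // 32  # micro-batches per effective batch of 1024

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    x, y = x.to(device), y.to(device)
    loss = criterion(model(x), y) / accum_steps  # scale so gradients average
    loss.backward()                              # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

This only recovers the gradient statistics of a large batch, not the wall-clock throughput the authors presumably had, and batch-dependent layers would still see micro-batches of 32.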
I am not able to map the hyperparameters of this implementation onto the ones described in the appendix of the VQ-VAE-2 paper, so I cannot configure it to replicate their results. Any help is appreciated.
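As a rough sanity check while comparing configurations, one can at least count trainable parameters of the prior built from a given config and see whether it is in the same ballpark as the paper's model. A minimal sketch follows; the toy module is a placeholder, so substitute the repo's actual PixelSNAIL class and constructor arguments.

```python
# Hypothetical sketch: comparing model sizes by counting trainable parameters.
# `toy_prior` is a stand-in; replace it with the PixelSNAIL instance built
# from your training configuration.
import torch
from torch import nn


def count_parameters(model: nn.Module) -> int:
    """Total number of trainable parameters in the module."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


toy_prior = nn.Sequential(
    nn.Conv2d(512, 256, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv2d(256, 512, kernel_size=1),
)

print(f"trainable parameters: {count_parameters(toy_prior):,}")
```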