supriya-gdptl opened this issue 1 year ago
Hi Supriya, I didn't see the `nan` loss in the log. Did it happen after epoch 10?
I can think of several hyper-parameters that may help stabilize the training:
- `trainer.opt.vae_lr_warmup_epochs`: the default value is 0; perhaps you can try setting it to 50 or even larger. This makes the learning rate start from a small value and slowly increase to the target lr over N epochs.
- `sde.kl_anneal_portion_vada`: the default is 0.5; you can try increasing it to 1 (the maximum value). This controls how fast the KL weight increases (the larger the portion, the slower the increase), and slowly increasing the KL weight can lead to smoother training dynamics.
- `shapelatent.log_sigma_offset`: reduce it from 6.0 to, say, 5.0. This is a constant offset pushing sigma towards 0; reducing it will make the latent points noisier at initialization and, as a result, lower the KL loss. I am not sure whether reducing the offset helps, but it may be worth a try.

For the checkpoints, sorry, we are still going through the company process of getting approval to release them (a release this week is unlikely; I will track the process next week).
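To make the first two knobs concrete, here is a minimal illustrative sketch (not the repo's actual code; the function names are my own) of what a linear lr warmup over N epochs and a linear KL-weight anneal over a portion of training look like:

```python
def warmup_lr(epoch, target_lr, warmup_epochs):
    """Linearly ramp the learning rate from near zero up to target_lr
    over the first warmup_epochs epochs, then hold it constant."""
    if warmup_epochs <= 0 or epoch >= warmup_epochs:
        return target_lr
    return target_lr * (epoch + 1) / warmup_epochs

def kl_weight(step, total_steps, anneal_portion, w_min=0.0, w_max=1.0):
    """Linearly anneal the KL weight from w_min to w_max over the first
    anneal_portion of training; a larger portion means a slower ramp."""
    anneal_steps = anneal_portion * total_steps
    frac = min(step / max(anneal_steps, 1), 1.0)
    return w_min + (w_max - w_min) * frac
```

For example, with `anneal_portion=0.5` the KL weight reaches its maximum at the halfway point of training, while `anneal_portion=1.0` stretches the same ramp over the whole run, so the weight grows half as fast.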
Hi @ZENGXH ,
Thank you for sharing the code.
I am training the VAE (stage 1) on the ShapeNet15k dataset by following the instructions given in the README.md file. I am using the default config, except that the batch size is 16 (batch size 32 was giving a `cuda_out_of_memory` error). The loss started increasing and eventually became `nan`. So, I trained with a lower learning rate of 1e-4 (originally it was 1e-3). This time again, the loss decreased, then increased, and became `nan`. Please see the contents of the log file below:
I looked at previous issues #9 , #17 , #18 , #22 , #35 , but did not find any solution. Could you please tell me how to resolve this issue?
Also, could you please share the checkpoint you mentioned in this section?
Thank you, Supriya