TODO: revisit output_padding. This code may not generalize to other examples and needs testing.
See https://github.com/pytorch/pytorch/pull/904/files. A possible fix: store a list of
output_sizes from the encoder conv layers, remove the decoder conv layers from the
nn.Sequential, and keep them in a list instead. However, this approach has a problem:
output_sizes must be passed at forward time, so the conv layers cannot live in an
nn.Sequential, and storing them in a plain Python list (rather than as member
variables, e.g. via nn.ModuleList) means nn.Module will not register them or their
parameters.
TODO referenced in molecules/ml/unsupervised/conv_vae/pytorch_cvae/cvae.py
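A minimal sketch of the fix described above, assuming a toy two-layer encoder/decoder (the channel counts and kernel sizes here are illustrative, not the actual cvae.py values): the encoder records the input size of each conv, and the decoder passes those sizes to ConvTranspose2d's output_size argument, which removes the output_padding guesswork. nn.ModuleList is used instead of a plain Python list so the layers are registered as submodules.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList (not a plain list) so nn.Module registers the
        # layers and tracks their parameters.
        self.convs = nn.ModuleList([
            nn.Conv2d(1, 8, kernel_size=3, stride=2),
            nn.Conv2d(8, 16, kernel_size=3, stride=2),
        ])

    def forward(self, x):
        sizes = []
        for conv in self.convs:
            sizes.append(x.shape)  # record the size each conv received
            x = conv(x)
        return x, sizes

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.deconvs = nn.ModuleList([
            nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2),
            nn.ConvTranspose2d(8, 1, kernel_size=3, stride=2),
        ])

    def forward(self, x, sizes):
        # Mirror the encoder: each transposed conv targets the size the
        # corresponding encoder conv saw, so no output_padding is needed.
        for deconv, size in zip(self.deconvs, reversed(sizes)):
            x = deconv(x, output_size=size)
        return x

enc, dec = Encoder(), Decoder()
inp = torch.randn(1, 1, 25, 25)  # odd spatial size, where output_padding usually bites
z, sizes = enc(inp)
out = dec(z, sizes)
print(out.shape)  # torch.Size([1, 1, 25, 25])
```

The trade-off is exactly the one the note raises: because output_size is a forward-time argument, the decoder layers cannot sit inside an nn.Sequential and the loop must be written out by hand.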