cagatayyildiz / ODE2VAE

ODE2VAE: Deep generative second order ODEs with Bayesian neural networks
MIT License

Velocity encoder in the torch example #4

Closed qu-gg closed 3 years ago

qu-gg commented 3 years ago

Hello! I have a quick question regarding the minimal PyTorch implementation. Is the velocity encoder for v0 implemented the same way in this version?

I see the general encoder, but it appears to take in only a single image slice and to be shared between the velocity and position, given its output shape of [N, 2q].

Thanks!

cagatayyildiz commented 3 years ago

Hi Ryan! In the simple torch implementation, the encoder takes only the initial frame, whereas in general multiple (stacked) frames should be given as input. This is because the initial velocity is constant in this dataset, so multiple frames are not needed to extract it.

Cagatay.
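(For anyone landing here later: the setup described above can be sketched roughly as follows. This is a hypothetical minimal illustration, not code from the repo; the layer sizes, the `encoder` name, and the 28x28 frame shape are all assumptions. The point is only the [N, 2q] output being split into position and velocity halves.)

```python
import torch
import torch.nn as nn

q = 8  # latent dimensionality (illustrative choice)

# Hypothetical shared encoder: a single initial frame is mapped to a
# [N, 2q] vector, whose first q dimensions give the initial position s0
# and whose last q dimensions give the initial velocity v0.
encoder = nn.Sequential(
    nn.Flatten(),             # [N, 1, 28, 28] -> [N, 784]
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 2 * q),     # joint output for position and velocity
)

x0 = torch.randn(5, 1, 28, 28)   # batch of 5 initial frames
h = encoder(x0)                  # shape [5, 2q]
s0, v0 = h[:, :q], h[:, q:]      # split into position and velocity parts
print(s0.shape, v0.shape)        # torch.Size([5, 8]) torch.Size([5, 8])
```

A single frame suffices here only because the velocity is constant over a trajectory; with time-varying velocity one would stack several consecutive frames along the channel dimension before encoding.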

qu-gg commented 3 years ago

Ah, that makes sense with respect to the training set. Thank you for the reply!