Closed zkaiWu closed 2 months ago
Hi, the aligned VAE exists for compatibility with the original Michelangelo code: https://github.com/NeuralCarver/Michelangelo. They use contrastive learning to align the encoded shape latent space with CLIP's embedding space. We also trained another shape VAE without this constraint, namely the w/o-aligned VAE.
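For readers unfamiliar with the alignment objective: the idea is a symmetric InfoNCE-style contrastive loss that pulls each encoded shape latent toward the CLIP embedding of its paired image/text and pushes it away from the other pairs in the batch. Below is a minimal NumPy sketch of that loss; the function name, shapes, and temperature value are illustrative assumptions, not Michelangelo's actual implementation.

```python
import numpy as np

def info_nce_loss(shape_latents, clip_embeds, temperature=0.07):
    """Symmetric InfoNCE loss aligning shape latents with CLIP embeddings.

    shape_latents, clip_embeds: (batch, dim) arrays, row i of each is a
    positive pair. Hypothetical sketch, not the released training code.
    """
    # L2-normalize both sets of vectors so the dot product is cosine similarity
    s = shape_latents / np.linalg.norm(shape_latents, axis=1, keepdims=True)
    c = clip_embeds / np.linalg.norm(clip_embeds, axis=1, keepdims=True)
    logits = s @ c.T / temperature  # (batch, batch) similarity matrix
    n = len(s)                      # positives sit on the diagonal

    def xent_diag(l):
        # cross-entropy with the diagonal as the target class, numerically stable
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # average the shape->CLIP and CLIP->shape directions
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))
```

With perfectly matched pairs the loss approaches zero, while mismatched pairs drive it up, which is what pressures the shape latent space to line up with CLIP's.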
Are there any performance gaps between them?
Theoretically, I think the VAE without alignment may have a higher upper bound on reconstruction performance, since it has fewer constraints. But in my evaluation, the released models show similar performance.
Thanks a lot!!
Thanks for your work. I have a question: what are the differences between the Michelangelo aligned VAE and the Michelangelo VAE?