In the latent inferers, if the latent tensor's shape already matches the shape it would be padded to (because the user populated the ldm_latent_shape and vae_latent_shape fields), the code fails: the latent tensor carries a track_meta flag, so calling the resizer tries to compute the inverse transform and raises an error.
Adding a check before padding that verifies padding is actually necessary (ldm_latent_shape != latent.shape) overcomes this problem.
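A minimal sketch of the proposed guard, using plain PyTorch rather than the actual inferer internals (the function name maybe_pad_latent and the centered-padding scheme are illustrative assumptions, not the real API):

```python
import torch
import torch.nn.functional as F


def maybe_pad_latent(latent: torch.Tensor, target_spatial_shape) -> torch.Tensor:
    """Pad the latent's spatial dims only when they differ from the target.

    Returning the tensor unchanged when shapes already match skips the
    resizer entirely, so no inverse transform is tracked and the
    track_meta failure described above cannot occur.
    """
    if tuple(latent.shape[2:]) == tuple(target_spatial_shape):
        return latent  # shapes match: padding is a no-op, skip the resizer

    # F.pad expects pad amounts for the last dim first, so iterate reversed.
    pads = []
    for cur, tgt in zip(reversed(latent.shape[2:]), reversed(target_spatial_shape)):
        diff = tgt - cur
        pads.extend([diff // 2, diff - diff // 2])  # split padding evenly
    return F.pad(latent, pads)
```

With this guard in place, a latent of shape (1, 4, 8, 8) and a target of (8, 8) is returned untouched, while a target of (10, 10) still gets padded as before.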