Chrixtar / latentsplat

[ECCV 2024] Implementation of latentSplat: Autoencoding Variational Gaussians for Fast Generalizable 3D Reconstruction
https://geometric-rl.mpi-inf.mpg.de/latentsplat/
MIT License

about training config #8

Open BroenLin opened 4 months ago

BroenLin commented 4 months ago

I am trying to train latentSplat on re10k, but I noticed that only the target_render_image loss is activated. Is that the correct config? This is is_active_loss from src/model/model_wrapper: {'gaussian': False, 'context': False, 'target_autoencoder': False, 'target_render_latent': False, 'target_render_image': True, 'target_combined': False}
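
For reference, the flags reported above can be restated as a plain Python dict and inspected directly; this snippet only reproduces what is printed above and makes no assumption about how the repository derives these flags from its training config:

```python
# The is_active_loss flags reported above, restated as a plain dict.
# How the repository builds this from its config is not shown in this thread.
is_active_loss = {
    "gaussian": False,
    "context": False,
    "target_autoencoder": False,
    "target_render_latent": False,
    "target_render_image": True,
    "target_combined": False,
}

# List the loss terms that actually contribute at this point in training.
active = [name for name, enabled in is_active_loss.items() if enabled]
print(active)  # ['target_render_image']
```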

BroenLin commented 4 months ago

I also wondered how many training steps are needed to reproduce the paper results.

myutility commented 4 months ago

Hi, did you manage to train a model yourself and reproduce the results? If so, could you share the hyperparameters you used? Thanks

Chrixtar commented 4 months ago

Hi,

sorry for the late response. It is correct that we first train only with the target_render_image loss for the first 100k iterations, because the rough geometry should be correct before it makes sense to start training the generative decoder. In our experiments, we trained for 200k iterations in total.
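
For illustration only, here is a minimal sketch of this kind of staged loss activation as a function of the global training step. The flag names follow the is_active_loss dict from this thread; the helper itself, the exact set of losses that come on after the warm-up, and the step thresholds as code are assumptions, not the repository's actual implementation (the thread only states that the generative decoder is trained after the first 100k of 200k iterations):

```python
# Hypothetical sketch of a step-based loss schedule, not the repo's code.
WARMUP_STEPS = 100_000  # image rendering loss only during warm-up
TOTAL_STEPS = 200_000   # total training length reported in this thread


def active_losses(global_step: int) -> dict:
    """Return which loss terms contribute at the given step (assumed split)."""
    past_warmup = global_step >= WARMUP_STEPS
    return {
        "gaussian": past_warmup,
        "context": past_warmup,
        "target_autoencoder": past_warmup,
        "target_render_latent": past_warmup,
        "target_render_image": True,  # always on
        "target_combined": past_warmup,
    }


if __name__ == "__main__":
    print(active_losses(50_000))   # only target_render_image is True
    print(active_losses(150_000))  # remaining terms switched on
```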

Please let me know if you have any other questions :)