Closed niraj-bhujel closed 11 months ago

**niraj-bhujel:**
Currently only batch size 1 is supported, and training a single Cambridge scene took more than one day. Are you planning to support mini-batch sizes > 1? This would significantly decrease the training time.

**Reply:**
We have no plans to add significant new capabilities to this repository. We did experiment with batch sizes > 1 in the past, but it did not significantly reduce training time. The additional compute needed to process more images per batch means you can do fewer parameter updates within the same time frame. The gradients get better, but not enough to make up for the slower training.

Please see the spiritual successor of DSAC*, ACE (CVPR 2023), for a way to compile batches differently across many mapping images, giving better gradients without a slowdown. This indeed speeds up training significantly: 5 minutes of training instead of 15 hours on a modern GPU.
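For context on how ACE-style batching can improve gradients without extra encoder cost: ACE caches per-pixel backbone features from all mapping images in one buffer, then draws each training batch as a random sample across that buffer, so every parameter update sees pixels from many different images. A minimal NumPy sketch of that sampling idea (all sizes and names here are hypothetical, not the actual ACE implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical buffer: features are extracted ONCE per mapping image,
# then stored flat together with their ground-truth 3D scene coordinates.
num_images = 100        # mapping images
feats_per_image = 50    # sampled pixels per image
feat_dim = 16           # feature dimensionality (illustrative)

buffer_feats = rng.standard_normal(
    (num_images * feats_per_image, feat_dim)).astype(np.float32)
buffer_coords = rng.standard_normal(
    (num_images * feats_per_image, 3)).astype(np.float32)

def sample_batch(batch_size=512):
    """Draw a training batch that mixes pixels from many images.

    Sampling indices over the whole buffer means each batch spans
    dozens of images, yielding diverse gradients without re-running
    the feature encoder for every update.
    """
    idx = rng.integers(0, buffer_feats.shape[0], size=batch_size)
    return buffer_feats[idx], buffer_coords[idx]

feats, coords = sample_batch()
print(feats.shape, coords.shape)
```

The key contrast with naive batch size > 1 is that no additional encoder passes are needed per update; only the cheap regression head sees the large, image-diverse batch.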