Note that pixsfm only uses the GPU to accelerate dense feature extraction; the remaining steps run on the CPU. However, the GPU memory remains occupied until the entire reconstruction process finishes. I tried adding the following in Hierarchical-Localization/hloc/extract_features.py:
```python
del model
torch.cuda.empty_cache()
gc.collect()
```
However, the GPU memory occupied by the initialized network is still not released. Is there any way to solve this problem?
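One workaround I have considered is running the GPU-heavy extraction step in a separate child process: when the child exits, the operating system reclaims all of its memory, including anything held by CUDA's caching allocator, so nothing lingers during the CPU-only reconstruction. The sketch below shows the pattern with a hypothetical stand-in for the extraction step (the `child_code` body and function names are illustrative, not part of hloc's API):

```python
import json
import subprocess
import sys


def run_extraction_in_subprocess(image_ids):
    """Run a (hypothetical) extraction step in a child Python process.

    All memory the child allocates, GPU memory included, is released
    to the OS when the child exits, before reconstruction continues.
    """
    # Placeholder child script: a real version would import hloc,
    # load the network, and run dense feature extraction here.
    child_code = (
        "import json, sys\n"
        "data = json.load(sys.stdin)\n"
        "# ... load model and extract features on the GPU here ...\n"
        "out = {'features': [x * x for x in data['images']]}\n"
        "json.dump(out, sys.stdout)\n"
    )
    proc = subprocess.run(
        [sys.executable, "-c", child_code],
        input=json.dumps({"images": image_ids}),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)["features"]
```

In practice the child would write features to the usual HDF5 file on disk rather than piping results back, so the parent process only ever touches CPU memory.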
Also, can hloc be accelerated further, for example by trading space for time, or by using all CPU cores to run SfM? I would be grateful for a reply!