Hello,
All features are loaded into CPU RAM before training starts to speed up training. How many images are in your scene? I think 16 GB should be enough for the poster example in the instructions.
My dataset is the "poster" scene from the official Nerfstudio data.
I just tested in a fresh conda env. The poster scene took ~13 GB of CPU RAM on my machine. Accounting for memory used by other programs, it may fail with 16 GB of RAM.
For your second question, I am not sure what you mean by 'SAM semantics and masks'. Can you elaborate on that?
I currently don't have the bandwidth to make a low-resource implementation, but if you are interested, I believe most of the RAM usage comes from the feature_dict variable located here: https://github.com/vuer-ai/feature-splatting/blob/main/feature_splatting/feature_splatting_datamgr.py#L101
A straightforward solution would be to read the feature map from disk every time a training image is accessed, but I imagine that would also slightly slow down training.
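If someone wants to try that, here is a minimal sketch of the idea, assuming each image's feature map has been pre-computed and saved to disk as a .npy file. The LazyFeatureDict class, the file layout, and the paths are hypothetical illustrations, not part of the repo:

```python
# Minimal sketch of lazy feature loading (hypothetical; not the repo's implementation).
# Assumes each training image's feature map was pre-computed and saved as
# <feature_dir>/<image_name>.npy before training.
from pathlib import Path

import numpy as np
import torch


class LazyFeatureDict:
    """Loads a feature map from disk only when a training image is accessed,
    instead of caching all feature maps in CPU RAM up front."""

    def __init__(self, feature_dir: Path):
        self.feature_dir = Path(feature_dir)

    def __getitem__(self, image_name: str) -> torch.Tensor:
        # Read the pre-computed feature map for this image from disk.
        # Trades extra disk I/O per iteration for a much smaller resident
        # CPU-RAM footprint.
        feature_path = self.feature_dir / f"{image_name}.npy"
        features = np.load(feature_path)
        return torch.from_numpy(features)


# Usage (paths are placeholders):
# feature_dict = LazyFeatureDict(Path("outputs/poster/features"))
# feat = feature_dict["frame_00001"]
```

A small LRU cache over the most recently used feature maps could recover some of the lost speed while keeping memory bounded, but that is an optimization beyond this sketch.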
I am closing the issue for now as there is no reply. Feel free to re-open the issue or send me emails if you have further questions.
First, I appreciate you releasing the code. I have some questions:
The first issue is that the OOM killer was triggered, which indicates that my computer's DRAM ran out of memory. It was triggered after full_images_datamanager.py had cached both the eval images and the train images, right before training was about to start; at that point the terminal just showed "Killed". Therefore, I would like to ask about your computer setup.
Here is a screenshot of the issue, and this is my computer setup:
CPU: Intel(R) Core(TM) i5-10400F @ 2.90GHz
DRAM: 16 GB
GPU: NVIDIA GeForce RTX 3080
The second issue is regarding SAM semantics and masks. Can I obtain SAM semantics and masks exclusively through feature-splatting in this model?
Thanks.