
Limit RAM during compilation #201

Closed · mhubii closed this 1 month ago

mhubii commented 1 month ago

Hi and thank you for this wonderful library!

Problem

Is there a way to limit the number of compilation threads (and thus RAM usage) when the plugin is built?

I assume the relevant compilation step happens somewhere around here:

https://github.com/NVlabs/nvdiffrast/blob/c5caf7bdb8a2448acc491a9faa47753972edd380/nvdiffrast/torch/ops.py#L118
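For context, PyTorch's JIT extension loader is what performs the build; a minimal sketch of such a load call follows (the module name and source files are illustrative, not nvdiffrast's actual arguments):

from torch.utils.cpp_extension import load

# Ninja compiles the listed sources; it decides how many compiler
# processes to launch and honors the MAX_JOBS environment variable.
plugin = load(
    name="example_plugin",                  # illustrative module name
    sources=["example.cpp", "example.cu"],  # illustrative sources
    verbose=True,
)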

Solution

I found this in the PyTorch docs (https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.BuildExtension):

By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the MAX_JOBS environment variable to a non-negative number.

On Linux, setting

export MAX_JOBS=...

limits resource usage during the build (this worked for me).
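The same can be done from inside Python, as long as the variable is set before the first nvdiffrast call triggers the JIT build. A minimal sketch, assuming a job count of 4 and a recent nvdiffrast with RasterizeCudaContext (adjust both to your setup):

import os

# Cap Ninja at 4 parallel compile jobs (assumed value; pick what your
# RAM allows). Must be set before the plugin is compiled.
os.environ["MAX_JOBS"] = "4"

import nvdiffrast.torch as dr

# The C++/CUDA plugin is built on first use; with MAX_JOBS set above,
# at most 4 compiler processes run concurrently.
glctx = dr.RasterizeCudaContext()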

Could you quickly confirm this before the issue is closed? I am not very familiar with the compilation process. Thank you!

s-laine commented 1 month ago

PyTorch uses Ninja to build the plugin, and you can indeed limit the number of concurrent compilation processes launched by Ninja via the MAX_JOBS environment variable. This is a perfectly valid way to reduce the memory usage during compilation.