Open daniCh8 opened 2 years ago
Same error here. Passing `quantization_mode=ME.SparseTensorQuantizationMode.RANDOM_SUBSAMPLE` as an argument when constructing the TensorField seems to be a workaround.
Thanks for sharing! My workaround so far has been to launch any script that uses TensorFields with `CUDA_VISIBLE_DEVICES=<target_device_id>`.
Update: using `torch.cuda.set_device(device_index)` would be a better practice.
It seems that the author uses `Tensor.cuda()` instead of manually setting the device via `Tensor.to(device)`, so it is necessary to specify a default target device.
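The two workarounds above can be sketched as follows. This is a minimal illustration, not code from the issue; the device id `"5"` is a hypothetical example, and the key point is that `CUDA_VISIBLE_DEVICES` must be set before torch/MinkowskiEngine initialize CUDA:

```python
import os

# Workaround 1: restrict the process to one physical GPU *before* importing
# torch / MinkowskiEngine, so the library's implicit cuda:0 maps to the
# intended device. "5" is a hypothetical target device id.
os.environ["CUDA_VISIBLE_DEVICES"] = "5"

# Workaround 2 (after `import torch`, on a machine with CUDA available):
# torch.cuda.set_device(5)  # makes cuda:5 the default for .cuda() calls

print(os.environ["CUDA_VISIBLE_DEVICES"])
```

With the first variant, the single visible GPU appears inside the process as `cuda:0`, so implicit `.cuda()` calls land on the intended physical device.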
Describe the bug
When creating a TensorField, the object allocates memory on the first available device even after specifying the device id in the constructor. This makes it impossible to use models stored on any device other than the first one.
Code To Reproduce
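The original reproduction snippet is not preserved in this thread. A minimal reconstruction from the description below (random coordinates and features are placeholders, and `cuda:5` is the target device mentioned by the reporter) might look like:

```python
import torch
import MinkowskiEngine as ME

# Hypothetical reconstruction of the repro: a TensorField explicitly
# placed on cuda:5, as described in the report.
coords = torch.randint(0, 100, (1000, 3)).float()
feats = torch.rand(1000, 8)
tensor_field = ME.TensorField(
    features=feats,
    coordinates=ME.utils.batched_coordinates([coords], dtype=torch.float32),
    device="cuda:5",
)
print(tensor_field.device)  # reports cuda:5, yet memory also appears on GPU 0
```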
Expected behavior
The TensorField should allocate memory only on the device passed to the constructor (here `cuda:5`), with no allocation on GPU 0.
The `tensor_field` above is created using the first available GPU. Calling `tensor_field.device` will return the `device` I set in the constructor (`cuda:5`); however, when checking the GPUs' memory, there is memory utilization on GPU 0 triggered by the TensorField constructor (see the attached picture, a snapshot of the memory status after running the code above; 3632 is the id of the script). The output of the code above is the following exception, which proves again that the `tensor_field` object is not fully stored on `cuda:5` as requested:

Desktop:
`MinkowskiEngine.print_diagnostic()`: