This pull request modifies the `SemKITTI2train_single` function and increases overall training speed by more than 20%. Instead of finding labels equal to 0, setting them to 255, and decrementing all other values by 1, it simply decrements every label by 1: since labels are encoded as 8-bit unsigned integers, subtracting 1 wraps 0 around to 255, so the result is equivalent but much faster.
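The equivalence can be checked on a toy uint8 array (the values below are made up; only the uint8 dtype matters):

```python
import numpy as np

# Toy uint8 label array standing in for a SemanticKITTI label map (values made up).
labels = np.array([0, 1, 2, 20, 0], dtype=np.uint8)

# Original approach: map label 0 to the ignore value 255, shift the rest down by 1.
slow = labels.copy()
zero_mask = labels == 0
slow[~zero_mask] -= 1
slow[zero_mask] = 255

# Proposed approach: a single uint8 subtraction; 0 - 1 wraps around to 255.
fast = labels - np.uint8(1)

assert np.array_equal(slow, fast)
```

The one-liner avoids the boolean mask and the two separate writes, which is where the speedup comes from.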
I also modified the training procedure so that the first error is printed; this helps detect out-of-memory errors.
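The idea can be sketched roughly as follows (the loop and step function here are hypothetical stand-ins, not the repository's actual code):

```python
# Hypothetical training loop: print the first error before re-raising it,
# so that e.g. a CUDA out-of-memory failure is visible in the log.
def train_one_epoch(batches, step_fn):
    for i, batch in enumerate(batches):
        try:
            step_fn(batch)
        except RuntimeError as err:
            print(f"First error at batch {i}: {err}")
            raise

def failing_step(batch):
    # Stand-in for a real training step; simulates an OOM on batch 2.
    if batch == 2:
        raise RuntimeError("CUDA out of memory (simulated)")

try:
    train_one_epoch([0, 1, 2, 3], failing_step)
except RuntimeError:
    pass  # the error message was already printed above
```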
I added a `torch.no_grad()` context around the test loop in `test_pretrain.py`.
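A minimal sketch of the pattern, with a stand-in model and data rather than the actual `test_pretrain.py` code:

```python
import torch

# Stand-in model and inputs; the real test loop in test_pretrain.py differs.
model = torch.nn.Linear(4, 2)
inputs = torch.randn(3, 4)

model.eval()
with torch.no_grad():  # disables autograd tracking, cutting memory use at test time
    outputs = model(inputs)

assert not outputs.requires_grad
```

Disabling gradient tracking during evaluation avoids building the autograd graph, which both saves memory and speeds up the forward pass.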
I corrected some typos in the `requirements.txt` file (however, I wasn't able to install torch-scatter with CUDA support using only `pip install -r requirements.txt`; maybe the `README.md` should describe a more precise installation procedure).
I hope you'll find this suggestion useful.
Best, Marius