maskjp opened 3 years ago
Amazing!!! Thank you so much for contributing, this is a really needed feature. Tagging @CCInc so that he can take a look as well.
@maskjp Thanks for the contribution!
I'm curious which version of PyTorch you compiled against; did they change the tensor namespace to `torch` at some point?
I'm guessing the models use ~50% of the memory compared to full precision? Did you notice any training speed increase too?
@nicolas-chaulet we should add precommit.ci to this repo, so we can ensure consistent formatting on PRs, what do you think? Also, maybe we could add a pytorch version matrix to unittesting? Maybe 1.7.0 to latest?
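For reference, a PyTorch version matrix in GitHub Actions could look roughly like this (a sketch only; the workflow file name, job layout, and exact version list are assumptions, not something already in this repo):

```yaml
# .github/workflows/test.yml (hypothetical): run unit tests across PyTorch versions
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        torch-version: ["1.7.0", "1.8.1", "1.9.0"]
    steps:
      - uses: actions/checkout@v2
      - run: pip install torch==${{ matrix.torch-version }}
      - run: python -m unittest discover -s test
```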
Hi @CCInc,

I used torch 1.8.1+cu111 to compile it. About the tensor namespace: `torch::Tensor` is the higher-level API, and we can also use `at::Tensor`. I just noticed that chamfer_dist.cpp and cubic_feature_sampling.cpp use the `torch` namespace, while interpolate.cpp, sampling.cpp, and ball_query.cpp use `at::Tensor`. I followed the PyTorch C++ extension tutorial and torch_geometric and decided to use `torch::Tensor`.
Yes, the modification saves memory, but I didn't see a training speed increase either. Speed also depends on the model architecture, the ops, and I/O.
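(To illustrate the ~50% figure: half-precision tensors take two bytes per element instead of four, so tensor storage roughly halves, while speed gains depend on whether the ops actually hit faster half-precision kernels. A minimal sketch:)

```python
import torch

x = torch.randn(1024, 3)   # full precision (float32): 4 bytes per element
x_half = x.half()          # half precision (float16): 2 bytes per element

full_bytes = x.element_size() * x.nelement()
half_bytes = x_half.element_size() * x_half.nelement()
print(full_bytes, half_bytes)  # the half tensor uses 50% of the storage
```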
Looks good to me, @CCInc could you please verify that the gpu tests pass on your machine? I don't have access to gpus anymore... Thanks! And yes to pre-commit ci!
@maskjp I'm getting an issue with the testing, it seems like the cpu and gpu fps are not matching up?
```
FAIL: test_gpu (test.test_fps.TestFps)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/mnt/f/data/PC/torch-points-kernels/test/__init__.py", line 7, in wrapped_func
    return func(*args, **kwargs)
  File "/mnt/f/data/PC/torch-points-kernels/test/test_fps.py", line 35, in test_gpu
    torch.testing.assert_allclose(sorted_idx, sorted_idx_cpu)
  File "/home/chris/miniconda3/envs/tpk/lib/python3.7/site-packages/torch/testing/_core.py", line 270, in assert_allclose
    raise AssertionError(msg)
AssertionError: Found 27 different element(s) (out of 32), with the greatest difference of 63 (82 vs. 19) occuring at index (8, 1).
```
@CCInc, yes, test_gpu in test_fps.py was created by me. The original tests didn't cover the GPU version of FPS. I found that the outputs of the CPU and GPU versions differ even before my modification.
That would make sense, since the seed point might be different. In that case, the test should be modified so that it passes while still checking for some level of correctness. Having a test that fails can be a bit confusing.
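One way to test correctness without comparing raw indices is to check properties that any valid FPS output must satisfy, such as greedy farthest selection from a fixed seed. Here is a sketch in plain NumPy (an illustrative reference implementation, not the kernel under test; the function names are mine):

```python
import numpy as np

def farthest_point_sample(points, k, seed_idx=0):
    """Greedy FPS: pick seed, then repeatedly pick the point farthest
    from all points chosen so far. points: (N, 3), returns k indices."""
    idx = np.empty(k, dtype=np.int64)
    idx[0] = seed_idx
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for i in range(1, k):
        idx[i] = int(dist.argmax())
        # keep, for every point, its distance to the nearest sampled point
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[i]], axis=1))
    return idx

def coverage_radius(points, idx):
    """Max distance from any point to its nearest sampled point; a GPU
    result could be checked against this property instead of exact indices."""
    d = np.linalg.norm(points[:, None] - points[idx][None], axis=2)
    return d.min(axis=1).max()
```

With the seed index pinned, the greedy result is deterministic, so a CPU/GPU comparison only needs to agree on the selection rule rather than on arbitrary tie-breaking between runs.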
Thanks for clarifying! In addition to what Nicolas suggested, would you mind also adding a test that verifies the functionality of CUDA FPS on its own, similar to test_simplecpu?
The rest of the PR works well!
Hi,
Thank you for this great library and torch-points3d.
I made some modifications to support mixed-precision training.
The major changes are as follows:
- `AT_DISPATCH_FLOATING_TYPES` to `AT_DISPATCH_FLOATING_TYPES_AND_HALF`;
- `atomicAdd` to `gpuAtomicAdd` (in PyTorch's THCAtomics.cuh);
- added `custom_fwd` and `custom_bwd` to the `torch.autograd.Function` subclasses to allow `autocast`.

The modified version passed the tests in the test folder. I didn't see any effect on full-precision training. I also trained the PointNet2 model from the torch-points3d library with mixed precision, and it works.
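As a sketch of the third change, a custom op wrapped in `torch.autograd.Function` can opt into autocast-friendly behavior with the `custom_fwd`/`custom_bwd` decorators. This toy `ScaledSum` op is purely illustrative and stands in for a real kernel wrapper; it is not one of the library's ops:

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class ScaledSum(torch.autograd.Function):
    """Toy op computing sum(x * scale), standing in for a CUDA kernel wrapper."""

    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # under CUDA autocast, cast inputs to fp32
    def forward(ctx, x, scale):
        ctx.scale = scale
        ctx.shape = x.shape
        return (x * scale).sum()

    @staticmethod
    @custom_bwd  # run backward with the same autocast state as forward
    def backward(ctx, grad_out):
        # gradient of sum(x * scale) w.r.t. x is scale everywhere; None for scale
        return grad_out * ctx.scale * torch.ones(ctx.shape), None
```

With `cast_inputs=torch.float32`, half-precision inputs produced inside an autocast region are cast back to fp32 before the kernel runs; alternatively, dispatching with `AT_DISPATCH_FLOATING_TYPES_AND_HALF` on the C++ side lets the kernel run directly on half inputs.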