CVMI-Lab / PAConv

(CVPR 2021) PAConv: Position Adaptive Convolution with Dynamic Kernel Assembling on Point Clouds
Apache License 2.0

TypeError: furthestsampling_cuda(): incompatible function arguments. The following argument types are supported: 1. (arg0: int, arg1: int, arg2: at::Tensor, arg3: at::Tensor, arg4: at::Tensor, arg5: at::Tensor, arg6: at::Tensor) -> None #49

Open ycxchina opened 11 months ago

ycxchina commented 11 months ago

Traceback (most recent call last):
  File "/data/yechangxin/code/PAConv/scene_seg/tool/train.py", line 326, in <module>
    main()
  File "/data/yechangxin/code/PAConv/scene_seg/tool/train.py", line 146, in main
    loss_train, mIoU_train, mAcc_train, allAcc_train = train(train_loader, model, criterion, optimizer, epoch, args.get('correlation_loss', False))
  File "/data/yechangxin/code/PAConv/scene_seg/tool/train.py", line 201, in train
    output = model(input)
  File "/home/yechangxin/anaconda3/envs/torch2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/yechangxin/code/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_seg.py", line 74, in forward
    li_xyz, li_features = self.SA_modules[i](l_xyz[i], l_features[i])
  File "/home/yechangxin/anaconda3/envs/torch2/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/yechangxin/code/PAConv/scene_seg/model/pointnet2/pointnet2_paconv_modules.py", line 160, in forward
    new_xyz_idx = pointops.furthestsampling(xyz, self.npoint)  # (B, N1)
  File "/data/yechangxin/code/PAConv/scene_seg/lib/pointops/functions/pointops.py", line 55, in forward
    pointops_cuda.furthestsampling_cuda(b, n, m, xyz, temp, idx)
TypeError: furthestsampling_cuda(): incompatible function arguments. The following argument types are supported:

  1. (arg0: int, arg1: int, arg2: at::Tensor, arg3: at::Tensor, arg4: at::Tensor, arg5: at::Tensor, arg6: at::Tensor) -> None

Invoked with: 4, 4096, tensor(1024),
    tensor([[[ 0.1788, -0.0732,  2.0553], ..., [-0.0581,  0.2376,  0.0274]],
            ...,
            [[ 0.5076,  0.2055,  2.4076], ..., [-0.2447, -0.2032,  2.9756]]], device='cuda:0'),
    tensor([[1.0000e+10, 1.0000e+10, ..., 1.0000e+10], ..., [1.0000e+10, ..., 1.0000e+10]], device='cuda:0'),
    tensor([[0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0]], device='cuda:0', dtype=torch.int32)
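The dump shows two mismatches at once: the `pointops_cuda` that actually gets imported exposes a `furthestsampling_cuda` expecting two ints followed by five tensors, while PAConv's `scene_seg/lib/pointops/functions/pointops.py` calls it with three ints and three tensors, and the third positional argument arrives as a 0-d `tensor(1024)` rather than a Python int. Below is a minimal diagnostic sketch, assuming the extension is supposed to be the one built from PAConv's own `lib/pointops`; the shapes and example values are illustrative, not taken from the repo.

```python
import torch
import pointops_cuda  # the compiled extension that the traceback resolves to

# Which build of pointops_cuda is Python actually importing?
# If this path is not under .../PAConv/scene_seg/lib/pointops/, another
# project's extension with the same module name is shadowing PAConv's build.
print(pointops_cuda.__file__)

# pybind11 embeds the expected argument list in the docstring, so this shows
# whether the installed binding wants PAConv's (int, int, int, Tensor, Tensor,
# Tensor) layout or the (int, int, Tensor x5) layout reported in the error.
print(pointops_cuda.furthestsampling_cuda.__doc__)

# PAConv's call layout from pointops.py: furthestsampling_cuda(b, n, m, xyz, temp, idx).
# b, n, m must be plain Python ints; in the dump above, m arrived as tensor(1024),
# so coerce it first (illustrative values, assuming the PAConv build is installed).
b, n = 4, 4096
npoint = torch.tensor(1024)        # what the traceback shows for `m`
m = int(npoint)                    # cast the 0-d tensor back to a Python int

xyz = torch.rand(b, n, 3, device='cuda')                    # (B, N, 3) coordinates
temp = torch.full((b, n), 1e10, device='cuda')              # distance buffer, as in the dump
idx = torch.zeros(b, m, dtype=torch.int32, device='cuda')   # output sample indices

pointops_cuda.furthestsampling_cuda(b, n, m, xyz, temp, idx)
print(idx[:, :8])
```

If `__file__` points somewhere outside PAConv's `lib/pointops`, rebuilding and reinstalling that extension in this environment should resolve the signature mismatch; if the signature already matches, casting `self.npoint` to `int` before the call addresses the `tensor(1024)` argument.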
gyy520cyaowu commented 6 months ago

Do you know how to solve this problem, sir? I've run into the same issue.