V2AI / Det3D

World's first general-purpose 3D object detection codebase.
https://arxiv.org/abs/1908.09492
Apache License 2.0

error: no suitable constructor exists to convert from "c10::ScalarType" to "at::Type" #51

Closed: cslxiao closed this issue 4 years ago

cslxiao commented 4 years ago

I encountered this error when executing

python setup.py build develop

det3d/ops/sigmoid_focal_loss/src/sigmoid_focal_loss_cuda.cu(121): error: no suitable constructor exists to convert from "c10::ScalarType" to "at::Type"

a157801 commented 4 years ago

> I encountered this error when executing
>
> python setup.py build develop
>
> det3d/ops/sigmoid_focal_loss/src/sigmoid_focal_loss_cuda.cu(121): error: no suitable constructor exists to convert from "c10::ScalarType" to "at::Type"

Please check the versions of CUDA, cuDNN and gcc. I use gcc 5.4.0.
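
For reference, one quick way to print those versions from Python (a rough sketch, not part of Det3D; it assumes torch is importable and gcc is on the PATH):

import subprocess
import torch

print("torch:", torch.__version__)                # the 1.1+ code path needs scalar_type()
print("CUDA (torch build):", torch.version.cuda)  # CUDA version torch was compiled against
print("cuDNN:", torch.backends.cudnn.version())   # e.g. 7501 for cuDNN 7.5.1
print("gcc:", subprocess.check_output(["gcc", "--version"]).decode().splitlines()[0])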

poodarchu commented 4 years ago

You need a GPU to compile, or it will raise an error when executing. Please follow the installation instructions.
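
A rough way to confirm that torch can see a GPU and a CUDA toolkit before building (a sketch, not from the repo; CUDA_HOME is torch's guess at the toolkit location and may be None if nvcc is not found):

import torch
from torch.utils.cpp_extension import CUDA_HOME

print("CUDA available:", torch.cuda.is_available())  # a visible GPU, per the maintainer's note above
print("CUDA_HOME:", CUDA_HOME)                        # nvcc from this toolkit compiles the .cu sources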

cslxiao commented 4 years ago

I use CUDA 10, cuDNN 7.5 and gcc 7.4. I strictly followed the instructions but still got the error.

a157801 commented 4 years ago

> I use CUDA 10, cuDNN 7.5 and gcc 7.4. I strictly followed the instructions but still got the error.

Please use gcc 5.4.0 and try again.

muzi2045 commented 4 years ago

If you are using PyTorch 1.0, you will face this error (same as me). Some APIs changed between torch 1.0 and 1.3; check this part of the sigmoid_focal_loss_cuda.cu file:

  /** pytorch 1.1+ **/
  // AT_DISPATCH_FLOATING_TYPES_AND_HALF(
  //     logits.scalar_type(), "SigmoidFocalLoss_forward", [&] {
  //       SigmoidFocalLossForward<scalar_t><<<grid, block>>>(
  //           losses_size, logits.contiguous().data<scalar_t>(),
  //           targets.contiguous().data<int64_t>(), num_classes, gamma, alpha,
  //           num_samples, losses.data<scalar_t>());
  //     });

  /** pytorch 1.0 **/
  AT_DISPATCH_FLOATING_TYPES_AND_HALF(
      logits.type(), "SigmoidFocalLoss_forward", [&] {
        SigmoidFocalLossForward<scalar_t><<<grid, block>>>(
            losses_size, logits.contiguous().data<scalar_t>(),
            targets.contiguous().data<int64_t>(), num_classes, gamma, alpha,
            num_samples, losses.data<scalar_t>());
      });

cslxiao commented 4 years ago

@muzi2045 Thanks. Updating torch solves this problem.