graspnet / graspnet-baseline

Baseline model for "GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping" (CVPR 2020)
https://graspnet.net/

knn compilation error #73

Closed fuzhao123232 closed 10 months ago

fuzhao123232 commented 10 months ago

[image] Is PyTorch 1.7 required? Is a newer version of PyTorch not supported?

fuzhao123232 commented 10 months ago

image

chenxi-wang commented 10 months ago

If you are using a newer version of PyTorch, you need to modify some interfaces before it will compile, similar to https://stackoverflow.com/questions/72988735/replacing-thc-thc-h-module-to-aten-aten-h-module
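For reference, the kind of interface change involved looks roughly like this (a minimal sketch of the THC-to-ATen migration for PyTorch >= 1.11, not this repo's exact code; example_alloc is a made-up function for illustration):

#include <cstddef>
#include <ATen/cuda/CUDAContext.h>
#include <c10/cuda/CUDACachingAllocator.h>

// THC/THC.h, THCState, THCudaMalloc and THCudaFree were removed from newer
// PyTorch; device memory is taken from the c10 CUDA caching allocator instead.
void example_alloc(size_t n)
{
    // Before: float *buf = (float*)THCudaMalloc(state, n * sizeof(float));
    float *buf = (float*)c10::cuda::CUDACachingAllocator::raw_alloc(n * sizeof(float));

    // ... launch kernels, e.g. on at::cuda::getCurrentCUDAStream() ...

    // Before: THCudaFree(state, buf);
    c10::cuda::CUDACachingAllocator::raw_delete(buf);
}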

nikepupu commented 9 months ago

Can you share the modified file @fuzhao123232 ?

quanfeifan commented 3 months ago

Have you solved this problem?

YuyangLee commented 2 months ago

Hi, the following procedure works for me. However, I'm not an expert in CUDA and C++, so it would be great if someone more experienced could refine the solution.

Modify knn/src/cuda/vision.h

First, go to knn/src/cuda/vision.h and comment out this line:

#include <THC/THC.h>

Then append the following includes after it:

#include <ATen/cuda/CUDAContext.h>
#include <ATen/cuda/CUDAEvent.h>

Also in this file we need to change several APIs:

// float *dist_dev = (float*)THCudaMalloc(state, ref_nb * query_nb * sizeof(float));
// Change this to:
float *dist_dev = (float*)c10::cuda::CUDACachingAllocator::raw_alloc(ref_nb * query_nb * sizeof(float));
// THCudaFree(state, dist_dev);
// Change this to:
c10::cuda::CUDACachingAllocator::raw_delete(dist_dev);

As for the last change, I am not sure what the substitute for THError is, so I temporarily return 0 here. There should be a proper solution for this.

cudaError_t err = cudaGetLastError();
if (err != cudaSuccess)
{
    printf("error in knn: %s\n", cudaGetErrorString(err));
    return 0;
    // THError("aborting");
}
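If you want to keep the abort behavior instead of returning 0, one possible substitute (an assumption on my part, not verified against this repo) is TORCH_CHECK from c10, which throws an exception carrying the CUDA error string:

#include <c10/util/Exception.h>

cudaError_t err = cudaGetLastError();
if (err != cudaSuccess)
{
    // Throws c10::Error with the message, roughly mirroring THError("aborting")
    TORCH_CHECK(false, "error in knn: ", cudaGetErrorString(err));
}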

Modify knn/src/knn.h

Go to knn/src/knn.h and comment out these lines:

#include <THC/THC.h>
extern THCState *state;

Install the Package

Finally, we can install the knn/ package, and demo.py works fine for me.


airobot1024 commented 2 weeks ago

> (quoting @YuyangLee's solution above)

Great! Many thanks. Following your answer, I resolved this issue. For reference, my torch version is 1.11.0. One note: under "Also in this file we need to change several APIs:", the remaining modifications actually belong in knn/src/knn.h.