When we run the command "python main.py --cfg ./config/test.yaml" to perform inference on the ScanNet dataset, an error occurs. There appears to be a parameter-incompatibility issue. Could you help us figure it out?
Steps: Compile "torchsparse/backbones/backend/hash/hash_cpu.cpp" and its related header files into torch_sparse_c_dll.dll. Then run the inference command, which calls the C++ API via "ts_dll.hash_cpu(coords)".
Error: ctypes.ArgumentError: argument 1: <class 'TypeError'>: Don't know how to convert parameter 1
Code:
Python (parameter: torch.Tensor):

```python
import torch
from ctypes import *

ts_dll = CDLL("D:/torch_sparse_c_dll/x64/Release/torch_sparse_c_dll.dll")

assert coords.ndim == 2 and coords.shape[1] == 4, coords.shape
coords = coords.cpu()
coords = coords.type(torch.FloatTensor)
return ts_dll.hash_cpu(coords)
```
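For context, the ArgumentError is raised by ctypes itself: a CDLL handle can only marshal plain C types (ints, bytes, pointers), not arbitrary Python objects such as a torch.Tensor. A minimal sketch that reproduces the same error, using libc's strlen as a stand-in for hash_cpu:

```python
import ctypes
import ctypes.util

# Load the C standard library as a stand-in for torch_sparse_c_dll.dll.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Passing a plain C-compatible value works:
print(libc.strlen(b"hello"))  # -> 5

# Passing an arbitrary Python object (like a torch.Tensor) fails the
# same way hash_cpu(coords) does:
try:
    libc.strlen(object())
except ctypes.ArgumentError as e:
    print(type(e).__name__, e)
```

This suggests the problem is not the tensor's dtype or shape but the calling mechanism itself.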
C++ (parameter: at::Tensor):

```cpp
at::Tensor hash_cpu(const at::Tensor idx) {
    int N = idx.size(0);
    at::Tensor out = torch::zeros({N}, at::device(idx.device()).dtype(at::ScalarType::Long));
    cpu_hash_wrapper(N, idx.data_ptr(), out.data_ptr());
    return out;
}
```
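If the goal is to keep calling the DLL through ctypes, one common workaround is to export an extern "C" function that takes raw pointers and a length instead of an at::Tensor, and pass coords.data_ptr() from Python. The sketch below is an assumption, not the actual torchsparse API: the name hash_cpu_ptr, the placeholder hash, and the int32 4-column coordinate layout are all hypothetical.

```cpp
#include <cstdint>

// Hypothetical extern "C" wrapper: ctypes can marshal plain pointers and
// ints, so the tensor is unpacked on the Python side (N, data_ptr) and the
// result is written into a caller-provided buffer.
extern "C" void hash_cpu_ptr(int N, const int32_t* coords, int64_t* out) {
    // Placeholder hash: combine the 4 coordinate components of each point.
    for (int i = 0; i < N; ++i) {
        int64_t h = 0;
        for (int j = 0; j < 4; ++j) {
            h = h * 31 + coords[i * 4 + j];
        }
        out[i] = h;
    }
}
```

On the Python side one would then set argtypes and call something like ts_dll.hash_cpu_ptr(N, ctypes.c_void_p(coords.data_ptr()), ctypes.c_void_p(out.data_ptr())). The alternative is to bind hash_cpu as a proper torch extension (pybind11 / torch.utils.cpp_extension) rather than a raw DLL.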