Open aamzhas opened 11 months ago
Yes, it seems there is a typo here; the function called should be uvd2xyz_nvidia(). Thanks for the correction!
During our experiments, we did not notice a significant performance difference between the image and camera coordinate systems (we use the camera's default intrinsic matrix), and most experiments were conducted in the image coordinate system, as shown in the dataloader. However, I believe using the camera coordinate system should be more robust in real-world applications, and improving the robustness of point-cloud-based methods would be a valuable research direction.
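For reference, the uvd-to-xyz conversion discussed here is the standard pinhole back-projection: each pixel (u, v) with depth d is lifted into camera space using the focal lengths and image center. The repo's actual per-dataset intrinsics are not shown in this thread, so the values below are placeholders; this is a minimal sketch, not the repository's implementation.

```python
import numpy as np

def uvd2xyz(uvd, fx, fy, cx, cy):
    """Back-project (u, v, depth) points into camera-space XYZ with the
    pinhole model: x = (u - cx) * d / fx, y = (v - cy) * d / fy, z = d.
    uvd: array of shape (..., 3); fx/fy/cx/cy: camera intrinsics."""
    u, v, d = uvd[..., 0], uvd[..., 1], uvd[..., 2]
    x = (u - cx) * d / fx
    y = (v - cy) * d / fy
    return np.stack([x, y, d], axis=-1)

# Placeholder intrinsics for illustration only:
pts = uvd2xyz(np.array([[320.0, 240.0, 1000.0]]),
              fx=588.0, fy=588.0, cx=320.0, cy=240.0)
# A pixel at the image center maps to (0, 0, depth) on the optical axis.
```

The difference between dataset-specific variants of such a function is then just the intrinsic constants they plug into this formula.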
Hi, I am confused about converting the point cloud to 3D space using uvd2xyz_***(). I could not work out at what stage the 3D information comes into play; the point cloud in both the training and testing stages appears to be [batch_size, T, N, 4], as seen here.
Hi, I was going through your processing for NVGesture and wanted some clarification regarding a function call. I noticed that at nvidia_process.py:31 the uvd2xyz_sherc() function is called rather than uvd2xyz_nvidia(). The difference between the two functions is that the focal length f and the image-center parameters change. Would this mean that the processing was incorrect? Am I misunderstanding something? Any help would be appreciated. Thank you!
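As a side note on why mismatched intrinsics may not be catastrophic: if only fx/fy (and the shared image center) differ between the two functions, the resulting point clouds differ by a per-axis rescaling of X and Y, which a learned model can partially absorb. The intrinsic values below are hypothetical, purely to illustrate the effect, and are not the repo's actual SHREC or NVGesture parameters.

```python
import numpy as np

def uvd2xyz(uvd, fx, fy, cx, cy):
    # Pinhole back-projection of (u, v, depth) into camera-space XYZ.
    u, v, d = uvd[..., 0], uvd[..., 1], uvd[..., 2]
    return np.stack([(u - cx) * d / fx, (v - cy) * d / fy, d], axis=-1)

pixel = np.array([400.0, 300.0, 800.0])  # (u, v, depth), hypothetical sample

# Two hypothetical intrinsic sets sharing the same image center:
a = uvd2xyz(pixel, fx=460.0, fy=460.0, cx=320.0, cy=240.0)
b = uvd2xyz(pixel, fx=588.0, fy=588.0, cx=320.0, cy=240.0)

# X and Y differ only by the constant factor fx_b / fx_a; Z is unchanged.
scale = a[0] / b[0]  # equals 588.0 / 460.0 for every pixel
```

So calling the wrong variant distorts the cloud's X/Y scale uniformly rather than scrambling its geometry, which is consistent with the small performance gap the authors report.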