jx-zhong-for-academic-purpose opened this issue 2 years ago
Hi Jiaxing,
Thank you!
I am sorry, but I did not keep the datasets or the scripts. Processing those two datasets is a bit complicated and messy, and I do not think my processing was 100% correct.
Basically, you can use the script at https://github.com/hehefan/Point-Spatio-Temporal-Convolution/blob/main/scripts/depth2point4ntu.py to convert the datasets. I could not find the intrinsic focal-length parameters, so I used a default of 280 (potentially problematic).
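For reference, the kind of pinhole back-projection such a conversion script performs can be sketched as below. This is a minimal sketch, not the repository's actual code: the focal length of 280 follows the default mentioned above, and placing the principal point at the image centre is my own assumption.

```python
import numpy as np

def depth_to_points(depth, focal=280.0):
    """Back-project a depth map of shape (H, W) into an N x 3 point cloud.

    Assumes a pinhole camera with the principal point at the image
    centre; focal=280 is the (potentially problematic) default noted
    above, not a calibrated intrinsic.
    """
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(np.float32)
    valid = z > 0  # depth 0 usually means "no measurement"
    x = (us - cx) * z / focal
    y = (vs - cy) * z / focal
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)
```

A pixel at the principal point maps to (0, 0, z), so sanity-checking a few known pixels is an easy way to validate the conversion.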
In UWA3D, the depth gap between performers and the background is relatively obvious. I filtered out most background points by setting a depth threshold (around 4000), and removed other non-person points based on their vertical and horizontal positions. You can check the filtering results with a Mayavi visualization.
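The filtering step described above might look like the following sketch. The depth cutoff of 4000 comes from the comment above, but the x/y ranges are illustrative placeholders that would need tuning per sequence:

```python
import numpy as np

def filter_person(points, depth_max=4000.0,
                  x_range=(-1500.0, 1500.0), y_range=(-2000.0, 2000.0)):
    """Keep points likely to belong to the performer.

    depth_max ~ 4000 follows the threshold mentioned above; the
    x_range/y_range bounds are hypothetical values for the vertical
    and horizontal position filtering and must be tuned per sequence.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = (
        (z < depth_max)
        & (x > x_range[0]) & (x < x_range[1])
        & (y > y_range[0]) & (y < y_range[1])
    )
    return points[keep]
```

For the visual check, Mayavi's `mlab.points3d(x, y, z)` is a quick way to scatter the surviving points and confirm only the performer remains.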
In N-UCLA, the dataset provides depth visualization images, and I removed the backgrounds according to them.
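One plausible way to use those visualization images is as a mask: invalidate depth pixels whose visualization value marks background, then skip zero-depth pixels during back-projection. This is a guess at the procedure, not the author's exact script, and the background value is an assumption:

```python
import numpy as np

def remove_background(depth, vis, bg_value=0):
    """Invalidate depth pixels marked as background.

    Assumes background pixels share one uniform value (bg_value) in the
    depth visualization image -- a hypothetical convention, since the
    original script was not kept. Zeroed pixels can then be dropped as
    invalid when converting to points.
    """
    cleaned = depth.astype(np.float32).copy()
    cleaned[vis == bg_value] = 0.0
    return cleaned
```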
Best.
Many thanks for your prompt reply, Hehe!
Let me try to re-implement them :)
Hi Hehe, your work on dynamic point clouds is so impressive; it has really inspired me a lot 👍 👍 👍
I would like to conduct experiments on N-UCLA and UWA3DII as you did, for a fair comparison. If possible, could you provide the data converters for these two datasets?
Big thanks!