Open nauti16 opened 10 months ago
It's really great work, thanks. I have a question about your data: you used the NYU Depth V2 dataset for the demo. How did you make or obtain the pose of each image frame? I can't find any pose information in the NYU Depth V2 dataset, nor any study describing how to extract it. I would be very grateful if you could let me know.

The demo doesn't use the NYU dataset. It uses a dataset we collected with an Azure Kinect. The poses of that dataset are computed with a VIO sliding-window filter (more details are provided in Section IV.B of the paper).