Closed ziruiw-dev closed 2 weeks ago
Ground truth poses are only released as part of the evaluation data subset for keyframes in mapping & validation sequences. We did not release any poses as part of the raw data because we do not want to expose them for any sequence that includes test queries. So:
1) You can only use GT poses of mapping or validation sequences but not of test sequences.
2) If you need poses for all timestamps (not only for keyframes), you would need to interpolate them from the nearest keyframes, e.g. with a linear model. We do not have code for this, but it would be a valuable addition to the `Pose` object. As a starting point, check out `scipy.spatial.transform.Slerp` for rotation interpolation.
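To illustrate the suggestion above, here is a minimal sketch of keyframe pose interpolation: `Slerp` for the rotations and per-axis linear interpolation for the translations. The keyframe timestamps and poses below are made-up placeholders, and `interpolate_pose` is a hypothetical helper, not part of the LaMAR `Pose` API.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical keyframe data: timestamps (s), rotations, translations (m).
key_times = np.array([0.0, 1.0, 2.0])
key_rots = Rotation.from_euler("z", [0, 90, 180], degrees=True)
key_trans = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0]])

slerp = Slerp(key_times, key_rots)

def interpolate_pose(t):
    """Pose at time t: Slerp for rotation, linear interpolation for translation."""
    R = slerp(t)
    trans = np.array([np.interp(t, key_times, key_trans[:, i]) for i in range(3)])
    return R, trans

R, trans = interpolate_pose(0.5)
# R is halfway between 0 and 90 degrees about z; trans is [0.5, 0.0, 0.0].
```

A linear model is usually fine between nearby keyframes; for sequences with fast motion, a spline such as `scipy.spatial.transform.RotationSpline` would be a smoother choice.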
@mihaidusmanu We could consider releasing full-framerate poses for all sequences (excluding test sequences) if there is a need for it - it would make 2) unnecessary.
Hi @sarlinpe, thanks a lot for the reply. Very helpful, and I successfully rendered the validation splits.
Hi LAMAR dataset authors,
Thanks for making and releasing this dataset.
I am wondering whether it is possible to render dense depth maps for an iOS session using the mesh provided in a NavVis session, i.e. by projecting the mesh using the iOS trajectories. If I understand correctly, it seems that some registration files (dir `location1/registration`) and alignment files (dirs `hololens1/sessions/proc/alignment` and `phone1/sessions/proc/alignment`) are not provided in the current raw data release? The planned data structure in CAPTURE.md is shown below:
Some extra context: I am currently using `scantools/run_sequence_rerendering.py` and my plan is to
I managed to get A2A working perfectly and A2B working okay (there seem to be occasional surface normal direction issues), but I am stuck at step A2C. I would appreciate any advice or example code.
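For context on what "projecting the mesh using iOS trajectories" means, here is a minimal numpy sketch of the idea: transform mesh vertices into the camera frame with the aligned pose, project them through the pinhole intrinsics, and keep the nearest depth per pixel. This is only an illustration (vertex splatting rather than proper triangle ray-casting, which is what the LaMAR rerendering script does); the function name and arguments are hypothetical.

```python
import numpy as np

def render_depth(vertices_world, T_cam_world, K, height, width):
    """Splat world-space mesh vertices into a pinhole camera depth map.

    T_cam_world: 4x4 world-to-camera transform; K: 3x3 intrinsics.
    """
    R, t = T_cam_world[:3, :3], T_cam_world[:3, 3]
    pts_cam = vertices_world @ R.T + t            # world -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]          # keep points in front of the camera
    uvw = pts_cam @ K.T                           # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    depth = np.full((height, width), np.inf)
    valid = ((uv[:, 0] >= 0) & (uv[:, 0] < width)
             & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    for (u, v), z in zip(uv[valid], pts_cam[valid, 2]):
        depth[v, u] = min(depth[v, u], z)         # z-buffer: nearest surface wins
    return depth

# A single point 2 m in front of an identity-pose camera lands at the
# principal point with depth 2.
K = np.array([[100.0, 0.0, 32.0], [0.0, 100.0, 24.0], [0.0, 0.0, 1.0]])
depth = render_depth(np.array([[0.0, 0.0, 2.0]]), np.eye(4), K, 48, 64)
```

For dense, hole-free depth maps one would ray-cast the triangles instead (e.g. with Open3D's `RaycastingScene`), but the coordinate-frame logic is the same.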
Best, Zirui