Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
Hello,
I am trying to use StructureNet to align the perspectives of multiple Azure Kinects. Is it possible to produce transformation matrices from the output of `inference.py` or `calibration.py`? If so, how?
I understand that your VolumetricCapture system provides a GUI for calibrating multiple sensors, but it requires installing the entire infrastructure. I do not currently own all the hardware that setup requires, so I was hoping to achieve the alignment by providing only the depth images from my Azure Kinects as input.
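To be concrete about what I am after: given corresponding 3D points between two sensors (e.g. the structure correspondences your network detects — I am assuming that is roughly what the inference output provides), I would expect to recover a rigid 4x4 transform per sensor with a Kabsch/orthogonal-Procrustes solve, along the lines of this sketch (my own code, not this repo's API):

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares rigid 4x4 transform T mapping Nx3 points src onto dst.

    Kabsch / orthogonal Procrustes: center both clouds, SVD the
    cross-covariance, and re-attach the translation.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Is something like this the intended way to consume the scripts' output, or do they already emit the extrinsics somewhere I have missed?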
Thank you for your time.