VCL3D / StructureNet

Markerless volumetric alignment for depth sensors. Contains the code of the work "Deep Soft Procrustes for Markerless Volumetric Sensor Alignment" (IEEE VR 2020).
https://vcl3d.github.io/StructureNet/

Format of Calibration Results #1

Closed jessekirbs closed 3 years ago

jessekirbs commented 4 years ago

Could you please explain the format of the end result calibration from this? I see that there are several different files (extrinsics.json in the main Calibration folder, calibration.json in the 'initial_calibration' folder, and calibration.json in the 'crf_refined_calibration' folder) that are output after running the calibration. I'd be interested to try this method's results within something like TouchDesigner - I've tried using the matrices under 'extrinsics' in the the calibration.json files as transforms, but it doesn't seem to be the correct use. If you could go over how to use the calibration output from this method that would be much appreciated. Thank you!

vladsterz commented 4 years ago

Hello @jessekirbs. The folders initial_calibration and crf_refined_calibration contain the results of intermediate steps of our calibration method. You should only be interested in the files in the base directory, where the .ply files are the results of the calibration procedure for each viewpoint and extrinsics.json contains the transformations. Transformations are stored as 4x4 row-major transformation matrices. Can you describe how you used these transformations and show examples of your inputs and the corresponding outputs?
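To make the row-major convention concrete, here is a minimal numpy sketch of turning a flat 16-element array (as described above) into a 4x4 matrix and applying it to points. The example values are hypothetical, purely for illustration; the actual JSON layout of extrinsics.json may differ:

```python
import numpy as np

# Hypothetical flat array of 16 numbers, as stored per sensor in extrinsics.json.
# Here: identity rotation with a translation of (0.5, 0.0, 2.0).
flat = [1.0, 0.0, 0.0, 0.5,
        0.0, 1.0, 0.0, 0.0,
        0.0, 0.0, 1.0, 2.0,
        0.0, 0.0, 0.0, 1.0]

# Row-major means the first four numbers form the first ROW of the matrix,
# which is numpy's default (C) ordering.
T = np.array(flat).reshape(4, 4)

# Apply the transform to an N x 3 point cloud via homogeneous coordinates.
points = np.array([[0.0, 0.0, 0.0],
                   [1.0, 2.0, 3.0]])
homog = np.hstack([points, np.ones((len(points), 1))])
transformed = (T @ homog.T).T[:, :3]
```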

jessekirbs commented 4 years ago

Hey @vladsterz - thanks for the clarification! After transposing the matrix from row-major to column-major within TouchDesigner, the pointclouds aligned much better. I'm seeing an offset for some reason which you can see here:

(screenshots 1–3 attached)

I experimented with doing capturing and calibration several times with different positions for the Kinects and the offset was the same every time. The calibration looks great within your software but when used in TouchDesigner the offset is present. Here's a pic of the transform setup:

(screenshot 4 attached)

The raw matrices are fed into TouchDesigner and converted into tables, then they're transposed to column-major and applied as transforms to each Kinect.

Do you have any idea why I might be seeing this consistent offset? Thank you for the support!
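The row-major vs. column-major distinction described above can be sketched in a few lines of numpy: reading row-major data with a column-major interpretation is exactly a transpose. This is a generic illustration, not code from the StructureNet repo:

```python
import numpy as np

flat = np.arange(1.0, 17.0)            # e1..e16 as a flat array

T_row = flat.reshape(4, 4)             # row-major (C-order) interpretation
T_col = flat.reshape(4, 4, order="F")  # column-major (Fortran-order) interpretation

# Interpreting row-major data as column-major yields the transpose.
same = np.array_equal(T_col, T_row.T)
```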

vladsterz commented 4 years ago

Before we proceed any further with debugging, could you please share the calibration results with me? To do so, use a point cloud viewer (MeshLab is preferred) to merge the final calibrated point clouds (the .ply files in the same directory as calibration.json) and show me the result from a few descriptive views.

jessekirbs commented 4 years ago

Here's the MeshLab project with the calibration meshes from the two Kinects I used in the above TouchDesigner test. The calibration software was being a little buggy so I could only get two of the Kinects aligned - the third one would sometimes align with the box structure, but it would be mirrored so it was not usable. Please let me know if I can provide anything else. Thanks, @vladsterz!

calibration.zip

vladsterz commented 4 years ago

Hello, there are files missing - you should have included the actual point clouds (the .ply files).

jessekirbs commented 4 years ago

Sorry about that - updated the file link above to include the .ply files.

I'm trying to debug the calibration data further by applying the extrinsics matrices to virtual cameras within Blender to see if they match positions with my Kinects. It looks like they're in the correct positions and angles but are facing away from center instead of in towards the center. Any idea why that might be?

(screenshot: Blender cameras)

vladsterz commented 4 years ago
I guess that the issue here is the rotation part, which is transposed. I think you should look into the way you are reading our files. Our files contain the extrinsic calibration as an array of numbers, say [e1, e2, e3, ..., e16]. The matrix should be constructed as:

```
e1  e2  e3  e4
e5  e6  e7  e8
e9  e10 e11 e12
e13 e14 e15 e16
```
jessekirbs commented 4 years ago

That's the way I've been structuring the matrices. The offset in TouchDesigner may be an issue related to that software or the transpose method so I'll look into that further with the devs over there.

In Blender, the cameras should line up and match the Kinect positions when I apply the matrices, correct? I'm using this add-on (https://github.com/SBCV/Blender-Matrix-World-Addon) which allows loading the matrices and applying them.

Thanks, @vladsterz!

vladsterz commented 4 years ago
Indeed, these transformations, when applied to the corresponding point clouds, should align them in the calibrated coordinate system. A quick test would be to build the transformation matrix as:

```
e1  e5  e9  e4
e2  e6  e10 e8
e3  e7  e11 e12
e13 e14 e15 e16
```

i.e. transposing only the rotation part of the matrix.
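The quick test above (transposing only the inner 3x3 block while leaving the translation column and bottom row alone) can be sketched in numpy like this; the helper name is mine, not from the repo:

```python
import numpy as np

def transpose_rotation(T):
    """Copy of a 4x4 transform with only the 3x3 rotation block transposed."""
    T = np.asarray(T, dtype=float)
    out = T.copy()
    out[:3, :3] = T[:3, :3].T  # translation (out[:3, 3]) and last row stay put
    return out

# e1..e16 laid out row-major, as in the comment above.
T = np.arange(1.0, 17.0).reshape(4, 4)
T2 = transpose_rotation(T)
```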

jessekirbs commented 4 years ago

@vladsterz Sorry, can you expand on what you mean by that quick test please? Make the transform matrix only contain:

e1 | e5 | e9 | e4 ?

And would this be a test within Blender? If you could clarify I'd appreciate it. Thanks, vlad!

vladsterz commented 4 years ago
Eh, I know what confused you: it's the bold row, but I didn't find a way to get rid of it. What I mean is to transpose the rotation part. A rigid transformation of a body in 3D consists of a rotation part (a 3x3 matrix) and a translation vector (a 3x1 vector) and is composed as:

```
r1 r2 r3 v1
r4 r5 r6 v2
r7 r8 r9 v3
0  0  0  1
```

What I am saying is: transpose the rotation part (the inner 3x3 rotation matrix).
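The [R | v] composition described above also gives a cheap closed-form inverse, which is worth trying here: cameras that sit in the right place but face away from the scene are a classic symptom of applying a transform in the wrong direction (camera-to-world vs. world-to-camera). A minimal sketch, with helper names of my own:

```python
import numpy as np

def compose(R, t):
    """Build a 4x4 rigid transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_rigid(T):
    """Inverse of [R | t] is [R^T | -R^T t] - cheaper than a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    return compose(R.T, -R.T @ t)

# Example: 90-degree rotation about Z plus a translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T = compose(Rz, [1.0, 2.0, 3.0])
T_inv = invert_rigid(T)
```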

jessekirbs commented 4 years ago

Ah, I think I understand. You're suggesting transposing the inner 3x3 rotation matrix into column-major format instead of row-major? I tried this but the results were strange in Blender. I also tried rotating the rotation matrix by 180 degrees and that didn't work either. Please let me know if I'm still misunderstanding your suggestion.

I also realized the coordinate spaces for the Kinect are Y Up and Z Forward, while Blender is Z Up and Y Forward which I assume is causing (or exacerbating) the issue. Is this calibration software using Y Up? I'm going to test in Maya which uses Y Up axis.

EDIT: I just tried using the matrices within Maya and still didn't get the correct result. I also tried applying the calibration matrices to point clouds recorded from each Kinect with no luck. I tried this with the direct matrix output from your software, with the rotation matrix transposed, and with a switch from row-major to column-major, but none of them worked. Have you guys used the calibration matrices from your software in third-party 3D software successfully? If so, which software did you use? So far only TouchDesigner has come close to looking correct aside from that strange offset. Thanks for your continued help, @vladsterz!
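The axis-convention mismatch described above (Kinect Y-up/Z-forward vs. Blender Z-up) can be handled by conjugating the transform with a change-of-basis matrix rather than by editing individual entries. This is a minimal numpy sketch; the specific axis mapping used (x→x, y→z, z→−y) is an assumption chosen only to illustrate the mechanics, and would need to be verified against the actual rig:

```python
import numpy as np

# Hypothetical change of basis from a Y-up/Z-forward frame to a Z-up frame:
# x -> x, y -> z, z -> -y. Columns are the images of the old basis vectors.
C = np.array([[1.0, 0.0,  0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0,  0.0, 1.0]])

def reexpress(T, C=C):
    """Re-express the same rigid motion T in the axes defined by C (conjugation)."""
    return C @ T @ np.linalg.inv(C)

# A translation of +1 along the old up axis (Y) should become +1 along
# the new up axis (Z) after re-expression.
T = np.eye(4)
T[1, 3] = 1.0
T_new = reexpress(T)
```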

vladsterz commented 4 years ago

Sadly, we haven't used our calibration with any third-party 3D software as far as I know. But our input is X down, Y left, Z in.

(axis diagram attached)

jessekirbs commented 4 years ago

I tried entering the matrix values to match your coordinate space but still not having any luck getting the cameras to look right or the pointclouds to align. Something I thought of - I opened an issue on the VCL Volumetric Github regarding my pointclouds being upside down (https://github.com/VCL3D/VolumetricCapture/issues/34). Is it possible this issue is causing the problems I'm seeing here?

vladsterz commented 4 years ago

Sadly, we've never encountered such an issue with our applications. I answered your other post, so try that solution; but to fully support you in fixing this issue I would need to look into Blender and/or TouchDesigner, which I cannot do for some time.

jessekirbs commented 3 years ago

Thanks, @vladsterz. I'll try flipping the Kinects. I guess I assumed the camera should be on top when filming vertically.

I understand. Thanks for your assistance with this, and let me know whenever you get a chance to test it out in Blender/TouchDesigner! Looking forward to the new volumetric capture release as well. Take care!

jessekirbs commented 3 years ago

@vladsterz Been looking into this more and I wanted to clarify the format of the extrinsic calibration 4x4 matrix. I'm getting confused by the varying coordinate systems and how they affect the matrix. Is the matrix output from your calibration software in this format:

```
r1(x) r2(y) r3(z) v1(x)
r4(x) r5(y) r6(z) v2(y)
r7(x) r8(y) r9(z) v3(z)
0     0     0     1
```

or is it in this format (due to using 'X down, Y left, Z in' coordinate system that you mentioned):

```
r1(z) r2(x) r3(y) v1(z)
r4(z) r5(x) r6(y) v2(x)
r7(z) r8(x) r9(y) v3(y)
0     0     0     1
```

or a variation of that? Is the coordinate system you use considered ZXY Euler? All the variations are scrambling my brain so I'm trying to understand how to translate between programs and their corresponding coordinate systems. Hope this makes sense. Thanks!
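One diagnostic that can help untangle questions like this (and the mirrored third Kinect mentioned earlier in the thread) is checking whether the inner 3x3 block of a candidate matrix is actually a proper rotation: it should be orthonormal with determinant +1, while a determinant of −1 indicates a reflection, which would produce exactly a mirrored point cloud. A minimal numpy sketch, with a helper name of my own:

```python
import numpy as np

def check_rotation(T, tol=1e-5):
    """Return (is_orthonormal, determinant) for the 3x3 block of a 4x4 transform.

    det ~ +1 means a proper rotation; det ~ -1 means a reflection (mirroring).
    A non-orthonormal block suggests the matrix was scrambled while parsing."""
    R = np.asarray(T, dtype=float)[:3, :3]
    ortho = bool(np.allclose(R @ R.T, np.eye(3), atol=tol))
    return ortho, float(np.linalg.det(R))

good = check_rotation(np.eye(4))                            # proper rotation
mirrored = check_rotation(np.diag([1.0, 1.0, -1.0, 1.0]))   # reflection
```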