victorprad / InfiniTAM

A Framework for the Volumetric Integration of Depth Images
http://www.infinitam.org

using InfiniTAM with the PMD CamBoard PicoFlexx? #57

Open akosmaroy opened 7 years ago

akosmaroy commented 7 years ago

Hi,

I wonder if InfiniTAM can be used with the PMD CamBoard PicoFlexx: http://pmdtec.com/picoflexx/ as a live input?

If not, would this be reasonable to achieve? What would be the recommended integration points to hook the PMD live output to?

The PMD Pico Flexx camera provides depth information for each pixel it sees, but is otherwise a monochrome camera; basically no color information is provided.

best regards,

Akos

connerbrooks commented 7 years ago

Yeah this is definitely feasible, color is not required and is ignored if not provided.

To get started you would add another ImageSourceEngine (e.g. PMDEngine). Within the new engine setup the device, register the depth image callback, and provide those images such that getImages() can provide them to the ITMMainEngine.

Overall this process is pretty straightforward; it's a bit simpler for devices such as the RealSense (e.g. dev->get_frame_data()) which provide synchronous access to depth images. Since the PMD delivers frames via an asynchronous callback, you may have to add some locking and image caching to get this working.
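The locking-and-caching pattern mentioned above can be sketched like this. This is a minimal, SDK-free illustration: `onDepthFrame` stands in for the PMD depth callback, and `getImages()` hands the latest cached frame to the processing thread. All names here are hypothetical, not actual InfiniTAM or PMD API.

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// Sketch: cache the latest depth frame from an asynchronous camera callback
// so a synchronous getImages() can hand it to the main engine.
class DepthFrameCache {
public:
    // Called from the camera SDK's callback thread with a new depth frame.
    void onDepthFrame(const std::vector<uint16_t> &depthMM) {
        std::lock_guard<std::mutex> lock(mutex_);
        latest_ = depthMM;
        hasFrame_ = true;
    }

    // Called from the processing thread; copies out the most recent frame.
    // Returns false if no frame has arrived yet.
    bool getImages(std::vector<uint16_t> &out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!hasFrame_) return false;
        out = latest_;
        return true;
    }

private:
    std::mutex mutex_;
    std::vector<uint16_t> latest_;
    bool hasFrame_ = false;
};
```

A real PMDEngine would hold something like this internally and convert the cached buffer into InfiniTAM's image types inside its getImages() override.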

akosmaroy commented 7 years ago

I have an initial PicoFlexxEngine working. It's not returning anything for the RGB image; for the depth image, it returns a short value per pixel, in millimeters as reported by the Pico Flexx camera.

The GUI application seems to get the data: it displays it live, color-coded so that different distances get different colors. So far so good.

but, how do I make this work otherwise? :)

My expectation was that the app would build a 3D model of an area that I 'show it' through the depth camera, but I don't see a 3D map / model being built by the GUI app. Is there something additional to this?

(sorry about the naive question)

victorprad commented 7 years ago

Try going frame by frame with 'n' to see if it reconstructs anything.

Note that I've also tried the standard PMD camera and the results were too bad to work with -- the depth is not accurate enough to track using just ICP.

I've not tried with the colour tracker though. I expect that to work better.

akosmaroy commented 7 years ago

Sometimes it starts to create a reconstruction, but the results are not really good. I understand that IMU fusion should be off.

For the camera calibration, what format is the calibration file in? I guess supplying this data properly should help as well.

Regarding using a color tracker: I need something that works in a totally dark environment, without human-visible light, so that doesn't seem to be an option.

victorprad commented 7 years ago

You should not use IMU fusion unless you have an IMU attached to the device and a correct calibrator class for it.

The camera calibration file I used for the Pico has this format:

--

width_rgb height_rgb focal_length_x_rgb focal_length_y_rgb principal_point_x_rgb principal_point_y_rgb

width_depth height_depth focal_length_x_depth focal_length_y_depth principal_point_x_depth principal_point_y_depth

affine ratio_to_m 0.0

--

ratio_to_m can be something like 0.0002 and is a factor used to convert the depth measurement from the camera to meters.
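As a concrete illustration of that format, a file for the Pico Flexx's 224x171 depth stream might look like the sketch below. The focal length and principal point values are made up for illustration (calibrate your own camera), and the first line is a placeholder since this camera provides no colour stream:

```
224 171 210.0 210.0 112.0 85.5
224 171 210.0 210.0 112.0 85.5
affine 0.001 0.0
```

With depth reported in millimeters, ratio_to_m would be 0.001 (1 mm = 0.001 m); a value of 0.0002 corresponds to a sensor reporting depth in units of 0.2 mm.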

Overall I really don't expect the camera to work very well with the ICP tracker. I'd much rather suggest some other type of tracking (perhaps a monocular tracker?) with InfiniTAM used just for fusion.

connerbrooks commented 7 years ago

When I was trying a similar PMD camera I was able to get pretty reasonable results with ICP. It didn't work well until I realized they were packing the depth data in an unusual way (I think the first 3 bits are a confidence value, so you can just shift those out of the way).
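A hedged sketch of that unpacking, assuming (as recalled above, so verify against your SDK's documentation) that the low three bits of each 16-bit sample carry confidence and the remaining bits carry depth:

```cpp
#include <cstdint>

// If the low 3 bits of a raw sample carry confidence and the upper 13 bits
// carry depth, shift the confidence bits away to recover the depth value.
// (Which end holds the confidence bits varies by device; check the SDK docs.)
inline uint16_t rawToDepth(uint16_t raw) {
    return static_cast<uint16_t>(raw >> 3);
}

inline uint8_t rawToConfidence(uint16_t raw) {
    return static_cast<uint8_t>(raw & 0x7);
}
```

The confidence value can also be used to reject low-quality pixels (e.g. set their depth to 0 so InfiniTAM ignores them) before handing the frame to the tracker.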

Regarding the intrinsics, I set those programmatically, similar to how it's done in the RealSenseEngine:

this->calib.intrinsics_d.SetFrom(intrinsics_depth.fx, intrinsics_depth.fy,
                                     intrinsics_depth.ppx, intrinsics_depth.ppy,
                                     requested_imageSize_d.x, requested_imageSize_d.y);

You can query the PMD device for focal length, principal point, and image size and just pass them as above.
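The mapping from queried lens parameters into that `SetFrom` call can be sketched as below. The `LensParams` and `Intrinsics` structs here are stand-ins, not the real SDK or InfiniTAM types (the actual type and field names on the PMD side are an assumption to check against the Royale SDK headers); only the argument order mirrors the snippet above.

```cpp
#include <utility>

// Stand-in for the lens data queried from the camera SDK
// (hypothetical field names; check the SDK's own lens-parameter type).
struct LensParams {
    std::pair<float, float> focalLength;    // (fx, fy) in pixels
    std::pair<float, float> principalPoint; // (cx, cy) in pixels
};

// Minimal stand-in for the intrinsics holder, just to show the mapping.
struct Intrinsics {
    float fx = 0, fy = 0, ppx = 0, ppy = 0;
    int width = 0, height = 0;
    void SetFrom(float fx_, float fy_, float ppx_, float ppy_, int w, int h) {
        fx = fx_; fy = fy_; ppx = ppx_; ppy = ppy_; width = w; height = h;
    }
};

// Fill the depth intrinsics from queried lens parameters and image size.
Intrinsics makeDepthIntrinsics(const LensParams &lp, int w, int h) {
    Intrinsics i;
    i.SetFrom(lp.focalLength.first, lp.focalLength.second,
              lp.principalPoint.first, lp.principalPoint.second, w, h);
    return i;
}
```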

akosmaroy commented 7 years ago

Thanks, I made it work this way. I also added the grayscale image as RGB data, but it actually seems to reduce the quality of the results, as the grayscale strongly shows the camera's illumination pattern and also reveals the short range of this active illumination.

I'll clean up the code and submit a pull request if there's interest.

Questions / comments:

It seems that if the 't' key is pressed to turn off sensor fusion, map building stops, even though there is no IMU in the picture, i.e. none on the camera and none on my laptop :)

At the same time, is there a 'definitive' way to turn off map making and just use tracking, without changing the 3D map?

victorprad commented 7 years ago

Pressing 't' turns off fusion (so all map updates). That's the definitive way :).

akosmaroy commented 7 years ago

Thanks.

Another question (sorry to bloat the thread): is there a way to load a previously saved mesh (mapped in a previous session) when starting the InfiniTAM app, so that the mesh is re-used and doesn't have to be built again?

Also, I've set the camera calibration parameters programmatically as described by Conner above. Should I set the depth measurement ratio as well, and is there a way to do so? Currently I'm sending depth information as millimeters.

akosmaroy commented 7 years ago

created pull request with the initial implementation: https://github.com/victorprad/InfiniTAM/pull/58