isl-org / Open3D

Open3D: A Modern Library for 3D Data Processing
http://www.open3d.org

ReconstructionSystem with Azure Kinect images? #1510

Open SeanStephensenAIRM opened 4 years ago

SeanStephensenAIRM commented 4 years ago

I'm trying to use the ReconstructionSystem script to stitch together 60 images taken in my office, very similar to the sample case provided. I was able to run the sample (bedroom) data no problem and generate a point cloud, but I don't have the same luck with my own Kinect images. I think the problem is the config.json that I'm using as a controller. I don't have a config.json specific to the Kinect, so I was trying the same config file from the bedroom example.

The "matching between frames" step ran fine, but the overall program stopped at the beginning of the "integrate rgbd frame" step, with the error "[ScalableTSDFVolume::Integrate] Unsupported image format." All images are 4096x3072. Color images are 32 bit, depth images are 16 bit. All synchronized and registered and saved as PNG. The reason I don't have a Kinect config.json is that the MKVreader script that is supposed to write out the json does not work. Can anyone who has had success with this share any tips, or share the config.json for Kinect? Any parameters I can adjust in my case to get it running?

scimad commented 4 years ago

I have been doing the very same thing with perfect success here, so it's doable. Could you please also provide your system configuration?

jimzou commented 4 years ago

I also encountered the same problem.

Traceback (most recent call last):
  File "run_system.py", line 68, in <module>
    make_fragments.run(config)
  File "D:\Open3D-0.9\examples\Python\ReconstructionSystem\make_fragments.py", line 191, in run
    n_files, n_fragments, config)
  File "D:\Open3D-0.9\examples\Python\ReconstructionSystem\make_fragments.py", line 169, in process_single_fragment
    intrinsic, config)
  File "D:\Open3D-0.9\examples\Python\ReconstructionSystem\make_fragments.py", line 143, in make_pointcloud_for_fragment
    intrinsic, config)
  File "D:\Open3D-0.9\examples\Python\ReconstructionSystem\make_fragments.py", line 130, in integrate_rgb_frames_for_fragment
    volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
RuntimeError: [Open3D ERROR] [ScalableTSDFVolume::Integrate] Unsupported image format.

Using config.json:

{
    "name": "Open3D reconstruction tutorial http://open3d.org/docs/release/tutorial/ReconstructionSystem/system_overview.html",
    "path_dataset": "dataset/02/",
    "path_intrinsic": "",
    "max_depth": 3.0,
    "n_frames_per_fragment": 10,
    "n_keyframes_per_n_frame": 2,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": false
}

Color: 1280 x 720 x 3 (8 bits per channel)
Depth: 1280 x 720 x 1 (16 bits per channel)

SeanStephensenAIRM commented 4 years ago

@jimzou where/how/why are you obtaining a 1280x720x3 @ 8 bits/channel = 24-bit color image? The Kinect's standard output seems to be 32-bit RGBA @ 8 bits/channel.

@scimad I'm on Mac 10.15.2, using a PyCharm venv running Python 3.6.8 and Open3D 0.9.0.0. The bedroom reconstruction example ran no problem. What are you using as an intrinsic for reconstructing Kinect images? Can you share your config file like Jim did? Here's mine; please let me know if you spot anything in here that could be causing the error. More importantly (for the error I'm actually getting), here is one RGB/depth image pair that I'm trying to feed into the reconstruction algorithm. Do you see anything unsuitable about their format?

{ "name": "Open3D reconstruction tutorial http://open3d.org/docs/release/tutorial/ReconstructionSystem/system_overview.html", "path_dataset": "path", (note: path contains subfolders "image" and "depth", just like the examples) "path_intrinsic": "", "max_depth": 3.0, "voxel_size": 0.05, "max_depth_diff": 0.07, "preference_loop_closure_odometry": 0.1, "preference_loop_closure_registration": 5.0, "tsdf_cubic_size": 3.0, "icp_method": "color", "global_registration": "ransac", "python_multi_threading": false } 000150194412_color_32_0_20200212_141108 000150194412_depth_16_0_20200212_141108

scimad commented 4 years ago

My config.json:

{
    "path_dataset": "/home/madhav/<my-some-path>/Open3D/examples/Python/ReconstructionSystem/sensors/az-op",
    "path_intrinsic": "/home/madhav/<my-some-path>/Open3D/examples/Python/ReconstructionSystem/sensors/az-op/intrinsic.json",
    "depth_map_type": "redwood",
    "n_frames_per_fragment": 100,
    "n_keyframes_per_n_frame": 5,
    "min_depth": 0.3,
    "max_depth": 3.0,
    "voxel_size": 0.05,
    "max_depth_diff": 0.07,
    "preference_loop_closure_odometry": 0.1,
    "preference_loop_closure_registration": 5.0,
    "tsdf_cubic_size": 3.0,
    "icp_method": "color",
    "global_registration": "ransac",
    "python_multi_threading": "true",
    "folder_fragment": "fragments/",
    "template_fragment_posegraph": "fragments/fragment_%03d.json",
    "template_fragment_posegraph_optimized": "fragments/fragment_optimized_%03d.json",
    "template_fragment_pointcloud": "fragments/fragment_%03d.ply",
    "folder_scene": "scene/",
    "template_global_posegraph": "scene/global_registration.json",
    "template_global_posegraph_optimized": "scene/global_registration_optimized.json",
    "template_refined_posegraph": "scene/refined_registration.json",
    "template_refined_posegraph_optimized": "scene/refined_registration_optimized.json",
    "template_global_mesh": "scene/integrated.ply",
    "template_global_traj": "scene/trajectory.log"
}

My intrinsic.json:

{
    "color_mode" : "MJPG_720P",
    "depth_mode" : "WFOV_2X2BINNED",
    "height" : 720,
    "intrinsic_matrix" : 
    [
        602.0198974609375,
        0.0,
        0.0,
        0.0,
        601.69488525390625,
        0.0,
        637.13360595703125,
        365.44882202148438,
        1.0
    ],
    "serial_number_" : "000006192212",
    "stream_length_usec" : 13633356,
    "width" : 1280
}

With that shared, I will see if I can spot anything to resolve your issue.

Edit: It may be a bad idea, but if the mkv is sharable, I could run it through my mkv_reader (it works on my Linux system) and generate the config.json and intrinsic.json. That way we can debug the issue, and you may be able to continue with your work in the meantime, if that's possible!
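(Aside on reading this file: the "intrinsic_matrix" is stored column-major, so fx, fy, cx, cy sit at indices 0, 4, 6, 7. Below is a minimal sketch of building the equivalent Open3D intrinsic by hand; the pipeline itself loads the file with o3d.io.read_pinhole_camera_intrinsic.)

import json
import open3d as o3d

# Parse the intrinsic.json shown above (the path is a placeholder).
with open("intrinsic.json") as f:
    data = json.load(f)

# Column-major 3x3 matrix: [fx, 0, 0, 0, fy, 0, cx, cy, 1].
m = data["intrinsic_matrix"]
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    data["width"], data["height"], m[0], m[4], m[6], m[7])
print(intrinsic.intrinsic_matrix)  # row-major 3x3 numpy array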

SeanStephensenAIRM commented 4 years ago

@scimad the images I'm using here are not from an mkv, just from 50 still-image captures on a timed loop (a program I had written anyway, so I didn't have to parse an mkv). But I guess as long as the still images are captured with the same settings as a recorded mkv, the intrinsic would still apply? If so, I can send you a very short/arbitrary mkv just for the goal of extracting the .json for my camera. Please let me know if this is correct, or if I'm missing something. Then I can email you a short mkv (or potentially even just attach it here, if file size/format is not an issue).

jimzou commented 4 years ago

@scimad the problem in this place has been solved; my intrinsic was not correct. But the integrated result file is still incorrect.

Reason for converting color to 3 channels: I observed that the redwood case uses a color format of 3 channels x 1 byte, and that the code in AzureKinectSensor.cpp calls the ConvertBGRAToRGB function to get 3 channels:

AzureKinectSensor::DecompressCapture(...)
{
    ...
    /* resize */
    // The output RGBD buffer is prepared with 3 channels (RGB)...
    rgbd_buffer->color_.Prepare(width, height, 3, sizeof(uint8_t));
    // ...while the raw capture buffer holds 4 channels (BGRA).
    color_buffer->Prepare(width, height, 4, sizeof(uint8_t));
    ...
    // Drop the alpha channel and reorder BGRA to RGB.
    ConvertBGRAToRGB(*color_buffer, rgbd_buffer->color_);
    ...
}

And in ScalableTSDFVolume.cpp:

void ScalableTSDFVolume::Integrate(
        const geometry::RGBDImage &image,
        const camera::PinholeCameraIntrinsic &intrinsic,
        const Eigen::Matrix4d &extrinsic) {
    // Depth must be 1 channel x 4 bytes (a float image), and for
    // TSDFVolumeColorType::RGB8 the color must be 3 channels x 1 byte;
    // a 4-channel RGBA color image fails these checks.
    if ((image.depth_.num_of_channels_ != 1) ||
        (image.depth_.bytes_per_channel_ != 4) ||
        (image.depth_.width_ != intrinsic.width_) ||
        (image.depth_.height_ != intrinsic.height_) ||
        (color_type_ == TSDFVolumeColorType::RGB8 &&
         image.color_.num_of_channels_ != 3) ||
        (color_type_ == TSDFVolumeColorType::RGB8 &&
         image.color_.bytes_per_channel_ != 1) ||
        (color_type_ == TSDFVolumeColorType::Gray32 &&
         image.color_.num_of_channels_ != 1) ||
        (color_type_ == TSDFVolumeColorType::Gray32 &&
         image.color_.bytes_per_channel_ != 4) ||
        (color_type_ != TSDFVolumeColorType::NoColor &&
         image.color_.width_ != intrinsic.width_) ||
        (color_type_ != TSDFVolumeColorType::NoColor &&
         image.color_.height_ != intrinsic.height_)) {
        utility::LogError(
                "[ScalableTSDFVolume::Integrate] Unsupported image format.");
    }
    ...

So I converted the Kinect color images to 3 channels.
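(A minimal sketch of that conversion for already-saved images; the dataset path is an assumption, and PIL/numpy are used here rather than anything Kinect-specific. PNG stores pixels in RGBA order, so dropping the fourth channel leaves RGB.)

import glob
import numpy as np
from PIL import Image

# Strip the alpha channel from every 4-channel color PNG in place,
# mirroring the channel drop that ConvertBGRAToRGB performs in the recorder.
for path in glob.glob("dataset/image/*.png"):  # assumed layout
    pixels = np.asarray(Image.open(path))
    if pixels.ndim == 3 and pixels.shape[2] == 4:
        Image.fromarray(pixels[:, :, :3]).save(path)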

germanros1987 commented 4 years ago

@SeanStephensenAIRM is your problem solved?

www158 commented 3 years ago

> [quoting scimad's config.json, intrinsic.json, and mkv_reader offer from above]

Hello, I have followed your configuration, but got this result:

OpenCV is not detected. Using Identity as an initial
making fragments from RGBD sequence.
OpenCV is not detected. Using Identity as an initial
Fragment 001 / 001 :: RGBD matching between frame : 100 and 101
OpenCV is not detected. Using Identity as an initial
Fragment 000 / 001 :: RGBD matching between frame : 0 and 1
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\externals\loky\process_executor.py", line 431, in _process_worker
    r = call_item()
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\externals\loky\process_executor.py", line 285, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\_parallel_backends.py", line 593, in __call__
    return self.func(*args, **kwargs)
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\parallel.py", line 253, in __call__
    for func, args, kwargs in self.items]
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\parallel.py", line 253, in <listcomp>
    for func, args, kwargs in self.items]
  File "G:\Open3D\examples\Python\ReconstructionSystem\make_fragments.py", line 161, in process_single_fragment
    intrinsic, with_opencv, config)
  File "G:\Open3D\examples\Python\ReconstructionSystem\make_fragments.py", line 78, in make_posegraph_for_fragment
    intrinsic, with_opencv, config)
  File "G:\Open3D\examples\Python\ReconstructionSystem\make_fragments.py", line 37, in register_one_rgbd_pair
    config)
  File "G:\Open3D\examples\Python\ReconstructionSystem\make_fragments.py", line 30, in read_rgbd_image
    convert_rgb_to_intensity=convert_rgb_to_intensity)
RuntimeError: [Open3D ERROR] [CreateFromColorAndDepth] Unsupported image format.
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run_system.py", line 68, in <module>
    make_fragments.run(config)
  File "G:\Open3D\examples\Python\ReconstructionSystem\make_fragments.py", line 183, in run
    for fragment_id in range(n_fragments))
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\parallel.py", line 1042, in __call__
    self.retrieve()
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\parallel.py", line 921, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\site-packages\joblib\_parallel_backends.py", line 540, in wrap_future_result
    return future.result(timeout=timeout)
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\concurrent\futures\_base.py", line 432, in result
    return self.__get_result()
  File "G:\Users\Admin\AppData\Local\Programs\Python\Python36\lib\concurrent\futures\_base.py", line 384, in __get_result
    raise self._exception
RuntimeError: [Open3D ERROR] [CreateFromColorAndDepth] Unsupported image format.
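(To debug this outside the pipeline, here is a minimal sketch, with placeholder paths, that reproduces the failing call directly on one image pair; the same 4-channel-color issue discussed above will make it raise the identical error.)

import open3d as o3d

# Reproduce the failing call on a single image pair (paths assumed).
color = o3d.io.read_image("image/000000.png")
depth = o3d.io.read_image("depth/000000.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)
print(rgbd)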

www158 commented 3 years ago

@jimzou @scimad What kind of camera do you use? Can you get 1280x720 resolution (depth image) with the Azure Kinect?

scimad commented 3 years ago

I use Azure Kinect and yes, we can get 1280x720 resolution with Azure Kinect.


www158 commented 3 years ago

> I use Azure Kinect and yes, we can get 1280x720 resolution with Azure Kinect.

Thank you for your reply. But on the Azure Kinect official website I can only find these resolutions for the depth image:

[image: Kinect] https://user-images.githubusercontent.com/67252756/87299161-18075780-c53e-11ea-861d-df8f04f5e43a.jpg

scimad commented 3 years ago

Sorry, since I replied from my email, I didn't know you had edited the comment. Like you said, the depth images do not natively come in the aforementioned resolution. You have to select one of the modes, capture the RGB image at 1280x720 resolution and the depth image in one of the above resolutions, and later use an image transformation to register the depth image onto the RGB image.
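(A minimal sketch of letting Open3D do that registration at capture time, assuming a build with Azure Kinect support; the sensor examples in the repo use similar calls.)

import open3d as o3d

# Default sensor settings; a config file can be passed instead.
sensor = o3d.io.AzureKinectSensor(o3d.io.AzureKinectSensorConfig())
if not sensor.connect(0):
    raise RuntimeError("Failed to connect to Azure Kinect")

# capture_frame(True) asks the SDK to warp the depth image into the
# color camera's geometry, so both come back at the color resolution.
rgbd = sensor.capture_frame(True)
if rgbd is not None:
    o3d.io.write_image("color.png", rgbd.color)
    o3d.io.write_image("depth.png", rgbd.depth)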
