Closed AnujAgrawal7 closed 2 years ago
The depth image you get from the Azure Kinect is in millimeters; convert it to meters and then generate the point cloud, and there should be no problem.
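In numpy terms that conversion is just a dtype cast and a divide; a minimal sketch with made-up sample values:

```python
import numpy as np

# Hypothetical raw depth frame as delivered by the Azure Kinect:
# uint16 values in millimeters.
depth_mm = np.array([[0, 500],
                     [1000, 2500]], dtype=np.uint16)

# Cast to float32 and divide by 1000 to get meters, which is what
# Open3D expects when you later pass depth_scale=1.0.
depth_m = depth_mm.astype(np.float32) / 1000.0

print(depth_m.dtype, depth_m.max())  # float32 2.5
```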
As for "Unsupported image format.": maybe there is another problem in your code, because I can successfully generate the point cloud with the Azure.
Hey thanks! I will try converting the depth data to meters and try again. As for "Unsupported image format.", my config file is:

```json
{
    "camera_fps" : "K4A_FRAMES_PER_SECOND_30",
    "color_format" : "K4A_IMAGE_FORMAT_COLOR_MJPG",
    "color_resolution" : "K4A_COLOR_RESOLUTION_720P",
    "depth_delay_off_color_usec" : "0",
    "depth_mode" : "K4A_DEPTH_MODE_WFOV_2X2BINNED",
    "disable_streaming_indicator" : "false",
    "subordinate_delay_off_master_usec" : "0",
    "synchronized_images_only" : "false",
    "wired_sync_mode" : "K4A_WIRED_SYNC_MODE_STANDALONE"
}
```

and the camera intrinsic file is:

```json
{
    "color_mode" : "MJPG_720P",
    "depth_mode" : "WFOV_2X2BINNED",
    "height" : 720,
    "intrinsic_matrix" :
    [
        602.9459228515625, 0.0, 0.0,
        0.0, 602.57269287109375, 0.0,
        648.361083984375, 373.14199829101562, 1.0
    ],
    "serialnumber" : "000075293512",
    "stream_length_usec" : 17733367,
    "width" : 1280
}
```
In the `azure_kinect_viewer` I made the following changes:

```python
if rgbd is None:
    continue
else:
    pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole_camera_intrinsic)
    print([pcd])
```
Can you please look into it and tell me where I went wrong? It would also be really helpful if you could share your script and parameter files.
Sorry, I'm not sure why you got that problem without seeing your code. Maybe you can try the code below; don't forget to change the intrinsic values to your own.
```python
import numpy as np
import open3d as o3d


class AzureKinect:
    def __init__(self):
        self.config = o3d.io.AzureKinectSensorConfig()
        self.device = 0
        self.align_depth_to_color = True

    def start(self):
        self.sensor = o3d.io.AzureKinectSensor(self.config)
        if not self.sensor.connect(self.device):
            raise RuntimeError('Failed to connect to sensor')

    def frames(self):
        while True:
            rgbd = self.sensor.capture_frame(self.align_depth_to_color)
            if rgbd is None:
                continue
            # Color as uint8; depth converted from millimeters to meters.
            color = np.asarray(rgbd.color).astype(np.uint8)
            depth = np.asarray(rgbd.depth).astype(np.float32) / 1000.0
            return color, depth


cam = AzureKinect()
cam.start()
color, depth = cam.frames()
intrinsic = o3d.camera.PinholeCameraIntrinsic(1280, 720, 601.1693115234375, 600.85931396484375, 637.83624267578125, 363.8018798828125)
depth = o3d.geometry.Image(depth)
color = o3d.geometry.Image(color)
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth, depth_scale=1.0, convert_rgb_to_intensity=False)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```
Hi,
@kaixin-bai where or how did you find out the values of your intrinsic?

```python
intrinsic = o3d.camera.PinholeCameraIntrinsic(1280, 720, 601.1693115234375, 600.85931396484375, 637.83624267578125, 363.8018798828125)
```

Are these the values found in the `intrinsic.json` under `intrinsic_matrix`? If yes, do I use all of the values or only some specific ones? Which of them do I use?
My `intrinsic.json` looks like this:
```json
{
    "color_mode" : "MJPG_720P",
    "depth_mode" : "NFOV_UNBINNED",
    "height" : 720,
    "intrinsic_matrix" :
    [
        613.01324462890625,
        0.0,
        0.0,
        0.0,
        613.0467529296875,
        0.0,
        639.76904296875,
        369.3740234375,
        1.0
    ],
    "serial_number_" : "000040301612",
    "stream_length_usec" : 21566700,
    "width" : 1280
}
```
Do I use all values of the `intrinsic_matrix` without the 0.0s and 1.0s, but together with the `width` and `height`?
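For what it's worth, judging by the values (cx is close to width/2 and cy to height/2), the `intrinsic_matrix` appears to be stored column-major, so fx, fy, cx, cy sit at indices 0, 4, 6, 7. A minimal parsing sketch under that assumption, using the values from the json above (plain Python, no Open3D needed for this step):

```python
import json

# The relevant fields of the intrinsic.json above, inlined for the example.
intrinsic_json = """
{
    "width" : 1280,
    "height" : 720,
    "intrinsic_matrix" :
    [
        613.01324462890625, 0.0, 0.0,
        0.0, 613.0467529296875, 0.0,
        639.76904296875, 369.3740234375, 1.0
    ]
}
"""

data = json.loads(intrinsic_json)
m = data["intrinsic_matrix"]  # assumed column-major 3x3:
# [fx, 0, 0,  0, fy, 0,  cx, cy, 1]
fx, fy, cx, cy = m[0], m[4], m[6], m[7]

print(fx, fy, cx, cy)
# These would then be the arguments for
# o3d.camera.PinholeCameraIntrinsic(data["width"], data["height"], fx, fy, cx, cy)
```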
Currently I use your intrinsic values because I get the best results with them, better than what the presets provided by `o3d.camera.PinholeCameraIntrinsicParameters` achieve. But there is still some potential to improve the representation of the point clouds, and I think it could be caused by the intrinsic I use.
Thank you in advance.
I am also curious how you got these beautiful intrinsics, which are optimal IMO for an initial guess.
Same question as above. How do I get the intrinsics using Open3D?
@Me817 @zapaishchykova @ghu191 Maybe you can try the function `open3d.io.write_azure_kinect_sensor_config`. I remember that I saved an mkv file with the Azure, and the intrinsic file was automatically generated and saved.
The current practice to get an intrinsic matrix is to first record a trial mkv with `azure_kinect_recorder.py`, then decode it with `azure_kinect_mkv_reader.py` with a specified `--output target_folder`. All the configurations and intrinsics will be automatically generated in that folder.
I will see if I can also provide an interface during runtime.
I am trying to create a point cloud using the RGBD stream from an Azure Kinect in Open3D. I get the following error while using the `o3d.geometry.PointCloud.create_from_rgbd_image` function:

```
RGBDImage of size
Color image : 1280x720, with 3 channels.
Depth image : 1280x720, with 1 channels.
Use numpy.asarray to access buffer data.
[Open3D WARNING] [CreatePointCloudFromRGBDImage] Unsupported image format.
[geometry::PointCloud with 0 points.]
```

I am using the default config via the `o3d.io.AzureKinectSensorConfig()` function. For the camera intrinsic parameters json file, I recorded the Kinect data and decoded it using `azure_kinect_mkv_reader.py`.