ibaiGorordo / pyKinectAzure

Python library to run Kinect Azure DK SDK functions
MIT License
446 stars 114 forks

Getting the Error "Kinect Body Tracking is not implemented yet in ARM. " #77

Closed DongBumKim closed 1 year ago

DongBumKim commented 1 year ago

Hi.

I'm trying to run your code on a Jetson Nano, but I keep facing an error when I try to run your example scripts.

Kinect Body Tracking is not implemented yet in ARM. Check https://feedback.azure.com/forums/920053 for more info.

Even when I'm not using body tracking at all, the same error happens.

Can you please help me out?

Thanks!

ibaiGorordo commented 1 year ago

Which example are you using? Or if you have modified the code, please share it.

DongBumKim commented 1 year ago

Which example are you using? Or if you have modified the code, please share it.

Running any of your example scripts returns the same error, and so does my modified code.

The modified code is down below.

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 20 02:07:13 2019

@author: prabhakar
"""
import sys
sys.path.insert(1, './pyKinectAzure/')

# import necessary arguments
import gi
import cv2
import argparse
import pykinect_azure as pykinect

# import required library like Gstreamer and GstreamerRtspServer
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GObject

pykinect.initialize_libraries()

# Modify camera configuration
device_config = pykinect.default_configuration
device_config.color_resolution = pykinect.K4A_COLOR_RESOLUTION_720P
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_NFOV_2X2BINNED
device_config.depth_mode = pykinect.K4A_DEPTH_MODE_OFF

# Start device
device = pykinect.start_device(config=device_config)

# Sensor Factory class which inherits the GstRtspServer base class and adds
# properties to it.
class SensorFactory(GstRtspServer.RTSPMediaFactory):
    def __init__(self, vtype, **properties):
        super(SensorFactory, self).__init__(**properties)
        self.cap = cv2.VideoCapture(opt.device_id)
        # Initialize the library, if the library is not found, add the library path as argument
        self.vtype = vtype
        self.number_frames = 0
        self.fps = opt.fps
        self.duration = 1 / self.fps * Gst.SECOND  # duration of a frame in nanoseconds
        self.launch_string = 'appsrc name=source is-live=true block=true format=GST_FORMAT_TIME ' \
                             'caps=video/x-raw,format=BGR,width={},height={},framerate={}/1 ' \
                             '! videoconvert ! video/x-raw,format=I420 ' \
                             '! x264enc speed-preset=ultrafast tune=zerolatency ' \
                             '! rtph264pay config-interval=1 name=pay0 pt=96' \
                             .format(opt.image_width, opt.image_height, self.fps)

    # method to capture the video feed from the camera and push it to the
    # streaming buffer.
    def on_need_data(self, src, length):
        if True:
            capture = device.update()
            if self.vtype == 'depth':
                ret, image = capture.get_colored_depth_image()
            elif self.vtype == 'rgb':
                ret, image = capture.get_color_image()
            elif self.vtype == 'ir':
                ret, image = capture.get_ir_image()

            if ret:
                # It is better to change the resolution of the camera
                # instead of changing the image shape as it affects the image quality.
                frame = cv2.resize(image, (opt.image_width, opt.image_height),
                                   interpolation=cv2.INTER_LINEAR)
                data = frame.tostring()
                buf = Gst.Buffer.new_allocate(None, len(data), None)
                buf.fill(0, data)
                buf.duration = self.duration
                timestamp = self.number_frames * self.duration
                buf.pts = buf.dts = int(timestamp)
                buf.offset = timestamp
                self.number_frames += 1
                retval = src.emit('push-buffer', buf)
                print('pushed buffer, frame {}, duration {} ns, durations {} s'.format(self.number_frames,
                                                                                       self.duration,
                                                                                       self.duration / Gst.SECOND))
                if retval != Gst.FlowReturn.OK:
                    print(retval)

    # attach the launch string to the override method
    def do_create_element(self, url):
        return Gst.parse_launch(self.launch_string)

    # attaching the source element to the rtsp media
    def do_configure(self, rtsp_media):
        self.number_frames = 0
        appsrc = rtsp_media.get_element().get_child_by_name('source')
        appsrc.connect('need-data', self.on_need_data)

# Rtsp server implementation where we attach the factory sensor with the stream uri
class GstServer(GstRtspServer.RTSPServer):
    def __init__(self, **properties):
        super(GstServer, self).__init__(**properties)
        self.factory_rgb = SensorFactory(vtype='rgb')
        self.factory_depth = SensorFactory(vtype='depth')
        self.factory_ir = SensorFactory(vtype='ir')

        self.factory_rgb.set_shared(True)
        self.factory_depth.set_shared(True)
        self.factory_ir.set_shared(True)

        self.set_service(str(opt.port))

        self.get_mount_points().add_factory('/rgb', self.factory_rgb)
        self.get_mount_points().add_factory('/depth', self.factory_depth)
        self.get_mount_points().add_factory('/ir', self.factory_ir)
        self.attach(None)

# getting the required information from the user
parser = argparse.ArgumentParser()
parser.add_argument("--fps", required=True, help="fps of the camera", type=int)
parser.add_argument("--image_width", required=True, help="video frame width", type=int)
parser.add_argument("--image_height", required=True, help="video frame height", type=int)
parser.add_argument("--port", default=8554, help="port to stream video", type=int)
parser.add_argument("--stream_uri", default="/video_stream", help="rtsp video stream uri")
opt = parser.parse_args()

# initializing the threads and running the stream on loop.
GObject.threads_init()
Gst.init(None)
server = GstServer()
loop = GObject.MainLoop()
loop.run()
```

ibaiGorordo commented 1 year ago

There was a mistake where it was trying to find the k4abt module path even though it was not necessary. I think it should be fine now.

DongBumKim commented 1 year ago

There was a mistake where it was trying to find the k4abt module path even though it was not necessary. I think it should be fine now.

I haven't checked my modified code yet, but it worked on the example scripts! Thanks a lot! :)

DongBumKim commented 1 year ago

There was a mistake where it was trying to find the k4abt module path even though it was not necessary. I think it should be fine now.

Sorry, but I got a new problem.

Thanks to your help, the error mentioned above is gone, but a new error appears for all of your example scripts:

```
[2022-09-21 13:40:35.726] [error] [t=8156] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/dewrapper/dewrapper.c (154): depth_engine_start_helper(). Depth engine create and initialize failed with error code: 204.
[2022-09-21 13:40:35.726] [error] [t=8156] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/dewrapper/dewrapper.c (160): deresult == K4A_DEPTH_ENGINE_RESULT_SUCCEEDED returned failure in depth_engine_start_helper()
[2022-09-21 13:40:35.726] [error] [t=8156] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/dewrapper/dewrapper.c (194): depth_engine_start_helper(dewrapper, dewrapper->fps, dewrapper->depth_mode, &depth_engine_max_compute_time_ms, &depth_engine_output_buffer_size) returned failure in depth_engine_thread()
[2022-09-21 13:40:35.726] [error] [t=8124] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/dewrapper/dewrapper.c (552): dewrapper_start(). Depth Engine thread failed to start
[2022-09-21 13:40:35.726] [error] [t=8124] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/depth/depth.c (398): dewrapper_start(depth->dewrapper, config, depth->calibration_memory, depth->calibration_memory_size) returned failure in depth_start()
[2022-09-21 13:40:35.727] [error] [t=8124] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/depth_mcu/depth_mcu.c (359): cmd_status == CMD_STATUS_PASS returned failure in depthmcu_depth_stop_streaming()
[2022-09-21 13:40:35.727] [error] [t=8124] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/depth_mcu/depth_mcu.c (362): depthmcu_depth_stop_streaming(). ERROR: cmd_status=0x00000063
[2022-09-21 13:40:35.727] [error] [t=8124] /__w/1/s/extern/Azure-Kinect-Sensor-SDK/src/sdk/k4a.c (895): depth_start(device->depth, config) returned failure in k4a_device_start_cameras()
Start K4A cameras failed!
```

I tried running the code with sudo and reinstalling libk4a1.4-dev, libk4a1.4 and k4a-tools, but nothing worked.

Can you help me out with this problem as well?

Thanks again.

ibaiGorordo commented 1 year ago

Not sure, that problem seems to be related to the system setup. Have you tried passing the actual path to the library as module_k4a_path when you initialize the libraries? https://github.com/ibaiGorordo/pyKinectAzure/blob/0766c3c70622a00bdf43e6875521d6a59dafc7c2/pykinect_azure/pykinect.py#L10
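
In case it helps, a minimal sketch of what that could look like; the library path below is only an assumed example for an aarch64 Linux install, so replace it with wherever libk4a.so actually lives on your system:

```python
import pykinect_azure as pykinect

# Point the wrapper at the k4a library explicitly instead of relying on the
# default search path. The path below is an assumed example; adjust as needed.
pykinect.initialize_libraries(module_k4a_path='/usr/lib/aarch64-linux-gnu/libk4a.so')
```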

DongBumKim commented 1 year ago

Actually, I found out that it was because I didn't have a display connected. I figured it out. Thanks though!

There is no error in your code anymore, but I got curious: is there a way to obtain the "real" depth value from a depth image (especially from the colored depth image)?

I'm using a Jetson Nano and transmitting the RGB and depth images to a client computer over RTSP. In my work, not only the relative depth information but also the real distance between the Kinect and the object is important. Is there a way to obtain this information?

Thanks!!

ibaiGorordo commented 1 year ago

Glad to hear it. You can get the actual depth map using the get_depth_image() function: https://github.com/ibaiGorordo/pyKinectAzure/issues/61

Basically, when you get the colored depth, it internally calls that function and then applies a colormap.
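
As a rough sketch of the difference, assuming the raw depth frame comes back as a 16-bit image whose values are distances in millimeters (which is how the Azure Kinect SDK reports depth):

```python
capture = device.update()

# Raw depth map: 16-bit values, distance from the camera in millimeters.
ret_depth, depth_image = capture.get_depth_image()

# Colored depth: the same map after a colormap is applied, useful for display only.
ret_colored, colored_depth = capture.get_colored_depth_image()

if ret_depth:
    # Example: distance to whatever is at the center pixel.
    h, w = depth_image.shape[:2]
    print('Distance at image center: {} mm'.format(depth_image[h // 2, w // 2]))
```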