cansik / librealsense-java

Intel® RealSense™ SDK 2 wrapper for Java.

Temporal Filter using Librealsense-java #8

Open jashshah999 opened 12 months ago

jashshah999 commented 12 months ago

I am able to replicate the RealSense SDK view exactly when I run the Python script shown below, but when I run a similar script using this library in Java, I get an output with many more values and a depth map that does not match the SDK view. Something similar to this: https://stackoverflow.com/questions/71381082/converting-16-bit-depth-frame-from-intel-realsense-d455-to-opencv-mat-in-android

Essentially, I want the same output that comes from running the script below, but in Java. Please help me out. Thank you in advance.


import pyrealsense2 as rs
import numpy as np
import cv2

BG_THRESHOLD = 2000
apply_filter = True 

# Configure depth and color streams
pipeline = rs.pipeline()
config = rs.config()

# Get device product line for setting a supporting resolution
pipeline_wrapper = rs.pipeline_wrapper(pipeline)
pipeline_profile = config.resolve(pipeline_wrapper)
device = pipeline_profile.get_device()
device_product_line = str(device.get_info(rs.camera_info.product_line))

found_rgb = False
for s in device.sensors:
    if s.get_info(rs.camera_info.name) == 'RGB Camera':
        found_rgb = True
        break
if not found_rgb:
    print("The demo requires Depth camera with Color sensor")
    exit(0)

config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)

if device_product_line == 'L500':
    config.enable_stream(rs.stream.color, 960, 540, rs.format.bgr8, 30)
else:
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming
pipeline.start(config)

filter = rs.temporal_filter(0.66, 46, 8)
# filter = rs.hole_filling_filter(1)

# filter.set_option(rs.option.filter_smooth_alpha, 0.66)
# filter.set_option(rs.option.filter_smooth_delta, 46)
filter.set_option(rs.option.holes_fill, 7)
filter.set_option(rs.option.filter_smooth_alpha, 0.4)
filter.set_option(rs.option.filter_smooth_delta, 46)

filter_depth = rs.threshold_filter(0.64, 1.01)

try:
    while True:

        # Wait for a coherent pair of frames: depth and color
        frames = pipeline.wait_for_frames()
        depth_frame = frames.get_depth_frame()

        # filter = rs.hole_filling_filter(1)
        # filter = rs.temporal_filter()
        if apply_filter:

            depth_frame = filter.process(depth_frame)
            depth_frame = filter_depth.process(depth_frame)

        color_frame = frames.get_color_frame()
        if not depth_frame or not color_frame:
            continue

        # Convert images to numpy arrays
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        depth_colormap_dim = depth_colormap.shape
        color_colormap_dim = color_image.shape

        # If depth and color resolutions are different, resize color image to match depth image for display
        if depth_colormap_dim != color_colormap_dim:
            resized_color_image = cv2.resize(color_image, dsize=(depth_colormap_dim[1], depth_colormap_dim[0]), interpolation=cv2.INTER_AREA)
            images = np.hstack((resized_color_image, depth_colormap))
        else:
            images = np.hstack((color_image, depth_colormap))

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        key = cv2.waitKey(1)
        if key == ord('f'):
            apply_filter = not apply_filter

finally:

    # Stop streaming
    pipeline.stop()
cansik commented 12 months ago

Thanks for reporting, could you add the Java version too?

What you are doing is already covered by the library, except for the manual step of colorizing the image. With this library I would suggest using the colorizer provided by librealsense.
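A minimal sketch of what that could look like, assuming the library exposes a Colorizer processing block with a no-argument constructor (the colorize() call matches the example further down; check ProcessingBlockTest.java for the actual usage):

// Colorizer and its constructor are assumed here; verify against the test code
Colorizer colorizer = new Colorizer();

FrameList frames = pipeline.waitForFrames();
DepthFrame depthFrame = frames.getDepthFrame();

// map the 16-bit depth frame to an RGB8 image for display
VideoFrame colorizedDepth = colorizer.colorize(depthFrame);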

cansik commented 12 months ago

And did you have a look at https://github.com/cansik/librealsense-java/blob/master/src/test/java/org/intel/rs/ui/ProcessingBlockTest.java already?

jashshah999 commented 11 months ago

Hi @cansik, the main issue I have here is as follows: in the Python script I am able to apply the filter on the depth map, but in the Java version I have to apply the filter on the depth image. This does not solve my issue, because I need the depth image to do the filtering, and then on this filtered image I need to run some background segmentation; but since the depth image does not have information about depth (just RGB values for each pixel), I cannot do both using the same frame. In Python I applied the filter on the depth map and then did the background removal using the same filtered map, but I am not able to do that in Java. So my questions are:

1. Is it possible to go from the depth image to the depth map? Is it a 1-to-1 relation? If so, is there a function already implemented for this?
2. If not, how else can we first filter the image using the temporal filter and then apply background segmentation on it?

cansik commented 11 months ago

I think your terminology is a bit misleading. You always work on the depth frame, which on a RealSense camera is Z16 (16-bit unsigned integer) by default. If you map the depth frame with a colormap, you get an RGB8 (8-bit unsigned integer) image out of it. So every filter that is applied to the depth usually runs on the 16-bit data. This is also the case with this library, as shown in the example I've posted:

FrameList frames = pipeline.waitForFrames();
FrameList alignedFrames = align.process(frames);

VideoFrame colorFrame = alignedFrames.getColorFrame();
DepthFrame depthFrame = alignedFrames.getDepthFrame();

// example on how to process the depthFrame
DepthFrame decimatedFrame = decimationFilter.process(depthFrame);

// todo: do more processing here (thresholding, hole-filter, temporal-filter)

// mapping depth frame to RGB8 with colorizer
VideoFrame colorizedDepth = colorizer.colorize(decimatedFrame);

The only thing you have to do now is add more filters, as in your Python example.
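A rough sketch of those extra steps, assuming the library exposes TemporalFilter and ThresholdFilter processing blocks with a process() method like the DecimationFilter above; the class names and the option handling are assumptions and should be checked against ProcessingBlockTest.java:

// hypothetical filter setup; class names and option API are assumptions
TemporalFilter temporalFilter = new TemporalFilter();
ThresholdFilter thresholdFilter = new ThresholdFilter();
// the values from the python script (smooth alpha 0.4, smooth delta 46,
// persistence 7, distance range 0.64 m to 1.01 m) would be set here through
// whatever option setters the filters expose

// run all filters on the 16-bit depth frame, in the same order as in python
DepthFrame temporalFrame = temporalFilter.process(decimatedFrame);
DepthFrame thresholdedFrame = thresholdFilter.process(temporalFrame);

// only map to RGB8 once all depth-domain filtering is done
VideoFrame colorizedDepth = colorizer.colorize(thresholdedFrame);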

Regarding your questions:

  1. No, it is not possible to map between the two, because there is a 16-bit to 8-bit conversion that cannot be reversed without data loss.
  2. What do you mean by background segmentation? If you mean thresholding, you do that before the conversion, as in your own Python example:
depth_frame = filter.process(depth_frame)
depth_frame = filter_depth.process(depth_frame)

It looks to me like you have some general questions about using the RealSense framework and want to clarify basic issues. If it's not really Java specific, I'd be happy if you posted the questions in the community forum.