IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Rosbag Camera Info viewing and Multicam #9846

Closed nath-partha closed 3 years ago

nath-partha commented 3 years ago

Required Info
Camera Model { D435 }
Firmware Version (05.12.15.50)
Operating System & Version Linux (Ubuntu 16.0)
Kernel Version (Linux Only) 05.12.15.50
Platform PC
SDK Version 2.49
Language {python }
Segment {AR, MultiCam }

Issue Description

  1. How can I query the serial number from a bag file recording? I can get the serial number from a device if it is connected (i.e. rs.context.devices), but our setup requires us to identify a camera after the recording was done in order to know the recording viewpoint.

  2. Our additional cameras are still en route, so I haven't tested this yet. But is it possible to record multiple cameras, either into a single bag file or into separate ones, from the RealSense Viewer or in a Python script using enable_record_to_file()?

Thanks!

MartyG-RealSense commented 3 years ago

Hi @parthapn98

  1. The serial number of the camera that recorded a bag file is stored in the bag, as indicated by the image below of an official SDK sample bag, where the serial number is shown in the RealSense Viewer when clicking the 'i' icon in the options panel during bag playback.

[image: RealSense Viewer info panel showing the bag's serial number]

Whilst I do not have a specific script example for querying a bag's serial number with Python, my assumption would be that once your Python script specifies the bag file as the data source (instead of a live camera) with enable_device_from_file, you should be able to use the same code that reads the serial number of a live camera.

The link below has a simple example of a Python script for printing the serial number of multiple RealSense cameras.

https://stackoverflow.com/questions/50724909/pyrealsense2-librealsense-sdk-2-0-choose-cam-from-serial-number
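
A minimal sketch of that idea (untested; "recording.bag" is a placeholder path) would be:

import pyrealsense2 as rs

# Point the config at the bag file instead of a live camera
config = rs.config()
rs.config.enable_device_from_file(config, "recording.bag", repeat_playback=False)

pipeline = rs.pipeline()
profile = pipeline.start(config)

# The playback 'device' exposes the same camera_info fields as a live device
dev = profile.get_device()
print("Serial number:", dev.get_info(rs.camera_info.serial_number))

pipeline.stop()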

  2. In https://github.com/IntelRealSense/librealsense/issues/2882 a RealSense user shared their Python script for saving two cameras to a bag file at the same time. Their script may act as a useful basis for creating a script for more than 2 cameras.

nath-partha commented 3 years ago

Hi Marty, thanks for the response. I have already tried it that way and specified the following:

    rs.config.enable_device_from_file(config, args.input, repeat_playback=False)

    # Configure the pipeline to stream the depth and color streams
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, rs.format.rgb8, 30)

    # Start streaming from file
    profile = pipeline.start(config)
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)

    # Create opencv window to render image in
    cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    ctx = rs.context()
    for d in ctx.devices:
        inf = d.get_info(rs.camera_info.serial_number)
        print(inf)

This, however, only works if the camera is connected and throws an error otherwise.

Thanks for the link to the second topic.

MartyG-RealSense commented 3 years ago

Another RealSense user who was saving two cameras in one bag used the code below for reading the serial number from a bag file. It partly worked for them but would only print one serial number.

cfg = realsense.config();
cfg.enable_device_from_file('C:\Program Files (x86)\Intel RealSense SDK 2.0\matlab\20190429_171854_2cameras.bag');
pipe = realsense.pipeline(); 
profile = pipe.start(cfg);
dev = profile.get_device();
serial_number = dev.get_info(realsense.camera_info.serial_number);

I would therefore consider the possibility that, although multiple cameras are attached, only one of the cameras has its serial number stored in the bag because the data is being merged into a single bag file.

nath-partha commented 3 years ago

Hi Marty,

Thanks for your help. Following your threads, I got a good understanding of saving bag files for the cameras using Python. I have run into a problem, though: a bag file recorded with Python can be opened in the RealSense Viewer, but not with a Python script that can open bag files made with the RealSense Viewer. Any ideas on where I could have made a mistake?

Recorder script: sorry for using a screenshot, the embed link glitches with so many comments and I haven't cleaned the code yet. [screenshot: recorder script]

Bag file in question in the RealSense Viewer: [screenshot]

The error I get is that frames didn't arrive within 5000 ms from the pipeline's wait_for_frames().

Playback script:


# First import library
import pyrealsense2 as rs
# Import Numpy for easy array manipulation
import numpy as np
# Import OpenCV for easy image rendering
import cv2
# Import argparse for command-line options
import argparse
# Import os.path for file path manipulation
import os.path

#import pathlib

# Create object for parsing command-line options
parser = argparse.ArgumentParser(description="Read recorded bag file and display depth stream in jet colormap.\
                                Remember to change the stream fps and format to match the recorded.")
# Add argument which takes path to a bag file as an input
parser.add_argument("-i", "--input", type=str, help="Path to the bag file")
parser.add_argument("-n", "--name", type=str, help="SubjectName")
# Parse the command line arguments to an object
args = parser.parse_args()
# Exit if no input parameter has been given
if not args.input:
    print("No input parameter has been given.")
    print("For help type --help")
    exit()
# Check if the given file has a .bag extension

print(args.name)
if os.path.splitext(args.input)[1] != ".bag":
    print("The given file is not of correct file format.")
    print("Only .bag files are accepted")
    exit()
try:
    # Create pipeline
    pipeline = rs.pipeline()

    # Create a config object
    config = rs.config()

    # Tell config that we will use a recorded device from file to be used by the pipeline through playback.
    rs.config.enable_device_from_file(config, args.input, repeat_playback=False)

    # Configure the pipeline to stream the depth and color streams
    # Change these parameters according to the recorded bag file resolution
    config.enable_stream(rs.stream.depth, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, rs.format.rgb8, 30)

    # Start streaming from file
    profile = pipeline.start(config)
    playback = profile.get_device().as_playback()
    playback.set_real_time(False)

    # # Create opencv window to render image in
    # cv2.namedWindow("Depth Stream", cv2.WINDOW_AUTOSIZE)

    # Create colorizer object
    colorizer = rs.colorizer()

    dev = profile.get_device()
    inf = dev.get_info(rs.camera_info.serial_number)

    # ctx = rs.context()
    # for d in ctx.devices:
    #     inf = d.get_info(rs.camera_info.serial_number)
    #     print(inf)

    print("Serial Number: %s"%inf)

    #directory set up
    datetime = os.path.splitext(os.path.basename(args.input))[0]
    dt = datetime.split("_")
    print(dt)
    counter = 0
    directory = "HAU_YCB/"+args.name+dt[0]+"/"+datetime+"/Cam"+inf+"/"
    if not os.path.exists(directory):
        os.makedirs(directory)

    # Create an align object
    # rs.align allows us to perform alignment of depth frames to others frames
    # The "align_to" is the stream type to which we plan to align depth frames.
    align_to = rs.stream.color
    align = rs.align(align_to)

    # Streaming loop
    while True:
        # Get frameset of depth
        frames = pipeline.wait_for_frames()
        # Align the depth frame to color frame
        aligned_frames = align.process(frames)

        #aligned_color_frame = aligned_frames.get_color_frame()

        counter = counter +1

        # Get frame
        depth_frame = frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image
        color_frame = frames.get_color_frame()
        # Get aligned frames
        aligned_depth_frame = aligned_frames.get_depth_frame()  # aligned_depth_frame is a 640x480 depth image

        if not color_frame:
            continue

        # Colorize depth frame to jet colormap
        depth_color_frame = colorizer.colorize(depth_frame)
        aligned_depth_color_frame = colorizer.colorize(aligned_depth_frame)

        # Convert depth_frame to numpy array to render image in opencv
        depth_image = np.asanyarray(depth_frame.get_data())
        depth_color_image = np.asanyarray(depth_color_frame.get_data())
        aligned_depth_image = np.asanyarray(aligned_depth_frame.get_data())
        aligned_depth_color_image = np.asanyarray(aligned_depth_color_frame.get_data())

        #if not color_frame:
        #    continue

        color_image = np.asanyarray(color_frame.get_data())
        im_rgb = cv2.cvtColor(color_image, cv2.COLOR_BGR2RGB)

        #if not aligned_color_frame:
        #    continue
        #aligned_color_image = np.asanyarray(aligned_color_frame.get_data())

        # Render image in opencv window

        # images = np.hstack((color_image, depth_color_image))
        # aligned_images = np.hstack((aligned_color_image, aligned_depth_color_image))
        #cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)
        cv2.imshow("Depth Stream", depth_image)
        cv2.imshow("Depth Colour Stream", depth_color_image)
        cv2.imshow("Aligned Depth Stream", aligned_depth_image)
        cv2.imshow("Aligned Depth Colour Stream", aligned_depth_color_image)
        #cv2.namedWindow('Align Example', cv2.WINDOW_NORMAL)
        cv2.imshow("Color Stream", im_rgb)

        #cv2.imshow("Color Stream", color_image)

        result = cv2.imwrite(directory + "depth_image_%06d.png" % counter, depth_image)
        result1 = cv2.imwrite(directory + "depth_colour_image_%06d.png" % counter, depth_color_image)
        result2 = cv2.imwrite(directory + "color_image_%06d.png" % counter, im_rgb)
        result3 = cv2.imwrite(directory + "aligned_depth_image_%06d.png" % counter, aligned_depth_image)
        result4 = cv2.imwrite(directory + "aligned_depth_colour_image_%06d.png" % counter, aligned_depth_color_image)

        key = cv2.waitKey(1)
        # if pressed escape exit program
        if key == 27:
            cv2.destroyAllWindows()
            break

finally:
    pass
MartyG-RealSense commented 3 years ago

My understanding is that when playing back a bag file from a script, if you define cfg stream configurations then they have to match the content of the specific recorded bag that is being played back. For example, if you recorded a bag file in the Viewer with only depth in it, then there should only be a cfg definition for depth in the bag playback script.
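
One way to avoid a mismatch is to skip the explicit enable_stream calls entirely and let playback expose whatever the bag actually contains, then inspect the streams before processing. A rough sketch, with "recording.bag" as a placeholder path:

import pyrealsense2 as rs

config = rs.config()
rs.config.enable_device_from_file(config, "recording.bag", repeat_playback=False)
# No enable_stream() calls: playback exposes only the streams that were recorded

pipeline = rs.pipeline()
profile = pipeline.start(config)

# List the streams the bag really contains
for stream in profile.get_streams():
    print(stream.stream_type(), stream.format(), stream.fps())

pipeline.stop()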

nath-partha commented 3 years ago

Hi Marty,

We think we've narrowed down the cause of this for us. When recording bag files from the RealSense Viewer, the camera is already streaming, so when we start saving there are no missed frames. In our Python recorder we sometimes see the problem shown in the screenshots below:

[screenshots: python_sdk_bag_recording]

It is similar to issues 9695 and 5848, but we would prefer not to manually intervene with individual files.

But it does not solve the issue.

// Did not attempt with the same pipe1 for both, as pipe.start() starts the file writer as well

MartyG-RealSense commented 3 years ago

https://github.com/IntelRealSense/librealsense/issues/7932 provides an example of Python code for skipping the first 5 frames during playback in the script under the Loading Code heading. You could certainly change '5' to '3'.

for _ in range(5):
    pipe.wait_for_frames()
for i in range(num_frames - 5):
    print(i)
    frameset = pipe.wait_for_frames()

I see no reason why the same code could not be used in the recording part of your application too to skip the first few frames when the pipeline is started.

nath-partha commented 3 years ago
conf1 = rs.config()
conf1.enable_stream(rs.stream.depth, args.width, args.height, rs.format.z16, args.fps)
conf1.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, args.fps)
conf1.enable_device(camera_data[name])
conf1.enable_record_to_file(output_path)
pipe1 = rs.pipeline()
pipelineProfilel = pipe1.start(conf1)

The above is our basic format for recording to a bag file

The .bag file writer starts on calling pipe1.start(conf1), so there is no room to skip frames using a loop + wait_for_frames().

I have tried the following variations: 1) no comment hash removed, 2) one comment hash removed, 3) all comment hashes removed.

conf1 = rs.config()
conf1.enable_stream(rs.stream.depth, args.width, args.height, rs.format.z16, args.fps)
conf1.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, args.fps)
conf1.enable_device(camera_data[name])
pipe1 = rs.pipeline()
pipelineProfilel = pipe1.start(conf1)
for _ in range(90):
    pipe1.wait_for_frames()
##pipe1.stop()
conf1.enable_record_to_file(output_path)
#pipe1.start(conf1)

Variation 1 doesn't create the bag file, as pipe.start(config) isn't set up for file writing. Variation 2 crashes with "device busy", because an already running pipe cannot be started again. Variation 3 is the same as the original code given before the variations: sometimes it creates a bag file without gaps, sometimes not.

Skipping in playback: following this thread, counting the number of frames does not bring any advantage unless we can specify the playback to start at a specific frame. Also, the frame count will be inaccurate since we have time blocks without any frames.

Quoting the earlier suggestion:

"7932 provides an example of Python code for skipping the first 5 frames during playback in the script under the Loading Code heading. You could certainly change '5' to '3'.

for _ in range(5):
    pipe.wait_for_frames()
for i in range(num_frames - 5):
    print(i)
    frameset = pipe.wait_for_frames()

I see no reason why the same code could not be used in the recording part of your application too to skip the first few frames when the pipeline is started."

This kind of skipping during playback only works when there is an even distribution of frames in the rosbag, i.e. one recorded with the RealSense Viewer. For the bag in the image I attached previously, pipe.wait_for_frames() causes our crash even when I put it inside a for loop like the one above.

nath-partha commented 3 years ago

Found and resolved issue, thanks for your help @MartyG-RealSense

Issue #4249's original code helped with the initial skipping during recording. I doubt we will have an issue with unaligned frame numbers since our unpacking is different.
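
For reference, that pattern - pausing the recorder immediately after the pipeline starts, discarding the warm-up frames and then resuming - looks roughly like the sketch below (based on the pause/resume code shared later in this thread, not the exact resolved script):

import pyrealsense2 as rs

conf1 = rs.config()
conf1.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
conf1.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
conf1.enable_record_to_file("output.bag")

pipe1 = rs.pipeline()
profile = pipe1.start(conf1)

# Pause the bag writer straight away so the warm-up frames are not recorded
recorder = profile.get_device().as_recorder()
rs.recorder.pause(recorder)

# Discard the first frames while auto-exposure settles
for _ in range(30):
    pipe1.wait_for_frames()

# Resume writing once the stream is stable
rs.recorder.resume(recorder)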

I'm closing this issue since it's resolved. Cheers

MartyG-RealSense commented 3 years ago

Great to hear that you developed a solution, @parthapn98 - thanks very much for the update!

nath-partha commented 3 years ago

@MartyG-RealSense Quick question,

Background: We're currently running experiments with our multicam setup and noticed a few things

Questions:

Asking because I was unable to find direct answers for all of them, and even our own experiment results are contradictory. Any links you can provide on related topics are also appreciated.

MartyG-RealSense commented 3 years ago

RealSense 400 Series cameras do not interfere with each other when used in a multicam arrangement without hardware sync. The infrared imaging sensors do not have an IR Cut filter though and so can see all visible light frequencies. This is why the T265 Tracking Camera model has an IR Cut filter so that it can be paired with a 400 Series camera in close proximity, as the T265's IR Cut filter blocks it from seeing the dot pattern projection from the 400 Series model.

You could try reducing the Laser Power setting of the cameras, which will make the IR dot pattern projection less visible. This can have a negative impact on the amount of detail in the depth image. If the scene is well lit though then the camera can alternatively use ambient light in the scene to analyze surfaces for depth detail instead of using the IR dot pattern projection and so you could experiment with turning the IR Emitter off completely (it is a separate component from the left and right IR imaging sensors).

The Viewer will not be hardware syncing the cameras unless the Inter Cam Sync Mode of the cameras has been set and the cameras are connected by sync cabling or a wireless sync trigger pulse transmitted by a signal generator is being used. If you are not currently using hardware sync then I would recommend having the Inter Cam Sync Mode set to '0' on all cameras.

On the D435 model the projector pulses to coincide with camera exposure, whilst on the D415 model the projector is always on. The D435's projector can be set to be always on like it is with the D415, as you have already done. This subject is discussed in the section of Intel's projector white-paper linked to below

https://dev.intelrealsense.com/docs/projectors#section-9-using-multiple-depth-cameras


In regard to the number of cameras that you can use: if all of the cameras are attached to the same computer then the number that can be used may depend on whether all cameras are operating simultaneously, since computer resources will be consumed. If all cameras are capturing at the same time then Intel has previously recommended using a computer with an i7 processor for a four-camera setup.

If the cameras are not all capturing simultaneously and are instead turning on and off in a sequence (e.g. 1-4 on and then off, then 5-8 on and then off, etc.) then you could use a greater number of cameras on the same computer. If you use multiple computers and spread the cameras across them then that should greatly increase the number of cameras that can be used.

As an example, the CONIX Research Center at Carnegie Mellon developed a system for generating point clouds from up to 20 RealSense cameras and then stitching them together, with one camera per computer for capturing and a central computer to process all of the captured data.

https://github.com/conix-center/pointcloud_stitching

Intel also demonstrated a simpler form of such a system in January 2018, with four D435 cameras capturing a scene and sending the data to a fifth computer for final processing.

https://www.intelrealsense.com/intel-realsense-volumetric-capture/

nath-partha commented 3 years ago

Hi @MartyG-RealSense,

Are there any solved issues on errors in bag files recorded with a Python script, specifically regarding the timestamps of the frames?

During playback, the recording duration of the bag is fine, but then it jumps to a very high timestamp ("35xxxx..") and the cause appears random: some bag files are fine and without issues, some are not. We have been trying to debug this for a while with USB resets and by trying to ensure we do not attempt to start recording on an already streaming device. No luck yet.

Any suggestions?

MartyG-RealSense commented 3 years ago

You should not be able to start a recording of a particular stream on a particular camera if recording of that stream has already started, as that camera stream will already be 'claimed' by the active recording. You might receive an error that the camera is busy or it 'failed to set power state' if you try to start a new pipeline after one has already been started though.

Checking if a stream is already busy is not straightforward. The error generated by a busy state is typically handled by catching the error in an exception check so that the program does not crash. An example of exception catching in Python is highlighted in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-tutorial-1-depth.py#L49
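
A minimal sketch of that kind of guard around pipeline startup (assuming pyrealsense2 surfaces the busy / 'failed to set power state' errors as a RuntimeError):

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, rs.format.z16, 30)

try:
    profile = pipeline.start(config)
except RuntimeError as e:
    # Typically raised if another process has already claimed the stream
    print("Could not start pipeline:", e)
else:
    # ... record / process frames here ...
    pipeline.stop()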

You could also have greater control over which camera is recording to which file by creating a separate pipeline and unique cfg configuration for each camera, manually providing the exact serial number for each camera instead of letting the program automatically detect the serial numbers and build a list of devices. A Python example of this is in https://github.com/IntelRealSense/librealsense/issues/1735#issuecomment-390840337

Have you also tried playing the affected bag files back in the RealSense Viewer by drag-and-dropping the bag file into the Viewer's central panel and observing the timestamps to see whether the jump occurs in the Viewer playback? If it does, this could indicate that there is a problem in the recording rather than in your Python code.

nath-partha commented 3 years ago

The error generated by a busy state is typically handled by catching the error in an exception check so that the program does not crash.

Yes, this is how we decide to either start or abort the recording, or just let the script crash on pipe.start.

You could also have greater control over which camera is recording to which file by creating a separate pipeline and unique cfg configuration for each camera, manually providing the exact serial number for each camera instead of letting the program automatically detect the serial numbers and build a list of devices.

We do a combination of these: we first create a list of serial numbers and then create lists of pipes, profiles and configs, each using the serial number list.

for i in range(len(serials)):
        pipe[i] = rs.pipeline()
        conf[i] = rs.config()
        conf[i].enable_device(serials[i])

        if All:
            conf[i].enable_all_streams()
        else:

            if Depth:
                conf[i].enable_stream(rs.stream.depth, wd, hd, rs.format.z16, fps_depth)
            if IR1:
                conf[i].enable_stream(rs.stream.infrared, 1, wd, hd, rs.format.y8, fps_depth)
            if IR2:
                conf[i].enable_stream(rs.stream.infrared, 2, wd, hd, rs.format.y8, fps_depth)
            if RGB:
                conf[i].enable_stream(rs.stream.color, w, h, rs.format.rgb8, fps)
            if Motion:
                # conf[i].enable_stream(rs.motion_stream, -1)
                conf[i].enable_stream(rs.stream.accel)
                conf[i].enable_stream(rs.stream.gyro)

        # conf[i].enable_all_streams()
        conf[i].enable_record_to_file(out[i])
        # prof[i] = pipe[i].start(conf[i])

The problem is less frequent if we use the ALL option, however it never completely goes away. It is much more frequent with the other settings, although still not every time.

The remaining relevant part of the code is

for i in range(len(serials)):
        prof[i] = pipe[i].start(conf[i])
        sensor[i] = prof[i].get_device().first_depth_sensor()
        sensor[i].set_option(rs.option.emitter_always_on, 1)
        record[i] = prof[i].get_device().as_recorder()
        rs.recorder.pause(record[i])
# delay for warm-up
for i in range(len(serials)):
        rs.recorder.resume(record[i])
# delay recording duration

#--------------------- we tried this as optional for a fix, it did not resolve the issue
# for i in range(len(serials)):
#        rs.recorder.pause(record[i])  # pause before closing stream
##delay pause 
#--------------------

for i in range(len(serials)):
        pipe[i].stop()
        conf[i].disable_all_streams()
del pipe
del prof
del conf
del record
del sensor

The problem occurs even if we write a separate short script to record from just one camera instead of auto-detecting like this.

Have you also tried playing the affected bag files back in the RealSense Viewer by drag-and-dropping the bag file into the Viewer's central panel and observing the timestamps to see whether the jump occurs in the Viewer playback? If it does, this could indicate that there is a problem in the recording rather than in your Python code.

We confirmed the timestamp issue with the RealSense Viewer itself, not with rqt or a Python bag reader. The recording for the assigned duration runs fine with a proper FPS display, but the end of the file is marked at a value of "35xxxx.." seconds, and the timestamp at the end of the recorded duration suddenly jumps to this point, indicating there are no frames between the end of the bag's recording and this specific time value. The time value "35xxxx.." is actually specific and does not vary; I'm writing from memory so I don't remember what it was exactly.

MartyG-RealSense commented 3 years ago

I wonder whether the incorrect timestamp on the final frame is a consequence of how a bag file is 'indexed' by the SDK when a recording is ended by closing the pipeline.

Looking at the code, the line conf[i].disable_all_streams() after the pipe stop instruction is likely to be unnecessary, as all of the streams that are currently active on a pipeline should be automatically stopped anyway when the pipeline is closed.

I think it's a good idea that you implemented a pause instruction before the pipe stop.

nath-partha commented 3 years ago

I wonder whether the incorrect timestamp on the final frame is a consequence of how a bag file is 'indexed' by the SDK when a recording is ended by closing the pipeline.

Are there any alternative ways to end the recording, or any recommendations on how to proceed here? The scripts are designed so that users do not require intricate know-how of bag/ROS and pyrealsense. [screenshot]

The files in this video were recorded seconds apart. There is no specific order as such; a good recording can be preceded by a bad one as well. The ends of the file names are timestamps. No devices were disconnected in between.

https://user-images.githubusercontent.com/77120537/140086243-61ee05fc-32ee-4d04-9489-21524194bf0d.mp4

MartyG-RealSense commented 3 years ago

It looks as though, out of the three bags playing simultaneously in the Viewer, one is playing normally and the other two have end times that are completely wrong, which is somehow affecting the visual updating of the images. The timelines of the affected bags progress at a normal rate that matches the working bag (even though the images are not updating), and then both the start and end times become wrong after the jump to the final frame.

Do the bags play normally in the Viewer if played individually instead of three at the same time? If so then this might suggest that the computer that the bags are being played on cannot cope with the processing demands of a simultaneous 3-bag playback.

When you say that the bags were recorded seconds apart, do you mean sequentially - that an active recording was stopped and then a new recording to a different file was rapidly begun?

The rosbag record bag recording function of ROS provides an automated means to do this called --split, where you can define a maximum recording duration or file size, after which the bag file recording is closed and a new file starts recording seamlessly from that point onwards. So you could set the recording running and let rosbag record's split command take care of the start and stop process during recording.

[screenshot: rosbag record --split options]

nath-partha commented 3 years ago

(even though the images are not updating)

The images are updating; sorry, there's not much distinguishing movement, I didn't realise it until after. But if you maximise and look closely, the corresponding movement and update of frames can be seen.

Do the bags play normally in the Viewer if played individually instead of three at the same time? If so then this might suggest that the computer that the bags are being played on cannot cope with the processing demands of a simultaneous 3-bag playback.

Yes, they are identical when played back individually. I played them simultaneously to record them in one mp4. We have done record+playback on multiple tiers of devices to rule out hardware causes, and as mentioned, even with just a single camera the issues are present.

When you say that the bags were recorded seconds apart, do you mean sequentially - that an active recording was stopped and then a new recording to a different file was rapidly begun?

I run my bag recorder script N times; some outputs have glitched timestamps, some are fine. There is no rapid start and stop of file writing.

I will take a look at ROS recording. Currently we only use ROS for further analysis of our recorded bags, and we would prefer to keep the dependencies of this code limited to pyrealsense2 only.

MartyG-RealSense commented 3 years ago

So the script's code seems to be okay if it can record correctly sometimes. It then seems to be a question of stability and of reproducing recording results consistently.

It may be worth shutting down and rebooting the computer in between each individual recording test, to eliminate the possibility that the script is malfunctioning when run again after the previous test. This can occur with RealSense programs if the program was not properly closed down after the previous session.

nath-partha commented 3 years ago

@MartyG-RealSense I ran some tests; the script's ability to create a bag file without the issue in question isn't dependent on when it is run, i.e. not all first runs are error free.

On a different topic: is there any existing code for recording cameras in ROS?

MartyG-RealSense commented 3 years ago

Some ROS users have created Python projects to activate and control the rosbag record bag recording tool of ROS. An example of a ROS node script in the Python language that was published in 2020 is in the link below.

https://gist.github.com/vbschettino/13c1d0102fa184811b31397cb9a4e85d
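
As a rough idea of what such a wrapper does (a sketch using subprocess rather than the linked gist; the topic names are the usual realsense2_camera defaults):

import signal
import subprocess
import time

# Start rosbag record for the RealSense topics of interest
proc = subprocess.Popen([
    "rosbag", "record",
    "-O", "realsense_recording",      # output file prefix
    "/camera/color/image_raw",
    "/camera/depth/image_rect_raw",
])

time.sleep(30)                        # record for as long as needed

# rosbag record finalises the file cleanly on SIGINT (same as Ctrl-C)
proc.send_signal(signal.SIGINT)
proc.wait()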

nath-partha commented 3 years ago

Thanks @MartyG-RealSense, I'll check it out. Before I leave for the weekend: is there a way to use rs-server on standard Linux-based computers, not a Raspberry Pi?

I've got librealsense built; rs-enumerate-devices shows the device status and rs-server reports that the server has started. Since it's not a Raspberry Pi, I do not set a static IP. I can view the currently assigned IPs of all devices connected to the hotspot on my mobile, where the hotspot is hosted. However, when a second computer running the RealSense Viewer on the same Wi-Fi enters the IP of the streaming computer, it gives the error out of range value for argument "def".

If I try to add a network device without rs-server running on the host, it gives this: [getControls] error: Connection to server failed: Connection refused - -107

MartyG-RealSense commented 3 years ago

rs-server should be able to be compiled on any Linux computer, not just a Raspberry Pi. The open-source ethernet white-paper uses a Pi 4 board by default but states that the project can be adapted with minor modifications for other computing brands, citing Up Board as an example.

The design of the open-source networking system is that you can have multiple remote boards that have a camera attached to them, but there is a single host central computer to receive the feeds from the remote cameras.

The design of the librealsense SDK is that once an application has 'claimed' access to a particular stream on a particular computer then that stream cannot be accessed by another application (such as an instance of the Viewer on a second host computer). Multiple applications can access different streams on the same camera, but two applications cannot access the same camera stream at the same time - the application that originally claimed the stream must disable that stream to release the claim on it and make it available to other applications to access.

The access rules are described in the SDK's Multi-Streaming Model documentation in the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/rs400_support.md#multi-streaming-model

nath-partha commented 3 years ago

The design of the librealsense SDK is that once an application has 'claimed' access to a particular stream on a particular computer then that stream cannot be accessed by another application (such as an instance of the Viewer on a second host computer). Multiple applications can access different streams on the same camera, but two applications cannot access the same camera stream at the same time - the application that originally claimed the stream must disable that stream to release the claim on it and make it available to other applications to access.

Yes, I expected the same from experience with the RealSense cameras' behaviour: if you open up a particular stream, it cannot be accessed by anything else.

However, the original stream is not open anywhere, neither on the receiver PC nor on the client where rs-server is started.

If i

It is similar to the issue faced by this user, but for me the errors are seen in realsense-viewer, not in Python with pyrealsense2 trying to access the streams.

https://github.com/IntelRealSense/librealsense/issues/9372

MartyG-RealSense commented 3 years ago

Can you confirm whether PC1 and PC2 are hosts or the remote board that the camera is attached to, please? The white paper states that rs-server should be installed on the Raspberry Pi, indicating that it belongs on the remote board and not on the host computer that runs the RealSense Viewer.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras#section-5-power-and-performance

nath-partha commented 3 years ago

Hi Marty,

PC1 is a Linux computer to which a D435 is connected; PC1 runs rs-server. PC2 is another Linux computer on the same network and runs realsense-viewer.

MartyG-RealSense commented 3 years ago

Thanks very much for the clarification.

The white paper describes that rs-server on the remote computer connects to the first available camera [attached to the remote computer] and listens for incoming connection requests [presumably Network Camera connection requests made by the Viewer on the host computer].

So when rs-server is not run and a Network Camera request is made by the Viewer on the host, it is logical that the [getControls] error: Connection to server failed: Connection refused - -107 error would occur, because rs-server is not listening for incoming connection requests whilst it is inactive and so a bridge between remote and host cannot be established. There is no interface active on the remote computer for the Viewer to talk to.

You also tried first running rs-server on the remote computer and then connecting to it from the Viewer on the host but received the error out of range value for argument "def".

You mentioned earlier in the case that you did not configure a static IP because you are not using a Pi. Whilst the requirement for a static IP does not seem to be exclusive to the Pi, the paper does suggest that the system has the flexibility to use other types of connection, such as wireless Wi-Fi / WLAN. Can you confirm please whether the problem is that you can connect to the remote camera successfully with the original host computer (PC2) but you cannot connect with a second host computer that is also equipped with the Viewer?

nath-partha commented 3 years ago

Hi Marty,

The rs-server was initially run with the following setup: Wi-Fi hotspot.

Next setup: Ethernet router + static IP

[screenshot]

Can you confirm please whether the problem is that you can connect to the remote camera successfully with the original host computer (PC2) but you cannot connect with a second host computer that is also equipped with the Viewer?

Yes

nath-partha commented 3 years ago

Hi Marty,

Is there a way to align the depth frame to colour after receiving it via ROS? E.g., we run rs_camera.launch and receive the streams on a remote ROS client; now we want to align the RGB and depth streams.

I know we could do the alignment directly with rs_aligned_depth.launch, but since the aligned depth image is resized to the RGB stream's resolution, it would take up significant bandwidth in our setup.

MartyG-RealSense commented 3 years ago

Perhaps you could create a ROS adaptation of Intel's open-source ethernet system described in the link below, which streams RealSense data from a camera on a 'remote' board such as Raspberry Pi to a program such as the RealSense Viewer running on a central 'host' computer.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

This adaptation possibility is suggested by the paper itself. When it mentions ROS, it has the number [10] beside it that refers to a RealSense ROS guide listed in the references at the bottom of the paper for using 3 cameras across 2 computers:

https://github.com/IntelRealSense/realsense-ros/wiki/showcase-of-using-3-cameras-in-2-machines

A RealSense ROS user in https://github.com/IntelRealSense/realsense-ros/issues/1808 has already performed a ROS adaptation of the paper's networking system.

nath-partha commented 3 years ago

Hi Marty, we already have a working remote ros system with the realsense where we can access the stream from a client on the network.

My question is whether it is possible to somehow use the RealSense SDK to perform alignment of depth to RGB on the client side, because the frames are no longer in the 'frameset' format on the receiver and are instead published as images.

We do not publish the aligned depth stream from the hosts, as it is resized to the RGB resolution, which is typically higher and takes up more bandwidth.

The ROS questions are separate in nature from the rs-server ones. We are exploring this since rs-server is not working yet.

MartyG-RealSense commented 3 years ago

A RealSense Python user in https://github.com/IntelRealSense/librealsense/issues/5296 took the approach of using pickling for remote alignment by transmitting the unaligned data.
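
If the intrinsics and extrinsics are available on the client (for example from the ROS camera_info topics), the alignment can also be reproduced manually without a frameset. The sketch below shows the basic deproject/transform/reproject idea with numpy; it ignores lens distortion and occlusion handling, so it is only an approximation of what rs_align does:

import numpy as np

def align_depth_to_color(depth, depth_scale, K_d, K_c, R, t, color_shape):
    # depth       : (H, W) uint16 depth image from the depth sensor
    # depth_scale : metres per depth unit (e.g. 0.001 for a D435)
    # K_d, K_c    : 3x3 intrinsic matrices of the depth and color sensors
    # R, t        : rotation (3x3) and translation (3,) from depth to color frame
    # color_shape : (height, width) of the color image
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float32) * depth_scale

    # Deproject depth pixels to 3D points in the depth camera frame
    x = (us - K_d[0, 2]) / K_d[0, 0] * z
    y = (vs - K_d[1, 2]) / K_d[1, 1] * z
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Transform the points into the color camera frame
    pts_c = pts @ R.T + t

    # Project into the color image plane
    valid = pts_c[:, 2] > 0
    z_c = np.where(valid, pts_c[:, 2], 1.0)        # avoid divide-by-zero
    u_c = np.round(pts_c[:, 0] / z_c * K_c[0, 0] + K_c[0, 2]).astype(int)
    v_c = np.round(pts_c[:, 1] / z_c * K_c[1, 1] + K_c[1, 2]).astype(int)

    # Scatter the depth values into an image the size of the color frame
    aligned = np.zeros(color_shape, dtype=np.uint16)
    keep = valid & (u_c >= 0) & (u_c < color_shape[1]) & (v_c >= 0) & (v_c < color_shape[0])
    aligned[v_c[keep], u_c[keep]] = depth.reshape(-1)[keep]
    return aligned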

nath-partha commented 3 years ago

Hi Marty,

Is there a way to set emitter always on / other emitter controls from the ROS launch files? (I wasn't able to find any parameters for these in the launch and node yaml files.)

What is the procedure for doing inter-cam sync between multiple cameras in ROS (hardware sync cables involved)? I only found one parameter, 'enable_sync' (boolean), in the launch files. How should master/slave be designated?

MartyG-RealSense commented 3 years ago

https://github.com/IntelRealSense/realsense-ros/issues/1472#issuecomment-719415753 suggests setting the emitter_always_on ROS parameter.

I could not find an example of a ROS1 rosparam launch file code for emitter_always_on. I believe that the code below may work though:

<rosparam>
/camera/stereo_module/emitter_always_on: true
</rosparam>

Alternatively, you could add the instruction emitter_always_on:=true to the roslaunch instruction. For example:

roslaunch realsense2_camera rs_camera.launch emitter_always_on:=true

Regarding multiple camera sync, on the 400 Series cameras Inter Cam Sync Mode should be set to '1' for the Master camera and '2' for the Slave cameras.

enable_sync is not related to multiple camera hardware sync. It instead gathers the closest frames of the different sensors - infrared, color and depth - to be sent with the same timetag.

You can define Inter Cam Sync Mode in ROS with the parameter inter_cam_sync_mode and information about setting it in ROS1 can be found in https://github.com/IntelRealSense/realsense-ros/issues/984#issuecomment-707729244
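
Outside of ROS, the same option can be set per camera through pyrealsense2. A sketch, with a placeholder serial number ('1' = master, '2' = slave, as above):

import pyrealsense2 as rs

MASTER_SERIAL = "821212345678"        # placeholder serial number
ctx = rs.context()
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    mode = 1 if serial == MASTER_SERIAL else 2
    dev.first_depth_sensor().set_option(rs.option.inter_cam_sync_mode, mode)
    print(serial, "-> inter_cam_sync_mode", mode)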

You do not necessarily need to use hardware sync in ROS for a multiple camera setup though. Intel have published ROS guidance for 2 cameras on one PC and 3 cameras across 2 PCs.

https://www.intelrealsense.com/how-to-multiple-camera-setup-with-ros/

nath-partha commented 3 years ago

Hi Marty,

Thanks for your help with the ros settings above.

We have come across an issue that crosses between the ROS and SDK usage in Python. I'll try to explain it concisely first.

I know other RealSense users have explored software-based USB reset in the issues; have you come across anything that succeeded reliably?

Edit: Do you have any recommendations for solving this without a USB reset?

MartyG-RealSense commented 3 years ago

You could try a multiple camera version of the hardware_reset() instruction to reset only the cameras and not the entire USB port. An example of such a script in Python is in https://github.com/IntelRealSense/librealsense/issues/5428#issuecomment-564482167

Note in that script, a list of all attached cameras is automatically built with ctx.query_devices()
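
The core of that approach is roughly the sketch below (not the full linked script):

import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    print("Resetting camera", serial)
    dev.hardware_reset()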

nath-partha commented 3 years ago

Hi marty,

The ctx object itself is empty here:

[screenshots]

MartyG-RealSense commented 3 years ago

Is the camera detectable if you first shut down the ROS launch with rosnode kill --all instead of using Ctrl-C?

nath-partha commented 3 years ago

No, it's not detectable.

Top right is the terminal running roslaunch, bottom right is roscore.

[screenshot]

MartyG-RealSense commented 3 years ago

As an alternative to closing down the ROS wrapper with Ctrl-C, would it be possible to run your Python code from within the active RealSense ROS wrapper as a Python node script, as discussed in https://github.com/IntelRealSense/librealsense/issues/6367 regarding the ROS wrapper's rs_test.py example script?

https://github.com/IntelRealSense/realsense-ros/issues/1086#issuecomment-718477110 also has guidance about using another of the wrapper's example Python scripts called show_center_depth.py

nath-partha commented 2 years ago

Hi Marty,

Is there an option to enable the coloriser filter in the launch files for ROS?

MartyG-RealSense commented 2 years ago

Hi @parthapn98 You can enable the colorizer as a filter in ROS, similar to defining a pointcloud filter.

For example, if you only needed a colorizer filter then you could define it in the roslaunch instruction like this:

roslaunch realsense2_camera rs_camera.launch filters:=colorizer

If you needed both a pointcloud and colorizer then you could define it like this, putting a comma with no spaces between the filter types:

roslaunch realsense2_camera rs_camera.launch filters:=colorizer,pointcloud

If you prefer to set it in the launch file, I have not got a precise example for the colorizer. However, I do have one for other types of filters:

<arg name="filters" default="disparity,spatial,temporal,decimation,pointcloud"/>

So I would speculate (not certain) that as the colorizer is a 'filter' type, if it is possible to set it in the launch file then a launch file arg for it might look like this:

<arg name="filters" default="colorizer"/>

nath-partha commented 2 years ago

Hi Marty,

We are pretty happy with the coloriser and its predefined compression for the compressed topic.

1) Is there a way to set the filter options of the coloriser, like:

thr_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);
thr_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED, 0);
color_filter.set_option(RS2_OPTION_COLOR_SCHEME, 9.0f);     // Hue colorization
color_filter.set_option(RS2_OPTION_MAX_DEPTH, max_depth);
color_filter.set_option(RS2_OPTION_MIN_DEPTH, min_depth);

2) Is the threshold filter available in the roslaunch CLI? filters:=threshold_filter did not work.
3) Is there a way to set the compression scheme and percentage for the /camera/depth/image_rect_raw/compressed topic when using the coloriser?
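
(For reference, the pyrealsense2 equivalents of the C++ option calls above would look roughly like this - a sketch using the SDK's threshold filter and colorizer option names, with example distances:)

import pyrealsense2 as rs

thr_filter = rs.threshold_filter()
thr_filter.set_option(rs.option.min_distance, 0.3)    # metres
thr_filter.set_option(rs.option.max_distance, 3.0)

color_filter = rs.colorizer()
color_filter.set_option(rs.option.histogram_equalization_enabled, 0)
color_filter.set_option(rs.option.color_scheme, 9.0)  # Hue colorization
color_filter.set_option(rs.option.min_distance, 0.3)
color_filter.set_option(rs.option.max_distance, 3.0)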

MartyG-RealSense commented 2 years ago
  1. I could not find specific information about setting colorizer filter options in the ROS wrapper, though I suspect that it would be approached in the same way as querying other filter types such as post-processing filters. Doronhi, the RealSense ROS wrapper developer, provides guidance about this in https://github.com/IntelRealSense/realsense-ros/issues/742#issuecomment-720295533

Using this guidance, you could potentially query what parameters are available for the colorizer filter in the ROS wrapper by using a rosrun dynamic_reconfigure dynparam get command.

https://github.com/IntelRealSense/realsense-ros/issues/2140#issuecomment-952850147 has a reference to setting histogram equalization using /camera/colorizer/

  2. The ROS wrapper does not support the threshold filter, though a maximum depth distance can be set with the clip_distance instruction. If you require a minimum distance like the threshold filter provides, you could set a minimum distance in rtabmap_ros by using its RangeMin option, as described in the link below.

https://github.com/introlab/rtabmap_ros/issues/413

  3. I researched this question carefully but could not find information about it, unfortunately.

nath-partha commented 2 years ago

The rqt reconfigure tool had all the options for the coloriser, including the JPEG/PNG compression quality settings. Thank you.

We had some issues when we shifted the test to our embedded device (Jetson Nano): its CPUs are not really powerful enough to even apply the coloriser filter at more than 15 FPS; anything that uses filters loads a CPU core and the Nano loses FPS significantly.

Is there any implementation from a user who leveraged the GPU on a Nano, or who used a different embedded device with enough CPU power?

MartyG-RealSense commented 2 years ago

I'm pleased to hear that you were able to find the colorizer settings that you needed.

If librealsense has CUDA support enabled to make use of an Nvidia GPU (like the one that Jetson boards have) then it can provide acceleration for color conversion, depth-color alignment and pointclouds. If the type of processing that you are performing does not involve those specific types of operation then you may not benefit from CUDA support being enabled.

Also, if the ROS wrapper was installed from packages with the wrapper's Method 1 (librealsense and the wrapper installed together) then CUDA support would not be included in the librealsense build.

Intel strongly recommends that RealSense users with a Nano enable the barrel jack connector for extra power using the information in the link below if your Nano model supports that.

https://www.jetsonhacks.com/2019/04/10/jetson-nano-use-more-power/

In regard to alternative embedded devices, you could consider a more powerful Jetson board model so that you can still take advantage of an Nvidia GPU, though this would have increased cost.

You could also explore Intel's NUC computing range. They are small enough to fit on a mobile robot, suited to industrial applications and are available in a wide range of configurations (from low end to high end) and price points, and in different form factors (board, partially completed kit and complete ready-to-run mini-PC). NUCs also have a long history of being used with RealSense cameras dating back to the original generation of RealSense in 2014.

https://www.intel.com/content/www/us/en/products/details/nuc.html

There is also the hardware acceleration solution for vision computing called Intel Neural Compute Stick 2 (NCS2), which can be simply plugged into a USB port alongside the camera. It is available from the official RealSense online store. I recommend googling for realsense ros ncs2 to see whether an Intel NCS2 stick may be able to be leveraged in your ROS project.

https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html

nath-partha commented 2 years ago

Hi Marty,

We reflashed our Nano OS and rebuilt everything with CUDA support: the libuvc script with the CUDA build flag added, and the ROS wrapper built using catkin build.

We had some odd results that I'll try to detail:

Prior setup:
1) Subscribing to RGB compressed: maxed out 1 CPU core, 26 FPS
2) Publishing the depth image with the coloriser filter: maxed out 1 CPU core; on subscribe: 14 FPS

Current setup:
1) Subscribing to RGB compressed: does not max out any CPU core, 26 FPS
2) Publishing the depth image with the coloriser filter: maxes out 1 CPU core; on subscribe: 13-15 FPS

My expectation, from going over the issues on similar topics in realsense-ros, was that compression would still be done on the CPU and that filters like the coloriser, point cloud, etc. would run on the GPU.

Any suggestions on where I can look to get to the bottom of this?

MartyG-RealSense commented 2 years ago

If CUDA support is enabled then pointclouds and depth-color alignment should be processed on the GPU.

Compression would be a CPU activity.

Technical detail about the color conversion support for CUDA is provided at https://github.com/IntelRealSense/librealsense/pull/1866

As far as I am aware though, a colorizer filter (assigning color shading to depth values) would not be covered by the range of functions that CUDA provides acceleration for and so would be handled on the CPU.

If you are building with CMake then adding the CMake build term BUILD_WITH_OPENMP=true to the librealsense build will enable YUY to RGB conversion and depth-color alignment to take advantage of multiple cores. This can reduce latency at the expense of greater CPU utilization.

nath-partha commented 2 years ago

Hi @MartyG-RealSense

Is there some documentation on the workings of the depth-color alignment function rs_align?

I am looking into this since our current recording format uses realsense_ros, and we want to understand how big the performance hit would be if we used aligned_depth:=true. (Also, rs_aligned would resize the depth image to 1280x720, and bandwidth is a small concern.)

[I have taken a look at the rs_align_example and rs_align_advanced example notebooks in librealsense, and at rs_align; however, they only show how to implement the functions and their outputs, similar to a Python demo on depth-color alignment that removes the background beyond a certain distance.]

Is there any documentation on the internal workings of rs_align, or is it possible to take a look at the code itself?