IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Reading RGB videos from bag files using python #2652

Closed shgold closed 5 years ago

shgold commented 5 years ago
Required Info
Camera Model: D400
Operating System & Version: Linux (Ubuntu 18)
Kernel Version (Linux Only): (e.g. 4.15.0)
Platform: PC
SDK Version: 2.6.1
Language: Python
Segment: others

Issue Description

Hello, I am new to depth-sensing cameras, and while following the Python example code I ran into a problem reading the RGB video at the same time as the depth video.

I tried to configure the pipeline as follows:

import pyrealsense2 as rs

pipeline = rs.pipeline()

# Create a config object
config = rs.config()

# Tell config that we will use a recorded device from file, to be played back through the pipeline.
rs.config.enable_device_from_file(config, args.input)

# Configure the pipeline to stream the depth and color streams
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)

# Start streaming from file
pipeline.start(config)

Then I get an error message like this:

09:08:03.362 [7828] [F] TrackingManager: Set Host log Control, output to Buffer, verbosity = 0x0, Rollover mode = 0x1
Traceback (most recent call last):
  File "readRealSenseCameraFiles.py", line 43, in <module>
    pipeline.start(config)
RuntimeError: Couldn't resolve requests

I am quite new to the *.bag file format, and I wonder how to retrieve the RGB video using the RealSense library.

Thanks in advance.
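For reference, a minimal playback sketch that avoids "Couldn't resolve requests" by not hard-coding the stream parameters; when the width, height, format and frame rate are omitted, the pipeline resolves whatever profiles the bag actually contains. The file name below is a placeholder:

import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()

# Play back from a recorded bag instead of a live device
rs.config.enable_device_from_file(config, "recording.bag")

# Request streams by type only; parameters are resolved from the file
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)

pipeline.start(config)
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()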

dorodnic commented 5 years ago

Are you sure the video was recorded in BGR format?

shgold commented 5 years ago

I tried 'rs.format.rgb8' and it shows me the recorded video, but the R and B components are still switched.

I recorded the video using 'realsense-viewer'. Is there any way to check what format the video was recorded in?

dorodnic commented 5 years ago

The inverted components are most likely due to the use of OpenCV later in your code; OpenCV's default color order is BGR. In Python you can load the "playback" device and inspect its sensors and stream profiles:

>>> import pyrealsense2 as rs
>>> ctx = rs.context()
>>> d = ctx.load_device("C:\\Users\\local_admin\\Documents\\20180212_000327.bag")
>>> s = d.query_sensors()[0]
>>> s.get_stream_profiles()[0]
<pyrealsense2.video_stream_profile: Infrared(1) 1280x720 @ 30fps Y8>
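The same idea as a short script, a sketch that prints every profile recorded in a bag (the path is a placeholder):

import pyrealsense2 as rs

ctx = rs.context()
playback = ctx.load_device("recording.bag")  # placeholder path

# Each sensor (depth, color, motion, ...) exposes the profiles it recorded
for sensor in playback.query_sensors():
    for profile in sensor.get_stream_profiles():
        print(profile)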
RealSense-Customer-Engineering commented 5 years ago

[Realsense Customer Engineering Team Comment] Hi shgold,

Do you have any update based on dorodnic's suggestion?

Thank you!

shgold commented 5 years ago

Thanks @dorodnic! I converted the BGR image to RGB using OpenCV, and with your suggestion I could inspect the profile of the recorded video. It works well!

sanxincao commented 3 years ago

good question

pfcouto commented 2 years ago

Hi @MartyG-RealSense and @dorodnic. I am facing a similar problem: I have a .bag file recorded using realsense-viewer, and I am trying to use it to test a deep learning model. However, as you can see, apples that should be RED appear BLUE, and because of that the model detects almost nothing. I am using a D435i; how can I fix this?

In the second image you can see part of my code. If no file is specified as an argument (live stream from the camera), it runs the code in the red area and the output images have the correct colors. If a file is specified, it runs the green parts and RED turns into BLUE. How can I fix this? Thanks!

[screenshot]

[screenshot]

MartyG-RealSense commented 2 years ago

Hi @pfcouto Does it make a difference if you add config to the brackets of enable_device_from_file with the line below?

config.enable_device_from_file(config,"{}".format(video_file))

pfcouto commented 2 years ago

Hi @MartyG-RealSense, I get an error. In the following screenshots you can see the code.

You can also see the code here: https://github.com/pfcouto/Detectron_IntelRealSense/blob/main/main_detectron2_simple_win10.py

Thanks!

[screenshot]

[screenshots]
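The errors above are consistent with config being passed twice: when enable_device_from_file is called as a bound method, config is already supplied implicitly as self, so the file path should be the only argument. A sketch of the two equivalent spellings, with a placeholder path:

import pyrealsense2 as rs

config = rs.config()

# Bound call: 'config' is supplied implicitly as 'self'
config.enable_device_from_file("recording.bag")

# The unbound form below is equivalent:
# rs.config.enable_device_from_file(config, "recording.bag")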

pfcouto commented 2 years ago

Isn't there a way to configure the format, as we can when using config.enable_stream?

When I use the camera as the source the colors are correct, probably because of rs.format.bgr8. However, when I am reading from the file, true red appears as blue.

MartyG-RealSense commented 2 years ago

If the bag file was recorded in bgr8 format and not rgb8, it may be worth investigating whether the white balance setting is responsible. If a low manual white balance value is set, color images can be tinted towards the blue end of the spectrum, like the vegetation at https://github.com/IntelRealSense/librealsense/issues/6508

You could therefore try setting a white balance value in your Python code. The default is '4800'.
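A sketch of setting white balance from Python, assuming a connected camera and the default streams; rs.option.white_balance is the relevant option, and auto white balance is switched off first so the manual value takes effect:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

color_sensor = profile.get_device().first_color_sensor()

# Disable auto white balance, then apply a manual value (default is 4800)
color_sensor.set_option(rs.option.enable_auto_white_balance, 0)
color_sensor.set_option(rs.option.white_balance, 4800)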

pfcouto commented 2 years ago

Here are the settings; I just pressed the record button to record the Stereo Module and the RGB camera.

As you can see, the RGB8 option is the one selected:

[screenshot]

MartyG-RealSense commented 2 years ago

The color stream is being recorded into the bag as RGB8, then, but your script's config instruction for color requests the BGR8 format. The configuration requested in the script should match the data stored in the bag.

  1. What happens if you change the color config in your script to RGB8 (see the sketch below)?

  2. What happens if the color format is set in the Viewer to BGR8 before the record button is pressed?
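For option 1, the change would be a single token in the color stream request; a sketch reusing the resolution and frame rate from the script earlier in this thread:

import pyrealsense2 as rs

config = rs.config()
# Request color as RGB8 to match the format stored in the bag
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)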

pfcouto commented 2 years ago

If I change it to RGB8 it works, thanks! It doesn't make much sense to me, but thanks!

MartyG-RealSense commented 2 years ago

It's great to hear that you were successful.

Possibly your Detectron object detection code prefers color data in BGR8 format. It could be an OpenCV color-space issue in the main_detectron2_simple_win10.py script, since it makes use of cv2 instructions. As mentioned at https://github.com/IntelRealSense/librealsense/issues/2652#issuecomment-435056679 near the start of this discussion, OpenCV uses BGR by default instead of RGB.

pfcouto commented 2 years ago

Yes, I managed to make it work. I just had to add a [:, :, ::-1] in the code to swap B with R. Thanks!

[screenshot]
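For reference, the slice trick and OpenCV's explicit conversion are equivalent channel swaps; a self-contained sketch with stand-in image data:

import numpy as np
import cv2

# Stand-in for a color frame (any HxWx3 uint8 array)
bgr = np.random.randint(0, 256, (720, 1280, 3), dtype=np.uint8)

rgb_slice = bgr[:, :, ::-1]                    # reverse the channel axis
rgb_cv = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # OpenCV's explicit conversion

assert np.array_equal(rgb_slice, rgb_cv)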

MartyG-RealSense commented 2 years ago

Thanks for sharing your code!

pfcouto commented 1 year ago

Hi @MartyG-RealSense. I hope you're still around. I want to do SLAM with only the D435i camera. I found the realsense-ros GitHub guide: https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i.

On my PC I am running Fedora 36, but I was having trouble installing ROS2, so I made a VM with Ubuntu 20.04 and installed ROS2 and realsense-viewer. However, the camera doesn't appear in realsense-viewer. I believe that is because it can't connect to the VM, since it gives me the warnings in the pictures. I am using a USB-C to USB-C cable; I don't know if that could create an issue, maybe with the VM? Can you help me out?

The camera isn't being used in my primary OS (Fedora).

[screenshots]

I also tried to connect it through VM > Removable Devices > Intel RealSense Depth Camera D435i > Connect, and it gives me more warnings.

[screenshots]

The camera is recognized by the VM, as shown in the settings. However, it doesn't show up inside the VM.

To install ROS2 and realsense-viewer on Ubuntu I am following these guides:

https://docs.ros.org/en/galactic/Installation/Ubuntu-Install-Debians.html
https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i
https://github.com/IntelRealSense/realsense-ros
https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages

And ran these commands:

From https://docs.ros.org/en/galactic/Installation/Ubuntu-Install-Debians.html :

locale  # check for UTF-8

sudo apt update && sudo apt install locales
sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8

locale  # verify settings

sudo apt install software-properties-common
sudo add-apt-repository universe

sudo apt update && sudo apt install curl gnupg lsb-release
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(source /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null

sudo apt update
sudo apt upgrade

sudo apt install ros-galactic-desktop
sudo apt install ros-dev-tools

===================

From https://github.com/IntelRealSense/librealsense/blob/master/doc/distribution_linux.md#installing-the-packages :

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key F6E65AC044F831AC80A06380C8B3A55A6F3EFCDE

sudo add-apt-repository "deb https://librealsense.intel.com/Debian/apt-repo $(lsb_release -cs) main" -u

sudo apt-get install librealsense2-dkms
sudo apt-get install librealsense2-utils
sudo apt-get install librealsense2-dev
sudo apt-get install librealsense2-dbg

And then the next step is to verify that it works, but it doesn't for me.

[screenshot]

And RealSense does appear with this command:

[screenshots]

Thanks!!

MartyG-RealSense commented 1 year ago

Hi @pfcouto I would recommend focusing first on the USB C to C cable as a possible cause, as a RealSense camera using this type of cable is significantly more likely to experience problems than one using an A to C (USB Type-C) cable.

When using a VM with RealSense, it should also be one that can simulate the USB 3 controller such as VMWare Workstation Player or VMWare Workstation Pro, as described at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation.md#linux-ubuntu-installation

pfcouto commented 1 year ago

Yes, I am using VMWare Workstation Pro. I will try a USB A cable. Isn't there a Python script available that does the SLAM like the ROS project does? It would be nice. Thanks!!

MartyG-RealSense commented 1 year ago

Some suggestions for Python SLAM projects are at https://github.com/IntelRealSense/realsense-ros/issues/2315#issuecomment-1094781448

pfcouto commented 1 year ago

Thanks again! You are awesome!

pfcouto commented 1 year ago

Hi again @MartyG-RealSense. I believe I got past the previous problem with the cable; I followed this solution, https://kb.vmware.com/s/article/2128105, and it worked. However, I am facing new issues.

Following these instructions, https://github.com/IntelRealSense/realsense-ros#usage-instructions, I ran all the commands and they all output the same error.

[screenshot]

Although the above commands didn't work, I tried to continue the guide, https://github.com/IntelRealSense/realsense-ros/wiki/SLAM-with-D435i#installation, and none of these 3 installations work.

[screenshots]

Can you help me out? Thanks again!!!

MartyG-RealSense commented 1 year ago

The 'kinetic' packages are for the very old ROS1 Kinetic and so would not be suitable for use with ROS2.

pfcouto commented 1 year ago

Ok. And what about the errors I showed in the first picture in the previous comment?

What can I do now to do SLAM using the D435i? Do I have to do the whole process again with ROS1?

The 'kinetic' packages might be for ROS1, but I believe that running the commands should still install them.

MartyG-RealSense commented 1 year ago

The "Frames didn't arrived within 5 seconds" message indicates a communication problem with the camera where new frames stopped arriving, and after 5 seconds had passed the connection 'timed out'.

If you would prefer not to reinstall for ROS1, the ROS2 version of rtabmap_ros may be a suitable SLAM option for you. It can be used with a minimum of ROS2 Foxy.

https://github.com/introlab/rtabmap_ros/tree/ros2#rtabmap_ros

pfcouto commented 1 year ago

And can it do SLAM using only a D435i?

I don't have any knowledge of ROS. I just wanted to do SLAM using the D435i, and the only thing I found was your repo, which by the looks of it uses ROS1.

MartyG-RealSense commented 1 year ago

rgbd_ptam is another Python SLAM example that does not use ROS, though it is quite old now.

https://github.com/uoip/rgbd_ptam?language=en_US

Major SLAM tools such as ORB-SLAM and Kimera tend to be based on C++ though, so Python SLAM tools are typically smaller-scale projects.

pfcouto commented 1 year ago

I actually don't mind reinstalling ROS1. However, if those commands to install the 'kinetic' packages don't work, I won't be able to use ROS1 to do the SLAM.

The installation of those packages is with apt-get; it has nothing to do with the ROS version installed. Since it gives an error now, it will also give an error after I install ROS1.

Replacing kinetic with galactic seems to work, and all 3 packages were installed. I believe...

However, when I try to run the command roslaunch realsense2_camera opensource_tracking.launch, I get the following errors:

[screenshot]

MartyG-RealSense commented 1 year ago

Galactic is ROS2. The opensource_tracking.launch roslaunch is for use with the RealSense ROS1 wrapper that supports ROS1 Kinetic, Melodic and Noetic.

Your earlier log shows that you installed ROS2 wrapper 4.51.1. The correct ROS1 wrapper version for librealsense 2.51.1 compatibility would be 2.3.2.

There isn't actually a ROS1 wrapper for 2.51.1 but 2.3.2 - which was designed for librealsense 2.50.0 - should work.

https://github.com/IntelRealSense/realsense-ros/releases/tag/2.3.2

The numbering of ROS wrappers follows this pattern:

- Versions that begin with 2. are for ROS1.
- Versions that begin with 3. are for an old ROS2 wrapper that is no longer updated.
- Versions that begin with 4. are the current, actively updated ROS2 wrapper.

pfcouto commented 1 year ago

I get what you are saying. However, I don't know what I installed from ROS1; I always followed the guides for ROS2. And instead of the commands shown in the screenshot I ran:

sudo apt-get install ros-galactic-imu-filter-madgwick
sudo apt-get install ros-galactic-rtabmap-ros
sudo apt-get install ros-galactic-robot-localization

(replacing kinetic with galactic)

So I believe the errors you mentioned don't apply, unless I am missing something.

MartyG-RealSense commented 1 year ago

opensource_tracking.launch is a RealSense launch file for ROS1. It was not designed for use with ROS2 launches.

Whilst you can individually install ROS2 versions of these three packages, it may not result in a SLAM system similar to the ROS1 one in Intel's D435i SLAM guide without a launch file that is a ROS2 equivalent of what opensource_tracking.launch performs. Something like this ROS2 SLAM project:

https://github.com/halstar/RCAP

https://github.com/halstar/RCAP#install-navigation2-nav2

pfcouto commented 1 year ago

Hello again @MartyG-RealSense. With ROS1 I managed to get something running. However, as you can see in the image, nothing is being built in the program, and the terminal shows an error and several warnings. Can you help? Thanks!

[screenshot]

MartyG-RealSense commented 1 year ago

The warning Not enough inliers can indicate that the camera is observing a scene without many objects or surfaces in it.

Considering the error about no odometry, though, it may be occurring because the IMU topics (accel and gyro) are not enabled; they are disabled by default in the RealSense ROS wrapper. Please try adding the options below to the end of your roslaunch instruction.

enable_gyro:=true enable_accel:=true

For example:

roslaunch realsense2_camera opensource_tracking.launch enable_gyro:=true enable_accel:=true

pfcouto commented 1 year ago

Hello, running with that command did not fix the issue. In addition, there is also the "No Image" in RViz, and it doesn't build any pointcloud. So I have several big problems here. I need help ahahah. Thanks!

pfcouto commented 1 year ago

In this link, https://we.tl/t-y0dnf9H1Aj, there is a video that shows the behavior of my camera. I don't understand the camera's behavior, but I hope you do and can help me.

However, the camera was always pointing at the same place, not moving, and no pointcloud is built.

Hope you can help me!

MartyG-RealSense commented 1 year ago

If you are following the D435i SLAM guide, have you expanded the TF section of RViz, expanded the Frames sub-section, and then unticked all options in that section except camera_link and map? This step is not shown in your video.

[screenshot]

It is normal for No Image to be shown until you have set the image topic; after that it appears, as it does in your video.

[screenshot]

pfcouto commented 1 year ago

I set those options as you said. However, nothing is happening on the other side, and I still get the odometry errors.

[screenshot]

pfcouto commented 1 year ago

Hello again, I tried something like what is described in #1871: https://github.com/IntelRealSense/realsense-ros/issues/1871

First I tried to change the opensource_tracking.launch file to use rs_d435_camera_with_model.launch, but when I tried to run it with the command roslaunch realsense2_camera opensource_tracking.launch enable_gyro:=true enable_accel:=true enable_infra:=true it output an error:

[screenshot]

After that I ran the command roslaunch realsense2_camera rs_d435_camera_with_model.launch enable_gyro:=true enable_accel:=true enable_infra:=true

[screenshots]

MartyG-RealSense commented 1 year ago

The rs_d435_camera_with_model.launch file is designed to be used with ROS robot simulation tools such as Gazebo (hence the 'model' part of the name).

The multiple RViz nodes error may result from this line at the bottom of the launch file, as opensource_tracking.launch already sets RViz to True on line 24; rs_camera.launch does not set RViz to True.

rs_d435_camera_with_model.launch

https://github.com/IntelRealSense/realsense-ros/blob/ros1-legacy/realsense2_camera/launch/rs_d435_camera_with_model.launch#L110

opensource_tracking.launch

https://github.com/IntelRealSense/realsense-ros/blob/ros1-legacy/realsense2_camera/launch/opensource_tracking.launch#L24

pfcouto commented 1 year ago

Ok. My real question now is: why isn't it making a point cloud?

As you can see in the video, the arrow goes completely insane while the camera is stable. The odometry error might also have something to do with it. I don't know, and I'm not finding an answer.

MartyG-RealSense commented 1 year ago

opensource_tracking.launch should automatically generate a point cloud (at least when used in conjunction with rs_camera.launch). You could try adding the pointcloud filter enabling instruction to the roslaunch though to see what happens.

roslaunch realsense2_camera opensource_tracking.launch filters:=pointcloud


You could also test the ROS1 wrapper's pointcloud example to see whether it is possible to generate a pointcloud on your ROS1 wrapper installation.

https://github.com/IntelRealSense/realsense-ros/tree/ros1-legacy#point-cloud