Closed zhangvia closed 2 years ago
Hi @zhangvia Applying post-processing first and alignment afterwards is actually the approach recommended by Intel; otherwise, distortion effects such as aliasing (jagged lines) may occur.
Please ensure that the stream resolution and FPS values that you are defining in the config.enable_stream instructions are the same as the resolution / FPS of the streams that are recorded in the bag file.
Also, the code for the post-processing filters should be placed after the pipeline.start instruction, like in the Python example code at https://github.com/IntelRealSense/librealsense/issues/1672#issuecomment-387438447
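As a rough sketch of that ordering (the bag file path and the resolution / FPS values below are placeholders and must match your recording), the setup might look like:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Placeholder path; the resolution/FPS values must match the streams
# that were recorded in the bag file.
config.enable_device_from_file("recording.bag")
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.rgb8, 30)
pipeline.start(config)

# Create the post-processing filters after pipeline.start,
# as in the linked example, and apply them inside the frame loop.
decimation = rs.decimation_filter()
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
align_to_color = rs.align(rs.stream.color)
```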
@MartyG-RealSense I'm not sure what you mean. Do you mean I should define the filters after pipeline.start in the main function, and not define them in the post-process function? Like this? But this causes the error:
Traceback (most recent call last):
File "D:/haidilao/readbag.py", line 73, in
Invoked with: <pyrealsense2.pyrealsense2.align object at 0x000001B21CEDABB0>, <pyrealsense2.frameset Z16 RGB8 #1288 @1658638448994.244873>
Besides, I'm sure the resolution and FPS are the same as the resolution/FPS in the bag file.
Yes, have the post-processing lines placed after pipeline.start like in your changed script above.
In your list of applied filters in the 'while True' section, please try removing .as_frameset() from the end of all the lines that have it. For example:
frames = decimation.process(frames)
@MartyG-RealSense the same error happens:
The approach that the SDK's official align_depth2color.py Python alignment program takes is to define a separate aligned_frames variable instead of reusing frames, and then have the depth and color use aligned_frames instead of frames. Note that instead of depth_frame, it defines a separate aligned_depth_frame variable.
aligned_frames = align.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()
You mean this? The same error happened again...
Adapting it to use your 'align_to_color' variable, the code should look like this:
aligned_frames = align_to_color.process(frames)
aligned_depth_frame = aligned_frames.get_depth_frame()
color_frame = aligned_frames.get_color_frame()
Comment out the
if not depth_frame or not color_frame: continue
instruction for now to test whether the script works without it.
The code is the same as the example that you posted, but the same error still exists. When I comment out the filters, the script works well. I think the filters change the type of the variable 'frames'.
@MartyG-RealSense Thanks for your help. I solved it by adding '.as_frameset()' to the last filter.
That's great news, @zhangvia - thanks very much for your patience! :)
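For anyone landing here with the same error, the fix described above can be sketched as a small helper (the names are illustrative, not taken from the thread's script): chain the filters, call .as_frameset() on the output of the last one to restore the frameset type, and only then pass the result to align.process().

```python
def postprocess_then_align(frames, filters, align):
    """Apply each post-processing filter in order, then align.

    Each filter's process() returns a generic frame, so .as_frameset()
    is called after the last filter to restore the frameset type that
    align.process() expects."""
    for f in filters:
        frames = f.process(frames)
    return align.process(frames.as_frameset())
```

Inside the capture loop this would be called as, for example, `aligned = postprocess_then_align(frames, [decimation, spatial, temporal], align_to_color)`.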
@MartyG-RealSense I am working on a somewhat similar issue - I want to use the decimation filter specifically to reduce the resolution of the image. I have tried using lower resolution during the color and depth capture, but the lower capture resolution results in inaccuracies in the depth data.
When I attempt to do it, I get an error a bit later, in the open3d.geometry.RGBDImage.create_from_color_and_depth() function - it reports "Unsupported image format." When I used IntelliSense to examine the data, it seems that the color image had resolution 1280x720, but the depth had been downsampled to 428x240.

```python
frames = self.pipeline.wait_for_frames()
# Align the color and depth frames, to prevent colors from 'bleeding' onto different objects
align = rs.align(rs.stream.color)
frames = align.process(frames)
frames = decimation_filter.process(frames).as_frameset()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())
# Get pointcloud
img_depth = open3d.geometry.Image(depth_image)
img_color = open3d.geometry.Image(color_image)
rgbd = open3d.geometry.RGBDImage.create_from_color_and_depth(img_color, img_depth, convert_rgb_to_intensity=False, depth_trunc=5)
pcd = open3d.geometry.PointCloud.create_from_rgbd_image(rgbd, pinhole_camera_intrinsic)
```
What is the best way to reduce the resolution of the pointcloud? There are a lot of guides that give general advice, but working Python code examples are usually missing (the example at https://github.com/IntelRealSense/librealsense/issues/2356#issuecomment-465766323 runs, though it doesn't reduce the resolution - the example earlier in that thread doesn't run at all). Our application uses the data from the camera in real time, so the RealSense SDK's efficient filters would be a huge help.
@MartyG-RealSense I've got a new problem. I want to save the post-processed images, but the number of frames is wrong, and when I rerun the script I get this error: [ERROR] "RuntimeError: Frame didn't arrive within 5000"
I googled it and found that I need to set real-time playback to false, like this:
playback = profile.get_device().as_playback()
playback.set_real_time(False)
The number of frames was bigger, but I still got the error above.
So I added this code:
playback.pause()
playback.resume()
But the speed of saving images was too slow, and sometimes it just got stuck on some frames. Are there any good ways to save the images after handling the frames (post-processing, alignment)?
Hi @Chris45215 Intel recommend that alignment is performed after post-processing filters are applied in order to avoid distortion effects such as aliasing (jagged lines).
The scaling with the decimation filter of 1280x720 depth resolution down to 428x240 is explained in the section of Intel's post-processing filter documentation at the link below.
https://dev.intelrealsense.com/docs/post-processing-filters#decimation-filter
Intel have a Python post-processing filter tutorial in the form of a Jupyter notebook here:
https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/depth_filters.ipynb
In regard to RGB decimation, my understanding of advice given about it by a RealSense team member at https://github.com/IntelRealSense/librealsense/issues/1970#issuecomment-401813401 is that if you know in advance what resolution the depth is being downsampled to (such as 428x240) then select the same lower resolution as your RGB resolution instead of using 1280x720 for color.
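As a sketch of that resolution-matching idea (the stream modes below are assumptions; 424x240 is a commonly supported RGB mode on D400-series cameras, so check what your model offers): capture depth at full resolution for accuracy, let decimation downsample it, and pick the closest supported color mode to the decimated size.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Depth at full resolution for accuracy; decimation downsamples it later.
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
# Color captured near the decimated depth size instead of 1280x720.
config.enable_stream(rs.stream.color, 424, 240, rs.format.rgb8, 30)
pipeline.start(config)

decimation = rs.decimation_filter()
# Magnitude 3 takes 1280x720 depth down to roughly 428x240.
decimation.set_option(rs.option.filter_magnitude, 3)
```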
Hi @zhangvia If your recording duration is short (between 10 and 30 seconds) then using the SDK's Keep() instruction may suit your performance needs. It stores the frames in memory instead of writing them to file and then allows you to perform a batch processing operation on the entire stored set of frames in a single action once the pipeline is closed (for example, applying post-processing filters and alignment and then saving the frames to file).
https://github.com/IntelRealSense/librealsense/issues/7067 has an example of a script (under the Second approach heading) using Keep() with alignment.
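A sketch of that Keep() approach (the bag path and output filenames are placeholders): stream the bag with real-time playback disabled, keep every frameset in memory, then align and save after the pipeline is closed.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_device_from_file("recording.bag", repeat_playback=False)
profile = pipeline.start(config)
profile.get_device().as_playback().set_real_time(False)

# Phase 1: stream and store the framesets in memory with keep().
stored = []
try:
    while True:
        frames = pipeline.wait_for_frames()
        frames.keep()
        stored.append(frames)
except RuntimeError:
    # wait_for_frames() times out once the bag is exhausted.
    pass
pipeline.stop()

# Phase 2: batch-process after the pipeline is closed.
align = rs.align(rs.stream.color)
for i, frames in enumerate(stored):
    aligned = align.process(frames)
    depth = np.asanyarray(aligned.get_depth_frame().get_data())
    np.save(f"depth_{i:05d}.npy", depth)
```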
Hi @zhangvia Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Issue Description
I want to use post-processing and alignment together, but I got this error:
Traceback (most recent call last):
File "D:/readbag.py", line 78, in
frames = align_to_color.process(frames)
RuntimeError: Error occured during execution of the processing block! See the log for more info
This error happens after the images have been shown for several seconds, so I think that means the first several frames were processed successfully and then the code crashed.
the code is below:
Can anyone tell me how to apply post-processing first and then align? I don't want to reduce the resolution of the depth image, and if I align first, the decimation filter will reduce the resolution.