Closed nath-partha closed 3 years ago
Hi @parthapn98 The full C++ source code for rs-align can be found at the link below, though the descriptive tutorial text is the same as the page that you linked to.
https://github.com/IntelRealSense/librealsense/tree/master/examples/align
As described in the opening paragraph of the tutorial, alignment will resize the depth field of view (FOV) to match the size of the color field of view. As the color sensor FOV on D435 / D435i is smaller than the depth FOV, this can mean that the outer regions of the depth image will be excluded from the aligned image.
Since your programming language is listed as Python at the top of this case, the SDK's Python example program align_depth2color.py may be a more useful reference.
My understanding is that the align_depth function in the RealSense ROS wrapper makes use of the RealSense SDK's align processing block, just as examples such as rs-align and align_depth2color.py do. The align processing block will make automatic adjustments for differences in the depth and color resolution.
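To illustrate the pattern that align_depth2color.py follows, here is a minimal Python sketch of using the align processing block to align depth to color. It assumes pyrealsense2 is installed and a camera is attached; the import is deferred inside the function so the helper can be defined without either.

```python
def capture_aligned_frames(width=640, height=480, fps=30):
    """Start a pipeline and return one (depth, color) frame pair with the
    depth frame aligned to the color stream's viewpoint and resolution.

    Sketch only: requires pyrealsense2 and a connected RealSense camera.
    """
    import pyrealsense2 as rs  # deferred: needs librealsense installed

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
    config.enable_stream(rs.stream.color, width, height, rs.format.bgr8, fps)
    pipeline.start(config)

    # The align processing block handles resolution differences between
    # the two streams automatically.
    align = rs.align(rs.stream.color)
    try:
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        return aligned.get_depth_frame(), aligned.get_color_frame()
    finally:
        pipeline.stop()
```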
Depth to color alignment is a processing-intensive operation. If you are able to use a computer with Nvidia GPU hardware, such as an Nvidia Jetson board or a PC with Nvidia graphics, then you could make use of the RealSense SDK's ability to be built with CUDA support for acceleration of color conversion, alignment and pointclouds.
The ROS wrapper is able to take advantage of CUDA acceleration if librealsense and the ROS wrapper are built separately instead of built together from packages with the wrapper's Method 1 installation method (as the Method 1 packages do not provide CUDA support).
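For reference, CUDA support is enabled when building librealsense from source through a CMake switch. A build-configuration sketch (exact options may vary by SDK version and platform):

```shell
# From a build directory inside the librealsense source tree.
# BUILD_WITH_CUDA enables CUDA-accelerated color conversion,
# alignment and pointcloud generation.
cmake .. -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
make -j4 && sudo make install
```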
https://github.com/IntelRealSense/librealsense/pull/2670 has CUDA performance figures relating to align acceleration.
Since I only have the Python examples in my IDE, navigating to the source of the function leads to align.py, which contains:
import pybind11_builtins as __pybind11_builtins
from .filter import filter

class align(filter):
    """ Performs alignment between depth image and another image. """
    def process(self, frames): # real signature unknown; restored from __doc__
        """
        process(self: pyrealsense2.pyrealsense2.align, frames: pyrealsense2.pyrealsense2.composite_frame) -> pyrealsense2.pyrealsense2.composite_frame
        Run the alignment process on the given frames to get an aligned set of frames
        """
        pass

    def __init__(self, align_to): # real signature unknown; restored from __doc__
        """
        __init__(self: pyrealsense2.pyrealsense2.align, align_to: pyrealsense2.pyrealsense2.stream) -> None
        To perform alignment of a depth image to the other, set the align_to parameter with the other stream type.
        To perform alignment of a non depth image to a depth image, set the align_to parameter to RS2_STREAM_DEPTH.
        Camera calibration and frame's stream type are determined on the fly, according to the first valid frameset passed to process().
        """
        pass
> As described in the opening paragraph of the tutorial, alignment will resize the depth field of view (FOV) to match the size of the color field of view.
Does align resize only? Or does it also perform a reprojection from the left stereo plane to the RGB plane?
My understanding is that the processing blocks are handled by the SDK file rs_processing.hpp, and the SDK file align.cpp includes rs_processing.hpp:
https://github.com/IntelRealSense/librealsense/blob/master/src/proc/align.cpp
Aside from the rs-align tutorial information, the other main sources of official C++ documentation for the align process are here:
https://intelrealsense.github.io/librealsense/doxygen/classrs2_1_1align.html
https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#section-frame-alignment
The pyrealsense2 version of the align process documentation is here:
https://intelrealsense.github.io/librealsense/python_docs/_generated/pyrealsense2.align.html
https://github.com/IntelRealSense/librealsense/issues/5743 may be a helpful reference in regard to alignment and reprojection.
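To make the resize-vs-reprojection point concrete, here is a small pure-Python sketch of the per-pixel math that alignment performs: deproject a depth pixel to a 3D point using the depth intrinsics, transform it into the color camera's coordinate frame using the depth-to-color extrinsics, then project it with the color intrinsics. All camera numbers below are invented for illustration, not taken from a real device.

```python
# Invented pinhole intrinsics (focal lengths and principal points).
depth_intrin = dict(fx=600.0, fy=600.0, ppx=320.0, ppy=240.0)
color_intrin = dict(fx=610.0, fy=610.0, ppx=315.0, ppy=245.0)

# Invented depth-to-color extrinsics: identity rotation, ~15 mm x baseline.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.015, 0.0, 0.0]

def deproject(intrin, px, py, depth_m):
    """Depth pixel + depth value (metres) -> 3D point in the depth frame."""
    x = (px - intrin["ppx"]) / intrin["fx"]
    y = (py - intrin["ppy"]) / intrin["fy"]
    return (x * depth_m, y * depth_m, depth_m)

def transform(R, t, p):
    """Apply the extrinsic rotation and translation: R @ p + t."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def project(intrin, p):
    """3D point in the color frame -> color pixel coordinates."""
    x, y, z = p
    return (x / z * intrin["fx"] + intrin["ppx"],
            y / z * intrin["fy"] + intrin["ppy"])

# The depth pixel (400, 300) at 1.2 m lands on a *different* color pixel,
# which is why this is a reprojection rather than only a resize.
u, v = project(color_intrin,
               transform(R, t, deproject(depth_intrin, 400, 300, 1.2)))
# u is roughly 404.0 and v roughly 306.0 with these invented numbers.
```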
Thank you Marty, the links have mostly answered my questions.
I'll go through the .cpp file in future to understand the exact implementation.
Just glancing over the code, I have some initial questions about how the extrinsics from the left IR frame to the RGB frame are calculated. If you find anything on this, please let me know.
Thanks for the help again.
The transform between two stream types can be retrieved with the SDK instruction get_extrinsics_to:
https://dev.intelrealsense.com/docs/api-how-to#section-get-and-apply-depth-to-color-extrinsics
https://github.com/IntelRealSense/librealsense/issues/1231#issuecomment-368421888
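A sketch of retrieving those extrinsics in Python (assumes pyrealsense2 is installed and a pipeline has been started; the import is deferred so the helper can be defined without either). The rotation in rs2_extrinsics is stored as a column-major 9-element list, so it is regrouped into rows here for conventional R @ p + t usage:

```python
def depth_to_color_extrinsics(profile):
    """Given a started pipeline's profile, return the depth-to-color
    extrinsics as (3x3 rotation given as rows, translation in metres).

    Sketch only: requires pyrealsense2 and an active pipeline profile.
    """
    import pyrealsense2 as rs  # deferred: needs librealsense installed

    depth_stream = profile.get_stream(rs.stream.depth)
    color_stream = profile.get_stream(rs.stream.color)
    ext = depth_stream.get_extrinsics_to(color_stream)

    # ext.rotation is a column-major 9-element list; regroup into rows.
    c = ext.rotation
    R = [[c[0], c[3], c[6]],
         [c[1], c[4], c[7]],
         [c[2], c[5], c[8]]]
    return R, list(ext.translation)
```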
Issue Description
How can I query the serial number from a bag file recording? I can get the serial number from a device if it's connected (i.e. rs.context.devices), but our setup requires us to identify a camera after the recording was done, to know the recording viewpoint.
Our additional cameras are still en route, so I haven't tested this yet. But is it possible to record multiple cameras, either in a single bag file or in separate bag files, from the RealSense Viewer or in a Python script using enable_record_to_file()?
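One possible approach to the serial-number question, sketched but untested here: open the bag as a playback device via enable_device_from_file and read the device info back from the playback device, assuming the bag was recorded with the SDK so the device metadata is embedded.

```python
def serial_from_bag(bag_path):
    """Read the recording camera's serial number out of a .bag file by
    opening it as a playback device.

    Sketch only: requires pyrealsense2 and an SDK-recorded bag file.
    """
    import pyrealsense2 as rs  # deferred: needs librealsense installed

    config = rs.config()
    rs.config.enable_device_from_file(config, bag_path,
                                      repeat_playback=False)
    pipeline = rs.pipeline()
    profile = pipeline.start(config)
    try:
        device = profile.get_device()
        return device.get_info(rs.camera_info.serial_number)
    finally:
        pipeline.stop()
```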
Thanks!