keithahern opened this issue 4 years ago
I believe that you wish to rotate the camera 90 degrees but rotate the image 90 degrees in the opposite direction so that it appears in its normal orientation. Is that correct?
No, that's not correct.
I want to turn the camera 90 degrees and still have the X axis be horizontal and Y vertical. So, for example, in landscape the camera is 640x480 and in portrait it is 480x640.
If the camera is NOT rotated, the following code is correct: `dist = depth_frame.get_distance(x, y)`
If the camera IS rotated, then the code needs to be: `dist = depth_frame.get_distance(y, x)`
which will get confusing very quickly; every x must be swapped for y.
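The swap can be hidden behind a small helper so the rest of the code keeps using upright (x, y) coordinates. A minimal sketch of that idea (the helper name and the stub frame class are illustrative, not part of librealsense):

```python
# Hypothetical wrapper illustrating the x/y swap for a camera physically
# rotated 90 degrees. StubDepthFrame stands in for a pyrealsense2 depth
# frame that exposes get_distance(x, y).

class StubDepthFrame:
    """Stand-in for a depth frame: stores a 2D grid of distance values."""
    def __init__(self, grid):
        self.grid = grid  # grid[row][col], i.e. grid[y][x]

    def get_distance(self, x, y):
        return self.grid[y][x]

def get_distance_rotated(frame, x, y):
    """Query (x, y) in the upright (portrait) view of a frame captured
    by a 90-degree-rotated camera: swap the coordinates."""
    return frame.get_distance(y, x)

# Landscape sensor: 4 wide x 3 high; each value encodes (x, y) as x*10 + y.
frame = StubDepthFrame([[x * 10 + y for x in range(4)] for y in range(3)])

# In the portrait view, pixel (x=1, y=2) maps to sensor pixel (x=2, y=1),
# whose stored value is 21:
print(get_distance_rotated(frame, 1, 2))  # 21
```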
A solution could be done at the frame level:

```python
depth_frame = depth_frame.rotate(rs2.90_DEGREES_CLOCKWISE)
dist = depth_frame.get_distance(x, y)
```

but it would probably be better at the device level, so the color and IR frames also do the 'right thing':

```python
config = rs.config()
config.device_orientation(rs2.90_DEGREES_CLOCKWISE)
config.enable_stream(rs.stream.depth, 480, 640, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 480, 640, rs.format.bgr8, 30)
```

Or something like that. Does that help?
I am measuring a tall vertical groove at close range. When the camera is horizontal, it cannot see deeply into the groove. When the camera is vertical, it can.
On 11 Mar 2020, at 12:36, MartyG-RealSense notifications@github.com wrote:
I considered your question carefully. It sounds as though it would be much easier to keep the camera in its normal orientation and seek to improve the output results by other means. Could you tell me please what problems you have with the camera in its normal orientation that seem to be better when it is rotated 90 degrees? Thanks very much!
Hi @keithahern We have such a capability on the upcoming L500 camera, and we may be able to port it to D400 cameras in one of the future releases. I agree it's not as easy as rotating the image; all calibration data and other parameters need to be updated, and we do want to enable it in the SDK at some point.
Great.
Thank you Sergey and Marty.
@dorodnic @MartyG-RealSense I would like to upvote this request, as we would like to use it in our application, which involves detecting human faces. The point is that having the largest field of view in the vertical (i.e. a portrait orientation rather than landscape) allows for a wider range of subject heights, potentially including people sitting in wheelchairs, etc. It would be most helpful to have the requested feature for our application too!
@RMichaelPickering Thanks so much for your supportive input!
You could also increase the vertical field of view by having two cameras stacked vertically with their fields of view overlapping, and this should avoid the image rotation implications.
@MartyG-RealSense @dorodnic any update on this? A vertical orientation parameter or a function that can rotate RealSense frames would be very helpful!
My solution has been to just use RealSense for initial depth and color frame acquisition and then move to the (also Intel-backed) Open3D library and do my processing from there (i.e. I'm not using any RealSense projection/deprojection APIs). In Open3D it's much easier to merge and manipulate point clouds, render to 2D, create cross-platform UIs, etc. The documentation is lacking in places, but it has allowed me to move forward.
See below for how to get an rs2 frameset into an Open3D PointCloud and transform it; it's very fast.
```python
import numpy as np
import open3d as o3d

# frameset and profile come from a running pyrealsense2 pipeline
depth_frame = frameset.get_depth_frame()
color_frame = frameset.get_color_frame()

depth_image = np.asanyarray(depth_frame.get_data())
color_image = np.asanyarray(color_frame.get_data())

img_depth = o3d.geometry.Image(depth_image)
img_color = o3d.geometry.Image(color_image)

img_rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(img_color, img_depth)
intrinsics = profile.as_video_stream_profile().get_intrinsics()
pinhole_camera_intrinsic = o3d.camera.PinholeCameraIntrinsic(
    intrinsics.width, intrinsics.height,
    intrinsics.fx, intrinsics.fy,
    intrinsics.ppx, intrinsics.ppy)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(img_rgbd, pinhole_camera_intrinsic)
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
```
Thanks, this solves the problem!
> Hi @keithahern We have such capability on the upcoming L500 camera, and we may be able to port this to D400 cameras in one of the future releases. I agree it's not as easy as rotating the image, all calibration data and other parameters need to be updated, and we do want to enable it in the SDK at some point
@dorodnic I am working with the L515. Could you kindly tell me how to do it in Python? I can't find it ...
FWIW, we've implemented some frame rotation code using OpenCV, but it would of course be far better to do as much of this as possible in the camera! If the capability from the L500 could be ported to the D4xx cameras and fully supported in the SDK, that would be excellent!
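For reference, that kind of post-acquisition rotation can be sketched with NumPy (`np.rot90` with `k=-1` gives the same result as `cv2.rotate` with `cv2.ROTATE_90_CLOCKWISE`). Note this rotates pixels only; it does not update the camera intrinsics, which is exactly the gap an in-SDK solution would close. The function name below is illustrative:

```python
import numpy as np

def rotate_frames_90cw(depth_image, color_image):
    """Rotate depth and color arrays 90 degrees clockwise.
    np.rot90 with k=-1 matches cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)."""
    return np.rot90(depth_image, k=-1), np.rot90(color_image, k=-1)

# 640x480 landscape frames (H=480 rows, W=640 cols) become portrait frames.
depth = np.zeros((480, 640), dtype=np.uint16)      # z16 depth
color = np.zeros((480, 640, 3), dtype=np.uint8)    # bgr8 color
depth_r, color_r = rotate_frames_90cw(depth, color)
print(depth_r.shape, color_r.shape)  # (640, 480) (640, 480, 3)
```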
Mike
Hi, I'm using the L515 and I also need to rotate the camera 180 degrees for my project. This is very helpful to me; can you tell me more about the usage?
Hi @FeiSong123 If you rotate the L515 camera 180 degrees in the Y direction then, although the image would be upside down, you would not need to make any changes to the image, as its values should still be correct.
If you needed to rotate or flip the image after rotating the camera 180 degrees so that the image was the 'right way up', then you could do that using the methods in this discussion, or at https://github.com/IntelRealSense/librealsense/issues/4395#issuecomment-511113996 and https://github.com/IntelRealSense/librealsense/issues/9088
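For the 180-degree case specifically, flipping both spatial axes is enough (equivalent to `cv2.rotate(img, cv2.ROTATE_180)`). A small NumPy sketch on a toy array for illustration:

```python
import numpy as np

def rotate_180(image):
    """Rotate an image 180 degrees by flipping both spatial axes
    (same result as cv2.rotate(img, cv2.ROTATE_180))."""
    return np.flip(np.flip(image, axis=0), axis=1)

img = np.arange(6).reshape(2, 3)  # [[0 1 2], [3 4 5]]
print(rotate_180(img))            # [[5 4 3], [2 1 0]]
```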
I also prefer to rotate at the camera level, and I see @dorodnic commented that the L515 has such a capability. What should I do in C++? (I can't find this device_orientation function on config.)
The comment was made before the L515 was first released on sale, and not every planned feature is released, for various reasons (for example, if the feature did not work well). As far as I am aware, there is not a dedicated image rotation feature for the L515. I apologize for the inconvenience.
That's a pity. Nevertheless, I still want to thank you for your guidance.
@dorodnic @MartyG-RealSense According to what @dorodnic has written: "I agree it's not as easy as rotating the image, all calibration data and other parameters need to be updated, and we do want to enable it in the SDK at some point".
I would like to know if we can expect the same depth/image accuracy, quality, performance, response, etc. if the camera is rotated 90° left/right. Calibration data is mentioned for some reason. So is rotating those frames manually, using other software like OpenCV, considered sub-optimal because the camera is not positioned upright?
Hi @maciejandrzejewski-digica There are no performance issues with physically rotating only the camera 90 degrees. There would be a problem with calibration values if the frame was then rotated with software such as OpenCV to make the image display in the normal orientation.
@MartyG-RealSense In the project we would like to rotate the camera to better suit its position in the device case, but for AI inference we need to rotate the images back to the upright position. Can you elaborate on what a "problem with calibration values" means and how it affects depth accuracy?
Depending on the maximum depth range required, the RealSense D405 model (or its D401 PCB board version) may be an easy fit for a device case, as it is a small square rather than a wide horizontal shape. Its ideal depth range is 7 cm to 50 cm, meaning that it is suited to very close range depth sensing applications rather than multiple-meter distances.
https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d405.html
If a longer-distance camera model is required, a RealSense user earlier in this discussion at https://github.com/IntelRealSense/librealsense/issues/6023#issuecomment-785071263 shared a software workaround for frame rotation.
There is unfortunately not any further information available about the implications to camera parameters of frame rotation other than dorodnic's statement at https://github.com/IntelRealSense/librealsense/issues/6023#issuecomment-597615585
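As a rough illustration of why rotation touches calibration: under a simple pinhole model, rotating the image 90 degrees clockwise swaps the image dimensions and focal lengths and remaps the principal point (distortion coefficients would need corresponding treatment as well, which is not shown). The helper below is an illustrative sketch, not a librealsense API:

```python
def rotate_intrinsics_90cw(width, height, fx, fy, ppx, ppy):
    """Pinhole intrinsics after rotating the image 90 degrees clockwise.
    A pixel (x, y) in the original W x H image lands at
    (new_x, new_y) = ((height - 1) - y, x) in the new H x W image,
    so the axes swap: fx/fy exchange and the principal point is remapped."""
    return {
        "width": height, "height": width,
        "fx": fy, "fy": fx,
        "ppx": (height - 1) - ppy, "ppy": ppx,
    }

# Example: a 640x480 stream with the principal point near the center.
print(rotate_intrinsics_90cw(640, 480, 382.0, 382.0, 320.0, 240.0))
```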
Hey guys, check out PR #13499 created by @noacoohen. We think this fulfills the need.
Note: this rotation is a CPU-based implementation.
Hi everyone, have any of you been able to test the new rotation post-processing filter implemented on the development branch of librealsense, please? Thanks!
https://github.com/IntelRealSense/librealsense/tree/development
I would like to test it, but I haven't figured out how it works. Do you know how to use it? My install uses the ROS 2 wrapper, which they said is not supported yet, but I would like to know how to use it without the wrapper too. Maybe I can figure out a way to make it work that way.
You can use this code reference as a usage example:
| Required Info | |
|---|---|
| Camera Model | D400 |
| Firmware Version | V2.31.0 |
| Operating System & Version | Linux / MacOS |
| Kernel Version (Linux Only) | (e.g. 4.14.13) |
| Platform | PC |
| SDK Version | 2.X |
| Language | python |
| Segment | others |
Issue Description
I have developed an application in Python, and some tests indicate the performance of the camera would be better if it were rotated 90 degrees. It would be extremely time-consuming (and confusing!) to rewrite and change all the code, which assumes x is horizontal and y is vertical. We use a lot of librealsense API calls. Is there an easier way to rotate the camera and still have x, y behave correctly, e.g. in 480x640 resolution?