IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Frame didn't arrive within 5000 #12055

Closed SylvanSi closed 11 months ago

SylvanSi commented 1 year ago

Required Info
- Camera Model: D435f
- Firmware Version: (Open RealSense Viewer --> Click info)
- Operating System & Version: Ubuntu 20.04
- Platform:
- SDK Version: 2.54.1.0
- Language: python
- Segment: { Robot / Smartphone / VR / AR / others }

Issue Description

Frame didn't arrive within 5000

Current SW version is 2.54.1.0. rs-capture can get video and realsense-viewer can open the camera, but when I use Python to run the code, it shows:

  frames = pipeline.wait_for_frames()
  RuntimeError: Frame didn't arrive within 5000

Please look at this problem for me. Thanks:)

SylvanSi commented 1 year ago

The code is as follows:

  import pyrealsense2 as rs
  import numpy as np
  import cv2

  pipeline = rs.pipeline()
  config = rs.config()
  config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
  config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
  print("reset start")
  ctx = rs.context()
  devices = ctx.query_devices()
  for dev in devices:
      dev.hardware_reset()
  print("reset done")

  profile = pipeline.start(config)

  depth_sensor = profile.get_device().first_depth_sensor()
  depth_scale = depth_sensor.get_depth_scale()
  print("Depth Scale is: " , depth_scale)

  clipping_distance_in_meters = 1 
  clipping_distance = clipping_distance_in_meters / depth_scale

  align_to = rs.stream.color
  align = rs.align(align_to)

  try:
      while True:
          frames = pipeline.wait_for_frames()

          aligned_frames = align.process(frames)

          aligned_depth_frame = aligned_frames.get_depth_frame() 
          color_frame = aligned_frames.get_color_frame()

          if not aligned_depth_frame or not color_frame:
              continue

          depth_image = np.asanyarray(aligned_depth_frame.get_data())
          color_image = np.asanyarray(color_frame.get_data())

          grey_color = 153
          depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) 
          bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

          depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
          images = np.hstack((bg_removed, depth_colormap))
          cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
          cv2.imshow('Align Example', images)
          key = cv2.waitKey(1)

          if key & 0xFF == ord('q') or key == 27:
              cv2.destroyAllWindows()
              break
  finally:
      pipeline.stop()

When I try the code from issue #6628 ("[ERROR] RuntimeError: Frame didn't arrive within 5000"), I got:

  reset start
  reset done
  Depth Scale is: 0.0010000000474974513
  Traceback (most recent call last):
    File "/root/pan/install_test2.py", line 32, in <module>
      frames = pipeline.wait_for_frames()
  RuntimeError: Frame didn't arrive within 5000

Does this mean that I got a frame at the beginning?

SylvanSi commented 1 year ago

I can use pyrealsense2 on my PC.

SylvanSi commented 1 year ago

One more issue (screenshots attached): the device name shown is not D435f. Is that a problem?

SylvanSi commented 1 year ago

When I use realsense-viewer, it reports:

  INFO [255085735026720] (rs.cpp:2697) Framebuffer size changed to 1066 x 652
  31/07 08:50:25,480 INFO [255085735026720] (rs.cpp:2697) Window size changed to 1066 x 652
  31/07 08:50:38,138 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
  31/07 08:50:38,149 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
  31/07 08:50:38,160 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b

MartyG-RealSense commented 1 year ago

Hi @SylvanSi A D435f camera is detected as D435, so this is normal and not something to be concerned about.


It looks as though you have modified the align-depth2color.py example program and added a hardware reset mechanism.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py

When a camera is reset, it is disconnected and then reconnected. If it is not re-detected after the disconnection, this could cause frames not to arrive within the 5-second period (5000 ms) allowed before the program times out and produces RuntimeError: Frame didn't arrive within 5000.

Does the original align-depth2color.py program work if you run it without changes?
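
If the reset is kept in the script, a minimal sketch (not from the original exchange, and assuming pyrealsense2's wait_for_frames accepts a timeout in milliseconds) of waiting for the device to re-enumerate before starting the pipeline might look like this:

  import time
  import pyrealsense2 as rs

  def reset_and_wait(timeout_s=10.0):
      """Hardware-reset every connected RealSense device, then poll the
      context until at least one device re-enumerates or we give up."""
      ctx = rs.context()
      for dev in ctx.query_devices():
          dev.hardware_reset()
      deadline = time.time() + timeout_s
      while time.time() < deadline:
          if list(ctx.query_devices()):
              return True
          time.sleep(0.5)
      return False

  pipeline = rs.pipeline()
  config = rs.config()
  config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
  config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

  if reset_and_wait():
      pipeline.start(config)
      # Allow more than the default 5000 ms for the first frame after a reset.
      frames = pipeline.wait_for_frames(10000)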


The Resource temporarily unavailable type of 'control_transfer returned' warning can indicate that there is a communication problem between the camera and the computer, such as an issue with the USB port or the USB cable.


Are you using the official 1 meter long USB cable supplied with the camera or a longer USB cable of your own choice, please?

SylvanSi commented 1 year ago

Hi @MartyG-RealSense, thanks for the reply. It doesn't work without changes either. I am using the official 1 m USB cable, and I also tried another cable to make sure the problem is not caused by the cable. And as I said before, the camera works on my PC; I can use pyrealsense2 to access it.

MartyG-RealSense commented 1 year ago

If pyrealsense2 works on your PC, is the 'Frame didn't arrive within 5000' error when you run your script occurring on that same PC or on a different computer / computing device?

SylvanSi commented 1 year ago

It won't occur. It works normally.

MartyG-RealSense commented 1 year ago

Do you mean you have pyrealsense2 installed on your PC and it works but you have this error when running your program?

SylvanSi commented 1 year ago

I am sorry that I didn't explain my problem clearly. I mean that when I connect the D435 to my PC and use Python to run my project, the camera shows RGB and depth images properly. But when I connect it to the Ubuntu device and run another program, it doesn't work and shows 'Frame didn't arrive within 5000'. So I used align-depth2color.py to verify, and it still shows 'Frame didn't arrive within 5000'. I just want to explain that there is no problem with my project. Thanks:)

MartyG-RealSense commented 1 year ago

So the first time that you run a program it is fine, but the second time that you run a program (even one that is not your own) it says 'Frame didn't arrive within 5000'.

So the problem is not with your script but with running a program after the first one?

SylvanSi commented 1 year ago

Sorry again, it still works on the PC, so we don't have to discuss that. The problem is that when I use the camera on my Ubuntu device, it says 'Frame didn't arrive within 5000'. It doesn't work even once.

MartyG-RealSense commented 1 year ago

I think the confusion is coming from 'ubuntu device'. So if the Ubuntu device is another computer, is the PC a Windows machine?

My apologies, and thanks for your patience!

SylvanSi commented 1 year ago

I am sorry for the confusion. Yes, 'PC' is my Windows device, and the 'Ubuntu device' is actually a development board. So what makes this happen? I also noticed that during those 5 seconds the camera's infrared laser was working (it was flashing).

MartyG-RealSense commented 1 year ago

It's no problem at all. Thanks very much for the confirmation.

What is the Ubuntu development board that you are using (Raspberry Pi, Nvidia Jetson, etc)?

SylvanSi commented 1 year ago

It is the atlas200IDK from Ascend. I haven't seen anyone else use this with RealSense.

MartyG-RealSense commented 1 year ago

It looks as though this is the model that you are using:

https://e.huawei.com/en/products/computing/ascend/atlas-200

There is no previously reported use of this hardware with RealSense cameras. It has an Ascend 310 AI processor chip. RealSense cameras work with Intel, Arm and sometimes AMD processors. The Ascend 310 processor does not seem to use the architecture of any of these brands, so it may not be fully compatible with the camera.

SylvanSi commented 1 year ago

Thanks for your patience. I think it is built on the Arm architecture. Is it a compatibility issue, or is there another problem? I tried using a C program to access the camera, and it worked successfully. I was able to save a video and play it back without any issues. However, I'm unsure how to access the depth stream. Have there been similar issues on other devices, and are there any solutions available?

MartyG-RealSense commented 1 year ago

'Frame didn't arrive within 5000' is one of the most common errors experienced by RealSense users. It basically means that the camera stopped delivering new messages for more than 5 seconds, causing a time-out.
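
If a script needs to keep running instead of aborting when a frame is slow, pyrealsense2 also offers a non-throwing variant; a minimal sketch, assuming the Python binding of try_wait_for_frames returns a (success, frameset) tuple:

  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  pipeline.start()

  while True:
      # Unlike wait_for_frames, this reports a timeout instead of raising.
      success, frames = pipeline.try_wait_for_frames(10000)
      if not success:
          print("No frames within 10 s - check the USB link or reset the camera")
          continue
      depth = frames.get_depth_frame()
      if depth:
          print("Distance at the centre pixel:", depth.get_distance(320, 240), "m")
          break

  pipeline.stop()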

If you are using C (not C++) then there is a C example program called rs-depth at the link below.

https://github.com/IntelRealSense/librealsense/tree/master/examples/C/depth

SylvanSi commented 1 year ago

Thanks, but I think I still need to use python to complete the project. I have to try again, or get another camera. Thank you again for your patience:)

MartyG-RealSense commented 1 year ago

Hi @SylvanSi Do you have an update about this case that you can provide, please? Thanks!

SylvanSi commented 1 year ago

not yet

MartyG-RealSense commented 1 year ago

Okay, thanks very much @SylvanSi for the update.

Nataraj-github commented 1 year ago

I tried to use the code given in the link https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-tutorial-1-depth.py but it didn't work; I am still getting the same error:

  C:\Users\Ne1\AppData\Local\Temp\ipykernel_17360\1471619645.py:37: SyntaxWarning: "is" with a literal. Did you mean "=="?
    if y%20 is 19:
  Frame didn't arrive within 5000

MartyG-RealSense commented 1 year ago

Hi @Nataraj-github Does the error still occur if you insert code at line 18 of the script (before the pipeline start line) to reset the camera when the script is launched?

  ctx = rs.context()
  devices = ctx.query_devices()
  for dev in devices:
      dev.hardware_reset()

MartyG-RealSense commented 1 year ago

Hi @Nataraj-github Do you require further assistance with this case, please? Thanks!

Nataraj-github commented 1 year ago

Hi IntelRealSense/librealsense,

I basically took a lidar depth picture of a flat door at 750 mm distance from my L515 lidar camera, in .raw file format, and converted it into CSV format. The values in the Excel sheet seem to be around 3000, so I had to divide by 4 using an Excel operation (for all 640 x 480 pixels). This seems to work for me, but I am curious to understand why. Am I doing this correctly?

Attaching the related files for your reference. Thanks for your time and patience.

MartyG-RealSense commented 1 year ago

The raw depth values of the camera are 'pixel depth' values that do not represent real-world distance in meters. To get the real-world depth value in meters, you can multiply the raw depth value by the depth scale value of the particular RealSense camera model being used. The scale of L515 is 0.000250.

3000 x 0.000250 = 0.75 meters, or 750 mm.
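
To make the arithmetic concrete, here is a minimal pyrealsense2 sketch (not from this thread) that reads the depth scale from the connected device instead of hard-coding it; it assumes a depth-capable camera is attached and streaming:

  import numpy as np
  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  profile = pipeline.start()

  # Per-model depth scale, e.g. 0.000250 on L515 or 0.001 on D400 series.
  depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

  frames = pipeline.wait_for_frames()
  depth_raw = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 'pixel depth' values

  depth_m = depth_raw * depth_scale  # real-world distance in meters
  print("Centre pixel:", depth_m[depth_raw.shape[0] // 2, depth_raw.shape[1] // 2], "m")

  pipeline.stop()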

Nataraj-github commented 1 year ago

Thank you for the quick reply.

Could you help me find the documentation where you found this value? I have one more query regarding the .bag file. I need to regenerate the canopy of a group of plants, so I took a video from the L515 in bag format and tried to export it to PLY format. When I load it in CloudCompare (point-cloud software), it loads like the image attached below (only a single side), but the requirement is to regenerate the whole plant in 360 degrees. What is the easiest way to regenerate a plant canopy from a .bag file?

[image: image.png]

MartyG-RealSense commented 1 year ago

The L515 depth scale is not in documentation but you can confirm it in the RealSense Viewer tool by going to the 'L500 Depth Sensor > Controls' section of the Viewer's options side-panel and seeing the value '0.000250' beside the "Depth Units" option.
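
The same value can also be read programmatically without opening the Viewer; a short sketch, assuming a depth-capable device is attached:

  import pyrealsense2 as rs

  ctx = rs.context()
  for dev in ctx.query_devices():
      depth_sensor = dev.first_depth_sensor()
      # 'Depth Units' as shown in the Viewer's controls; it equals the depth scale.
      print("Depth units:", depth_sensor.get_option(rs.option.depth_units))
      print("Depth scale:", depth_sensor.get_depth_scale())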

Instead of recording a bag file, you could export a separate ply file for each side of the plant from the Viewer's 3D pointcloud mode and then use CloudCompare to stitch the multiple ply files together into a single combined ply. The link below has information about doing so in CloudCompare.

https://github.com/IntelRealSense/librealsense/issues/10640#issuecomment-1172460776
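
The per-side PLY files can also be exported from a short pyrealsense2 script instead of the Viewer; a minimal sketch, where the output file name is only illustrative:

  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  config = rs.config()
  config.enable_stream(rs.stream.depth)
  config.enable_stream(rs.stream.color)
  pipeline.start(config)

  try:
      # Grab one frameset that contains both depth and color.
      while True:
          frames = pipeline.wait_for_frames()
          depth, color = frames.get_depth_frame(), frames.get_color_frame()
          if depth and color:
              break

      # Build a color-textured pointcloud and write this viewpoint to PLY.
      pc = rs.pointcloud()
      pc.map_to(color)
      points = pc.calculate(depth)
      points.export_to_ply("plant_side_1.ply", color)  # repeat per side, then stitch in CloudCompare
  finally:
      pipeline.stop()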

Nataraj-github commented 1 year ago

Hi IntelRealSense/librealsense team,

I need your help to clarify the measuring approach followed by the L515 Intel RealSense camera.

Does it measure the distances using the laser emitting point as the reference (A in the attached image), or does it treat the camera face as a reference plane for measuring distances (B in the attached image)?

Thanks and Regards, Nataraj Eswarachandra.

MartyG-RealSense commented 1 year ago

Hi @Nataraj-github Images cannot be posted to this forum by email and must instead be inserted into the comment writing box on the web-page.

The L515 camera scans an infrared laser beam over the entire field of view (FOV). The surfaces reflect the light back to a photodiode component in the camera, which processes the data from the reflected beam. It then outputs a depth point representing a specific point in the scene. A depth pointcloud is generated by aggregating all of the points in the scene that the camera is observing.

https://dev.intelrealsense.com/docs/lidar-camera-l515-datasheet

So answer 'A' will be the closest to the above explanation.

Nataraj-github commented 1 year ago

Thanks for the reply,

But I notice that the image provided in the documentation shows it as the distance between two planes (B in the earlier image I shared with you), i.e. the perpendicular distance from the camera plane to a particular point, not the distance from the laser emitting point to a particular point (the inclined distance crossed out in red in the image attached below).

[image: image.png] [image: image.png]

MartyG-RealSense commented 1 year ago

I cannot view your images, unfortunately. Please try pasting them into the comment box again. When the image is being inserted, it may take some time before it is loaded in so the Comment button should not be clicked until the import is complete or it will just display text saying [image: image.png]

The L515 lidar depth camera works on different principles to stereo depth cameras like the RealSense 400 Series. L515 calculates distance based on light reflected back to the camera from the surface of objects, as described in the quote from the L515 data sheet that I provided above.

MartyG-RealSense commented 1 year ago

In addition to the data sheet document, a User Guide for the L515 can be downloaded as a PDF file at the link below.

https://support.intelrealsense.com/hc/en-us/articles/360051646094-Intel-RealSense-LiDAR-Camera-L515-User-Guide

MartyG-RealSense commented 1 year ago

Hi @Nataraj-github Do you require further assistance with your problem, please? Thanks!

Nataraj-github commented 1 year ago

Thanks to the IntelRealSense/librealsense team. If I take a depth picture I get distance values for each pixel; these are perpendicular distances to the face of the camera (say, X-axis distances). I want to measure volume, so I also need the other distances (Y axis and Z axis); how do I get them in a CSV or any other file format? In other words, how do I measure the height and length covered by a 640 x 480 pixel frame?

MartyG-RealSense commented 1 year ago

@Nataraj-github The RealSense SDK has a Python example program for measuring volume called box_dimensioner_multicam

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam
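
For the per-pixel coordinates asked about above, the depth value at a pixel can be deprojected through the depth stream intrinsics; a minimal pyrealsense2 sketch, with the pixel chosen arbitrarily for illustration:

  import pyrealsense2 as rs

  pipeline = rs.pipeline()
  pipeline.start()

  frames = pipeline.wait_for_frames()
  depth = frames.get_depth_frame()

  # Intrinsics of the depth stream (focal lengths and principal point).
  intrin = depth.profile.as_video_stream_profile().get_intrinsics()

  px, py = 320, 240                    # any pixel inside the frame
  dist_m = depth.get_distance(px, py)  # distance in meters at that pixel

  # Deproject to a 3D point in the camera coordinate system (meters):
  # X to the right, Y downward, Z forward from the camera.
  X, Y, Z = rs.rs2_deproject_pixel_to_point(intrin, [px, py], dist_m)
  print("3D point:", X, Y, Z)

  pipeline.stop()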

MartyG-RealSense commented 12 months ago

Hi @Nataraj-github Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 11 months ago

Case closed due to no further comments received.

Nataraj-github commented 11 months ago

Hi Team,

I hope you can now see the images I have attached. I am just not clear whether the reference for measuring distance on the camera is a PLANE or the laser emitting POINT. From the images and text I saw, I assume the reference is the camera PLANE for measuring distances to any object. Correct me if I am wrong. I have taken screenshots so that you can see the page number and document name in case you need more details to confirm. Thanks!
1) L515_User_Guide_v1.0, pages 21 and 22
2) Intel_RealSense_LiDAR_L515_Datasheet_Rev003, pages 10 and 11

(Screenshot 1: Laser point as reference point and camera face as reference plane, L515_User_Guide_v1.0)

(Screenshot 2: Lidar measurement reference)

MartyG-RealSense commented 11 months ago

As mentioned in the second image, the depth measurement reference of L515 (where depth = 0) is the front glass of the camera. This location is known as the starting point or plane.

Light bounces back from surfaces to the camera's photodiode component and generates a depth point. All individual depth points are combined together into a point cloud image.

Nataraj-github commented 11 months ago

Thanks for the reply, but I am still not sure you understand my question clearly. The distance d1 (in the first image I drew) is different when the starting reference is the laser emitting point (case A) and when it is the camera plane (case B in the same image). In other words, d1 is not the same when measured from the laser emitting point as when measured from the camera's planar face. So I want to understand whether the algorithm uses the plane or the emitting point as the reference, because the distances vary depending on the reference (with the plane they are perpendicular distances, with the emitting point they are inclined).

MartyG-RealSense commented 11 months ago

The depth algorithms of RealSense cameras are confidential closed-source information that is not available publicly, unfortunately. If the public data sheet or user guide does not contain the information then it will not be able to be disclosed.

However, page 19 of the user guide refers to an observed surface that is being depth-sensed as the plane. Pages 22 and 26 also refer to depth as a plane. Page 18 of the L515 data sheet document describes how the Depth Quality Tool can be used to test distance to plane accuracy.

Nataraj-github commented 11 months ago

Thank you for your patience and explanation. I just measured the distances against a plain wall, and apparently the values reported by the camera use the plane as the reference (i.e. perpendicular distances from the camera face to the object).

MartyG-RealSense commented 11 months ago

You are very welcome. Thanks very much for the update!

mujiwob commented 8 months ago

It is the atlas200IDK from Ascend. I haven't seen anyone else use this with RealSense.

Hi @SylvanSi, I'm currently trying to use RealSense on the Atlas 200I DK A2. After I compiled librealsense, I could not detect the camera in realsense-viewer or pyrealsense2. Right now I can only get color and infrared frames through OpenCV, but I also need depth frames in my project. Have you found a way to use the RealSense camera on the Atlas 200I DK A2? Thanks!

MartyG-RealSense commented 8 months ago

Hi @woblitent Did you build librealsense from source code with CMake and include the flag -DFORCE_RSUSB_BACKEND=TRUE in the CMake build instruction, please? An RSUSB = true source code build of librealsense can work well with 'exotic' computing hardware such as an industrial board that is not like a typical PC computer.

mujiwob commented 8 months ago

Thank you for your response! This works!

MartyG-RealSense commented 8 months ago

You are very welcome. It's excellent to hear that RSUSB = true resolved your issue. Thanks very much for the update!