Closed czjczjczj1 closed 4 years ago
I suspect the camera might be recognized as USB2. That often leads to this type of error, since USB2 offers only a limited set of resolutions, which is likely not sufficient here. Please check in the RealSense Viewer whether this is the case.
So I need to record a bag file first using the RealSense Viewer, and then read that bag file when I run box_dimensioner_multicam_demo.py?
Not necessarily. Just check in the Viewer that the camera is connected to USB3 (it is stated next to the device name); if it is, you should be able to run the Python example. If it appears to be connected to USB2, connect it to a different port, try switching cables, or modify the sample to work with reduced resolutions.
I had the same issue. My solution was the same as @dorodnic's: reducing the resolutions. For my environment (Ubuntu 18.04, SDK 2.16.5), these two configurations work:
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 6)
or
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
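Building on the fallback idea above, one approach (a sketch, not part of the sample) is to try stream modes in order of preference and keep the first one the device can resolve. `first_supported_mode`, `CANDIDATE_MODES`, and the `can_resolve` callback are hypothetical names introduced here; with real hardware the callback could wrap `rs.config.can_resolve(rs.pipeline_wrapper(pipeline))`.

```python
def first_supported_mode(modes, can_resolve):
    """Return the first (width, height, fps) tuple accepted by can_resolve.

    modes       -- candidate depth modes, best first
    can_resolve -- callback returning True if the device can stream the mode
    """
    for width, height, fps in modes:
        if can_resolve(width, height, fps):
            return width, height, fps
    raise RuntimeError("no supported depth mode found")

# Preference order: full USB3 mode first, then USB2-friendly fallbacks.
CANDIDATE_MODES = [(1280, 720, 30), (1280, 720, 6), (640, 480, 30)]
```

On a USB3 connection the first candidate should resolve; on a USB2 link the helper falls through to a mode the link can actually carry, avoiding the "Couldn't resolve requests" error.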
[Realsense Customer Engineering Team Comment] @czjczjczj1 Did you get a chance to try with USB3.0 to see if the issue is resolved? Please update. Thanks!
Hello, sorry for the late reply. Unfortunately, it is still not working.
[Realsense Customer Engineering Team Comment] @czjczjczj1 How about other resolutions such as 640x480@30fps or 1280x720@6fps?
I've tried them both, it's still not working.
[Realsense Customer Engineering Team Comment] @czjczjczj1 Could you please confirm that you're running Python 3.6? The sample requires Python 3.6 and does not work with Python 2.7.
I tried to reproduce your issue and ran into some other errors that are not the same as yours. After a small modification to "realsense_device_manager.py" I got it running successfully. Please give it a try and report your result. Thanks!
The modification is as follows. Remove these two lines:
frameset = rs.composite_frame(rs.frame())
device.pipeline.poll_for_frames(frameset)
and replace them with:
frameset = device.pipeline.wait_for_frames()
Hello, it's working now, many thanks. But I have another issue: I cannot read the width and height values, because they are drawn outside the displayed video. How can I see them?
[Realsense Customer Engineering Team Comment] @czjczjczj1 You can reduce the font scale passed to cv2.putText (the argument following cv2.FONT_HERSHEY_PLAIN) from 2 to 1 to get a smaller font size.
Hi @czjczjczj1, very glad to see you got it working. I also want to run this demo. Do I need to print the checkerboard? Could I get in touch with you and ask some questions?
Hi, @honghande. Yes, I printed the chessboard provided with the demo on A4 paper. Sure, feel free to ask any questions, although I might not be able to answer them : ).
Hi @czjczjczj1, Intel provides the checkerboard as a PNG. Can I paste the PNG directly into Word and print it on A4 paper?
@honghande Yes, I think you can do that, just drag the picture into Word. But I simply opened the picture and printed it at A4 size.
Hi @czjczjczj1, excuse me, I haven't printed the checkerboard yet. When I open the camera and run the code, it keeps prompting me to place the checkerboard. Can this code detect the checkerboard?
If you just want to check whether this step works, you can try pointing the camera at the chessboard image opened on your screen.
Hi @czjczjczj1, excuse me, as you suggested I opened the checkerboard photo, ran the code with the camera pointed at the photo, and got the following error:
Traceback (most recent call last):
File "E:/intelRealsense/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py", line 143, in <module>
Invoked with: <pyrealsense2.depth_frame object at 0x0000000009C33260>, 259.0, 23.0
Hi @czjczjczj1, it reported the following error:
1 devices have been found
Place the chessboard on the plane where the object needs to be detected..
Place the chessboard on the plane where the object needs to be detected..
Place the chessboard on the plane where the object needs to be detected..
Place the chessboard on the plane where the object needs to be detected..
RMS error for calibration with device number 828212141104 is : 0.0165878551 m
Calibration completed...
Place the box in the field of view of the devices...
Traceback (most recent call last):
File "E:/intelRealsense/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py", line 143, in <module>
@honghande
Was your camera connected via USB3? The last error says the stream is infrared, which might mean you were using the stereo module. I think you can only use the RGB camera view if the cameras are connected via USB3, but I'm not sure, so you'd better ask the developers. To check whether you can use the RGB camera view, open the Intel RealSense Viewer and see whether there is an option to enable the RGB camera. If there is not, your camera is connected via USB2.
Hi @czjczjczj1, my camera has no color camera; both are infrared cameras, but one of them outputs color. I want to use that infrared camera's color data instead of a color camera. Is that OK? What difference does USB 2.0 vs USB 3.0 make here? And must this project use two depth cameras, or can it work with one?
@honghande I don't know why USB3.0 is needed to get the RGB view either; maybe USB3 can transmit the frames faster than USB2. I tried using only one camera too, but the result did not seem as good as with two cameras.
Hi @czjczjczj1 @dorodnic, my stereo camera only has two infrared cameras and no color camera, but one of the infrared cameras outputs color. How can I adapt the program to this hardware?
Hi @czjczjczj1 @dorodnic, my camera information is as shown above. I want to open the color-capable infrared camera, but the device seems to open all the emitters. How do I change that? I modified the config as follows:
rs_config = rs.config()
rs_config.enable_stream(rs.stream.depth, resolution_width, resolution_height, rs.format.z16, frame_rate)
rs_config.enable_stream(rs.stream.infrared, 1, resolution_width, resolution_height, rs.format.rgb8, frame_rate)
Errors were reported as follows:
1 devices have been found
Traceback (most recent call last):
File "E:/intelRealsense/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py", line 158, in
Hi @czjczjczj1 @dorodnic, there is a large error between the measured and actual size. Does the camera have to be fixed in place?
Hi @czjczjczj1 @dorodnic, doesn't the camera already come with intrinsic and extrinsic parameters? Why do we have to calibrate with the checkerboard?
That's a good question; I'm not sure about it either. But it seems that no matter what kind of camera it is, calibration is an important step. @honghande
Hi @czjczjczj1, I have looked into the basic principle. The main reason we need to calibrate is to estimate the pose of each camera relative to the checkerboard and to compute the error between the measured and ideal corner positions, which uses the kabsch_rmsd algorithm, though their calculation does not quite follow the principle.
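For context, the Kabsch step mentioned above finds the rigid rotation that best aligns two 3D point sets (here, measured chessboard corners versus their ideal layout); the RMSD after alignment is the calibration error the demo prints. A minimal numpy sketch, not the sample's exact code:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Optimal rotation aligning point set P onto Q (both N x 3), plus RMSD.

    Both sets are centered first; the rotation comes from the SVD of the
    covariance matrix, with a sign fix to avoid reflections.
    """
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc                       # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # reflection correction
    R = Vt.T @ D @ U.T
    rmsd = np.sqrt(np.mean(np.sum((Pc @ R.T - Qc) ** 2, axis=1)))
    return R, rmsd
```

The translation falls out of the centering step, so a full rigid transform between two cameras is the recovered rotation plus the difference of the centroids.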
Hi @czjczjczj1, excuse me, as you suggested I opened the checkerboard photo, ran the code with the camera pointed at the photo, and got the following error:
Traceback (most recent call last):
  File "E:/intelRealsense/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py", line 143, in <module>
    run_demo()
  File "E:/intelRealsense/python/examples/box_dimensioner_multicam/box_dimensioner_multicam_demo.py", line 72, in run_demo
    transformation_result_kabsch = pose_estimator.perform_pose_estimation()
  File "E:\intelRealsense\python\examples\box_dimensioner_multicam\calibration_kabsch.py", line 198, in perform_pose_estimation
    corners3D = self.get_chessboard_corners_in3d()
  File "E:\intelRealsense\python\examples\box_dimensioner_multicam\calibration_kabsch.py", line 164, in get_chessboard_corners_in3d
    depth = get_depth_at_pixel(depth_frame, corner[0], corner[1])
  File "E:\intelRealsense\python\examples\box_dimensioner_multicam\helper_functions.py", line 117, in get_depth_at_pixel
    return depth_frame.as_depth_frame().get_distance(round(pixel_x), round(pixel_y))
TypeError: get_distance(): incompatible function arguments. The following argument types are supported:
    - (self: pyrealsense2.depth_frame, x: int, y: int) -> float
Invoked with: <pyrealsense2.depth_frame object at 0x0000000009C33260>, 259.0, 23.0
Hi, in helper_functions.py, replace that line with return depth_frame.as_depth_frame().get_distance(int(round(pixel_x)), int(round(pixel_y))) and it works.
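The fix works because pyrealsense2's get_distance() binding only accepts plain Python ints, while the chessboard corner coordinates arrive as floats (numpy float32, for which round() may not return a plain int in some environments). A small mock, hypothetical and only meant to illustrate the strict binding, shows the coercion:

```python
import numpy as np

class StrictDepthFrame:
    """Stand-in for pyrealsense2.depth_frame with its strict int-only binding."""
    def as_depth_frame(self):
        return self

    def get_distance(self, x, y):
        # Mimic the pybind11 signature check: only plain ints are accepted.
        if not (type(x) is int and type(y) is int):
            raise TypeError("get_distance(): incompatible function arguments")
        return 0.5  # dummy depth in meters

def get_depth_at_pixel(depth_frame, pixel_x, pixel_y):
    # int(round(...)) coerces float / numpy-float coordinates to plain ints
    return depth_frame.as_depth_frame().get_distance(int(round(pixel_x)), int(round(pixel_y)))
```

With the coercion in place, passing float corner coordinates such as 259.0 and 23.0 no longer raises the TypeError from the traceback above.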
Hello, I'm wondering whether there is any documentation that explains how this works, e.g. the underlying principles. Thanks for any help.
@czjczjczj1 Do you mean explaining camera calibration? That is rather off-topic for this bug report, but OpenCV has some good tutorials, e.g. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.html#calibration
> I suspect the camera might be recognized as USB2. It often leads to this type of error, since USB2 offers only limited resolutions, and hence is likely not to be sufficient.
Hello @dorodnic, I ran into the same issue: the RealSense Viewer detects my camera as attached to a USB 2.1 port, while in reality there are no USB 2.1 ports. I am working on a Jetson Nano, which only has USB 3.0 ports. The problem also occurs on my laptop, which only has 3.1 ports. I was trying to run the OpenCV C++ example and hit the same "Couldn't resolve requests" error. My device information is as follows:
Camera: D410
RealSense Viewer version: 2.35.2 (installed from vcpkg)
Camera driver: 5.12.05 (installed the driver recommended by the Viewer)
I have tried multiple cables, and only the port connected to the camera is populated. Could anyone please suggest a solution?
@AD2605 Could you please run rs-enumerate-devices on your Jetson Nano and post the result? And do you have any other setup to check whether the cables are really USB3? Thanks!
Sorry for the late reply @RealSenseSupport, the cable was USB 2.0; I will try with another cable...
@AD2605 Thanks for the update! Looking forward to your test update with USB3 cable. Thanks!
@AD2605 How about your test with USB3 cable? Looking forward to your update. Thanks!
@AD2605 Any test update from your side? Thanks!
@AD2605 Sorry that we didn't hear from you for weeks. Will close it at this point. Please feel free to create new ticket if you still have issue or questions. Thanks!
Hi, I had the same "Couldn't resolve requests" issue and have read the thread above. I managed to tweak the resolution parameters (i.e. height and width) and got the depth stream working without any errors. However, when I try to stream color at the same resolution, I get errors. Here's the code I'm running:
import pyrealsense2 as rs
import math
import numpy as np

# Main content begins
if __name__ == "__main__":
    try:
        # Configure depth and color streams of the intel realsense
        config = rs.config()
        rs.config.enable_device_from_file(config, "chris.bag")
        config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
        config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)  # adding this line causes an error

        # Start the realsense pipeline
        pipeline = rs.pipeline()
        pipeline.start(config)

        # Create align object to align depth frames to color frames
        align = rs.align(rs.stream.color)

        # Get the intrinsics information for calculation of 3D point
        unaligned_frames = pipeline.wait_for_frames()
        frames = align.process(unaligned_frames)
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        depth_intrinsic = depth.profile.as_video_stream_profile().intrinsics

        pipeline.stop()
        print("Exited without errors.")
    except Exception as ex:
        print('Exception occurred: "{}"'.format(ex))
I would appreciate any help in resolving this issue. Thanks in advance!
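Since the config above plays back from a bag file ("chris.bag"), "Couldn't resolve requests" usually means the bag does not contain a color stream matching the request: in playback mode, every enable_stream call must match a stream actually recorded in the file. Conceptually (a pure-Python sketch with hypothetical names, not the pyrealsense2 API):

```python
def unresolved_requests(requested, recorded):
    """Return requested (stream, width, height, fmt, fps) tuples missing
    from what the bag/device actually offers."""
    return [r for r in requested if r not in set(recorded)]

# Hypothetical example: a bag recorded with a depth stream only.
recorded = [("depth", 640, 480, "z16", 30)]
requested = [("depth", 640, 480, "z16", 30),
             ("color", 640, 480, "bgr8", 30)]  # this one cannot be resolved
```

With a real bag you can list what it actually contains by starting the pipeline with only enable_device_from_file (no enable_stream calls, which plays back all recorded streams) and iterating over profile.get_streams().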
For what it's worth, and since this page is the very first Google result for this issue: updating the firmware resolved the problem for me.
Hello, I just wanted to report that I had this issue as well, and switching the cable solved it. However, one peculiar detail: the USB 2.1 cable mysteriously worked for about 10 runs before giving the "Couldn't resolve requests" error. I suspect undefined behavior; it might be good to check for this specifically if possible.
Just remove the USB cable and plug it back in.
Hi,
I am trying the box_dimensioner_multicam_demo with my two D455 cameras and get the same error as the OP. They are both connected via USB3 and have the latest firmware installed.
The cameras work whenever I don't use the infrared camera, as in this script:
import numpy as np
import cv2
import pyrealsense2 as rs

connect_device = []
for d in rs.context().devices:
    if d.get_info(rs.camera_info.name).lower() != 'platform camera':
        serial = d.get_info(rs.camera_info.serial_number)
        product_line = d.get_info(rs.camera_info.product_line)
        device_info = (serial, product_line)  # (serial_number, product_line)
        connect_device.append(serial)

# Define the size of the chessboard pattern
pattern_size = (6, 9)  # Change this to the size of your chessboard

# Define the criteria for corner detection
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Create a RealSense pipeline
# config = rs.config()
# config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
# config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
# config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.bgr8, 5)
pipelines = {}
pipeline_profiles = {}
for device in connect_device:
    config = rs.config()
    config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
    # config.enable_stream(rs.stream.infrared, 1, 640, 480, rs.format.y8, 5)
    # config.enable_stream(rs.stream.infrared, 2, 640, 480, rs.format.y8, 5)
    pipeline = rs.pipeline()
    config.enable_device(device)
    pipeline_profiles[device] = pipeline.start(config)
    pipelines[device] = pipeline

# Create a RealSense align object
align = rs.align(rs.stream.color)

# Initialize the arrays to store the object points and image points
object_points = []  # 3D coordinates of corners in the world coordinate system
image_points = []   # 2D coordinates of corners in the image plane

while True:
    p = pipelines.items()
    found = np.full(shape=len(p), fill_value=False)
    i = 0
    for serial, pipeline in p:
        # Wait for a new frame from the RealSense camera
        frames = pipeline.wait_for_frames()
        # Align the depth frame to the color frame
        aligned_frames = align.process(frames)
        depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()
        # Convert the color frame to a numpy array
        color_image = np.asanyarray(color_frame.get_data())
        # Find the chessboard corners in the color image
        ret, corners = cv2.findChessboardCorners(cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY), pattern_size, None)
        if ret:
            found[i] = True
            # Refine the corner locations to sub-pixel accuracy
            corners = cv2.cornerSubPix(cv2.cvtColor(color_image, cv2.COLOR_BGR2GRAY), corners, (11, 11), (-1, -1),
                                       criteria)
            # Draw the corners on the color image
            cv2.drawChessboardCorners(color_image, pattern_size, corners, ret)
            # Create the object points for this frame
            object_point = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
            object_point[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
            object_points.append(object_point)
            # Add the refined corner locations to the image points array
            image_points.append(corners)
        # Show the color image with the detected corners
        cv2.imshow(f'{serial}', color_image)
        cv2.waitKey(1)
        i += 1
    if all(found):
        break
I tried different configurations, cables, and systems, but everything results in RuntimeError: Couldn't resolve requests. In the RealSense Viewer I am able to access and view the IR stream. I've been stuck on this for a whole day now; what can I do?
Hi @simhue Your stream definitions are correct for a D455, which has a minimum of 5 FPS instead of the 6 FPS of other 400 Series cameras.
Do you have the options for Infrared and Infrared 2 listed in the Stereo Module section of the RealSense Viewer, please?
Thanks for the quick reply @MartyG-RealSense
No, I do not.
If Infrared 2 is not available then the RuntimeError: Couldn't resolve requests error will occur, because the script requests a stream configuration that the camera does not support at the time the script is run.
Does the box_dimensioner_multicam project run if you have the Infrared 2 stream commented out but Infrared enabled?
Infrared 2 is not supported in USB 2 mode, only in USB 3 mode, though if it says '3.2' beside the camera name then the camera should be on a USB 3 connection.
Are you using Windows or Linux?
I am on Windows 11.
I still get the error with Infrared enabled, but if I change the format to UYVY the error disappears. Instead I get another error:
2 devices have been found
Traceback (most recent call last):
File "C:\Users\s.huening\repos\librealsense\wrappers\python\examples\box_dimensioner_multicam\box_dimensioner_multicam_demo.py", line 157, in <module>
run_demo()
File "C:\Users\s.huening\repos\librealsense\wrappers\python\examples\box_dimensioner_multicam\box_dimensioner_multicam_demo.py", line 85, in run_demo
transformation_result_kabsch = pose_estimator.perform_pose_estimation()
File "C:\Users\s.huening\repos\librealsense\wrappers\python\examples\box_dimensioner_multicam\calibration_kabsch.py", line 203, in perform_pose_estimation
corners3D = self.get_chessboard_corners_in3d()
File "C:\Users\s.huening\repos\librealsense\wrappers\python\examples\box_dimensioner_multicam\calibration_kabsch.py", line 162, in get_chessboard_corners_in3d
found_corners, points2D = cv_find_chessboard(depth_frame, infrared_frame, self.chessboard_params)
File "C:\Users\s.huening\repos\librealsense\wrappers\python\examples\box_dimensioner_multicam\helper_functions.py", line 90, in cv_find_chessboard
chessboard_params[0], chessboard_params[1]))
cv2.error: OpenCV(4.6.0) D:\a\opencv-python\opencv-python\opencv\modules\calib3d\src\calibinit.cpp:503: error: (-2:Unspecified error) in function 'bool __cdecl cv::findChessboardCorners(const class cv::_InputArray &,class cv::Size_<int>,const class cv::_OutputArray &,int)'
> Only 8-bit grayscale or color images are supported:
> 'depth == CV_8U && (cn == 1 || cn == 3 || cn == 4)'
> where
> 'type' is 2 (CV_16UC1)
Edit: I tried every USB port on my computer, even the USB 2 ports, which were recognized correctly by the Viewer. But only the Infrared stream appears, not Infrared 2. The cameras are definitely plugged into USB 3 ports.
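The OpenCV error at the end is separate from the USB issue: findChessboardCorners only accepts 8-bit images, and a depth (or 16-bit infrared) frame arrives as CV_16UC1. A numpy-only sketch of a common workaround, scaling a 16-bit image down to 8 bits before corner detection (the sample itself instead expects an 8-bit y8 infrared image, so `to_8bit_gray` is an illustrative helper, not part of the demo):

```python
import numpy as np

def to_8bit_gray(image16):
    """Scale a 16-bit single-channel image to 8-bit for cv2.findChessboardCorners.

    Normalizes by the image's own maximum so the full 8-bit range is used;
    a constant divisor (e.g. // 256) also works but can produce a dark image.
    """
    image16 = np.asarray(image16, dtype=np.uint16)
    peak = max(int(image16.max()), 1)            # avoid division by zero
    scaled = image16.astype(np.float32) * 255.0 / peak
    return scaled.astype(np.uint8)
```

The resulting uint8 array satisfies the 'depth == CV_8U' assertion quoted in the error above and can be passed straight to cv2.findChessboardCorners.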
I recommend next trying resetting the camera to its factory-new default settings in the RealSense Viewer using the instructions at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487
If that does not work then please try uninstalling the Depth and RGB camera drivers in the Windows Device Manager and then re-installing them. This has been known to correct missing infrared driver options. Instructions for doing so are here:
https://support.intelrealsense.com/hc/en-us/community/posts/4419989666323/comments/4431239847443
On step 4, if your Device Manager interface does not provide a tick-box to select then skip step 4 and move on to step 5.
After reinstalling the drivers it works!
Thank you for your help!
You are very welcome, @simhue - it's great to hear that a driver uninstall-reinstall resolved your missing infrared problem!
Issue Description
Hello, when I ran box_dimensioner_multicam_demo.py, I got an error saying:
RESTART: E:\project\test 3\box_dimensioner_multicam\box_dimensioner_multicam_demo.py
2 devices have been found
Traceback (most recent call last):
  File "E:\project\test 3\box_dimensioner_multicam\box_dimensioner_multicam_demo.py", line 143, in <module>
    run_demo()
  File "E:\project\test 3\box_dimensioner_multicam\box_dimensioner_multicam_demo.py", line 48, in run_demo
    device_manager.enable_all_devices()
  File "E:\project\test 3\box_dimensioner_multicam\realsense_device_manager.py", line 168, in enable_all_devices
    self.enable_device(serial, enable_ir_emitter)
  File "E:\project\test 3\box_dimensioner_multicam\realsense_device_manager.py", line 153, in enable_device
    pipeline_profile = pipeline.start(self._config)
RuntimeError: Couldn't resolve requests
What should I do to get rid of this error?