IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

point cloud generation - x and y coordinates not accurate #12503

Open swishswish123 opened 6 months ago

swishswish123 commented 6 months ago

Required Info
Camera Model D400
Firmware Version
Operating System & Version MacOS
Kernel Version (Linux Only)
Platform
SDK Version
Language Python
Segment

Issue Description

I am generating a point cloud of an object with known dimensions and known distance from the camera. When I check the coordinates, the z distance seems correct, but the dimensions in the x and y directions are always smaller than they should be.

This is the method I use as part of a class to generate the point cloud from the images:

    def generate_pc(self, save_file):
        pc = rs.pointcloud()
        # map the color frame onto the depth points as texture
        pc.map_to(self.color_frame)
        # compute the pointcloud from the depth frame
        pointcloud = pc.calculate(self.depth_frame)
        # export to ply, attaching the color frame as texture
        pointcloud.export_to_ply(save_file, self.color_frame)
        self.pointcloud = pointcloud

        return pointcloud

Would appreciate any help!

Thanks so much

MartyG-RealSense commented 6 months ago

Hi @swishswish123 Exporting RealSense color data to a ply pointcloud file with export_to_ply usually does not work in Python, unfortunately. The only Python export script that has been shown to successfully export color to ply is at https://github.com/IntelRealSense/librealsense/issues/6194#issuecomment-608371293

swishswish123 commented 6 months ago

Thanks @MartyG-RealSense for the quick reply!

So for me the problem is not getting the point cloud itself; I do get the point cloud with color, and the Z coordinate is correct. My problem is that the size of my object in the X and Y directions is smaller than it should be.

When you say export_to_ply doesn't usually work, what is usually the problem? From the comment you linked above, it seemed they couldn't get both the colours and the vertex normals, which isn't an issue I'm having.

MartyG-RealSense commented 6 months ago

export_to_ply usually does not export color data to ply except with that one example, but exporting depth data only to ply works fine. The reason for the color export problem is not known, unfortunately.

There is an alternative export instruction called save_to_ply that can export color data, though this is known to have problems too. You are welcome to try the save_to_ply export script at https://github.com/IntelRealSense/librealsense/issues/7747#issuecomment-725346152 to see whether it works for you.
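
As a rough sketch (not the exact script in the linked comment), save_to_ply is used as a processing block; the option names below are from the pyrealsense2 API:

    import pyrealsense2 as rs

    pipe = rs.pipeline()
    pipe.start()
    frames = pipe.wait_for_frames()

    # colorize the frameset so the exporter has texture for the vertices
    colorized = rs.colorizer().process(frames)

    ply = rs.save_to_ply("out.ply")
    ply.set_option(rs.save_to_ply.option_ply_binary, False)  # plain-text ply
    ply.set_option(rs.save_to_ply.option_ply_normals, True)  # include normals
    ply.process(colorized)
    pipe.stop()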

If your problem is one of incorrect scale when importing the exported ply file into another program (such as 3D modelling software like MeshLab and Blender), this can occur because the measurement scale in the program that the ply is imported into needs to be set to the same scale as the RealSense SDK that the ply was exported from. The default depth unit scale of the SDK for most RealSense 400 Series cameras (except D405) is 0.001 meters, or 1 millimeter.
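
If you want to confirm the scale programmatically rather than assume 0.001, it can be queried from the depth sensor (a minimal sketch):

    import pyrealsense2 as rs

    pipe = rs.pipeline()
    profile = pipe.start()
    depth_sensor = profile.get_device().first_depth_sensor()
    # 0.001 m (1 mm) on most 400 Series cameras; D405 differs
    print("Depth scale (meters per unit):", depth_sensor.get_depth_scale())
    pipe.stop()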

swishswish123 commented 6 months ago

Is there a way of getting the points and colours of the point cloud as np arrays without saving it as a .ply file?

MartyG-RealSense commented 6 months ago

The Python script at https://github.com/IntelRealSense/librealsense/issues/4612#issuecomment-566864616 is an example of generating a depth and color pointcloud with pc.calculate, storing the depth and color data in separate numpy arrays, and then retrieving the vertex coordinates and texture coordinates from those arrays with print instructions.
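
In outline (a sketch rather than the linked script verbatim), the vertices and texture coordinates can be pulled into numpy arrays like this:

    import numpy as np
    import pyrealsense2 as rs

    pipe = rs.pipeline()
    pipe.start()
    frames = pipe.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    pc = rs.pointcloud()
    pc.map_to(color_frame)
    points = pc.calculate(depth_frame)

    # XYZ coordinates in meters as an (N, 3) float32 array
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    # normalized UV texture coordinates as an (N, 2) float32 array
    tex = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)

    print(verts[:5])
    print(tex[:5])
    pipe.stop()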

swishswish123 commented 6 months ago

Thanks @MartyG-RealSense that is very useful to know!

However, when I tried this method of creating point clouds the same thing happens:

My object of interest is over 50 mm wide, but when I measure the object width in the generated pointcloud it is only 43 mm.

MartyG-RealSense commented 6 months ago

Higher resolution can provide better accuracy. So if, in the script at https://github.com/IntelRealSense/librealsense/issues/4612#issuecomment-566864616, you change the resolutions from 640x480 to 848x480 for depth and color, does the XY measurement accuracy improve?
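
The change amounts to editing the two enable_stream calls in that script, roughly like this (variable names assumed):

    config = rs.config()
    # 848x480 instead of 640x480 for both depth and color
    config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
    pipeline.start(config)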

enkaoua commented 6 months ago

I think that works now! Thank you so much for the help :)

One final related question so I understand this correctly- when I align the pointcloud to the color stream, does that mean that the pointcloud generated is therefore relative to the RGB sensor?

If I now grab an image with an aruco marker that tells me the 3D position relative to the camera, and I use the RGB image for that, would the position I obtain of the marker be the same as in the pointcloud?

MartyG-RealSense commented 6 months ago

Yes, when depth is aligned to color, the center-line of the RGB sensor becomes the depth origin point.

During depth to color alignment, the depth field of view resizes to match the RGB sensor's field of view size. The color field of view size does not change. So a coordinate on the RGB image should correspond to the same point on the RGB-aligned pointcloud.
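
For reference, a minimal sketch of depth-to-color alignment with the SDK's align processing block:

    align = rs.align(rs.stream.color)  # align depth onto the color stream
    frames = pipeline.wait_for_frames()
    aligned_frames = align.process(frames)
    aligned_depth_frame = aligned_frames.get_depth_frame()
    color_frame = aligned_frames.get_color_frame()
    # a pixel (u, v) on color_frame now corresponds to the same (u, v)
    # on aligned_depth_frame, and so to the same pointcloud vertex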

swishswish123 commented 6 months ago

Perfect thank you so much for your help @MartyG-RealSense and for the quick responses.

One final unrelated question before the issue is closed: I am struggling to get my camera working on my Mac (Monterey). I have looked through most of the relevant issues and still haven't managed to solve it. Is there a specific issue I should continue the conversation on, or should I start a new one?

MartyG-RealSense commented 6 months ago

The two main ways to install librealsense for MacOS Monterey are:

1. Use the guide at the link below.

   https://lightbuzz.com/realsense-macos/

2. Perform a brew installation.

   https://formulae.brew.sh/formula/librealsense

MartyG-RealSense commented 6 months ago

Hi @swishswish123 Do you require further assistance with this case, please? Thanks!

enkaoua commented 6 months ago

Apologies for the late reply @MartyG-RealSense, the initial case is resolved :)

However, I still can't manage to get RealSense working on newer versions of macOS. Should I post on another related issue, start a new one, or continue the conversation here?

Thanks so much for the help!

MartyG-RealSense commented 6 months ago

You are very welcome!

As far as I am aware, issues with installing librealsense on MacOS Ventura and Sonoma have not yet been resolved, though it is possible to get it working on Monterey.

The two main ways to install librealsense for MacOS Monterey are:

1. Use the guide at the link below.

   https://lightbuzz.com/realsense-macos/

2. Perform a brew installation.

   https://formulae.brew.sh/formula/librealsense

The brew method is compatible with Ventura and Sonoma.

MartyG-RealSense commented 6 months ago

Hi @enkaoua Do you require further assistance with this case, please? Thanks!

swishswish123 commented 6 months ago

Unfortunately those methods don't work for me... The Viewer installed with brew seems to crash every time I connect my camera. Beyond the Viewer, I ultimately need pyrealsense2 to work on my laptop, and it always gives me an error that the camera isn't connected:

    pipeline.start(config)
    RuntimeError: No device connected

MartyG-RealSense commented 6 months ago

Hi @swishswish123 If you are using brew then you are a MacOS user, yes? Which MacOS version are you using? For example, Monterey can work with RealSense cameras but Ventura has problems that are yet to be resolved.

Using a USB C to C cable instead of the official USB Type-C (A to C) can also cause problems on Mac.

swishswish123 commented 6 months ago

Yes, I'm a MacOS user and on Monterey. I've managed to get it working on an old Mac with macOS Big Sur using pyrealsense2-macosx, but on my M1 it gives the above connection error.

I am using the USB A cable with the official Apple USB A to USB C adapter. Is that okay? My Mac doesn't have USB A ports.

MartyG-RealSense commented 6 months ago

Adaptors can be problematic, but it is difficult to avoid using them with Macs because they commonly have only USB C ports.

I note that neither the Lightbuzz source-code install guide nor the brew method worked for you.

Does the Viewer still crash if you launch it whilst the camera is already plugged in?

swishswish123 commented 6 months ago

So the Viewer only crashes when I plug the camera in.

MartyG-RealSense commented 6 months ago

Which version of the RealSense Viewer has been installed with brew? You can find this without inserting the camera by looking at the version number on top of the Viewer window.

swishswish123 commented 6 months ago

V2.54.2

MartyG-RealSense commented 6 months ago

Does your camera have firmware driver version 5.15.1.0 installed? This is the recommended firmware for 2.54.2.

You can check the firmware version in the Viewer on your Big Sur Mac by clicking the Info button near the top of the Viewer's options side-panel.

swishswish123 commented 6 months ago

Ah, it seems I had v5.13.0.50. I've now updated it and checked again, but it still crashes when I connect the camera.

In case it's of any use, this is the log I see in the terminal when I connect the camera:

    02/01 14:24:48,929 INFO [0x104d70580] (rs.cpp:2697) Framebuffer size changed to 2116 x 1374
    02/01 14:24:48,929 INFO [0x104d70580] (rs.cpp:2697) Scale Factor is now 1
    02/01 14:24:56,920 INFO [0x16bae7000] (context.cpp:336) Found 1 RealSense devices (mask 0xff)
    02/01 14:24:56,927 ERROR [0x16c0eb000] (handle-libusb.h:127) failed to claim usb interface: 0, error: RS2_USB_STATUS_ACCESS
    02/01 14:24:56,927 ERROR [0x104d70580] (sensor.cpp:661) acquire_power failed: failed to set power state
    02/01 14:24:56,927 WARNING [0x104d70580] (rs.cpp:312) null pointer passed for argument "device"
    02/01 14:24:56,927 WARNING [0x104d70580] (rs.cpp:2700) Couldn't refresh devices - failed to set power state
    Assertion failed: (list_empty(&darwin_cached_devices)), function darwin_init, file darwin_usb.c, line 605.
    zsh: abort realsense-viewer

MartyG-RealSense commented 6 months ago

The log message Found 1 RealSense devices indicates that the camera was initially detected when inserted, but then it could not be accessed. The subsequent message referencing the file darwin_usb.c has appeared in a couple of other Monterey Mac cases in the past, suggesting that it is a Mac-specific issue. One RealSense user suggested running the Viewer from the terminal with sudo, as described at https://github.com/IntelRealSense/librealsense/issues/9916#issuecomment-1026756545

swishswish123 commented 6 months ago

With sudo it keeps the app open, but I get the following error, so I am not able to do anything within the Viewer:

[screenshot of the error dialog]

With pyrealsense2-macosx I still get "No device connected" even when running with sudo.

MartyG-RealSense commented 6 months ago

What happens when you click OK to close the box? Is the Stereo Module able to be started by clicking on the red 'off' icon?

swishswish123 commented 6 months ago

Unfortunately I can't get to the Stereo Module, as when I click OK the error box just comes back, even if I press "don't show this error again".

MartyG-RealSense commented 6 months ago

Does it make any difference if you unplug the micro-sized end of the USB cable from the camera, turn it around the other way and re-insert it into the camera? (USB Type-C cables are two-way insertion at the micro-sized end.)

swishswish123 commented 6 months ago

We're definitely getting somewhere: the realsense-viewer is now working when I turn the cable around! 🥳

When using pyrealsense2, however, I still get an error on pipeline.start(config), although this time it is a segmentation fault instead of a camera connection error... any ideas?

MartyG-RealSense commented 6 months ago

Segmentation faults are a difficult error to solve and do not have a clear cause, unfortunately.

Does it still occur if you remove 'config' from the pipe start line's bracket so that the script ignores the config lines and applies the camera's default stream configuration instead?

swishswish123 commented 6 months ago

Yeah... and it also doesn't give much info :(

Without passing the config, it gets through initialisation smoothly, but when I grab the data from the frameset with left_frame.get_data() it throws the following error: RuntimeError: null pointer passed for argument "frame_ref"

MartyG-RealSense commented 6 months ago

Are you using an index number of '1' to retrieve the left infrared image?

left_frame = frames.get_infrared_frame(1)

swishswish123 commented 6 months ago

Yes precisely

MartyG-RealSense commented 6 months ago

What happens if you remove the 1 from the bracket? If an index number is not provided then the left infrared stream is automatically selected by default.

swishswish123 commented 6 months ago

Sorry, not sure why I closed the issue; must've been a mistake!

Without the 1 in the bracket I get the same error: null pointer passed for argument "frame_ref"

MartyG-RealSense commented 6 months ago

It's no trouble at all. Can you post your complete script in a comment, please?

swishswish123 commented 6 months ago

Sure, it's part of a much longer script, so I'll put the relevant code:

import cv2
import numpy as np
import pyrealsense2 as rs

# named `source` rather than `rs` to avoid shadowing the pyrealsense2 module
source = RealsenseVideoSourceAPI()
source.initialise_stereo_live_stream()

while True:
    ret, right_frame, left_frame, right_image, left_image, color_image, depth_image = source.read_stereo_image()

    if not ret:
        continue

    cv2.imshow('', color_image)

    k = cv2.waitKey(1)
    if k % 256 == 27:
        # ESC pressed
        print("Escape hit, closing...")
        break
    elif k == ord('q'):
        print('q pressed')
        break

where the class RealsenseVideoSourceAPI() is as follows:


class RealsenseVideoSourceAPI():
    def __init__(self):

        # camera general
        self.pipeline = rs.pipeline()
        self.config = rs.config()
        self.ret = False
        self.color_intrinsics = None
        self.depth_intrinsics = None
        self.streaming = False
        # recorded frames   
        self.frame = None
        self.depth_frame = None
        self.color_frame = None

        self.depth_image = None
        self.color_image = None

        # 3D reconstruction
        self.pointcloud = None

    def initialise_stereo_live_stream(self):
        self.config.enable_stream(rs.stream.infrared, 1, 848, 480, rs.format.y8, 30)
        self.config.enable_stream(rs.stream.infrared, 2, 848, 480, rs.format.y8, 30)
        self.config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)
        self.config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)

        self.pipeline.start()
        self.streaming = True

        for _ in range(5):
            self.read_stereo_image()

        self.estimate_calibration_live()

    def stop(self):
        self.pipeline.stop()
        self.streaming = False

    def read_stereo_image(self):
        try:
            frames = self.pipeline.wait_for_frames()
            frames.keep()
            self.frame = frames
            self.ret = True
        except:
            return False, False, False, False, False, False, False

        # obtain data from frames
        left_frame = frames.get_infrared_frame(1)
        right_frame = frames.get_infrared_frame(2)
        self.color_frame = frames.get_color_frame()
        self.depth_frame = frames.get_depth_frame()

        if not self.ret:
            return False, False, False, False, False, False, False

        # Convert images to numpy arrays
        left_image = np.asanyarray(left_frame.get_data())
        right_image = np.asanyarray(right_frame.get_data()) # extract infrared data to np array
        color_image = np.asanyarray(self.color_frame.get_data())
        depth_image = np.asanyarray(self.depth_frame.get_data())

        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)

        images = np.hstack((cv2.cvtColor(left_image,cv2.COLOR_GRAY2RGB), cv2.cvtColor(right_image,cv2.COLOR_GRAY2RGB), color_image, depth_colormap))

        # Show images
        cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('RealSense', images)
        cv2.waitKey(1)

        self.color_image = color_image
        self.depth_image = depth_image

        return True, right_frame, left_frame, right_image, left_image, color_image, depth_image

Note that at the moment I have removed the config from pipeline.start(), as otherwise it gets stuck on a segmentation fault on that line.

MartyG-RealSense commented 6 months ago

Please comment out frames.keep() to check whether storing the frames in memory with Keep() is causing a problem.

swishswish123 commented 6 months ago

Still the same error :(

The only reason I have that line is that I used to get segmentation errors randomly when reading frames, and that seemed to reduce how often it happened.

MartyG-RealSense commented 6 months ago

Next, please try commenting out these sections as they would not usually appear in a RealSense Python script.

    except:
        return False, False, False, False, False, False, False

    if not self.ret:
        return False, False, False, False, False, False, False

swishswish123 commented 6 months ago

Sure, although it's the same problem, as those sections aren't run. In debugger mode I can see they are skipped and the code breaks at:

    left_image = np.asanyarray(left_frame.get_data())

With the same error RuntimeError: null pointer passed for argument "frame_ref"

MartyG-RealSense commented 6 months ago

Are the infrared streams required for your application? They are not used for the generation of a pointcloud; only depth and color are.

The depth frame is constructed inside the camera hardware from raw left and right infrared frames, not from the infrared streams. This is why depth can be published without the infrared streams being enabled: the raw infrared frames are not the same thing as the infrared streams.
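
As an illustration only (assuming the rest of your class is unchanged, and with a hypothetical method name), the stream setup could be reduced to depth and color:

    def initialise_depth_color_live_stream(self):
        # depth and color are all that pc.calculate needs; no infrared streams
        self.config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
        self.config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 30)

        self.pipeline.start(self.config)
        self.streaming = True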

swishswish123 commented 6 months ago

Oh wow, that actually works!!! Thank you SO SO much @MartyG-RealSense, I should've contacted you 6 months ago!!

So I guess I will have to stick to the default camera settings. Do you know what they are, by any chance? I wanted to at least change the stream to RGB so the reconstruction has sensible colours.

MartyG-RealSense commented 6 months ago

You are very welcome!

The default stream profile that is usually applied if config is not used will be 848x480 depth, 848x480 left infrared and 1280x720 RGB, all at 30 FPS.

On the D415 camera model, 1280x720 depth is the default instead of 848x480.

Right infrared (index 2) is not enabled in the default stream profile. If it is required then it has to be defined in a config instruction and is only accessible on a USB3 connection. It cannot be used on USB2.
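
If you want to confirm the defaults on your own camera, the active profile can be printed after starting without a config (a small sketch):

    profile = pipeline.start()  # no config, so the default profile is applied
    for stream in profile.get_streams():
        print(stream.stream_name(), stream.format(), stream.fps())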

swishswish123 commented 5 months ago

Hey @MartyG-RealSense, sorry for taking a while to reply. I was testing adding the config but it wasn't working, and when I went back to the code I had with the default settings, the camera connection stopped working again, and I'm not sure why... so I'm back to the same problem.

With the realsense-viewer it seems to be a bit unpredictable: with sudo it sometimes works and sometimes it just gives me the power state error that I can't exit.

MartyG-RealSense commented 5 months ago

When you run the Python script, is the RealSense Viewer closed down? And when the Viewer is used, is your Python script inactive? You can only enable a stream in one active program at a time, as that stream becomes inaccessible to other programs that are subsequently run until the program that originally started the stream is shut off.

swishswish123 commented 5 months ago

I've tried three different ways and none worked: with the Viewer closed, with the Viewer running but not streaming, and finally with the Viewer running and streaming.

MartyG-RealSense commented 5 months ago

Does the situation improve if you add the code below to your Python script to reset the camera when the script is run? It should be placed directly before your pipeline start line.

    ctx = rs.context()
    devices = ctx.query_devices()
    for dev in devices:
        dev.hardware_reset()

swishswish123 commented 5 months ago

I guess that helps give us more information about the error. The script fails on the line for dev in devices: with the following error:

    RuntimeError: failed to set power state