luxonis / depthai-core

DepthAI C++ Library

OAK-D-PRO-POE depth data is significantly warped #397

Closed diablodale closed 7 months ago

diablodale commented 2 years ago

The depth data I am getting from my pre-production OAK-D-PRO-POE is significantly warped. As the depth xy location moves away from the principal point, the depth is too large. This causes flat surfaces perpendicular to the sensor to appear convex.

Calibrating the sensor did not help or make any noticeable change. Using my test app (still running), I can switch to my OAK-D sensor and that depth data is good. Therefore, the errant depth data on the OAK-D-PRO-POE is caused by something external to my app.

Setup

Repro

  1. config, build, and install depthai-core for x64, release, shared lib
  2. write an app that visualizes a pointcloud combining the device-side aligned color and stereodepth feeds. Use the StereoDepth::depth data feed (a minimal pipeline sketch follows this list).
  3. Place your OAK-D-PRO-POE approx 3 meters from a flat wall. Ensure the sensor is perpendicular to the wall.
  4. Start your app and move your app's viewpoint of the pointcloud so you can see/verify the wall is perpendicular and flat in the pointcloud visual.
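
A minimal sketch of such an app's pipeline in depthai-core C++ (not the reporter's actual plugin; resolutions and queue sizes are illustrative, and the pointcloud rendering itself is app-specific):

```cpp
#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;

    // Mono cameras feeding the stereo node
    auto left = pipeline.create<dai::node::MonoCamera>();
    auto right = pipeline.create<dai::node::MonoCamera>();
    left->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    left->setBoardSocket(dai::CameraBoardSocket::LEFT);
    right->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
    right->setBoardSocket(dai::CameraBoardSocket::RIGHT);

    // Stereo depth, aligned device-side to the RGB camera
    auto stereo = pipeline.create<dai::node::StereoDepth>();
    stereo->setLeftRightCheck(true);
    stereo->setSubpixel(true);
    stereo->setDepthAlign(dai::CameraBoardSocket::RGB);
    left->out.link(stereo->left);
    right->out.link(stereo->right);

    // Stream the depth frames to the host for pointcloud rendering
    auto xout = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("depth");
    stereo->depth.link(xout->input);

    dai::Device device(pipeline);
    auto q = device.getOutputQueue("depth", 4, false);
    while(true) {
        auto frame = q->get<dai::ImgFrame>();  // uint16 depth in millimeters
        // ... back-project to a pointcloud and render (app-specific)
    }
}
```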

Result

It isn't flat. The wall is convex. The perpendicular wall should be flat. And the hallway to the right should have straight walls. oak-d-pro-poe2

oak-d-pro-poe1-aftercali

Expected

Flat walls. Here is the exact same viewpoint from an OAK-D. This is good (noisy) depth data. oak-d

Same viewpoint from a Kinect Azure. This is a good reference from which to compare the others. kinect3-2

Luxonis-Brandon commented 2 years ago

Thanks for the thorough testing here! @saching13 - do you think the lack of factory calibration is the culprit on OAK-D-Pro-PoE? The OAK-D that @diablodale has is factory calibrated, while no OAK-D-Pro-PoE units are factory calibrated yet - they are simply hand-calibrated in-office instead.

Thoughts?

diablodale commented 2 years ago

@Luxonis-Brandon, hi. No factory calibration here. My OAK-D was the "early run" before you did factory cali. I have manually calibrated it, and in this app I use the default cali/spec behavior of StereoDepth.

I also manually calibrated my pre-production (thank u) OAK-D-PRO-POE. Here is a set of pictures all from this PoE. As you see, all of them show the convex wall.

Before calibration oak-d-pro-poe1

After calibration, no calls to setFocalLengthFromCalibration() api oak-d-pro-poe1-aftercali

After cali and setFocalLengthFromCalibration(false) oak-d-pro-poe1-aftercali-sflfc-false

After cali and setFocalLengthFromCalibration(true) oak-d-pro-poe1-aftercali-sflfc-true

Luxonis-Brandon commented 2 years ago

Thanks! So actually I think your OAK-D was an early run of our factory calibration. We initially had a very small (50 unit?) batch that did not include RGB calibration as part of the factory calibration. So I think yours is in fact factory calibrated, as I don't think we had a single unit sold in an enclosure that wasn't factory calibrated - only bare PCBA like below were hand-calibrated: image

That said, I think it's a moot point, as you recalibrated so it's a hand calibration. So my new guess is that there's something off with the settings/understanding of the optics/FOV/etc. that is causing this. But I'll leave this to Sachin/Szabi who understand this a LOT better than me.

diablodale commented 2 years ago

Got it. That makes sense. Pointclouds are great for spotting errant data by eye, and they exercise so much of the system/math.

During my POE calibration it created mesh files. I updated my app to use those mesh files. Below are two pictures. The wall appears straighter with the mesh files. But... there is significant quantization occurring. I can see it here in video... maybe these pics will help.

Seems straighter, but I can see a weird artifact on the left side of the wall. Notice how it suddenly protrudes forward? oak-d-pro-poe1-aftercali-mesh1

Moving my virtual camera forward and above, I can see quantization and a block of incorrect values on that left side of the wall. oak-d-pro-poe1-aftercali-mesh2

diablodale commented 2 years ago

This is from my OAK-D-Lite, which has factory calibration. It's somewhat difficult to discern with the significantly increased noise, but I ?think? the walls are straight. oak-d-lite1

saching13 commented 2 years ago

image image

The above results are from an OAK-D-PRO-POE. I didn't enable the projector in the above test, and it looks flat in my case. Since you mentioned that the warping is reduced when you use mesh files, the CCM in your unit might have more distortion than usual. So in this case, during calibration you should make sure you get a good amount of calibration-board coverage around the edges, which will help in overcoming the issue. Can you try that and let me know if it makes it better?

To reduce the noise, can you set the stereo to HIGH_ACCURACY mode? `stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_ACCURACY)`
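
(For the C++ library this issue is filed against, the equivalent would be along these lines, assuming `stereo` is a pointer to a `dai::node::StereoDepth` node:)

```cpp
// C++ equivalent of the Python one-liner above
stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
```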

diablodale commented 2 years ago

Hi. Would you please describe this calibration process request in more detail? 🤓 I don't understand the process you want me to follow. Once I understand, I can do it again. If I remember, my epi was ~0.11.

It is my understanding that the current calibrate.py creates intrinsics/extrinsics, and that the current depthai-core multicam branch does not use distortion coeffs. How could different intrinsics/extrinsics improve what looks like barrel/pincushion distortion? Yes... I understand this is a pre-production unit, so maybe it is not manufactured well and the lens is poor. Thinking forward: can this happen with a production unit? If so, is that production PRO unit (whose main feature is more accurate depth) forever unable to produce accurate depth, because the only choices are either barrel/pincushion distortion (which creates convex surfaces) or low-quality meshfile-quantized depth? Something here seems... off target. 🤔

I have low confidence in the meshfile being a fix. The two meshfile pictures look flat because everything is quantized. The wall is perpendicular; therefore when it is quantized, it will appear perfectly flat, as it does in the two screenshots. It is not possible to see convexity when it is quantized. Whatever the quant is (trunc, bitshift, round, div, etc.)... I could only notice that it jumped one depth value lower on the left side, like a Lego(tm). Moving the GL camera in the 2nd meshfile picture, we can clearly see it jumped two legos.
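
To illustrate the flattening effect with made-up numbers (not measurements from this thread): any bulge smaller than the quantization step collapses into the same bucket, so a gently convex wall renders perfectly flat.

```cpp
#include <cstdio>

// Toy example: a wall at ~3000 mm with a small convex bulge, quantized with a
// hypothetical 32 mm step. Every point lands in the same bucket, so the
// rendered surface is perfectly flat and the convexity is hidden.
int main() {
    const int stepMm = 32;                              // hypothetical quantization step
    for(int bulgeMm : {0, 10, 20}) {
        int trueDepth = 3000 - bulgeMm;                 // convex center is slightly nearer
        int quantized = (trueDepth / stepMm) * stepMm;  // truncate to the step
        std::printf("true=%d mm -> quantized=%d mm\n", trueDepth, quantized);
    }
    return 0;                                           // all three print 2976 mm
}
```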

Does the meshfile codepath intentionally cause low-fidelity quantized depth? I hope not. It looks like a bug. It is a tremendous loss of depth fidelity, unusable if that is the intended behavior. Well, maybe if I was coding Minecraft. 😂 This loss of fidelity seems to be an issue independent from the convex depth in the OP.

I don't use profiles; instead I use individual settings. I see the enum profile->settings map; my settings are slightly different. With the high-noise oak-d-lite (and somewhat the oak-d) I sometimes had to adjust confidence to get enough pixels to discern shape - in realtime video I can see shape without adjusting confidence. The screenshots I posted are a fair representation of shape, even though I might have needed to adjust confidence to get enough pixels at the moment of the screenshot.

Most of the OAK pictures you saw above are with:

- `setConfidenceThreshold(191);`
- `setLeftRightCheck(true);`
- `setLeftRightCheckThreshold(10);`
- `setSubpixel(true);`
- `setMedianFilter(dai::MedianFilter::KERNEL_5x5);`
- `setExtendedDisparity(false);`
- `setRectifyEdgeFillColor(0);`
- `setDepthAlign(dai::StereoDepthConfig::AlgorithmControl::DepthAlign::CENTER);`

Depth sensor = 400p, aligned to color = 832x480. (I also tried 480p -> 1080p... it was distorted also.)

Would you please retest your PRO-POE? Use the laser dot projector as I did; there aren't many voxels for me to discern shape in your pictures. I used 200mA; naturally you may need other power. And use settings similar to mine above.

saching13 commented 2 years ago

> Would you please describe this calibration process request in more detail? 🤓 I don't understand the process you want me to follow. Once I understand, I can do it again. If I remember, my epi was ~0.11.

On this, my first question would be: can you see distortion on the left/right camera frames? If yes, what I'm suggesting is to collect more data at the edges of the image. For example: hold the checkerboard slightly at the corners in some of the images, so that when you are done with the 13 images, the board has covered every part of the image.

I just did another test, and yes, I agree that mesh is not the fix and distortions are not the reason you are seeing a curve.

> Does the meshfile codepath intentionally cause low-fidelity quantized depth?

No. The mesh file is quantized, but it is expanded in the firmware and applied using the warp engine to remove distortions during the rectification stage of stereo.

Here are my stereo settings for this test. They are slightly different in LR check, because I have noticed that removes a lot of noise when it comes to pointclouds.

- `setConfidenceThreshold(200);` // not much diff with 191
- `setLeftRightCheck(true);`
- `setLeftRightCheckThreshold(4);` // this will return only the more accurate matches in LR check
- `setSubpixel(true);`
- `setMedianFilter(dai::MedianFilter::KERNEL_5x5);`
- `setExtendedDisparity(false);`
- `setRectifyEdgeFillColor(0);`
- `setDepthAlign(dai::StereoDepthConfig::AlgorithmControl::DepthAlign::CENTER);`

Depth sensor = 400p.

OAK-D-PRO-POE: image In this image, you can see it is straight, but at the corner it looks slightly notched. From what I see, it is the speckles jumping.

OAK-D-PRO-POE-TOP: Here is a proper top view that shows the same. Due to small fluctuations, the wall seems to spread over some additional area, which is formed by the speckles here. Adding filtering might make it look like a flat plane.

OAK-D: image

In this one, you can see much of the information from the scene is lost. And in the top view it looks like it is not in a plane. image

saching13 commented 2 years ago

I forgot to add the scene information. You can see the image at the bottom left corner. image

diablodale commented 2 years ago

Hi. I see similar variations in the data at the edges. In your last picture, on each edge, you can see the larger variation... values that are too large. I see similar things in mine.

In this video https://www.youtube.com/watch?v=uwG_j5OFPIs the d-pro-poe is on a table angled down to the floor (lots of texture), dot emitter on, with the sensor imager pointing at a spot 1.8m from the floor. (The measured value is 2m, which is 10% incorrect... a separate issue or cali problem.)

In the video I move the virtual camera around to see how "flat" the floor is. In general it is flat until the edges. It has the same "too large" distance at those edges, again creating the convex shape... the same thing that appears in your last picture.

I do not yet have coefficient undistortion (unless it is automatically done by StereoDepth aligning). Does your app above use any undistortion coeffs?

diablodale commented 2 years ago

I think there is something unexpected happening with the oak-d-pro-poe calibration... or... something wrong with the lenses/sensors on my unit.

Today I got a big, flat aluminum calibration target: 13x9 squares, each whole square 6.0cm and each symbol 4.7cm. Since the POE can't really do 30fps with the cali script, I changed to 10fps.

I ran the calibration script with this D-Pro-PoE unit. I ran cali twice, and both times I noticed the same unexpected result.

  1. I copied the OAK-D-PRO json file, renamed it ...POE, and changed the string within it to match
  2. Ran cali with `python calibrate.py -s 6.0 -ms 4.7 -nx 13 -ny 9 -fps 10 -brd OAK-D-PRO-POE`. Using `-cm perspective` had no effect on the results.
  3. Captured all the images
  4. It processes them and then shows me the first set of images with green horizontal lines. I think the first set is the left->right camera.

Look at the significant distortions in the corners. And slightly at the bottom. The green lines do go through the same pixels. image

The 2nd set of pictures (which I think is rgb->right) does not have this distortion.

Luxonis-Brandon commented 2 years ago

Thanks for the report. Yours was a very early prototype, so there's a high chance that something could be amiss with it in particular. But let me ask internally to see what could be happening here. It looks super odd. This would fit with the need for mesh that you're seeing too.

That said, I seem to remember some odd bits about visualization... so this could just be a weird visualization thing.

diablodale commented 2 years ago

Thanks for verifying. It is suspicious (like funky cheese) that the calibration end-visual is weird on the POE (yet supposedly not affecting data). The same cali a few minutes earlier on the OAK-D had no such weirdness at the corners.

The pointcloud projection after today's cali still renders D-Pro-PoE flat surfaces as convex. In this new video the brown floor and the white wall are flat, yet when I move my camera over their edges I see the convex shape (off-center pixels' distances are progressively too large). https://www.youtube.com/watch?v=6MpoG8hvjCc

The same app with Kinect3 is flat. And the OAK-D is flat-ish, though the wall is noisy: https://www.youtube.com/watch?v=uzwhq3OQ1Ww

Are this weird visual and the convex surfaces all lens distortion? From what I know, no part of the depthai pipeline uses the distortion coeffs. Yet?

Luxonis-Brandon commented 2 years ago

Odd. So is this calibration using mesh calibration? Sorry if I missed it. It seems mesh is required in this case.

saching13 commented 2 years ago

Hello @diablodale ,

> I see similar variations in data at the edges. Your last picture, on each edge, you can see the larger variation... too large of values. I see similar things in mine.

I see large variations in mine. But in yours, I don't see large variations; I see a complete convex shape. Is your post-processing discarding the variations and choosing that distance as the right distance?

> I do not yet have coefficient undistortion (unless it is automatically done by StereoDepth aligning). Does your app above use any undistortion coeffs?

No, it doesn't either. Can you add that to yours and see if it fixes it? Here is how to create a mesh and feed it.
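
(The how-to link appears to have been lost from this thread. A host-side sketch of the first half of the idea, assuming the depthai-core `CalibrationHandler` getters and OpenCV; 640x400 matches the 400p mono resolution used above:)

```cpp
#include <depthai/depthai.hpp>
#include <opencv2/opencv.hpp>

// Build per-camera undistort+rectify lookup maps from the device's stored
// calibration. These are the "remap maps" that a rectification mesh downsamples.
int main() {
    dai::Device device;  // connect just to read the calibration
    auto calib = device.readCalibration();
    const int w = 640, h = 400;  // 400p mono resolution

    auto toMat = [](const std::vector<std::vector<float>>& v) {
        cv::Mat m(3, 3, CV_32F);
        for(int r = 0; r < 3; r++)
            for(int c = 0; c < 3; c++) m.at<float>(r, c) = v[r][c];
        return m;
    };

    cv::Mat M1 = toMat(calib.getCameraIntrinsics(dai::CameraBoardSocket::LEFT, w, h));
    cv::Mat R1 = toMat(calib.getStereoLeftRectificationRotation());
    cv::Mat d1(calib.getDistortionCoefficients(dai::CameraBoardSocket::LEFT), true);

    cv::Mat mapX, mapY;
    cv::initUndistortRectifyMap(M1, d1, R1, M1, cv::Size(w, h), CV_32FC1, mapX, mapY);
    // ... repeat for RIGHT with getStereoRightRectificationRotation(), then either
    // cv::remap() the mono frames on the host, or downsample the maps into mesh
    // data for the device (see the closing comment of this issue).
    return 0;
}
```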

diablodale commented 2 years ago

@Luxonis-Brandon no mesh is being used in today's images/videos. The last time I experimented with mesh, it was unusable because the mesh codepath quantized depth values like Lego(tm) blocks; the quant value was unusably large.

@saching13 I have similar settings as I wrote above for confidence, LR threshold, etc. Only the median5x5 would be altering z-depth values. Distance x,y is computed using the pinhole matrix formula with Z-depth from the OAK sensor and calibration data from the depthai cali APIs.

Tomorrow, I'll create a video with the Kinect, OAK-D, and OAK-D-Pro-Poe positioned next to each other, pointing at the same wall, and rendering the same pointcloud and camera view. The OAK sensors use the exact same codepath. The Kinect shares the majority of code, including the codepath that turns an aligned depthmap + color into a pointcloud.

Within a week, I'll add undistortion. I doubt I'll use depthai meshfiles, because of the quantization and because the PoE cpu is already overloaded. Instead, I'll use host cv::remap, as I already have good experience with it on the Kinect.

saching13 commented 2 years ago

> @saching13 I have similar settings as I wrote above for confidence, LR thresh, etc. Only the median5x5 would be altering z-depth values. Distance, x, y is using the pinhole matrix formula with Z-depth from the OAK sensor and calibration data from the depthai cali apis.

Got it. What I meant was: do you do any post-processing after getting the pointcloud on the host?

diablodale commented 2 years ago

I don't. To generate the final pointcloud, only z-depth is given into the pinhole matrix formula. All filtering/postproc is done earlier, device-side, on the disparity/z-depth with the depthai api.
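
For readers following along, a minimal sketch of that back-projection, with `fx, fy, cx, cy` assumed to come from the calibration of the socket the depth is aligned to (this is not diablodale's actual code):

```cpp
#include <cstdint>

struct Point3f { float x, y, z; };

// Back-project one depth pixel (u, v) through the pinhole model.
// StereoDepth::depth frames carry uint16 depth in millimeters.
Point3f backProject(int u, int v, std::uint16_t depthMm,
                    float fx, float fy, float cx, float cy) {
    const float z = static_cast<float>(depthMm);
    return {(u - cx) * z / fx,   // x grows right of the principal point
            (v - cy) * z / fy,   // y grows downward
            z};
}
```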

saching13 commented 2 years ago

Wow, yours looks cleaner than mine then. I think we can check two things for the convex shape:

  1. Using mesh on-device should fix this, if it is distortion. Even though the mesh is quantized, it is dense enough to recreate any barrel distortion.
  2. Try turning off the dot projector and see if you still see the convex shape (this is a stretch).

Luxonis-Brandon commented 2 years ago

> @Luxonis-Brandon no mesh is being used in today's images/videos. Last time I experimented with mesh, it was unusable because the mesh codepath quantized depth values like lego(tm) blocks. The quant value was unusably large.

Odd. I haven't seen this before. Asking.

saching13 commented 2 years ago

> Last time I experimented with mesh, it was unusable because the mesh codepath quantized depth values like lego(tm) blocks. The quant value was unusably large.

On this: was subpixel enabled? Can you share a sample if possible?

diablodale commented 2 years ago

You can see the quantization above in an earlier post https://github.com/luxonis/depthai-core/issues/397#issuecomment-1047331034, and in the post after that I discuss how mesh files won't help: https://github.com/luxonis/depthai-core/issues/397#issuecomment-1047426285

When everything is quantized, everything is flat....because it is all quantized to the same value. A convex shape will be hidden in the round up/down to the unit of quantization.

Yes, subpixel was enabled in all my above scenarios.

Tomorrow I will...

If you want to test my app yourself (without mesh), it's available at https://hidale.com/shop/dp-oak/. It is a plugin for the Max patching environment, so you would first need a trial or license for Max itself.

diablodale commented 2 years ago

Videos...

Good news: I could not reproduce the mesh quantization seen above. Later, I will experiment with mesh files at different resolutions/steps; that might be where I saw it, or some bug has since been fixed. I also got my head around what those mesh files are: downsampled OpenCV remap/distortion lookup maps.

Mesh files corrected most of the convex shape: https://www.youtube.com/watch?v=hda1IAwRN58. This suggests to me that there is measurable distortion on my pro-poe, and that undistortion using opencv apis or mesh files is needed for reliable depth.

The IR dot projector did not make a difference in the convex shape. Of course, the noise is greatly affected: https://www.youtube.com/watch?v=-DLddLj2gvc

This last one was fun. I have 4 sensors (d, d-lite, pro-d-poe, and kinectazure) all with the same code, values, position on the table, view, etc. https://www.youtube.com/watch?v=t-x886U_fds In the video you can see some interesting comparisons.

My oak-d-lite performs poorly compared to all the other sensors for generating depth values. This concerned me, so I did a calibration from the lite-calibration branch. It was difficult to get through the whole cali sequence; many "can't see checkerboard" failures in a fully lit room with a big, clear cali target. Unsure why. I got through it after ~20 minutes 🙄 and... no improvement. The d-lite continued to have significantly fewer valid depth pixels than the other OAK sensors, even though the host code and config/settings are the same. To double-check, I attached my 2nd oak-d-lite and chose not to calibrate it. This video shows the two oak-d-lites side by side. The left side is the 1st, calibrated oak-d-lite... the same one seen in the 4-way video. The right side is my 2nd oak-d-lite, which still has the factory calibration. https://www.youtube.com/watch?v=PA5AGDt-Swg

Luxonis-Brandon commented 2 years ago

(I'm behind but wanted to just write at least for now to say THANK YOU for all the testing, feedback, and data here!)

diablodale commented 7 months ago

The bug is caused by the OAK hardware itself. Some OAK devices have significantly distorted lenses/sensors for mono. When that highly distorted data (left and right each have their own distinct distortion) is processed by dai::node::StereoDepth, it leads to unacceptably errant depth values.

All mono input into StereoDepth must be undistorted first. The fix is to create "mesh" data from the raw OAK calibration values and load the mesh with dai::node::StereoDepth::loadMeshData(). Closing with 2 years of verified results.
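
For future readers, here is a sketch of that fix, building on the rectification maps shown earlier in the thread. The 16-pixel cell size and the y-then-x float layout follow the depthai rectification-mesh examples; verify the exact format against current depthai documentation:

```cpp
#include <depthai/depthai.hpp>
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdint>
#include <vector>

// Downsample OpenCV remap lookup maps into the mesh format consumed by the
// device, sampling every 16th pixel and padding the right/bottom edges.
std::vector<std::uint8_t> mapsToMesh(const cv::Mat& mapX, const cv::Mat& mapY) {
    constexpr int cell = 16;  // mesh cell size from the depthai examples
    std::vector<std::uint8_t> mesh;
    for(int y = 0; y <= mapX.rows; y += cell) {
        const int sy = std::min(y, mapX.rows - 1);
        for(int x = 0; x <= mapX.cols; x += cell) {
            const int sx = std::min(x, mapX.cols - 1);
            const float vy = mapY.at<float>(sy, sx);  // y coordinate first...
            const float vx = mapX.at<float>(sy, sx);  // ...then x, per the mesh layout
            const auto* py = reinterpret_cast<const std::uint8_t*>(&vy);
            const auto* px = reinterpret_cast<const std::uint8_t*>(&vx);
            mesh.insert(mesh.end(), py, py + sizeof(float));
            mesh.insert(mesh.end(), px, px + sizeof(float));
        }
    }
    return mesh;
}

// Usage, with leftX/leftY/rightX/rightY produced by cv::initUndistortRectifyMap()
// as sketched earlier in this thread, before starting the pipeline:
//   stereo->loadMeshData(mapsToMesh(leftX, leftY), mapsToMesh(rightX, rightY));
```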