google-ar / arcore-android-sdk

ARCore SDK for Android Studio
https://developers.google.com/ar

Depth Resolution mismatch #1314

Closed kaj777 closed 2 years ago

kaj777 commented 2 years ago

### SPECIFIC ISSUE ENCOUNTERED

I have 2 questions:

  1. I get different aspect ratios for the depth image and the camera image on a Samsung S8. How can I map depth values to camera image (x, y) coordinates when the ratios differ, even after stretching? Is there something to be done when calling getMillimetersDepth to produce something other than the 160x90 (14,400) distance values? On the Samsung S8:

    • camera image: 640x480
    • depth image: 160x90
  2. When deriving depth values from this depth image, is the total number of samples width x height? A 160x90 depth image should produce an array of size 14,400? (pixel stride 2, row stride 320; snippet below)

    
    val plane = depthImage.planes[0]  // single plane of 16-bit depth samples
    val shortDepthBuffer = plane.buffer.asShortBuffer()
    val depthRangeArray = FloatArray(depthImage.width * depthImage.height)
    for (y in 0 until depthImage.height) {
        for (x in 0 until depthImage.width) {
            val byteIndex = x * plane.pixelStride + y * plane.rowStride
            val index = y * depthImage.width + x
            depthRangeArray[index] = shortDepthBuffer[byteIndex].toFloat()
        }
    }

With byteIndex, I get index=14400 out of bounds (limit=14400). It only works with depthRangeArray[index] = shortDepthBuffer[index] instead of byteIndex, but that differs from what is stated in getMillimetersDepth. How should this be adapted to map to the camera image?

### VERSIONS USED
- Android Studio: 4.1.2
- ARCore SDK for Android: 1.27.0
- Device manufacturer, model, and O/S: Samsung S8
- Google Play Services for AR (ARCore):  
  versionName=1.28.212840223
- Output of `adb shell getprop ro.build.fingerprint`: 
kaj777 commented 2 years ago

@devbridie I've come across https://github.com/google-ar/arcore-android-sdk/issues/1102, but I don't understand what is meant by scaling the coordinates; I'd appreciate your input.

devbridie commented 2 years ago
  1. Let me confer with the team about official guidance on how to tackle this problem; this is lacking in our docs. It seems that the depth image is meant to be (visually) stretched so that it maps over the camera image; see e.g. the fragment shader used for depth visualization.

  2. In your example, shortDepthBuffer should not be a ShortBuffer but a ByteBuffer. The calculation shown in the example is the correct indexing for bytes, not shorts, which is why I think the indexing goes out of bounds. See the sketch below.
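A minimal sketch of that byte-offset lookup, following the pattern in the getMillimetersDepth documentation (untested; depthImage is the android.media.Image returned by acquireDepthImage):

    import java.nio.ByteOrder

    // Depth at depth-image pixel (x, y), in millimeters. The depth image has a
    // single plane that stores each sample as a 16-bit unsigned integer.
    fun getMillimetersDepth(depthImage: android.media.Image, x: Int, y: Int): Int {
        val plane = depthImage.planes[0]
        val byteIndex = x * plane.pixelStride + y * plane.rowStride
        val buffer = plane.buffer.order(ByteOrder.nativeOrder())
        return buffer.getShort(byteIndex).toInt() and 0xFFFF  // unsigned 16-bit value
    }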

devbridie commented 2 years ago

Sorry; I was incorrect about the depth image. The depth image is meant to visually coincide with the GPU image, and both share the aspect ratio of the display. This means that the depth image actually has the following relationship to CPU images (e.g. from acquireCameraImage):

[attached diagram: the depth image covers a centered region of the CPU camera image]

In other words, it's possible that there are pixels in the CPU image for which no depth sample exists when the aspect ratios of the various images do not line up.

kaj777 commented 2 years ago

Thanks for your prompt reply @devbridie. According to your diagram, this holds on my Huawei P30 Pro, where the aspect ratios match (camera 640x480, depth 160x120). On the Samsung S8 it happens for all images: instead of 90, the depth height should have been 120. Since 30 rows of height are missing, does that mean the last 30 rows are missing, or is it random? Will this device never get depth for a section of the image?

My use case is to extract a list of depth values and pass it to a CV pipeline for 3D reconstruction (point cloud). Does it mean that even though the device supports the Depth API, it should not be used when the aspect ratios don't match?

devbridie commented 2 years ago

Yes, that means there are sections of the image that will not receive a depth estimate. The samples that are available are centered in the image (in your example, 15 rows of the depth image are missing on the top side and 15 on the bottom side of the image).

> Does it mean that even though the device supports the Depth API, it should not be used when the aspect ratios don't match?

You can use the Depth API, but you'll need to be aware that not all pixels in the CPU image have corresponding depth samples. The sketch below makes the cropping arithmetic concrete.
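A small sketch of which camera rows have depth coverage, assuming (as described above) that the depth image is a centered crop spanning the camera image's full width; the helper name is illustrative:

    // Rows of the CPU camera image covered by depth samples, assuming the
    // depth image is a centered crop that spans the full camera width.
    fun coveredRowRange(camW: Int, camH: Int, depthW: Int, depthH: Int): IntRange {
        val coveredHeight = camW * depthH / depthW  // camera rows the depth image spans
        val margin = (camH - coveredHeight) / 2     // uncovered rows at top and at bottom
        return margin until camH - margin
    }

    // Samsung S8: coveredRowRange(640, 480, 160, 90) == 60..419, i.e. 60 camera
    // rows (= 15 depth rows at 4x scale) are uncovered at the top and bottom.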

kaj777 commented 2 years ago

@devbridie thanks for your helpful input. Does that mean there might also be cases where the left and right sides have missing depth estimates, or only the top and bottom (height)? May I know whether this is a bug to be fixed or the default behaviour?

kaj777 commented 2 years ago

@devbridie it seems difficult to cater for the missing sections, especially with 3:2 or 1:1 aspect ratios (for example, the Huawei P30 Pro has a 1536x2048 (3:4) camera image and a 2048x1536 (4:3) depth image).

As an alternative, I've tried CameraConfig: we can find and set a config whose image size and texture size match (the 6th entry in the config list returned on the S8), which produces the same aspect ratio for the depth image. Can we safely assume that on all devices we can get a matching image and texture size? A sketch of this selection follows the example below.

Camera config list example:

| index | image size | texture size |
|-------|------------|--------------|
| [0]   | 640x480    | 1920x1080    |
| [6]   | 640x480    | 640x480 (matches the image size) -> use this |
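A minimal sketch of that selection, assuming an already-created Session that has not yet been resumed (the helper name is illustrative):

    import com.google.ar.core.CameraConfigFilter
    import com.google.ar.core.Session

    // Pick a CameraConfig whose CPU image size and GPU texture size share the
    // same aspect ratio, so the depth image lines up with the CPU image.
    fun selectMatchingCameraConfig(session: Session): Boolean {
        val configs = session.getSupportedCameraConfigs(CameraConfigFilter(session))
        val match = configs.firstOrNull {
            it.imageSize.width * it.textureSize.height ==
                it.imageSize.height * it.textureSize.width
        } ?: return false
        session.cameraConfig = match  // must be set while the session is paused
        return true
    }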

ptc-emaggio commented 2 years ago

Isn't one supposed to use transformCoordinates2d to map CPU IMAGE coordinates to TEXTURE coordinates and vice versa? That is at least what is shown here.

Still, transformCoordinates2d does not seem to work on a Samsung S20 Ultra 5G. On this device the aspect ratios of the camera and depth images are the same (camera 1920x1080, depth 640x480), and transformCoordinates2d returns a 1:1 mapping between IMAGE and TEXTURE coordinates (assuming the same resolution). However, the CPU and depth images do not overlap correctly: the first row of the depth image corresponds to row ~60 of the camera image. It is not stretched either; it is simply translated up. For reference, the mapping I'm using looks roughly like the sketch below.
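A sketch of that transformCoordinates2d call (the helper name is illustrative; it assumes a current Frame):

    import com.google.ar.core.Coordinates2d
    import com.google.ar.core.Frame

    // Convert a pixel coordinate in the CPU image to a normalized [0, 1]
    // texture coordinate, which the depth image is meant to align with.
    fun cpuPixelToTextureCoord(frame: Frame, x: Float, y: Float): FloatArray {
        val input = floatArrayOf(x, y)  // pixel position in the CPU image
        val output = FloatArray(2)      // normalized texture coordinate
        frame.transformCoordinates2d(
            Coordinates2d.IMAGE_PIXELS, input,
            Coordinates2d.TEXTURE_NORMALIZED, output
        )
        return output
    }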

devbridie commented 2 years ago

I've published some official guidance on how this should be done:

For individual (device specific?) issues, please open a separate issue and include reference images.

ynhuang commented 1 year ago

@devbridie Can I convert the coordinates offline? I get the CPU image via acquireCameraImage and the depth image via acquireDepthImage.

raju535482 commented 1 year ago

@devbridie Instead of leaving everyone with question marks, just add an implementation to the DepthData class file in the ARCore raw depth API sample.

FrankFeng-23 commented 5 months ago

> @devbridie it seems difficult to cater for the missing sections, especially with 3:2 or 1:1 aspect ratios (for example, the Huawei P30 Pro has a 1536x2048 (3:4) camera image and a 2048x1536 (4:3) depth image).
>
> As an alternative, I've tried CameraConfig: we can find and set a config whose image size and texture size match (the 6th entry in the config list returned on the S8), which produces the same aspect ratio for the depth image.
>
> Camera config list example: [0] image 640x480, texture 1920x1080; [6] image 640x480, texture 640x480 (matches the image size) -> use this

Thanks so much for this solution! It also works well on Samsung S10 5G.