google-ar / arcore-android-sdk

ARCore SDK for Android Studio
https://developers.google.com/ar

Feature Request: Dense Pointcloud from depth sensors or stereo #120

Open SimonScholl opened 6 years ago

SimonScholl commented 6 years ago

I know there have already been several requests related to possible support of Tango devices in ARCore. My question is based on the fact that we want to use the already available depth sensors to capture dense depth data with our devices. So is there any plan (short, mid, or long term) for ARCore to use the hardware capabilities of Tango devices?

Is it possible for ArFrame_acquirePointCloud() to deliver not only depth data about features, but also the point cloud we already got via Tango, when the hardware is there?
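For context, this is roughly how the cloud is read today through the Java API (a minimal sketch; currently it returns only sparse visual feature points, nothing dense):

```java
import com.google.ar.core.Frame;
import com.google.ar.core.PointCloud;
import java.nio.FloatBuffer;

// Minimal sketch: reading ARCore's current (sparse) point cloud each frame.
void readPointCloud(Frame frame) {
    PointCloud pointCloud = frame.acquirePointCloud();
    try {
        // Four floats per point: x, y, z in world space plus a confidence value.
        FloatBuffer points = pointCloud.getPoints();
        int numPoints = points.remaining() / 4;
        // ... a dense cloud from a depth sensor would put thousands of points here ...
    } finally {
        pointCloud.release(); // free the native buffer
    }
}
```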

AndrewRogers1994 commented 4 years ago

Honor View 20 and Huawei P30 Pro

Also the front ToF on Huawei Mate 20 Pro is working

Great, thanks!

AndrewRogers1994 commented 4 years ago

Honor View 20 and Huawei P30 Pro

Also the front ToF on Huawei Mate 20 Pro is working

Is it the Honor 20 Pro?

lvonasek commented 4 years ago

I have no information about the Honor 20 Pro - that device is too new for me.

AndrewRogers1994 commented 4 years ago

Great, thanks again. Btw, you may already be aware, but Samsung plans to release their ToF support in the coming months.

lvonasek commented 4 years ago

Do you have a source of this information?

AndrewRogers1994 commented 4 years ago

Do you have a source of this information?

Hi, yes, I contacted Samsung technical support via the developer forum and got the following response.

[Two screenshots of Samsung's support response attached.]

lvonasek commented 4 years ago

Thank you, that means ARCore ToF support is most likely further away than I thought.

I will keep using Huawei AREngine for AR.

kexar commented 4 years ago

This is from the ARCore 1.11 changelog released yesterday: "Added MinFPS, MaxFPS, and DepthSensorUsage properties to CameraConfig."

There is no description of what DepthSensorUsage means, though. I will test a demo app.
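For reference, on the Java side the new filter looks roughly like this (a sketch assuming ARCore 1.11's CameraConfigFilter API; the DepthSensorUsage enum is what the changelog refers to):

```java
import java.util.EnumSet;
import java.util.List;
import com.google.ar.core.CameraConfig;
import com.google.ar.core.CameraConfigFilter;
import com.google.ar.core.Session;

// Ask ARCore for camera configs that require and use a hardware depth sensor.
static void enableDepthConfig(Session session) {
    CameraConfigFilter filter = new CameraConfigFilter(session);
    filter.setDepthSensorUsage(
            EnumSet.of(CameraConfig.DepthSensorUsage.REQUIRE_AND_USE));
    List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);
    if (!configs.isEmpty()) {
        session.setCameraConfig(configs.get(0)); // set before session.resume()
    }
    // If the list is empty, no supported depth-sensor config exists on this device.
}
```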

lvonasek commented 4 years ago

I tested the depth sensor function and it is currently not supported on any ToF device I have here.

AndrewRogers1994 commented 4 years ago

Why doesn't the 240x180 resolution work when I test your night vision / ToF viewer?

lvonasek commented 4 years ago

It depends on device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx

AndrewRogers1994 commented 4 years ago

It depends on device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx

Ahh okay, it's a brand new Honor 20, so I will see if there is an update.

AndrewRogers1994 commented 4 years ago

It depends on device software. Huawei and Honor enabled 240x180 on most devices with software update 9.1.xxx

All working great after updating. Do you know what the maximum resolution of the Honor 20 is? Can it go above 240x180 but support just hasn't been enabled yet, or is that the highest resolution of the sensor?

lvonasek commented 4 years ago

240x180 is 0.04MP. I heard that the Huawei P30 Pro has the same ToF sensor as the Honor View 20 (not sure if that is true). However, the Huawei P30 Pro reports a ToF resolution of 992x558, which is 0.55MP, but it returns no data (just like 240x180 on the Honor before updating).

Note that the Huawei P30 Pro gets all ToF updates earlier than the Honor View 20.

kexar commented 4 years ago

In the GoogleARCore.ARCoreCameraConfigFilter.DepthSensorUsageFilter class reference there is a description:

bool RequireAndUse = true - Filters for camera configs that require a depth sensor to be present on the device, and that will be used by ARCore.

See the ARCore Supported Devices (https://developers.google.com/ar/discover/supported-devices) page for a list of devices that currently have supported depth sensors.

Unfortunately, there is no information about depth-sensor-compatible devices at that link.

lvonasek commented 4 years ago

@kexar there is no information because Google does not currently support a single device.

Sheng-Xuan commented 4 years ago

240x180 is 0.04MP. I heard that the Huawei P30 Pro has the same ToF sensor as the Honor View 20 (not sure if that is true). However, the Huawei P30 Pro reports a ToF resolution of 992x558, which is 0.55MP, but it returns no data (just like 240x180 on the Honor before updating).

Note that the Huawei P30 Pro gets all ToF updates earlier than the Honor View 20.

Hi, I have tried your Night Vision app and it works well on my P30 Pro. I am just curious how you detected the supported resolution of the ToF camera. I checked the Camera2 info: only cameraId=0 supports DEPTH16 image output, and there are more resolutions in the list. How did you check that it actually only supports 240x180? Was it trial and error, or is there a trick to it? Thank you for your great work!

lvonasek commented 4 years ago

Hi @Sheng-Xuan,

there is no way to test it. The fact that the Huawei P30 Pro reports an unsupported resolution as supported is wrong, and we as app developers should not have to deal with this at all.

What I do in my app is detect whether the resolution is higher than 240x180, and if so I label it as unsupported. I did this to avoid thousands of users writing me "it does not work". Now it is hundreds only :D

The app helps me keep an overview of which devices currently support ToF via the Camera2 API. I have reached 10k installations, and it is currently supported only on the Huawei P30 Pro and Honor View 20.
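The Camera2 query itself is straightforward; the problem is that the advertised sizes cannot be trusted. A minimal sketch of the enumeration (standard Camera2 API, nothing device-specific):

```java
import android.content.Context;
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.hardware.camera2.params.StreamConfigurationMap;
import android.util.Log;
import android.util.Size;

// List every camera that advertises depth output and its DEPTH16 sizes.
// Caveat from this thread: an advertised size (e.g. 992x558 on the P30 Pro)
// is no guarantee the sensor actually returns data at that size.
static void dumpToFCameras(Context context) throws CameraAccessException {
    CameraManager manager =
            (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(id);
        int[] caps = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES);
        if (caps == null) continue;
        for (int cap : caps) {
            if (cap != CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_DEPTH_OUTPUT)
                continue;
            StreamConfigurationMap map =
                    chars.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
            Size[] sizes = map.getOutputSizes(ImageFormat.DEPTH16);
            Log.d("ToF", "camera " + id + " DEPTH16 sizes: "
                    + java.util.Arrays.toString(sizes));
        }
    }
}
```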

Sheng-Xuan commented 4 years ago

Hi @Sheng-Xuan,

there is no way to test it. The fact that the Huawei P30 Pro reports an unsupported resolution as supported is wrong, and we as app developers should not have to deal with this at all.

What I do in my app is detect whether the resolution is higher than 240x180, and if so I label it as unsupported. I did this to avoid thousands of users writing me "it does not work". Now it is hundreds only :D

The app helps me keep an overview of which devices currently support ToF via the Camera2 API. I have reached 10k installations, and it is currently supported only on the Huawei P30 Pro and Honor View 20.

I also found that in AREngine the depth image I can get is 240x180. Btw, I am trying to make an app that produces both RGB images and depth images. It seems quite troublesome to do with the Camera2 API. I found I can use ARFrame.acquireDepthImage() and acquireCameraImage() in AREngine, but I could not get correct RGB and depth images from the ARFrame: the DEPTH16 value after decoding is wrong, and I could not convert the YUV_420_888 format to JPEG successfully either. I am not sure if you have tried these methods. There is too little discussion about AREngine online. Thank you in advance if you have any hint on this, but it's ok if you don't have time to explain. 😃

lvonasek commented 4 years ago

I have successfully converted YUV to RGB and DEPTH16 to a float array (only the depth information). However, the Z is somehow wrong: in the center of the camera it seems to be correct, but on the sides it is not.
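For anyone fighting the same decoding step: DEPTH16 itself is documented in the Android reference (13 bits of range in millimeters, 3 bits of confidence). A minimal sketch of the conversion, assuming a tightly packed buffer (row stride is ignored here):

```java
import android.media.Image;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Decode a DEPTH16 android.media.Image into meters.
// Per the Android docs: bits 0-12 hold the range in millimeters,
// bits 13-15 hold a confidence value (0 = confidence not available).
static float[] depth16ToMeters(Image depthImage) {
    ShortBuffer buffer = depthImage.getPlanes()[0].getBuffer()
            .order(ByteOrder.nativeOrder()).asShortBuffer();
    float[] meters = new float[buffer.remaining()];
    for (int i = 0; i < meters.length; i++) {
        short sample = buffer.get(i);
        int rangeMm = sample & 0x1FFF;         // lower 13 bits: depth in mm
        int confidence = (sample >> 13) & 0x7; // upper 3 bits: confidence
        meters[i] = rangeMm / 1000.0f;
    }
    return meters;
}
```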

mpottinger commented 4 years ago

@lvonasek So even in AREngine we can't reliably use depth for AR? The depth in AREngine is inaccurate?

Why did they even advertise the AR benefits of a ToF sensor if that is true?

Things are progressing much more slowly in this area than I had hoped.

lvonasek commented 4 years ago

@mpottinger - if you just enable the depth sensor, then in AREngine you get depth data instead of feature points; however, you get fewer than 300 points per frame. If you use CPU access to the depth data, you get a full-resolution depth map, which works great, but it is really not easy to convert it into world coordinates.

mpottinger commented 4 years ago

@lvonasek Oh I see, thanks. So the issue is converting the depth map to a point cloud or world coordinates.

From what I know so far, that requires the intrinsic parameters of the camera. Is that what is missing in this case?

With my depth sensor for the pc, I have those available and it is easier to get world coordinates.

lvonasek commented 4 years ago

@mpottinger I have it working on only two devices, and there the depth camera has the same intrinsic parameters as the color camera.

mpottinger commented 4 years ago

@lvonasek Strange, so there is something else going on there?

Usually it should be straightforward to get a point cloud from a depth sensor as long as you have intrinsics, correct?

If it were possible, I would be very tempted to buy a P30 pro. I am struggling with PC based solutions for other reasons.

lvonasek commented 4 years ago

@mpottinger - yeah, you just do not know the depth map's orientation, but you can hardcode it or detect it from the projection matrix.

mpottinger commented 4 years ago

@lvonasek So you mean the rotation of the phone causes the issue? If that is the case wouldn't locking the auto-rotate solve the issue?

I had to lock the rotation when I was testing out computer vision in ARCore, to keep it consistent and predictable. Would that solve it in this case? If so, I'm getting a P30 Pro asap ;)

lvonasek commented 4 years ago

@mpottinger Basically yes. You get the depth map in DEPTH16 format (how to parse it is documented); in portrait, the orientation was something like (x, y) -> (-y, x), and then you transform the points into world coordinates.
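Putting the pieces together, the per-pixel math looks roughly like this (a sketch, not from any official sample: it assumes pinhole intrinsics fx, fy, cx, cy shared with the color camera, the portrait swap above, and the camera pose at the depth frame's timestamp):

```java
import com.google.ar.core.Pose;

// Hypothetical unprojection of one depth pixel (u, v) into world space.
static float[] depthPixelToWorld(int u, int v, float depthMeters,
                                 float fx, float fy, float cx, float cy,
                                 Pose cameraPose) {
    // Pinhole unprojection into camera-local space (OpenGL convention: -Z forward).
    float x = (u - cx) / fx * depthMeters;
    float y = (v - cy) / fy * depthMeters;
    float z = -depthMeters;
    // Portrait orientation swap mentioned above: (x, y) -> (-y, x).
    float[] local = { -y, x, z };
    // cameraPose would be frame.getCamera().getPose() (or its AREngine twin).
    return cameraPose.transformPoint(local);
}
```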

mpottinger commented 4 years ago

@lvonasek Ok based on that I decided to just go and buy a P30 Pro.

I was not disappointed! It is exactly what I needed. Your 3D scanner app works much faster and more accurately on it.

Tried the Night vision app, and TOF viewer. Yes the range and resolution are limited, but for my use, after testing I found that is not a problem at all for what I need.

Then tried AREngine and was very pleased with the results. It perfectly fixes what was missing in ARcore for my use.

I haven't tried using the DEPTH16 image yet. Even without that it performs very well for me. I just disable plane detection and change the hit test on points to accept any point, and I can place AR markers wherever I want instantly, on any surface with no detection delay and without needing to wave the phone around.
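Concretely, that hit-test change looks roughly like this (a sketch using the ARCore class names; AREngine's Java API mirrors them closely). The stock samples keep only points with an estimated surface normal; dropping that check accepts any depth point:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Point;

// Anchor on the first point hit, without filtering on
// Point.OrientationMode.ESTIMATED_SURFACE_NORMAL like the samples do.
static Anchor anchorOnAnyPoint(Frame frame, float tapX, float tapY) {
    for (HitResult hit : frame.hitTest(tapX, tapY)) {
        if (hit.getTrackable() instanceof Point) {
            return hit.createAnchor(); // accept any point
        }
    }
    return null;
}
```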

It seems it already uses the depth sensor to do what I wanted out of the box, minus occlusion.

Perfect! Not everyone will have the same use case as myself, but for anyone who just wants to be able to place markers/placeholders in AR without limitation, Huawei P30 Pro and AREngine are the way to go 100%

Hope Google takes note.

davidhsv commented 4 years ago

Can you make a video showing the limitations and capabilities of huawei P30 Pro and AREngine?

Thanks in advance!


mpottinger commented 4 years ago

@davidhsv Yes, definitely, after I play around with it more. I want to see if occlusion is possible as well using the raw depth.

In plane detection mode it detects planes all over everything, walls, ceiling, etc, which would be good for some people.

The big advantage for me is that I do not need or want plane detection, and when I just do hit testing on feature points, there are feature points to anchor to everywhere, without even needing to move the phone around.

In ARCore I need to wave the phone around a lot, and often on smooth surfaces there are no points to set an anchor; it is hit or miss. In AREngine I can always place an object on any surface, except maybe a mirror surface, with no delay and no phone waving.

For sure I will post a comparison later.

davidhsv commented 4 years ago

That is great news! I need a way to see anchor points without waving my phone. I want to make an audio-assisted visualization for blind people, but I'm afraid of spending $1000 to buy the P30 Pro :(


lvonasek commented 4 years ago

@mpottinger - there are two ways to do occlusion. The easier way is to enable meshing and use a material which renders no color, only depth data, before rendering the camera background (this is how 6d.ai does it); however, this way does not occlude dynamic objects. The second way is to use the depth map and render it into the depth buffer (this is how it was done on Tango).
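A sketch of the second way, as a GLES 3.0 fragment shader embedded in Java (my own assumption here: u_DepthTexture already holds the depth map remapped to the [0,1] range of the scene's near/far planes):

```java
// Draw a full-screen quad with this shader before the virtual content,
// with glColorMask(false, false, false, false) so only the depth buffer
// is written; virtual objects rendered afterwards are then occluded.
static final String DEPTH_OCCLUSION_FRAGMENT_SHADER =
        "#version 300 es\n"
      + "precision mediump float;\n"
      + "uniform sampler2D u_DepthTexture;\n"
      + "in vec2 v_TexCoord;\n"
      + "out vec4 fragColor;\n"
      + "void main() {\n"
      + "  float d = texture(u_DepthTexture, v_TexCoord).r;\n"
      + "  if (d <= 0.0) discard;   // no depth data at this pixel\n"
      + "  gl_FragDepth = d;        // write depth only\n"
      + "  fragColor = vec4(0.0);   // color is masked off anyway\n"
      + "}\n";
```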

@davidhsv - you can buy the Honor View 20 - it has the same capability as the Huawei P30 Pro.

AndrewRogers1994 commented 4 years ago

Has the quality of the P30 depth increased at all, or is the highest still 240x180? Further up in the thread it was said that the 992x558 option did not return data. Is this still the case, or is 992x558 supported now?

mpottinger commented 4 years ago

@Murded Still 240x180, but I find that is more than good enough for my use. I intend to do object detection and then AR marking of detected objects. Those detectors operate at not much higher resolution than that anyway

mpottinger commented 4 years ago

@davidhsv I don't think you would regret it. It looks to be the only platform right now that gives developers what they really want for AR, short of the $3,000 HoloLens or Magic Leap.

They just released a new version of the SDK yesterday, and it looks nice. I'm away from home for the weekend though.

If it makes you feel any better I have spent more than $1,000 on hardware to experiment with.

Bought an iPhone XS Max to test 6d.ai, then gave it to my wife.

Bought a Note 9, that I will try to sell.

Have a $500 ZED mini camera that is collecting dust, a $500 Structure Core, a $200 T265 that will probably collect dust now, and so on hah.

Oh btw, I am also doing computer vision. If you are doing object detection you may or may not have noticed the same thing I have.

Object detection on Android is currently too slow to use with AR. SSD MobileNet usually runs at 5-10 fps or less, and by the time the object is detected, the phone may have moved way off the object.

I am going to have to write code to get the Edge TPU to work with Android, which works at 60+fps.

mpottinger commented 4 years ago

@lvonasek Yes, I tried 6d.ai and it was impressive for working only via a mono RGB camera, but still not good enough for my purposes: too slow, holes, some inaccuracies, etc.

When you mentioned meshing in AREngine I did a search and couldn't find the code to do it. Then I just noticed this morning they uploaded a new version that includes scene meshing and occlusion!! I am very excited about that.

I tried it out and I'm very impressed. Need to try it out more! Why does Google not have this? You'd think they'd be leading this area, not Huawei. Their Pixel phones should have had ToF sensors to demonstrate what Huawei is already beating them at.

I do prefer the second method though, via the depth map and not via meshing. I need that instant responsiveness to the environment so the user can move quickly.

I am not very experienced with OpenGL, but I do know it can be done with a custom shader, which I have example code for on PC but not Android.

davidhsv commented 4 years ago

hhahahahahhahaha I feel a little bit better now! X)

Have you tried the Azure Kinect DK? It looks promising...


lvonasek commented 4 years ago

Guys who like my stuff and have a P30 Pro, could you please test this APK for me? https://drive.google.com/folderview?id=1TjKrcyXdbwkOMdc0QoNWzoIRohkBYCcE

Thank you in advance

kexar commented 4 years ago

@lvonasek I tried it, it is working great. Did you use the latest AR Engine 2 for mesh building?

lvonasek commented 4 years ago

@kexar No, it is using AREngine 1.7 for reading depth data and Tango3DR for meshing.

kexar commented 4 years ago

@lvonasek BTW, do you know how to read the camera texture in Unity & AR Engine? I tried ARFrame.CameraTexture but it always returns a texture with size 0.

lvonasek commented 4 years ago

@kexar I do not know; I do not use CPU access to the RGB camera at all. I would have to transform it to screen coordinates anyway. Also, it is much better for me to render the GL texture and then read the bytes back.
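The read-back itself is just a plain glReadPixels after rendering the camera texture into the framebuffer (minimal sketch):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Read the current framebuffer back to the CPU; rows come back bottom-up.
static ByteBuffer readBackPixels(int width, int height) {
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
            .order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```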

AbiNyte commented 4 years ago

@lvonasek Hi, I was wondering if you have tried your app with the Samsung S10 5G, Samsung Note 10+, and Huawei Mate 30 Pro? The Note 10+ has a VGA ToF 3D camera, but there is no information about the Mate 30 Pro's ToF camera.

I also found out Samsung is deprecating their camera SDK, and there is no information about a replacement or whether a new SDK would support the ToF camera. Do you have any new information about this?

Qualcomm also has some SDKs for products with the Snapdragon platform, but there is no information about the ToF camera and whether any of them can support it.

What I found is that Google is somehow taking over most of the software development effort around Android, at least for smartphone cameras. Qualcomm discontinued their Android SDK a few years ago and Samsung is deprecating their Camera SDK; I don't know whether other OEMs have any specific SDKs for their smartphone cameras, as I didn't check.

The thing is that Google has to make something that works across all platforms and OEMs, which is very difficult and time-consuming; that is why they are moving so slowly and why there is so little software support for the ToF cameras already in many smartphones. Huawei is progressing faster than others in this field because they can't rely on Android and Google anymore and have to develop their own software and build their own developer community. I think they are moving toward an Apple-like business model somehow; Samsung should be thinking about this too.

In an ideal world, developers would have access to the hardware, and especially the cameras, on 3 levels:

SoC level: Qualcomm, Apple, MediaTek, Samsung, HiSilicon (Huawei), Intel, AMD, NVIDIA
Middle level: Samsung, Apple, Huawei, Intel, NVIDIA, and all the other OEMs
OS level: Google, Apple, Microsoft

But it's not an ideal world! :))

lvonasek commented 4 years ago

@AbiNyte - There has been a Camera2 API for ToF sensor access since 2017. However, the first integration was shipped by the Huawei P30 Pro in March 2019 and by the Honor View 20 in May 2019. I am using this camera API in my night vision app (originally it was just for testing the ToF sensor).

Samsung confirmed that they will add Camera2 API ToF sensor integration in Q4/2019. It might be added together with the Android 10 update. I would be curious whether my night vision app runs on the Samsung S10 5G/Note 10+ with the Android 10 beta, but I am not going to buy a device just for that.

I followed Samsung WhARe for some time, but they do not support it anymore and it feels dead. Google's ARCore support is also terrible since @inio is not working on it anymore, but at least Google still develops it.

In 2020 there should be the Camera2 API to access ToF data at the middle level and ARCore to access it at the AR level. HW-level access you won't get, because it would make it possible to burn out the device.

AbiNyte commented 4 years ago

@lvonasek - Thank you, very good points. I was wondering if you have done any tests to measure the Huawei ToF camera's depth resolution? Based on your tests the planar resolution seems to be 240x180, but what about depth?

Have you tried your app with firebase test lab? https://firebase.google.com/docs/test-lab

mpottinger commented 4 years ago

Guys who like my stuff and have a P30 Pro, could you please test this APK for me? https://drive.google.com/folderview?id=1TjKrcyXdbwkOMdc0QoNWzoIRohkBYCcE

Thank you in advance

Ok, I was able to test today and had time to upload some videos of your app in action. The demos certainly work better on the P30 Pro with AREngine than with ARCore.

However, I can see from the demos that my preferred method won't include meshing, as there are still imperfections. I much prefer the z-buffer method of occlusion, with a fullscreen quad, which I am busy trying to get working right now.

I am making good progress on that, just need to make sure I line up the CPU images correctly with the display.

Here are the videos from your apk tested in my house:

P30 Pro TOF viewer https://youtu.be/TPXCOo6Cba8

3D Scanner: https://youtu.be/T6vKqvrGKMM

Plane detection: https://youtu.be/fdFEj_bzT5M

Minecrafter: https://youtu.be/o0vMU1jDPQE

Mesh demo: https://youtu.be/JKznJqDSubg

Cube demo: https://youtu.be/fCw1Wn7eeHI

mpottinger commented 4 years ago

hhahahahahhahaha I feel a little bit better now! X) Have you tried the Azure Kinect DK? It looks promising...

I almost forgot about the Kinect DK! Yes, it looks very interesting, but after my experience with the Structure Core and T265 I don't think I will be spending more money on a PC-based solution. My app really needs to be mobile. I found even a tablet is cumbersome, and with external cameras like that you end up having a big mess of wires and dongles attached to the tablet.

I did get very close with AR using the depth cam and T265 combo. It worked, sorta, but I would need something like a Raspberry Pi 4 with an attached screen for it to be portable.

I was using Panda3D because it is easy to interface with OpenCV and Python, and will work on ARM Linux.

Alignment/calibration between two devices is a big issue for me; I am not expert enough to do it properly.

I didn't properly figure out how to align the depth camera pose with the T265 exactly. I crazy-glued the Structure Core to the top of the T265 and just took the pose info directly from the T265 and the AR images from the Structure Core. Surprisingly, it worked half-decently at close range, but because of the misalignment it was way off at a distance. Too much hassle to fix.

Anything other than a smartphone form factor just isn't going to work out, is what I decided. AREngine is wonderful; I am liking it more and more and will probably not need or want anything else.

bastiankayser commented 4 years ago

Samsung confirmed that they will add Camera2 API ToF sensor integration in Q4/2019. It might be added together with the Android 10 update. I would be curious whether my night vision app runs on the Samsung S10 5G/Note 10+ with the Android 10 beta, but I am not going to buy a device just for that.

The ToF camera of the Samsung S10 5G is enabled and perfectly accessible via the Camera2 API (I have one here myself for development, build G977BXXU1ASD8). The camera resolution is 240x180. The depth format is DEPTH16 as specified in the Camera2 API (depth in mm: 3 bits for confidence, 13 bits for the actual depth value). The depth range is from around 12 cm to 2.5 m, with closer ranges seeming to have much better depth resolution. The Night Vision app does not work, btw; it shows "Camera2 API: ToF not found".

lvonasek commented 4 years ago

@AbiNyte - the depth resolution depends on light conditions (outside it is a bit worse) and on materials (black materials are visible only from a really close distance). I am getting depth data up to 3-4 m.

@mpottinger Thank you a lot.

@bastiankayser Could you share how you read the depth sensor resolutions? On my end, Samsung always returns nothing.