Closed matttidridge closed 4 years ago
You've found that ARCore Unity SDK is using ARKit for Cloud Anchors on iOS? Are you sure?
Sorry if I misspoke (my Objective-C/Swift knowledge is a bit limited), but when building to iOS with the ARCore SDK without also adding ARKit, you get a black screen and a log message saying the ARKit SDK is required (it's also in their docs). Looking at the Xcode build, it has pods for ARCore and CloudAnchors (along with the ARKit framework), so I'm not sure where one ends and the other begins. I assume it uses ARCore to generate the Cloud Anchors and translates them into an ARKit session, but that's just a guess.
When building the native Xcode face anchor demo, there is an additional ARCore pod called 'AugmentedFaces', which is what I am after, but it does not look like it has support via the Unity plugin.
It sounds like this might be a question better suited for https://github.com/google-ar/arcore-unity-sdk/issues
What Tim said. But also, I believe that ARCore on iOS is only used for Cloud Anchors, then ARCore passes everything else to ARKit.
Yeah, I guess all I'm really interested in is the TensorFlow-based face tracking that ARCore provides natively in Xcode but does not support in Unity. Maybe an odd one, since in most cases ARKit Face Tracking will do the trick.
Thanks guys, and sorry about dropping in on the wrong repo. I have an open issue over there as well, but it seems to be a hill many have died on with that particular SDK.
I have an open issue over there as well
For future readers, @HidingGlass opened https://github.com/google-ar/arcore-unity-sdk/issues/754
Currently in ARFoundation, building to iOS always defaults to using ARKit. We have found several specific cases where using ARCore on iOS instead of ARKit would be preferable. After some native Xcode testing, we found that the ARCore for iOS SDK had much stronger face tracking while users were wearing face masks/coverings (something becoming more and more prevalent in our projects). ARCore also supports face tracking on older iOS devices without the TrueDepth camera, since it is TensorFlow based rather than relying on depth sensors. Finally, it would be great to unify the use of ARCore Face Regions between iOS and Android instead of writing a separate set of code for mapping BlendShapes to do the same thing.
It would be great if there was an option to build ARFoundation projects to iOS using ARCore instead of ARKit with support for the two ARCore iOS features currently available (Augmented Faces & Cloud Anchors).
We have explored ways to improve ARKit face tracking to support users wearing face masks, with no success. We have explored the ARCore Unity SDK, but that project also defaults to ARKit on iOS for Cloud Anchors and does not support the ARCore iOS SDK. We have explored custom OpenCV and TensorFlow models in Unity, but with less success than ARKit/ARCore.