gustavolsson-ikea closed this issue 4 years ago
Scene Understanding should work in conjunction with MRTK. In fact, a fully supported MRTK integration is in the works right now. I'm not an expert on the specifics of the Spatial Awareness system, but I'll follow up with the MRTK team and see if we can get you some help.
You should absolutely be able to get this to work.
I followed up; the general consensus is that this should work, and there are no obvious reasons why the Spatial Awareness system would interfere.
We have two avenues to pursue to help unblock you:
1. There were a number of bug fixes/improvements in more recent builds of the SU SDK. It's possible that a newer SDK fixes some of the issues you've hit; the MRTK team found a few. Upgrading is recommended, and we will probably upgrade the sample to the latest version shortly.
2. MRTK has a PR out for the integration. The code that was written may answer some questions, or using it as a starting point may make things easier. It's still in code review, but the PR is public: https://github.com/microsoft/MixedRealityToolkit-Unity/pull/7458
Let us know how things are going and we'll try to help in any way we can.
Thank you for the swift response! By looking at the PR mentioned above, I managed to compute the correct "to unity" transform. However, I'm still having the intermittency problem. I'm looking into it next.
For people who want to follow along, here is the old code for getting the root coordinate system (which didn't seem to work, though it's hard to tell given that the scene doesn't show up on every deploy & run):
var sceneSpatialCoordinateSystem = Microsoft.Windows.Perception.Spatial.Preview.SpatialGraphInteropPreview.CreateCoordinateSystemForNode(nodeId);
var holographicFrameNativeData = Marshal.PtrToStructure<HolographicFrameNativeData>(UnityEngine.XR.XRDevice.GetNativePtr());
var unitySpatialCoordinateSystem = Microsoft.Windows.Perception.Spatial.SpatialCoordinateSystem.FromNativePtr(holographicFrameNativeData.ISpatialCoordinateSystemPtr);
var sceneToUnity = sceneSpatialCoordinateSystem.TryGetTransformTo(unitySpatialCoordinateSystem);
// Then perform right-to-left handedness conversion...
And here is the new code that correctly aligns the scene with the world on HoloLens 2:
var sceneOrigin = Microsoft.Windows.Perception.Spatial.Preview.SpatialGraphInteropPreview.CreateCoordinateSystemForNode(nodeId);
var nativePtr = UnityEngine.XR.WSA.WorldManager.GetNativeISpatialCoordinateSystemPtr();
var worldOrigin = SpatialCoordinateSystem.FromNativePtr(nativePtr);
var sceneToUnity = sceneOrigin.TryGetTransformTo(worldOrigin);
// Then perform right-to-left handedness conversion...
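The handedness conversion mentioned in the comment can be sketched as follows. This is an illustrative version under my own naming, not the exact code from the sample: TryGetTransformTo returns a System.Numerics.Matrix4x4 in a right-handed convention, while Unity is left-handed, so conjugate the transform with a Z-flip on both sides:

```csharp
// Hedged sketch: convert a right-handed System.Numerics.Matrix4x4
// (row-vector convention) into Unity's left-handed convention by
// flipping the Z axis on input and output: M_unity = F * M * F,
// where F = diag(1, 1, -1, 1).
using System.Numerics;

static class HandednessConversion
{
    static readonly Matrix4x4 FlipZ = Matrix4x4.CreateScale(1f, 1f, -1f);

    public static Matrix4x4 RightToLeftHanded(Matrix4x4 rightHanded)
    {
        return FlipZ * rightHanded * FlipZ;
    }
}
```

Because the flip is applied on both sides, translations have their Z component negated and rotations change their sense of direction, which is exactly the right-to-left-handed change of basis.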
Versions: Unity 2019.3.0f6, Windows Mixed Reality 4.1.0, Mixed Reality Toolkit 2.2, Scene Understanding SDK 0.5.2064.
Thanks for sharing! We've been running aligned holograms with the sample app for several months, so the standalone path is known to work, but there may be subtleties with MRTK. We'll review this and try to find a unified solution that works in both.
No success unfortunately. Maybe I'm just doing something wrong, but I cannot see what it would be. Here is the code I use to initialize Scene Understanding:
public async void Start()
{
    await Task.Delay(4000);
    var inEditor = Application.isEditor;
    if (inEditor)
    {
        Debug.Log("Scene understanding falling back to editor-mode and pre-recorded mesh...");
        if (editorFallbackScene == null)
        {
            throw new System.ArgumentNullException(nameof(editorFallbackScene));
        }
        scene = Scene.Deserialize(editorFallbackScene.bytes);
    }
    else
    {
        Debug.Log("Scene understanding requesting access...");
        var access = await SceneObserver.RequestAccessAsync();
        if (access != SceneObserverAccessStatus.Allowed)
        {
            Debug.LogError("Access to scene understanding was denied");
            return; // bail out instead of continuing without access
        }
        Debug.Log("Scene understanding ok");
        Debug.Log("Scene understanding getting data...");
        scene = await GetData(10.0f, true, true, SceneMeshLevelOfDetail.Coarse);
    }
    if (scene != null)
    {
        Debug.Log("Scene understanding successfully started (object count: " + scene.SceneObjects.Count + ")");
        Spawn(sceneRootObject, scene, sceneMaterial);
    }
    else
    {
        Debug.LogError("Could not load scene understanding scene");
    }
}
Here is GetData():
protected async Task<Scene> GetData(float boundingSphereRadius, bool enableQuads, bool enableMeshes, SceneMeshLevelOfDetail lod)
{
    var settings = new SceneQuerySettings
    {
        EnableSceneObjectQuads = enableQuads,
        EnableSceneObjectMeshes = enableMeshes,
        RequestedMeshLevelOfDetail = lod
    };
    return await SceneObserver.ComputeAsync(settings, boundingSphereRadius);
}
Here are the Spatial Awareness settings I use:
The Spatial Awareness system always starts (I can see the wireframe model appearing), but Scene Understanding only works sometimes (the log gets stuck at "Scene understanding getting data..." on every other start). Either something is wrong with my code or there is a race condition somewhere?
Interesting... agreed that this should work. We'll try to set up a repro and then try to understand it. Can you tell me which OS version you are running?
I've run across similar weirdness in creating the PR for MRTK!
Specifically, the await pattern seems to be buggy, so I force synchronous behavior, which is not ideal. Try adding GetAwaiter().GetResult() and see if that clears things up.
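Applied to the Start() method above, the suggested workaround would look roughly like this. This is a sketch of the pattern only, reusing the variable and method names from the earlier snippet, not the final MRTK code:

```csharp
// Sketch of the synchronous workaround: block on the SU SDK tasks with
// GetAwaiter().GetResult() instead of awaiting them. This stalls the
// calling thread (not ideal), but sidesteps the flaky await behavior.
var access = SceneObserver.RequestAccessAsync().GetAwaiter().GetResult();
if (access != SceneObserverAccessStatus.Allowed)
{
    Debug.LogError("Access to scene understanding was denied");
    return;
}
scene = GetData(10.0f, true, true, SceneMeshLevelOfDetail.Coarse).GetAwaiter().GetResult();
```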
That did the trick, thanks!!
I switched to using Unity coroutines using this helper class:
using System.Threading.Tasks;
using UnityEngine;

public class TaskCoroutine : CustomYieldInstruction
{
    protected Task task;

    public TaskCoroutine(Task task)
    {
        this.task = task;
        if (this.task.Status == TaskStatus.Created)
        {
            this.task.Start();
        }
    }

    public override bool keepWaiting => !task.IsCompleted;
}

public class TaskCoroutine<T> : CustomYieldInstruction
{
    protected Task<T> task;

    public TaskCoroutine(Task<T> task)
    {
        this.task = task;
        if (this.task.Status == TaskStatus.Created)
        {
            this.task.Start();
        }
    }

    public override bool keepWaiting => !task.IsCompleted;

    public T Result => task.Result;
}
My updated script then becomes:
public IEnumerator Start()
{
    yield return new WaitForSeconds(4.0f);
    var inEditor = Application.isEditor;
    if (inEditor)
    {
        Debug.Log("Scene understanding falling back to editor-mode and pre-recorded mesh...");
        if (editorFallbackScene == null)
        {
            throw new System.ArgumentNullException(nameof(editorFallbackScene));
        }
        scene = Scene.Deserialize(editorFallbackScene.bytes);
    }
    else
    {
        Debug.Log("Scene understanding requesting access...");
        var accessTask = new TaskCoroutine<SceneObserverAccessStatus>(SceneObserver.RequestAccessAsync());
        yield return accessTask;
        if (accessTask.Result != SceneObserverAccessStatus.Allowed)
        {
            Debug.LogError("Access to scene understanding was denied");
            yield break; // bail out instead of continuing without access
        }
        Debug.Log("Scene understanding ok");
        Debug.Log("Scene understanding getting data...");
        var sceneTask = new TaskCoroutine<Scene>(GetData(10.0f, true, true, SceneMeshLevelOfDetail.Coarse));
        yield return sceneTask;
        scene = sceneTask.Result;
    }
    if (scene != null)
    {
        Debug.Log("Scene understanding successfully started (object count: " + scene.SceneObjects.Count + ")");
        Spawn(sceneRootObject, scene, sceneMaterial);
    }
    else
    {
        Debug.LogError("Could not load scene understanding scene");
    }
}
using an updated GetData():
protected Task<Scene> GetData(float boundingSphereRadius, bool enableQuads, bool enableMeshes, SceneMeshLevelOfDetail lod)
{
    var settings = new SceneQuerySettings
    {
        EnableSceneObjectQuads = enableQuads,
        EnableSceneObjectMeshes = enableMeshes,
        RequestedMeshLevelOfDetail = lod
    };
    return SceneObserver.ComputeAsync(settings, boundingSphereRadius);
}
Thanks again for the support @pinkwerks @sceneunderstanding-microsoft !
Glad it helped! This is a great example that will be useful when tracking down why the await pattern doesn't work as expected. @sceneunderstanding-microsoft
I guess we can close this now?
Yes, thanks again for the help!
I'm having trouble getting Scene Understanding to work with the Mixed Reality Toolkit 2 on HoloLens 2. Is this supposed to work?
I have written a small MonoBehaviour that retrieves a scene from the Scene Understanding SDK and then renders it as a Unity mesh, based on this sample. It works in the editor (using the serialized SU_Kitchen file) but only intermittently on the HL2. My mesh only renders every other time I start the project up (no changes).
If I disable automatic startup on the Spatial Awareness System of the MRTK2 it seems to be started automatically by the Scene Understanding SDK but it doesn't help with the intermittency.
The SDK seems to be very useful, I would love to be able to use it in my current project.