Open LimingRenPA opened 2 years ago
For context, RestSimulationStreamSink.Create works successfully against the HoloLens 2 emulator on this same machine.
Hello,
To answer your questions:
(1) The call works with "http://127.0.0.1:10080/" if the device is connected to my PC using a cable.
(2) I used the IP address from the HoloLens' Device Portal URL, which is the same IP I get from HoloLens Settings -> Wi-Fi -> Advanced options -> IPv4 address.
(3) HTTP gives the error "HTTP request failed: Temporary Redirect".
I got the same error using either UsbNcm or Wi-Fi (Device Portal -> System -> Networking):
Thank you for the additional details. I am investigating. This is expected to work regardless of WiFi or USB. I hope to have more information in the next 1-2 days.
Just a quick update. I'm continuing to investigate. It appears that the token is not being captured by the client when using https, so it is not being included in subsequent requests which results in the error when using https. The token cookie itself hasn't changed -- it is still being provided by the server, and it continues to be captured by our native C++ code, but the managed client appears to lose it. This could be a security-related change in the .NET Framework. I'm hoping to come up with a trivial workaround but do not have one yet.
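In the meantime, here is a small diagnostic sketch (plain .NET, not part of the Perception Simulation API) you could use to check whether the token cookie is actually being returned to the client over https. The cookie name "CSRF-Token" is an assumption on my part based on Device Portal's CSRF protection scheme, so treat the whole thing as illustrative only:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Connects to the Device Portal and prints whatever cookies the server sets,
// so you can see whether a CSRF token cookie comes back over https.
static async Task DumpDevicePortalCookiesAsync(string baseUrl, string user, string password)
{
    var cookies = new CookieContainer();
    var handler = new HttpClientHandler
    {
        CookieContainer = cookies,
        Credentials = new NetworkCredential(user, password)
    };

    using (var client = new HttpClient(handler))
    {
        // Any authenticated request against the portal root should set the token cookie.
        var response = await client.GetAsync(baseUrl);
        Console.WriteLine($"Status: {response.StatusCode}");

        foreach (Cookie cookie in cookies.GetCookies(new Uri(baseUrl)))
        {
            // Look for an entry resembling "CSRF-Token" (assumed name).
            Console.WriteLine($"{cookie.Name} = {cookie.Value}");
        }
    }
}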
Another issue: after connecting to my REAL HoloLens 2 using "http://127.0.0.1:10080" and running some code to move my right hand, the HoloLens device stops responding to any hand gestures (even after reboot).
My code:
var protocol = "http";
emulatoreIpaddress = "127.0.0.1:10080";
var uri = new Uri($"{protocol}://{emulatoreIpaddress}");
sink = await RestSimulationStreamSink.Create(
// use the IP address for your device/emulator
uri,
// no credentials are needed for the emulator
new System.Net.NetworkCredential("???", "????"),
// normal priority
true,
// cancel token
token);
var manager = PerceptionSimulationManager.CreatePerceptionSimulationManager(sink);
manager.Reset();
// Activate the right hand
manager.Human.RightHand.Activated = true;
var human = (ISimulatedHuman2)manager.Human;
var rightHand = (ISimulatedHand2)(human.RightHand);
float x = 0.05f;
float y = 0;
float z = 0;
rightHand.Move(new Vector3(x, y, z));
Thread.Sleep(2000);
This is what I see, but I cannot enter the PIN (the HoloLens is not responding to any hand movements).
Thanks,
This suggests that you left the device in Simulation mode. Your code should restore the original Control Mode upon exit. To manually change it, go to the Simulation page in Device Portal and set the Control Mode dropdown to 'Default'.
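As a minimal sketch of that cleanup (uri, credentials, and token are the same values used in the connection snippet above; this assumes RestSimulationStreamSink implements IDisposable and that releasing it closes the simulation connection so the device can leave Simulation mode — if it does not on your build, use the Device Portal Simulation page as described above):

RestSimulationStreamSink sink = null;
try
{
    sink = await RestSimulationStreamSink.Create(uri, credentials, true, token);
    var manager = PerceptionSimulationManager.CreatePerceptionSimulationManager(sink);
    // ... drive the simulated human here ...
}
finally
{
    // Release the connection so the device is not left in Simulation mode.
    if (sink != null)
    {
        sink.Dispose();
    }
}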
You are right, thanks
We are seeing exceptions after a few tries (running the same code, plus a finger press and release, against a real HoloLens 2 device).
(0) The device is in Default mode. (1) RestSimulationStreamSink.Create works. (2) Then any subsequent call generates an exception, as in the pic below. The exception message is
{"Exception of type 'System.Exception' was thrown."}
and no inner exception.
The device stays in this state (an exception whenever Perception Simulation code is used), but I can still use the device once it is back in Default control mode.
Let me know if you need anything.
@LimingRenPA any updates on your end? Were you able to resolve this issue on your own or still need further assistance?
Nothing has changed on my side, thanks.
I also faced the connection issue when trying to use Perception Simulation on my real HoloLens 2. I solved the problem this way:
I disabled the "SSL connection required" option in the Preferences page of the Windows Device Portal and then used an HTTP connection instead of HTTPS.
Moreover, using a USB connection also works. Just change the URL to "http://127.0.0.1:10080/" as in the previous discussion. Hope this helps.
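For example (192.168.1.123 is a made-up address; use your own device's IPv4 address from Settings -> Wi-Fi):

// Over Wi-Fi, with "SSL connection required" disabled in Device Portal preferences:
var deviceIp = "192.168.1.123";
var wifiUri = new Uri($"http://{deviceIp}/");
// Or over USB, with Device Portal forwarded to localhost:
var usbUri = new Uri("http://127.0.0.1:10080/");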
This issue is addressed in the preview release: https://learn.microsoft.com/en-us/windows/mixed-reality/develop/advanced-concepts/perception-simulation-standalone
Yeah, I also tried the preview release. It works fine, but it looks like it cannot meet my requirement. I want to write an auto-test tool for my application, which needs head movement and gaze movement in the headset. But the preview release can only control the movement via the keyboard and mouse; it can neither record the actual movement nor use code to control the movement. Am I right? Or can the preview release actually do that and I missed it?
The preview release has everything that's in the existing product and adds more - it does not remove anything. You can still control head, hands, eyes, controllers, etc. via API calls or manual input.
Sounds great! Can I record my physical movement while wearing the headset and play it back? I find that the preview can only record keyboard- and mouse-controlled movement, but cannot record my physical movement when I wear the headset.
Recording using physical headset movements is not enabled in the current preview build. Please stay tuned!
Thanks. So if I want to play back my physical movement, I should collect the sensor data first and then convert it into the Perception Simulation format to play back my movement. Is that right?
We will collect the head's 6DoF pose (x, y, z, yaw, pitch, roll), gaze position (x, y, z), and gaze direction (x, y, z). Can I replay this movement using Perception Simulation?
That is correct - you can capture the data yourself and create a Simulation recording that will play back. You can create the recording using the Simulation APIs - PerceptionSimulationManager.CreatePerceptionSimulationRecording to create the recording, then use the movement APIs and pass in the data from your capture. The resulting .xef file can be played back using either the user interface or PerceptionSimulationManager.LoadPerceptionSimulationRecording.
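A rough sketch of that flow is below. The signatures are from memory of the overview page, so double-check them against the API reference; the file path, the example movement values, and the PassthroughSinkFactory class are placeholders you would replace with your own capture data and setup.

// 1) Record: write your captured motion into an .xef file by driving a manager
//    whose sink is a recording sink.
ISimulationStreamSink recordingSink =
    PerceptionSimulationManager.CreatePerceptionSimulationRecording(@"C:\temp\capture.xef");
IPerceptionSimulationManager recorder =
    PerceptionSimulationManager.CreatePerceptionSimulationManager(recordingSink);

// Replay each captured sample as movement/rotation calls (values here are arbitrary).
recorder.Human.Move(new Vector3(0.0f, 0.0f, 0.10f));          // head/body position delta
recorder.Human.Head.Rotate(new Rotation3(0.05f, 0.0f, 0.0f)); // head orientation delta
((ISimulatedHead2)recorder.Human.Head).Eyes.Rotate(new Rotation3(0.0f, 0.02f, 0.0f)); // gaze
Thread.Sleep(33);                                              // pacing between samples

// 2) Play back: load the .xef and send it to the device/emulator sink created earlier.
ISimulationRecording recording =
    PerceptionSimulationManager.LoadPerceptionSimulationRecording(
        @"C:\temp\capture.xef", new PassthroughSinkFactory(sink));
recording.Play();

// Placeholder factory that simply hands back the existing device sink
// (interface member name taken from the overview page; verify against your headers).
class PassthroughSinkFactory : ISimulationStreamSinkFactory
{
    private readonly ISimulationStreamSink _sink;
    public PassthroughSinkFactory(ISimulationStreamSink sink) { _sink = sink; }
    public ISimulationStreamSink CreateSimulationStreamSink() { return _sink; }
}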
Thanks! I am a newbie to C# and am confused about how to call the API. I am looking at this API documentation: https://learn.microsoft.com/en-us/windows/mixed-reality/develop/advanced-concepts/perception-simulation-overview
Now I want to use ISimulatedEyes to get/set gaze data. ISimulatedEyes is on ISimulatedHead2. To use ISimulatedHead2, the documentation says "Additional properties are available by casting an ISimulatedHead to ISimulatedHead2".
But I don't know how to cast ISimulatedHead to ISimulatedHead2. I am using the example code from https://learn.microsoft.com/en-us/windows/mixed-reality/develop/advanced-concepts/perception-simulation-overview, and the code uses ISimulatedHead just like this:
IPerceptionSimulationManager manager = PerceptionSimulationManager.CreatePerceptionSimulationManager(sink);
manager.Human.Head.Rotate(new Rotation3(0.04f, 0.0f, 0.0f));
Where should I cast ISimulatedHead to ISimulatedHead2, and how do I do that?
I just tried the following code and it looks like it works. But I could not feel any gaze movement when I wear the headset. Could you help me find the reason?
ISimulatedHead head = manager.Human.Head;
ISimulatedHead2 head2 = head as ISimulatedHead2;
ISimulatedEyes eyes = head2.Eyes;
eyes.Rotate(new Rotation3(0.04f, 0.17f, 0.88f));
Thread.Sleep(2000);
eyes.Rotate(new Rotation3(0.94f, 0.0f, 0.17f));
Thread.Sleep(2000);
eyes.Rotate(new Rotation3(0.24f, 0.3f, 0.17f));
Thread.Sleep(2000);
eyes.Rotate(new Rotation3(0.1f, 0.4f, 0.17f));
Thread.Sleep(2000);
There is no visual indicator in the headset when moving the eyes unless the application you're running on the headset has visual feedback of eye movement (i.e., the system has no built-in eye gaze indicator).
Also note that if you'd prefer to use C++, the preview version of Perception Simulation now includes headers and libs for C++ and the documentation page includes a code sample in C++. All functionality available in C# is available in C++. (The C# API is basically a wrapper over the C++ API.)
Thank you very much! I would like to ask whether it is possible to control the head and eye position. I notice that the ISimulatedHead and ISimulatedEyes interfaces only let us set the rotation, not the position, but we want to control the 6DoF movement of the head and gaze. How could we do that?
The eyes are attached to the head and the head to the body, so if you move the ISimulatedHuman (Manager.Human.Move()), the head and eyes move with it in the world. You cannot detach the eyes from the head or the head from the body just as with a real living human.
Note that if you are looking to modify the human characteristics, such as changing the distance between the eyes for purposes of changing the stereo rendering parameters, you can do that via ISimulatedDevice2 (Manager.Device as ISimulatedDevice2).DisplayConfiguration
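A minimal sketch of that combination (manager is the IPerceptionSimulationManager from the earlier snippets; the specific numbers are arbitrary):

// Translate the whole simulated human in the world -- the head and eyes ride along.
manager.Human.Move(new Vector3(0.0f, 0.0f, 0.25f));          // 25 cm forward

// Orient the head relative to the body, and the eyes relative to the head.
manager.Human.Head.Rotate(new Rotation3(0.1f, 0.0f, 0.0f));
var head2 = (ISimulatedHead2)manager.Human.Head;
head2.Eyes.Rotate(new Rotation3(0.0f, 0.05f, 0.0f));

// Device-level characteristics (e.g. stereo rendering parameters) are reached via
// the display configuration; check ISimulatedDevice2 in the API reference for the
// exact members available on it.
var device2 = (ISimulatedDevice2)manager.Device;
var displayConfig = device2.DisplayConfiguration;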
Background info: We are exploring how to create some automated UI tests for our HoloLens 2 app.
I have a HoloLens 2 device which is connected to my PC (fully updated Windows 10 Pro). We have also enabled Developer Mode on the HoloLens.
The Device Portal over https works fine after downloading the HoloLens certificate (see the attached pic). However, with the correct HoloLens user name and password, RestSimulationStreamSink.Create returns the exception {"HTTP request failed: CSRF Token Invalid"} (please scroll down to see a screenshot).
My questions: (1) Does Perception Simulation work with a real HoloLens 2? (2) Is Perception Simulation a good way to create automated HoloLens UI tests? (3) If yes, what did we do wrong?
Sincerely,