WEKIT-ECS / MIRAGE-XR

MirageXR is a reference implementation of an XR training system. MirageXR enables experts and learners to share experience via XR and wearables using ghost tracks, realtime feedback, and anchored instruction.

Marker-based task stations #147

Closed wekitecs closed 2 years ago

wekitecs commented 4 years ago

In GitLab by @fwild on Oct 15, 2020, 17:43

In the future, it would be cool to have the possibility of attaching a marker augmentation, which allows the user to take a photo of an object (or select a pretrained image target) and thus move task stations around together with the marker. Useful, for example, for attaching content to the front panel of an ultrasound machine that is moved around on a trolley.

wekitecs commented 3 years ago

In GitLab by @fwild on Jan 6, 2021, 13:27

Could be related to OpenPose body tracking and AR Foundation face tracking.

wekitecs commented 3 years ago

In GitLab by @fwild on Jan 14, 2021, 12:52

Can be used to implement the place-the-flag game with the Africa map (the map is used as the marker; pick & place glyphs are used to place the flags).

wekitecs commented 3 years ago

In GitLab by @fwild on Jan 14, 2021, 13:07

First, simply create the marker on Vuforia and import the marker database into MirageXR; we can then later work on the upload using the Vuforia API.

wekitecs commented 3 years ago

In GitLab by @william.guest on Jan 28, 2021, 15:19

For 2D markers only.

wekitecs commented 3 years ago

In GitLab by @robhillman97 on Feb 1, 2021, 11:19

Update on the current state of this:

[gif: actionMenu]

You can now create an image marker augmentation from the action selection menu.

[gif: IMUI]

When selected, this is the current UI for the image marker augmentation. This will change, as I need to add the ability to select the 3D object/task station used with the image marker.

[gif: takePhoto]

Clicking the shutter button on the left takes a photo using the device camera.

[gif: Crop]

The image can then be cropped using the scroll bar at the bottom and repositioned by dragging the image; once it is in the correct location, click the crop button on the right to save the cropped image.
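The crop step described above boils down to a little window arithmetic. Here is an illustrative sketch of that math (MirageXR itself is Unity/C#; Python is used here only to show the logic, and all names are hypothetical, not MirageXR's actual API): a zoom factor from the scroll bar plus a drag offset define a square crop window, clamped so it never leaves the photo.

```python
def crop_rect(img_w, img_h, zoom, offset_x, offset_y):
    """Return (x, y, size) of a square crop window over an img_w x img_h photo.

    zoom               -- 1.0 selects the largest centred square; larger zooms in
    offset_x, offset_y -- drag offset of the image under the fixed crop window
    """
    size = int(min(img_w, img_h) / max(zoom, 1.0))
    x = (img_w - size) // 2 - offset_x
    y = (img_h - size) // 2 - offset_y
    # Clamp so the window stays fully inside the photo.
    x = max(0, min(x, img_w - size))
    y = max(0, min(y, img_h - size))
    return x, y, size
```

For a 1920x1080 photo at zoom 1.0 with no drag, this yields a centred 1080-pixel square; dragging far past the edge simply pins the window at the border.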

[gif: ImageMarker]

Clicking the accept button creates an image marker using the cropped image and spawns a 3D object at the image marker's location. I am having issues getting the image marker to spawn in the right location, hence the gif being in scene view; however, I think this is because the image marker is created relative to my laptop's webcam, and the object does move with the image marker as it should.
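The spawn-location issue described here is consistent with the marker pose being reported in the camera's coordinate frame: placing it in the world means composing it with the camera's own world pose. A minimal, position-only sketch of that conversion (illustrative Python, not MirageXR code; in Unity this is what `Transform.TransformPoint` does):

```python
def mat_vec(m, v):
    """Multiply a 3x3 rotation matrix (nested lists) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def marker_world_position(cam_pos, cam_rot, marker_in_cam):
    """cam_pos: camera position in world space; cam_rot: camera rotation
    (3x3, camera-to-world); marker_in_cam: marker position as reported
    relative to the camera. Returns the marker's world-space position."""
    rotated = mat_vec(cam_rot, marker_in_cam)
    return [cam_pos[i] + rotated[i] for i in range(3)]
```

If the spawned object uses the camera-relative position directly as a world position, it will appear offset exactly as described, while still following the marker correctly.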

Still to do:

wekitecs commented 3 years ago

In GitLab by @wild on Feb 1, 2021, 11:36

Looks awesome - looking forward to testing it :)

wekitecs commented 3 years ago

In GitLab by @wild on Feb 4, 2021, 23:22

Ready to merge? Or still debugging for HL? Would love to try the feature... ;)

wekitecs commented 3 years ago

In GitLab by @robhillman97 on Feb 5, 2021, 13:06

Still debugging on the HoloLens at the moment. It seems to create the trackable object correctly with the cropped image, and I am not getting any errors when it runs; however, it is not recognising the marker. I'm looking into it now. It looks like it could be related to the image taken by the HoloLens being distorted, due to a mismatch between the aspect ratio of the HoloLens camera and the aspect ratio expected by the image marker. I will keep digging and give an update when I figure out exactly what's going on.
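One common way to avoid the distortion described above is to centre-crop the photo to the aspect ratio the image target expects before building the trackable. This is only an illustrative sketch of that idea (Python for clarity; this is not the actual fix from the codebase):

```python
def center_crop_to_ratio(w, h, target_ratio):
    """Return (x, y, new_w, new_h): the largest centred crop of a
    w x h photo whose aspect ratio equals target_ratio (width / height)."""
    if w / h > target_ratio:          # photo too wide: trim the sides
        new_w, new_h = int(h * target_ratio), h
    else:                             # photo too tall: trim top/bottom
        new_w, new_h = w, int(w / target_ratio)
    return (w - new_w) // 2, (h - new_h) // 2, new_w, new_h
```

For a 1920x1080 camera frame and a square image target, this trims 420 pixels off each side instead of squashing the image, which would otherwise change the feature geometry the tracker relies on.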

wekitecs commented 3 years ago

In GitLab by @wild on Feb 16, 2021, 18:10

Did you find the issue? Camera handling and Vuforia have been a pain in the neck in the past...

wekitecs commented 3 years ago

In GitLab by @wild on Feb 18, 2021, 11:54

@robhillman97 Can we call this augmentation 'detect' and link it from the 'glyphs' menu? My general UI concept proposal is that we rename glyphs to 'actions' and then make all of them more functional, bit by bit. See also this idea for Sprint 6: https://platform.xr4all.eu/wekit-ecs/mirage-xr/issues/258 (allow "triggering" a state change when gazed at for a certain duration, to move to the next step).

Nota bene: "locate" and "detect" are two very different functions! "Locate" is about finding something; once found, we can move on, hence the trigger. "Detect" is about doing something with what has been detected, so it is more about displaying all the other attached augmentations, in particular any visual overlays.

In ARLEM terms: I think that once a marker "detect" augmentation is added to a task station, it modifies all other augmentations attached to it, adding "target": "mymarker" to each of them. This way, we just need to configure the "detect" glyph with the marker as you have already programmed, but can then use it in combination with all the other augmentations.

@BenediktHensen would be great if you can take a look - does this make sense?
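The ARLEM idea sketched above — a "detect" augmentation tagging its siblings with its marker — can be expressed compactly. This is only an illustrative sketch of the proposal (Python, hypothetical field names apart from "target"; nothing here is MirageXR's actual ARLEM handling):

```python
def apply_detect_marker(augmentations):
    """augmentations: list of dicts for one task station, where at most
    one entry is a 'detect' augmentation carrying a 'marker' name.
    Returns a new list in which every other augmentation gains
    "target": <marker>, as proposed above."""
    marker = next((a["marker"] for a in augmentations
                   if a.get("type") == "detect"), None)
    if marker is None:
        return augmentations          # no detect glyph: nothing changes
    return [a if a.get("type") == "detect"
            else {**a, "target": marker}
            for a in augmentations]
```

The point of the design is that only the "detect" glyph needs to know about the marker; all other augmentations pick up the anchoring implicitly.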

wekitecs commented 3 years ago

In GitLab by @william.guest on Feb 28, 2021, 12:27

The image target issue is being addressed in #311. Moving to Sprint 6.

wekitecs commented 3 years ago

In GitLab by @a85jafari on Apr 6, 2021, 18:37

Closed.