This Unity package allows you to make annotations on arbitrary Unity scenes of architectural sites. The generated annotation data can be transformed into real-world coordinates, and because the data format is JSON-LD, it can be cross-referenced with other open databases.
VR-Annotate is a research tool of the project »Analogue Storage Media II - Auralisation of Archaeological Spaces«, run by the Cluster of Excellence »Image Knowledge Gestaltung« at Humboldt-Universität zu Berlin.
Research project contact: Una Ulrike Schäfer
It is developed by Framefield (framefield.com).
This project is released under the MIT license.
The package includes the following non-Unity dependencies:
Once you have added the required prefabs to a part of your scene, you can use the Vive controller to select its geometry, browse the hierarchical structure of the scene, place comments on certain parts of the geometry by pressing the trigger and using a virtual keyboard, or read the comments of other users.
The package consists of several components:
1. Open the demo scene `stoa-demo.unity` to start with. The `[geometry]` folder is where your annotatable geometry GameObject should go (see `stoa_example`).
2. Replace the geometry / annotation target (if you do not want to add your own custom target and just use the demo content, continue at step 7): place your own geometry inside `[geometry]`.
3. Use `vr-annotate/Generate GUIDs for all Children` to generate unique IDs for every node of the geometry.
4. Select its root object and add the component `Target` to make it an annotatable target.
5. You can use a local directory or a REST server to store and load annotations and targets. In the `Target` component on the root object of the annotation target, set your database location.
6. Use the file `Assets/vr_annotate/ff.vr.annotate/helpers/Serialization.cs` to define the local or remote URI you want to use.
7. Your geometry is now ready to be annotated. On app startup, Unity uses the GUIDs to check whether the target already exists in the database; it adds the target if necessary and loads all annotations that you created in a previous session.
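The storage configuration could look roughly like the following sketch. The constant names and the boolean switch are illustrative assumptions, not the actual contents of `Serialization.cs`; only the `Assets/db/` path and the `localhost:8301` mock-server address come from this document.

```csharp
// Hypothetical sketch of the storage configuration -- the names below are
// assumptions for illustration, not the package's real API.
public static class SerializationSettings
{
    // Local storage: annotations end up as JSON files on disk.
    public const string LocalAnnotationDirectory = "Assets/db/";

    // Remote storage: base URI of the REST mock server.
    public const string RemoteBaseUri = "http://localhost:8301/";

    // Switch between the two back-ends.
    public const bool UseRemoteStorage = false;
}
```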
To set up your own scene, open the scene `vr-annotate-example` and copy the following groups into your scene:

1. `[CameraRig]` for SteamVR, `InteractiveController`, and `Teleportation` (alternatively, use the `vr-annotate/SteamVR_WithTeleportation` prefab).
2. Add the `vendors/punchkeyboard/prefabs/DrumStickForController` prefab to both controllers (the `InteractiveController`s and the `DrumStickForController` instances).
3. Select the root object of your geometry, search for the component `Target` and add it. This will scan the geometry of your node when starting the scene and make it annotatable.
4. Use `vr-annotate/Generate GUIDs for all TargetNodes` to generate the missing GUIDs.
5. Place a `TeleportationZone` prefab in your scene.
6. Add a `GeolocationMarker` prefab.
7. Until we have a proper database, the current user is defined through the parameters of a place-holder script: add the `CurrentUserDefinition` prefab to your scene.

The following section gives a quick introduction into the code structure. The folder structure within the package follows the namespaces of the C# scripts. The package consists of several namespaces, each focusing on a different aspect.
The MonoBehaviours in this namespace scan a nested structure of GameObjects within a Unity scene, look for MeshRenderers, and use bounding boxes to build a data representation of the scene that can be used for hit-testing and picking. We decided to implement our own ray hit-testing because creating many instances of Unity colliders had a big performance impact and required manipulating your scene. By contrast, just scanning the scene and saving an interpretation as data allowed us to use this data for highlighting, picking, and storing precise target references for annotations.
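The bottom-up scan can be sketched like this. The types are simplified stand-ins for the package's node classes, not its actual API: each node unions its own mesh bounds with those of its children, so every level of the hierarchy carries bounds usable for hierarchical hit-testing.

```csharp
using System;
using System.Collections.Generic;

// Illustrative axis-aligned bounding box (the real code would use Unity's Bounds).
struct Aabb
{
    public double MinX, MinY, MinZ, MaxX, MaxY, MaxZ;

    public Aabb Union(Aabb other) => new Aabb
    {
        MinX = Math.Min(MinX, other.MinX),
        MinY = Math.Min(MinY, other.MinY),
        MinZ = Math.Min(MinZ, other.MinZ),
        MaxX = Math.Max(MaxX, other.MaxX),
        MaxY = Math.Max(MaxY, other.MaxY),
        MaxZ = Math.Max(MaxZ, other.MaxZ),
    };
}

class SceneNode
{
    public string Name;
    public Aabb? MeshBounds;                       // set if this node has a MeshRenderer
    public List<SceneNode> Children = new List<SceneNode>();
    public Aabb? CombinedBounds;                   // filled in by Scan()

    // Bottom-up scan: a node's combined bounds enclose its own mesh and all children.
    public Aabb? Scan()
    {
        Aabb? result = MeshBounds;
        foreach (var child in Children)
        {
            var childBounds = child.Scan();
            if (childBounds == null) continue;
            result = result == null ? childBounds : result.Value.Union(childBounds.Value);
        }
        CombinedBounds = result;
        return result;
    }
}
```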
The components in the `ff.annotation` namespace handle creating, serializing, and interacting with annotations. Most elements are combined into prefabs.
The components inside this namespace help to convert Unity coordinates into real-world WGS84 coordinates. For this, you place two or more markers in your Unity scene and add precise geo-location coordinates for these markers. Good candidates are existing real-world remains like columns or base structures. From these markers, Unity positions can be converted into latitude/longitude.
This can be very useful for…
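As a rough illustration of the marker-based mapping, the sketch below interpolates between two reference markers. It is a deliberate simplification: it assumes the Unity X/Z axes are already aligned with east/north and ignores any rotation between the two frames, which the real component would have to handle.

```csharp
using System;

// Simplified two-marker mapping (not the package's actual math): given two
// reference markers with known Unity positions and WGS84 coordinates, linearly
// interpolate/extrapolate lat/long for any other Unity position.
static class GeoConversion
{
    public static (double Lat, double Lon) UnityToLatLon(
        (double X, double Z) p,
        (double X, double Z) markerA, (double Lat, double Lon) geoA,
        (double X, double Z) markerB, (double Lat, double Lon) geoB)
    {
        // Linear map per axis: Z drives latitude (north), X drives longitude (east).
        double lat = geoA.Lat + (p.Z - markerA.Z) * (geoB.Lat - geoA.Lat) / (markerB.Z - markerA.Z);
        double lon = geoA.Lon + (p.X - markerA.X) * (geoB.Lon - geoA.Lon) / (markerB.X - markerA.X);
        return (lat, lon);
    }
}
```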
The components in these namespaces are related to showing additional information about the current selection. Currently, selections can be `Node`s inside the NodeGraph and `AnnotationGizmos`. There are several methods to change the selection (clicking on objects, clicking into the NodeGraphOutliner, etc.). The `SelectionManager` handles the currently active selection of objects implementing the `ISelectable` interface. Components like the `NodeSelectionMarker` or the `InfoPanels` subscribe to the `SelectionChanged` event to update their content.
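The underlying pattern is a plain C# event. A minimal sketch follows; the class and member names mirror the text above, but the real components are Unity MonoBehaviours and differ in detail:

```csharp
using System;
using System.Collections.Generic;

// Anything that can be selected (Nodes, annotation gizmos, ...) implements this.
interface ISelectable { }

class SelectionManager
{
    // Raised whenever the active selection changes.
    public event Action<List<ISelectable>> SelectionChanged;

    private readonly List<ISelectable> _selection = new List<ISelectable>();

    public void SetSelection(params ISelectable[] items)
    {
        _selection.Clear();
        _selection.AddRange(items);
        SelectionChanged?.Invoke(_selection);   // e.g. InfoPanels update their content here
    }
}
```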
GUIDs are used to identify Targets, Annotations and Nodes.
The following schema is used for 3d models within Maya, Cinema4D, Unity, etc.:

**name** + **#** + **GUID**

e.g.:

**Crepidoma#936DA01F-9ABD-4D9D-80C7-02AF85C822A8**

The **name** part is not used to identify the object and can be changed at any time.
Workflow
The 3d modeller creates the geometry and decides on the topology of the model (its number of submeshes and their relationships).
Before the model is exported for Unity, a GUID should be generated for each submesh and stored as a suffix of its name:
`Mesh#936DA01F-9ABD-4D9D-80C7-02AF85C822A8`
The model is imported into Unity and used within a scene as a Target.
In case the modeller did not identify all submeshes correctly, you can use the Unity helper `vr-annotate/Generate GUIDs for all TargetNodes` to generate the missing GUID suffixes.
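The suffix convention can be handled with a few lines of string processing. The helper below is illustrative, not part of the package; it splits a node name from its GUID suffix and appends a fresh GUID where one is missing:

```csharp
using System;

// Sketch of the "name#GUID" convention: the GUID travels as a suffix of the
// node name, separated by '#', so tools can split and re-attach it safely.
static class NodeNaming
{
    // Splits "Crepidoma#936DA01F-..." into display name and GUID.
    public static (string Name, Guid? Guid) Split(string nodeName)
    {
        var hashIndex = nodeName.LastIndexOf('#');
        if (hashIndex >= 0 && Guid.TryParse(nodeName.Substring(hashIndex + 1), out var guid))
            return (nodeName.Substring(0, hashIndex), guid);
        return (nodeName, null);   // no (valid) suffix yet -> a GUID must be generated
    }

    // Appends a fresh GUID to a node name that does not carry one yet.
    public static string EnsureGuidSuffix(string nodeName)
        => Split(nodeName).Guid != null
            ? nodeName
            : $"{nodeName}#{Guid.NewGuid().ToString().ToUpperInvariant()}";
}
```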
As defined in neonion-rest, Target and Annotation IDs declare their type in a prefix, like **type** + **:** + **GUID**, e.g. `target:234234-234234-234234`.
Workflow

In Unity, all Target and Annotation IDs are created automatically in the right format before serialization.
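For illustration, producing IDs in that format is a one-liner. This is a hypothetical helper, not the package's code (note that `Guid.ToString()` yields lowercase hex, while the examples in this document mix cases):

```csharp
using System;

// Sketch of the neonion-rest ID convention: "<type>:<guid>".
static class AnnotationIds
{
    public static string ForTarget(Guid guid) => $"target:{guid}";
    public static string ForAnnotation(Guid guid) => $"annotation:{guid}";
}
```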
At the moment Targets and Annotations are stored on a local mock server:
Targets are stored at:
**localhost:8301/targets/**
e.g.
**localhost:8301/targets/target:936DA01F-9ABD-4D9D-80C7-02AF85C822A8**
Annotations at:
**localhost:8301/targets/targetid/annotations/**
e.g.
**localhost:8301/targets/target:936DA01F-9ABD-4D9D-80C7-02AF85C822A8/annotations/annotation:AB12E3GD-9ABD-4D9D-80C7-KH6TGE8DNB7W**
They are stored using the following JSON schemas:
Following the JSON-LD format was an important requirement for the implementation (for details, please refer to +vr-annotate / Standards and this comment). During development, we invested some time into suggesting a JSON schema which closely follows the W3C Web Annotation standard but makes some changes where necessary. The implementation of the serialization is done in `Annotation.cs`. We started with a lightweight template mechanism for quick iterations. For now, annotations are stored as JSON files on disk at `Assets/db/*.json`. Once a database or API is available, it should be integrated, and the JSON serialization in `Annotation.cs` should be properly implemented through structs and classes.
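Such a struct/class-based serialization could start like the sketch below. It covers only a small fragment of the schema, and the choice of `System.Text.Json` is an assumption for illustration; a Unity project might use `JsonUtility` or Json.NET instead:

```csharp
using System.Text.Json;
using System.Text.Json.Serialization;

// Fragment of an annotation DTO mapped onto the JSON schema shown below.
// Property names follow that schema; the serializer choice is an assumption.
class AnnotationDto
{
    [JsonPropertyName("@context")]
    public string Context { get; set; } = "http://www.w3.org/ns/anno.jsonld";

    [JsonPropertyName("id")]
    public string Id { get; set; }

    [JsonPropertyName("type")]
    public string Type { get; set; } = "Annotation";
}
```

Usage would be a plain round-trip, e.g. `JsonSerializer.Serialize(new AnnotationDto { Id = "annotation:..." })`.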
The JSON schema should be treated as work in progress and leaves many details with potential for refinement. Especially details on simulation state and archaeological time formats need further work (see the section on future work below).
```json
{
  "@context": "http://www.w3.org/ns/anno.jsonld",
  "id": "annotation:0a1d64db-5b3d-41b8-8a92-14e8493a2fc2",
  "type": "Annotation",
  "creator": {
    "id": "_alan",
    "name": "Alan",
    "email": "alan@google.com"
  },
  "created": "9/20/2017 3:27:55 PM",
  "generator": {
    "id": "http://vr-annotator/v/01",
    "type": "Software",
    "name": "VR-Annotator v0.1"
  },
  "body": [
    {
      "type": "TextualBody",
      "purpose": "describing",
      "value": "annotation"
    }
  ],
  "target": {
    "id": "{target:8fa0f93c-6b2a-4eb0-bd16-3ddccca72e64}",
    "type": "http://vr-annotator/feature/",
    "targetNodeName": "Mauer_lp",
    "state": {
      "type": "VRSimulation",
      "refinedBy": {
        "type": "SimulationTime",
        "sourceDate": "302 BC",
        "timeOfDay": "18:23:12"
      }
    },
    "selector": {
      "type": "nodeGraphPath",
      "guidPath": "8fa0f93c-6b2a-4eb0-bd16-3ddccca72e64/84280e5f-ab67-4ef7-bce6-2bd71d922152/"
    },
    "annotationPosition": {
      "position": {
        "x": -2.088443,
        "y": 1.86834,
        "z": -2.16
      },
      "type": "GeoCoordinates",
      "coordinateSystem": "Unity.WorldSpace",
      "latitude": 37.97617,
      "longitude": 23.7223,
      "elevation": 1.879078E-05
    },
    "viewPortPosition": {
      "position": {
        "x": 0.8888178,
        "y": 0.9042585,
        "z": -7.195422
      },
      "type": "GeoCoordinates",
      "coordinateSystem": "Unity.WorldSpace",
      "latitude": 37.97619,
      "longitude": 23.72224,
      "elevation": 9.094559E-06
    }
  }
}
```
```json
{
  "@context": {
    "@vocab": "http://www.w3.org/ns/target.jsonld",
    "@base": "http://annotator/target/"
  },
  "id": "{TargetGUID}",
  "@type": "AnnotationTarget",
  "creator": { "id": "_alan", "name": "Alan", "email": "alan@google.com" },
  "created": "7/10/2017 7:02:44 PM",
  "generator": {
    "id": "http://vr-annotator/v/02",
    "type": "Software",
    "name": "VR-Annotator v0.2"
  },
  "interpretation": {
    "refinedBy": {
      "modellerName": "",
      "modellingSoftware": "",
      "references": []
    }
  },
  "nodeGraph": [
    {
      "type": "AnnotatableNode",
      "@id": "{{TargetGUID}}",
      "children": [
        {
          "type": "AnnotatableNode",
          "@id": "{{ChildnodeNodeGUID}}",
          "children": [
            ...
          ]
        }
      ]
    }
  ]
}
```
During picking of `TargetNodes`, we use three types of raycast hit-tests to determine whether an object was hit. The first test, against the global bounding box, is the fastest but least precise, while the test against the mesh collider is precise but expensive. Only if the global bounding box is hit is the next, more precise hit-test carried out, and so on.
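The early-out cascade can be sketched independently of Unity. The stages here are opaque callbacks standing in for the bounding-box and mesh-collider tests; the point is that a later, more expensive stage only runs when every cheaper stage before it reported a hit:

```csharp
using System;
using System.Collections.Generic;

// Illustrative staged hit-test (not the package's API): run the cheapest test
// first and only fall through to the next, more expensive stage on a hit.
static class StagedHitTest
{
    // Each stage answers "does the ray hit?"; ordered cheap -> expensive,
    // e.g. global bounding box -> node bounding box -> mesh collider.
    public static bool Hits(IEnumerable<Func<bool>> stagesCheapToExpensive)
    {
        foreach (var stage in stagesCheapToExpensive)
            if (!stage()) return false;   // early out as soon as one stage misses
        return true;                      // every stage, including the precise one, hit
    }
}
```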
Optimizing …
Models must not use “/” in GameObject names for geometry nodes inside `NodeGraph`s.
Not yet, but porting the `InteractiveController` script to the Rift should be very easy.
Currently, the interaction relies heavily on pointing and clicking with a controller, but adapting the interaction to trigger by gaze should be possible.
We suggest the following steps to build on this groundwork:
Expand models for Users and AnnotationSeries to allow cross-referencing and filtering annotations.
Currently all annotations are always visible. Eventually these need to be filtered by things like: Tags, users, date-ranges, etc.
Examples would be time of day, weather conditions, year etc.
Examples could be:
With technology like HoloLens and location markers, it is very feasible to show markers created in VR within their real-world context. You could also port the components to run on the HoloLens itself to generate annotations.