fernandojsg / aframe-input-mapping-component

https://fernandojsg.github.io/aframe-input-mapping-component
MIT License

Discussion: extensibility #2

Open netpro2k opened 6 years ago

netpro2k commented 6 years ago

Looking into options for #1 I wanted to think through the general issue of extensibility. There are going to be more things people want to configure about their input mappings. I thought this through for a bit and came up with 3 options:

1. Do Nothing

The scope of this component remains simply mapping a set of named events for a specific set of controllers in a set of states, to another set of named events. Any customization beyond that should be handled outside of this component.

For the dpad case this means either creating a separate component to handle listening to controller events and broadcasting a different set of events (as I did above), which can then be mapped by the input-mapping-component, or simply moving this functionality into the relevant controller components themselves (similar to how triggers, touchpads, etc. are handled now). The latter seems somewhat preferable, since otherwise each such component might need a way to detect different controllers and configure behavior between them.
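For reference, the separate-component route might look something like the following minimal sketch (the component name and emitted dpad event names are hypothetical; axismove and touchpaddown are the existing controller events):

AFRAME.registerComponent("touchpad-dpad", {
  init: function () {
    this.axis = [0, 0];
    // Remember the latest axis position reported by the touchpad.
    this.el.addEventListener("axismove", (evt) => {
      this.axis = evt.detail.axis;
    });
    // On press, translate the stored axis position into a dpad event.
    this.el.addEventListener("touchpaddown", (evt) => {
      const [x, y] = this.axis;
      if (Math.abs(x) > Math.abs(y)) {
        this.el.emit(x < 0 ? "dpadleftdown" : "dpadrightdown", evt.detail);
      } else {
        this.el.emit(y < 0 ? "dpaddowndown" : "dpadupdown", evt.detail);
      }
    });
  }
});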

➕ The advantage of this approach is simplicity: this component focuses on doing one simple thing well, and doesn't do anything beyond that. ➖ The big disadvantage is that some of your controller mapping code may need to live in other places. There are already several layers of remapping happening with tracked-controllers, vive-controllers, etc., so maybe this is not a big leap.

2. Extend the mapping schema

Pick and choose specific cases and extend the mapping schema with case-specific values to handle any special cases. In the dpad example this might be something like:

"vive-controls": {
  menudown: "action_mute",
  touchpadmove: {
    type: "dpad",
    requires_press: "touchpaddown",
    center_radius: 0.5,
    mapping: {
      left_press: "action_snap_rotate_left",
      right_press: "action_snap_rotate_right",
      center_down: "action_teleport_aim",
      center_up: "action_teleport_teleport"
    }
  }
},
"oculus-touch-controls": {
  xbuttondown: "action_mute",
  thumbstickmove: {
    type: "dpad",
    requires_press: false,
    center_requires_press: "thumbstickdown",
    center_radius: 0.5,
    mapping: {
      left_press: "action_snap_rotate_left",
      right_press: "action_snap_rotate_right",
      center_down: "action_teleport_aim",
      center_up: "action_teleport_teleport"
    }
  }
},
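Internally, the component could branch on the shape of each mapping value, so the simple string format keeps working unchanged. A rough sketch (the SPECIAL_HANDLERS registry is hypothetical):

function applyMapping(el, eventName, mapping) {
  if (typeof mapping === "string") {
    // Simple case: forward the event under the mapped name.
    el.addEventListener(eventName, (evt) => el.emit(mapping, evt.detail));
  } else {
    // Extended case: delegate to a type-specific handler, e.g. "dpad" or "shake".
    SPECIAL_HANDLERS[mapping.type](el, eventName, mapping);
  }
}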

And for another example, binding a shake gesture might be:

"vive-controls": {
  posemove: { // this event doesn't actually exist, but you could imagine it being fired by tracked-controller on tick
    type: "shake",
    threshold: 0.6,
    mapping: "action_undo"
  }
},

➕ The advantage of this is that it keeps all the input configuration in one place. You can simply look at the input map and determine everything about how each input device is mapped and configured in your application. ➖ The big disadvantage here is that the schema becomes more complicated, and it can be difficult to decide which specific options make the cut.

3. All mappings are function handlers

All mappings just become a mapping from an event to a function that handles that event (and typically broadcasts another event). For convenience, the default handler could be used for strings to keep the simple format for one-to-one mappings as exists today.

Simplest example being:

"oculus-touch-controls": {
  // This is the same as `xbuttondown: "action_mute"`
  xbuttondown: (evt) => {
    evt.detail.target.emit("action_mute", event.detail);
  },
},

The dpad and shake cases might be handled by included or third-party convenience functions that generate handlers for you. For example (semi-pseudo-code):

AFRAME.registerInputMappings({
  default: {
    "vive-controls": {
      touchpadmove: dpad_mapping("[vive-controls]",
        {
          left_press: "action_snap_rotate_left",
          right_press: "action_snap_rotate_right",
          center_down: "action_teleport_aim",
          center_up: "action_teleport_teleport"
        },
        { requires_press: "touchpaddown", center_radius: 0.5 }
      ),
      posemove: shake("action_undo", {threshold: 0.5})
    },
    "oculus-touch-controls": {
      xbuttondown: always("action_mute"),
      thumbstickmove: dpad_mapping("[oculus-touch-controls]",
        {
          left_press: "action_snap_rotate_left",
          right_press: "action_snap_rotate_right",
          center_down: "action_teleport_aim",
          center_up: "action_teleport_teleport"
        },
        {
          requires_press: false,
          center_requires_press: "thumbstickdown",
          center_radius: 0.5
        }
      ),
      posemove: shake("action_undo", {threshold: 0.8})
    },
  }
});

// This is the default handler, same as passing a string, always fires the given event name, forwarding event details
function always(emitEventName) {
  return (evt) => evt.detail.target.emit(emitEventName, evt.detail);
}

// Emit a specified event when the tracked controller is shaken
function shake(emitEventName, { threshold }) {
  let lastPos = [0,0];
  return (evt) => {
    let newPos = evt.detail.pose;
    if(isShake(lastPos, newPos, threshold)) {
      evt.detail.target.emit(emitEventName, evt.detail);
    }
    lastPos = newPos;
  }
}
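
// `isShake` above is left undefined; one hypothetical definition, treating a
// large frame-to-frame pose delta as a shake:
function isShake(lastPos, newPos, threshold) {
  const dx = newPos[0] - lastPos[0];
  const dy = newPos[1] - lastPos[1];
  return Math.sqrt(dx * dx + dy * dy) > threshold;
}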

// emit a set of events treating a pair of axis (and optionally a button) as 4 directional buttons and a center button
function dpad_mapping(selector, mappings, options) {
  let lastPos = [0,0];

  if(options.requires_press) {
    // When a press is required, fire the mapped direction on the press event itself
    document.querySelector(selector).addEventListener(options.requires_press, function(evt) {
      const dir = directionForPos(lastPos, options);
      if(dir && mappings[dir]) {
        evt.detail.target.emit(mappings[dir], evt.detail);
      }
    });
  }

  return (evt) => {
    lastPos = evt.detail.axis;
    const dir = directionForPos(lastPos, options);
    // When no press is required, fire directly on axis movement
    if(!options.requires_press && dir && mappings[dir]) {
      evt.detail.target.emit(mappings[dir], evt.detail);
    }
  }
}
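
// `directionForPos` is also left undefined; a hypothetical sketch that buckets
// an axis position into the dpad regions using `center_radius` (the
// center_down/center_up edge handling is elided):
function directionForPos(pos, options) {
  const [x, y] = pos;
  if (Math.sqrt(x * x + y * y) < options.center_radius) return "center_down";
  return Math.abs(x) > Math.abs(y)
    ? (x < 0 ? "left_press" : "right_press")
    : (y < 0 ? "down_press" : "up_press");
}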

➕ The main advantage of this is that the core component remains very simple while allowing practically infinite flexibility through higher-order handlers. You can imagine some of these handlers coming bundled with the library and others being created by third parties or bundled with other relevant components. ➖ The main disadvantage here is that the mappings are no longer just data; they can now do arbitrarily complicated things, which makes the contract less simple.


I personally like options 1 and 3 the most; option 2 feels like it would get out of hand very quickly. There is something really nice about the flexibility of option 3, but the loss of "it's just data" bothers me a bit.

Tangentially related, I opened #3 to discuss a possible alternative to the schema format that I considered while looking into these 3 options. It should work just as well with any of them.

Anyway, large brain dump; I would love other people's thoughts and other proposed options.

fernandojsg commented 6 years ago

I admit I was initially thinking about going for 3, but then I thought it could be a bit over-engineered to use a callback for each button. As you comment, it will be hard to use as it's not just data, so we'll end up mixing data (like labels for each button, or button names) and functions. It is also harder to modify the mapping. Of course it will be more powerful, but I'll vote for going simple (1) right now, letting tracked-controls generate common events, like trackpad/joystick-left,right,up,down, and leaving the more advanced processing to the application itself in each action callback. If we find that most applications need a lot of custom interpretation of the event data, we can come back and decide whether to go all the way to 3.

johnshaughnessy commented 6 years ago

I have been doing some thinking about how to enable people to set up context-sensitive input mappings, and whether the input mapping component should allow for more than one input map active at a time.

Below are some scenarios in pseudo-code that helped me come upon some potential issues with having multiple input mappings or with having more detailed events to bind to (e.g. I bind to an event named "left_vive_controller_menu_click", which fires for a quick buttondown, buttonup but not for a buttondown, delay, buttonup. Another example is left_vive_controller_menu_held_and_left_vive_controller_trigger_down, because I wasn't sure how to convey something like this within input mappings.)

I use some words (like action sets) that are not used in this input mapping component, just to aid my thinking. I was thinking of them in roughly the same way as the Steam controller API's action sets, except that it's not obvious to me whether only ONE action map should be considered "active" at a time.

Suppose a developer wants to create an audio settings widget that, when open, allows the user to adjust volume and toggle between two common modes of voice input - push to talk or open mic.

At the start of the application, the developer calls AFRAME.registerInputMappings with:

   Action Sets
    microphone actions 
     start audio settings interaction #localization_key_goes_here
     toggle microphone                #localization_key_goes_here
     enable microphone                #etc...
     disable microphone
     show audio settings widget preview
     hide audio settings widget preview
     toggle audio settings widget
    audio settings widget actions
     toggle push to talk and open mic
     volume up
     volume down
     toggle audio settings widget
   Mappings
    microphone actions
     push_to_talk
      start audio settings interaction
       left_vive_controller_menu_down
      enable_microphone 
       left_vive_controller_menu_down
      disable_microphone
       left_vive_controller_menu_up
      show audio settings widget preview
       left_vive_controller_menu_click_hold
      hide audio settings widget preview
       left_vive_controller_menu_up
      toggle_audio_settings_widget
       left_vive_controller_menu_held_and_left_vive_controller_trigger_down
     open mic
      start audio settings interaction
       left_vive_controller_menu_down
      toggle microphone 
       left_vive_controller_menu_click
      show audio settings widget preview
       left_vive_controller_menu_hold
      hide audio settings widget preview
       left_vive_controller_menu_up
      toggle audio settings widget
       left_vive_controller_menu_held_and_left_vive_controller_trigger_down
    audio settings widget actions
     default
      toggle_push_to_talk_and_open_mic
       left_vive_controller_dpad_up_down
      volume_up
       left_vive_controller_dpad_right_down
      volume_down
       left_vive_controller_dpad_left_down
      toggle_audio_settings_widget
       left_vive_controller_dpad_down_down

Once the input mappings are registered, the developer can activate them: AFRAME.pushActiveInputMapping("microphone_actions", "open_mic"); so that AFRAME.currentInputMappings is

[{action_set : microphone_actions, mapping : "microphone_actions_open_mic"}];

Eventually the user wants to open the audio settings widget to switch to push_to_talk. They press a button to trigger the "toggle_audio_settings_widget" event, which will call

AFRAME.pushActiveInputMapping("audio_settings_widget_actions", "default") so that 
AFRAME.currentInputMappings === [{action_set : audio_settings_widget_actions, mapping : default },
                                 {action_set : microphone_actions, mapping : open_mic }]

Then, toggle_push_to_talk_and_open_mic is called, which replaces the mapping for the microphone_actions action set.

AFRAME.replaceActiveInputMapping("microphone_actions", "push_to_talk");
AFRAME.currentInputMappings === [{action_set : audio_settings_widget_actions, mapping : default },
                                 {action_set : microphone_actions, mapping : push_to_talk }]

Finally, toggle_audio_settings_widget is called, which removes the audio_settings_widget_actions mapping.

AFRAME.popActiveInputMapping("audio_settings_widget_actions");
AFRAME.currentInputMappings === [{action_set : microphone_actions, mapping : push_to_talk }]
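
A minimal sketch of how this stack API might behave (the AFRAME.* functions here are proposed, not existing; this is just to pin down the semantics):

const currentInputMappings = [];

function pushActiveInputMapping(actionSet, mapping) {
  // Newest mapping goes on top of the stack.
  currentInputMappings.unshift({ action_set: actionSet, mapping: mapping });
}

function replaceActiveInputMapping(actionSet, mapping) {
  // Swap the mapping for an action set that is already on the stack.
  const entry = currentInputMappings.find((m) => m.action_set === actionSet);
  if (entry) entry.mapping = mapping;
}

function popActiveInputMapping(actionSet) {
  // Remove the entry for the given action set entirely.
  const i = currentInputMappings.findIndex((m) => m.action_set === actionSet);
  if (i !== -1) currentInputMappings.splice(i, 1);
}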
Here is a second example, a set of action sets and mappings for a character controller:

   Action Sets
    character controller actions
     simple_move_x
     simple_move_z
     snap_rotate_left
     snap_rotate_right
     simple_rotate_y
     boost
   Mappings
    character controller actions
     simple mover
      simple_move_z 
       left_vive_controller_touchpad_pressed_axis_move_y
      snap_rotate_left
       left_vive_controller_touchpad_dpad_left
      snap_rotate_right
       left_vive_controller_touchpad_dpad_right
     classic fps mover
      simple_move_x 
       left_vive_controller_touchpad_pressed_axis_move_x
       left_touchscreen_joystick_axis_x
      simple_move_z 
       left_vive_controller_touchpad_pressed_axis_move_y
       left_touchscreen_joystick_axis_y
      snap_rotate_left
       right_vive_controller_touchpad_dpad_left
      snap_rotate_right
       right_vive_controller_touchpad_dpad_right
      simple_rotate_y
       right_touchscreen_joystick_axis_x

Motion and microphone controls together.

   Action Sets
    microphone actions
     start audio settings interaction
     toggle microphone
     enable microphone
     disable microphone
     show audio settings widget preview
     hide audio settings widget preview
    character controller actions
     simple_move_x
     simple_move_z
     snap_rotate_left
     snap_rotate_right
     simple_rotate_y
     boost
    audio settings widget actions
     toggle_push_to_talk_and_open_mic
     volume_change
     volume_up
     volume_down
     toggle_audio_settings_widget
   Mappings
    microphone actions
     push_to_talk
      start audio settings interaction
       left_vive_controller_menu_down
      enable_microphone 
       left_vive_controller_menu_down
      disable_microphone
       left_vive_controller_menu_up
      show audio settings widget preview
       left_vive_controller_menu_click_hold
      hide audio settings widget preview
       left_vive_controller_menu_up
      toggle_audio_settings_widget
       left_vive_controller_menu_held_and_left_vive_controller_trigger_down
     open mic (default)
      start audio settings interaction
       left_vive_controller_menu_down
      toggle microphone 
       left_vive_controller_menu_click
      show audio settings widget preview
       left_vive_controller_menu_hold
      hide audio settings widget preview
       left_vive_controller_menu_up
      toggle audio settings widget
       left_vive_controller_menu_held_and_left_vive_controller_trigger_down
    audio settings widget actions
     gradual volume (default)
      toggle_push_to_talk_and_open_mic
       left_vive_controller_dpad_up_down
      volume_change
       left_vive_controller_touchpad_pressed_axis_move_y
      toggle_audio_settings_widget
       left_vive_controller_dpad_down_down
     dpad volume
      toggle_push_to_talk_and_open_mic
       left_vive_controller_dpad_up_down
      volume_up
       left_vive_controller_dpad_right_down
      volume_down
       left_vive_controller_dpad_left_down
      toggle_audio_settings_widget
       left_vive_controller_dpad_down_down
    character controller actions
     simple mover
      simple_move_z 
       left_vive_controller_touchpad_pressed_axis_move_y
      snap_rotate_left
       left_vive_controller_touchpad_dpad_left
      snap_rotate_right
       left_vive_controller_touchpad_dpad_right
     classic fps mover
      simple_move_x 
       left_vive_controller_touchpad_pressed_axis_move_x
       left_touchscreen_joystick_axis_x
      simple_move_z 
       left_vive_controller_touchpad_pressed_axis_move_y
       left_touchscreen_joystick_axis_y
      snap_rotate_left
       right_vive_controller_touchpad_dpad_left
      snap_rotate_right
       right_vive_controller_touchpad_dpad_right
      simple_rotate_y
       right_touchscreen_joystick_axis_x

What happens when the user wants to toggle the audio settings widget? Should AFRAME.currentInputMappings look like this?


[{ action_set : "audio_settings_widget_actions", mapping: "gradual_volume"},
 { action_set : "character_controller_actions", mapping: "classic_fps_mover"},
 { action_set : "microphone_actions", mapping: "open_mic"}]

Now, with the audio settings widget open, suppose the user provides left_vive_controller_touchpad_pressed_axis_move_y. There are now two active mappings that want this event: { action_set : "audio_settings_widget_actions", mapping: "gradual_volume"} wants to emit "volume_change" and { action_set : "character_controller_actions", mapping: "classic_fps_mover"} wants to emit "simple_move_z".

Should both events get called? Should currentInputMappings be an ordered list so that only the first matching event is called?

If so, what would happen if the user pressed the touchpad while currentInputMappings is this:

[{ action_set : "audio_settings_widget_actions", mapping: "dpad_volume"},
 { action_set : "character_controller_actions", mapping: "classic_fps_mover"},
 { action_set : "microphone_actions", mapping: "open_mic"}]

Now, the audio_settings_widget_actions mapping is listening for left_vive_controller_touchpad_dpad_up_down instead of left_vive_controller_touchpad_pressed_axis_move_y, so there is no conflict. However, in this case, should there be a conflict? Both events will get fired when the user presses the touchpad, which means both actions would be triggered. It seems unlikely that the user wants to increase volume AND move forward with the same input axis.

Since the user is free to remap the buttons for different action mappings, the developer cannot plan for these collisions. Perhaps it is better to assume that only one action map (per device?) will ever be active at a time.
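
One possible resolution strategy, sketched under the assumption that currentInputMappings is an ordered stack and only the first match wins (lookupAction and the scene-level emit are hypothetical):

function handleInputEvent(sceneEl, eventName, detail) {
  // Walk the ordered stack; the first mapping that binds this event consumes it.
  for (const { mapping } of currentInputMappings) {
    const action = lookupAction(mapping, eventName); // hypothetical event -> action lookup
    if (action) {
      sceneEl.emit(action, detail);
      return;
    }
  }
}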

Is it ok to have a simple_character_controller_mapping active and bound to your left hand and a simple_paint_tool_mapping active and bound to your right hand?

  Action Sets
    character controller actions
     simple_move_x
     simple_move_z
     snap_rotate_left
     snap_rotate_right
     simple_rotate_y
     boost
    paint brush actions
     update brush primary color
     update brush width
     stroke start
     stroke end
     undo

   Mappings
    one handed paint brush mapping
     (here are actions mapped to right hand inputs)
    one handed character controller mapping 
     (here are actions mapped to left hand inputs)
    two handed character controller mapping
   Active Mapping
    one handed character controller mapping 
    one handed paint brush mapping
    two handed character controller mapping 

Although this mapping is "active", the two mappings above it capture all of its events, so it is actually not able to do anything.

Questions: What would happen if some of a map's buttons were already taken by another map?

What is the right way to enable a particular hardware button to do different things depending on the context, when the context isn't as simple as "world map" and "menu" modes? The answer here may just be that context needs to be made as simple as "world map" and "menu" modes that are clearly distinguished, so that only one may be active at a time.

Should AFRAME.inputMappings provide middleware that gates or throttles input from a gamepad before bubbling it up to the application? For example, suppose we wrote throttle( left_vive_controller_touchpad_dpad_left_pressed, 500) : snap_rotate_left so that the left_vive_controller_touchpad_dpad_left_pressed event would trigger the snap_rotate_left action, but only every 500ms while the button remains pressed.
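
A sketch of what such a throttle gate might look like as a handler wrapper (names are illustrative; el stands for the controller entity):

function throttle(handler, intervalMs) {
  let lastFired = 0;
  return (evt) => {
    const now = performance.now();
    if (now - lastFired >= intervalMs) {
      lastFired = now;
      handler(evt);
    }
  };
}

// Fire snap_rotate_left at most every 500ms while the dpad direction stays pressed.
el.addEventListener("left_vive_controller_touchpad_dpad_left_pressed",
  throttle((evt) => el.emit("snap_rotate_left", evt.detail), 500));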

johnshaughnessy commented 6 years ago

Another way to handle the need for a kind of "left_vive_controller_menu_button_held_and_left_vive_controller_trigger_button_pressed" event is what dom suggested in https://github.com/fernandojsg/aframe-teleport-controls/issues/44#issuecomment-341830959

AFRAME.input.getAxis("some_named_axis") returning a float between -1 and 1, and AFRAME.input.getButtonDown("some_named_button") returning a boolean. Then in teleport-controls' tick you could just query the current axis values to render the teleport target with facing information.

In this case it would be AFRAME.input.isButtonHeld("audio_button") && AFRAME.input.getButtonDown("audio_details_button") and somewhere, something like

buttons : {
  "audio_button": "left_vive_controller_menu",
  "audio_details_button": "left_vive_controller_trigger_button",
  ...
}

(I know that the trigger is an axis, but I'd like to treat it like a button here.)
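
A sketch of how such a polling layer might be backed by the mapped button events (AFRAME.input is proposed, not existing, and the down/up event naming is assumed):

const buttonState = {}; // named button -> { held, downThisFrame }

function bindButton(el, name, rawButtonName) {
  buttonState[name] = { held: false, downThisFrame: false };
  el.addEventListener(rawButtonName + "_down", () => {
    buttonState[name].held = true;
    buttonState[name].downThisFrame = true;
  });
  el.addEventListener(rawButtonName + "_up", () => {
    buttonState[name].held = false;
  });
}

function isButtonHeld(name) { return buttonState[name].held; }
function getButtonDown(name) { return buttonState[name].downThisFrame; }

function endFrame() {
  // Call once per tick after all queries so getButtonDown is edge-triggered.
  for (const name in buttonState) buttonState[name].downThisFrame = false;
}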

machenmusik commented 6 years ago

IMO quick reactions to the originally stated 3 options

1 is basically the aframe approach of "let another component provide additional functionality", which is fine as long as monkey-patching this one isn't required to get the desired effect; if it is, some affordances may be worthwhile.

2 is more consistent with having a data construct that provides description separated from implementation details. W.r.t. "out of hand", maybe the question is whether this basically turns into its own DSL, or remains concise and compact.

3 seems prone to inline functions that may turn this component into the basis for approaches described in response to 1.

machenmusik commented 6 years ago

It seems there is a call for tracked-controls to expose instantaneous accessors for button / axis states, and perhaps for higher-order components to expose them by mapped label? If so, does anyone have the time and energy to submit a PR to core?

netpro2k commented 6 years ago

Is it ok to have a simple_character_controller_mapping active and bound to your left hand and a simple_paint_tool_mapping active and bound to your right hand?

Hmm, yeah, maybe different mappings can be active on different devices? That seems correct; my right hand might be in the painting state while my left hand is in the color selection state. That may also help solve the left vs right hand issue we have been discussing, as this just means activating different mappings per hand.

What would happen if some of a map's buttons were already taken by another map?

The html-ey way to handle this is that the event should trigger all things bound to it unless something stops its propagation... That said, it feels a bit odd to do it this way, as it would require the thing handling the event to deal with possible conflicts which might only arise as a result of a user's configuration. Maybe this is fine, and preventing conflicts is just the responsibility of whatever UI helps a user configure their mappings?

What is the right way to enable a particular hardware button to do different things on the context, when the context isn't as simple as "world map" and "menu" modes? The answer here may just be that context needs to be made as simple as "world map" and "menu" modes that are clearly distinguished so that only one may be active at a time.

Yeah, the question of whether multiple mappings should be able to be active at once is complicated. I like the simplicity of saying only one map is ever active, but I can definitely imagine scenarios where I want to merge maps.

Should AFRAME.inputMappings provide middleware that gates or throttles input from a gamepad before bubbling it up to the application? For example, suppose we wrote throttle( left_vive_controller_touchpad_dpad_left_pressed, 500) : snap_rotate_left so that the left_vive_controller_touchpad_dpad_left_pressed event would trigger the snap_rotate_left action, but only every 500ms while the button remains pressed.

Hard to say where this should live. Certainly it shouldn't happen "after" input-mapping in the chain, but the question of whether it should be part of tracked-controls, input-mappings, or a third thing that lives in between is tough. My gut says probably something that lives in between, but I would need to run through some use cases to figure it out.

johnshaughnessy commented 6 years ago

Sorry for posting org-style notes above. I fixed the formatting so it is easier to read.

johnshaughnessy commented 6 years ago

The aframe-input-mappings component creates mappings from one event name to another. When application logic is written around named actions instead of hardware-focused raw button and axis events, it is easier for the user or developer to support new hardware or custom controller configurations. For example, the "changeTask" action is configured to fire on gripdown for vive and abuttondown for oculus touch while the default action set is active. https://github.com/fernandojsg/aframe-input-mapping-component/blob/master/examples/basic/index.html#L46
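
Based on that description, and paraphrasing the linked example, the registration would look something like:

AFRAME.registerInputMappings({
  default: {
    "vive-controls": { gripdown: "changeTask" },
    "oculus-touch-controls": { abuttondown: "changeTask" }
  }
});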

This configurable layer of abstraction over raw input helps developers support a diverse range of hardware devices. For pairs of handheld devices like the oculus touch controllers and the vive controllers, an improved input-mappings-component should support configuration on a per-hand basis; otherwise it is up to the application developer to make the distinction, where we would rather they write only against named actions. Per-hand mapping is sketched below.
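
Per-hand configuration could, for instance, key the mapping on the controller's hand; a hypothetical schema extension (the .left/.right suffix is not part of the current component):

AFRAME.registerInputMappings({
  default: {
    "vive-controls.left": { menudown: "toggleMic" },
    "vive-controls.right": { triggerdown: "teleport" }
  }
});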

This abstraction layer is present in APIs like SDL (https://wiki.libsdl.org/FrontPage), Gainput (http://gainput.johanneskuhlmann.de/api/), and XInput (https://msdn.microsoft.com/en-us/library/windows/desktop/hh405050(v=vs.85).aspx). Game developers rely on these tools to support as many devices as possible without adding extra work for each one. Input mappings in SDL look very similar to what we have in the aframe-input-mapping-component, and also resemble Unity's Input API: https://docs.unity3d.com/ScriptReference/Input.html

These APIs help, but in my opinion they do not go far enough. There are still some problems that remain:

Configuration is chosen by the developer, rather than the user. Users with specific needs cannot easily adjust the configuration to match.

In-game actions are mapped too closely to hardware events to be repurposed as different virtual devices. Currently, if a user wants to use a touchpad as a DPAD or a mouse, this must be handled on a per-application basis.

A better input API will help the user provide the application with the configuration that best matches the user's preferences and her devices' capabilities. The best example I know of an input API that does this is Steam's Controller API (https://partner.steamgames.com/doc/api/ISteamController). The Steam controller API is forwards-compatible: if you want to play a game that relies on input from a particular device rather than on specific actions, Steam controller configuration allows you to map your physical device's inputs to a virtual device's inputs. For example, many PC games rely only on raw keyboard and mouse events but are playable with two touchpads: one mapped as a mouse and another as a touch menu that appears in an overlay on screen. The API goes one step further: users configure how they would like Steam to process the raw events from the underlying hardware before emitting them as virtual device events. Steam relies on SDL, but its controller software goes above and beyond the basic capabilities.

The aframe-input-mappings component might be made better with a processing layer between raw input events and virtualized hardware events or application-specific actions.
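
Such a processing layer might be a simple pipeline of transforms sitting between raw events and the mapping step; a sketch in which every name is illustrative:

// raw hardware event -> [processors] -> virtual device event -> named action
const processors = [touchpadAsDpad, deadzoneFilter]; // hypothetical transforms

function processRawEvent(evt) {
  let current = evt;
  for (const process of processors) {
    current = process(current);
    if (!current) return; // a processor may swallow the event entirely
  }
  emitVirtualEvent(current); // hypothetical: feeds the input-mapping layer
}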

An in-browser and in-headset configuration tool might also relieve developers of the need to teach users the very basics of interacting with a VR/AR application, since many people who try VR for the first time do not know where the buttons on their handheld devices are or what basic capabilities they have for interacting with the system. This may at first be as simple as showing glyphs and tooltips in appropriate positions while the user is looking toward their handheld devices, without allowing any dynamic configuration. Support for new hardware devices should be the responsibility of software like SDL. Support for new interaction contexts (e.g. one-handed mode) should be the responsibility of software like Steam and user-generated configuration files.

Unfortunately, the code that drives the steam controller is not open source. Users must run steam in order to gain the benefits. There are those who would love to see the source made available (myself included), but there has been no promise of this. https://www.reddit.com/r/SteamController/comments/5mdpb9/an_affirmative_business_case_for_valve_to_open/

Some of the capabilities of the steam controller are not available in the desktop environment except by using third-party tools (which may break with any new release of steam). https://behind.flatspot.pictures/third-party-steam-controller-software-part2-my-take-on-it/ For example, the touch menu overlay is unavailable outside of steam big picture mode.

None of the capabilities of the steam controller are available for mobile devices, and switching action sets or action layers happens only on a per-application basis. If you want a set of action sets to be active on a per-webpage basis, this is not currently possible.

More background on the steam controller is presented as a video here: https://www.youtube.com/watch?v=hbbabj0_ZCs Specific input mapping capabilities are explained here: https://www.youtube.com/watch?v=jq6T6n-5Dps

I think the aframe-input-mappings component could play a critical role for the web by providing a free-software, cross-platform input configuration tool that helps developers focus on their applications and helps users interact in whatever modes work best for them.

Lastly, there is a set of capabilities that most VR/AR software will depend on for basic interaction but that is not currently standard. Navigation and selection are two such capabilities, but there may be many more that become useful for developers. Dr. Kreylos at UC Davis has developed a system of middleware for VR that includes tools that can reach all the way into application land, performing actions like measuring a part of the 3D game world or moving the player character. These tools may be replicated for aframe as components, rather than as the input mapping itself, but I am including them here because they seem relevant to what we wish to do as "VR/AR tools developers" a la our proposed "entity kit" (needs renaming). http://idav.ucdavis.edu/~okreylos/ResDev/Vrui/Documentation/VruiToolConfigurationFileReference.html