Open misslivirose opened 4 years ago
I suggest adding functionality so that a user who touches an object is transported to a previously defined waypoint.
I would suggest SCA logic bricks, or logic nodes, to make the behaviors the most flexible long term, along with defining multiple 'triggers' that execute this code, as well as permission levels you can give an object.
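To make the "triggers plus permission levels" idea above concrete, here is a minimal sketch. None of these names (`makeTrigger`, `fire`, the `waypoints` table) are real Hubs or Spoke APIs; they are hypothetical, and only illustrate the shape of object + trigger + permission → action.

```javascript
// Hypothetical trigger/action model -- not a real Hubs/Spoke API.
const waypoints = {
  lobby: { x: 0, y: 0, z: 0 },
  stage: { x: 10, y: 0, z: -5 },
};

function makeTrigger({ event, permission, action }) {
  return { event, permission, action };
}

// "Touch the object, get teleported to the 'stage' waypoint."
const teleportOnTouch = makeTrigger({
  event: "touch",
  permission: "visitor", // who may fire this trigger
  action: (user) => { user.position = { ...waypoints.stage }; },
});

function fire(trigger, event, user) {
  // Run the action only if the event matches and the user has permission.
  if (event === trigger.event && user.roles.includes(trigger.permission)) {
    trigger.action(user);
    return true;
  }
  return false;
}

const user = { roles: ["visitor"], position: { x: 0, y: 0, z: 0 } };
fire(teleportOnTouch, "touch", user); // user.position is now the stage waypoint
```

A node-based editor would essentially be a visual front end for wiring together objects like `teleportOnTouch`.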
I would be careful on the scope because this sort of thing has the potential to become a massive resource sucking monster. Understanding what is really useful / needed seems hard. Sure, we'd like to be able to completely program an interactive experience, but without creating a full-blown programming environment, what are the problems that can be solved relative to the things that could be done instead?
For example, with the ability to self-host, what percentage of people who want to leverage this also want to self-host, and would be better served by the ability to write and use custom components in their hosted client?
In particular, what is the balance between this and #1976, and where does programmer-oriented custom code come in? People (i.e., me) will want to do more than either this or #1976 implies. For example, I will likely want to create custom content elements akin to AFrame components that let me do some three.js content completely under my own control.
I fully understand that such components are likely to break, especially if a future internal architecture does something radical like moving away from AFrame. But one of the benefits of custom hosting is that I don't need to upgrade ... if I make a custom setup and need time to update it when Hubs changes, I can take my time and upgrade when things are ready.
For us, just being able to start/stop video and glb animations would go a real long way. I think we all know that in 10 years Spoke will look like Unreal or Unity, but today we are grappling with stones :) If we could send an argument in the chat box to start and stop these two media types, we would get to the bronze age by teleporting ;)
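The "argument in the chat box" idea above amounts to a tiny command parser. A hedged sketch follows; the `/play` and `/stop` command names and the `media` registry are made up for illustration and are not part of Hubs.

```javascript
// Hypothetical chat-command handler for starting/stopping media elements.
// The registry keys and command syntax are invented for this sketch.
const media = {
  intro: { kind: "video", playing: false },
  door:  { kind: "animation", playing: false }, // e.g. a glb animation clip
};

function handleChat(message) {
  const match = /^\/(play|stop)\s+(\S+)$/.exec(message.trim());
  if (!match) return false;        // not a media command; treat as normal chat
  const [, verb, name] = match;
  const item = media[name];
  if (!item) return false;         // unknown media element
  item.playing = verb === "play";
  return true;
}

handleChat("/play intro"); // starts the video
handleChat("/stop door");  // stops the animation
```

Everything else (networking the resulting state to other clients, pausing at the right frame) is where the real work lies, but the user-facing surface could be this small.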
All - checking in on this from the future. I am running Hubs on a personal AWS account and am evaluating the project for use in a number of situations. Wondering if interactivity (shared state, IO, etc.) is possible / has been built out?
Background
We have received a fair amount of feedback that Spoke could support some degree of interactive components in published scenes. Examples of types of interactivity that have been requested include:
Design Considerations
Our discussions for this feature have centered largely around the ability to provide a visual scripting style interface in Spoke. @robertlong has done a design pass and background research into node-based editors, including:
Existing Work in Spoke
Spoke currently has an experimental "Trigger Volume" feature that, when enabled, creates an element with a very small subset of interactive properties. The trigger volume can play or pause a video node in the scene, or play or stop an animation included in a glTF model. These experimental features are not intended for production-level scene publication, and may not be backwards-compatible with future work on the creation of interactive elements.
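The core of a trigger volume is deciding when an avatar enters or leaves the region and toggling the target media on those transitions. The sketch below is illustrative only; the real Spoke implementation may differ. The containment check is just a standard axis-aligned box test.

```javascript
// Illustrative trigger-volume semantics -- not the actual Spoke code.
// A volume is an axis-aligned box given by a center and size per axis.
function insideVolume(volume, point) {
  return ["x", "y", "z"].every(
    (axis) => Math.abs(point[axis] - volume.center[axis]) <= volume.size[axis] / 2
  );
}

const volume = { center: { x: 0, y: 1, z: 0 }, size: { x: 2, y: 2, z: 2 } };

// Fire only on transitions: entering plays the target video, leaving pauses it.
function update(state, playerPos) {
  const inside = insideVolume(volume, playerPos);
  if (inside && !state.inside) state.videoPlaying = true;   // enter -> play
  if (!inside && state.inside) state.videoPlaying = false;  // leave -> pause
  state.inside = inside;
}

const state = { inside: false, videoPlaying: false };
update(state, { x: 0, y: 1, z: 0 });  // player steps in -> video plays
update(state, { x: 5, y: 1, z: 0 });  // player steps out -> video pauses
```

The same enter/leave edge detection would drive the play/stop behavior for glTF animations.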
First release exclusions
Relation to custom components work
In parallel with any work that we do to facilitate interactive elements in Spoke, we should also consider relevant work that is or could be done to more widely facilitate the creation of custom components for Hubs. However, it can be anticipated that the audiences for each of these projects will be different.