nikgraf opened this issue 7 years ago
This is something I've begun to explore, but it's a ways out. In addition to our support of the readable-yet-inefficient OBJ format, I'd like to support arbitrary vertices in an efficiently packed manner, so that complex objects can be built from React, and values can be tweaked to perform keyframe animations. This is also likely necessary as we begin exploring how to bring glTF scenes into React VR, and need a way to represent subsets of the environment through meshes.
I had to build a custom packed format and shader to render some quills in my non-VR React app with Three.js: http://floating.ink/#/work/5634472569470976 (a sourcemap is there if you want to see the code). I create a three.js scene and add it to the view hierarchy, but for react-vr I guess we would want to grab the scene and put in some custom code?
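For concreteness, here is a minimal sketch of that general packed-geometry approach: feeding a tightly packed `Float32Array` into a three.js `BufferGeometry` with a custom shader. This is illustrative only (it is not the linked project's code, and it uses the current `setAttribute` API):

```js
import * as THREE from 'three';

// positions: a tightly packed Float32Array of x, y, z triples
function makePackedMesh(positions) {
  const geometry = new THREE.BufferGeometry();
  // One packed attribute buffer; no per-vertex JS objects are created.
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

  // A deliberately tiny shader pair, standing in for the custom shader
  // described above.
  const material = new THREE.ShaderMaterial({
    vertexShader: `
      void main() {
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      void main() {
        gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0);
      }
    `,
  });
  return new THREE.Mesh(geometry, material);
}
```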
I will want to do similar things in VR, where I'm generating geometry at runtime based on user actions. This sounds like a slightly different use case than @nikgraf has, but a similar overall request: let me pass in my own packed buffers of data.
How would I accomplish this with current react-vr? (And how can we make the API nicer in the future?)
Our current plan for the `source` prop on `Model` (as well as `Video` and `Sound`) is to allow the user to register new format handlers at runtime. That way, you can provide a custom DASH video decoder, or an FBX model handler for the `Model` tag.
By registering a decoder with a given extension `ext`, you can then pass a source like the following: `<Model source={{ext: ...}} />`.
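To make the proposal concrete, a registration API along these lines might look as follows. To be clear, `ModelFormats.register`, the handler shape, and `fetchAndParseFBX` are hypothetical names for this sketch, not a shipped React VR API:

```js
// Hypothetical registration API; these names do not exist in React VR.
ModelFormats.register('fbx', {
  // Runs on the Runtime side: fetch and decode the asset into geometry.
  load(url) {
    return fetchAndParseFBX(url); // illustrative decoder, not a real one
  },
});

// Later, in a component, the extension key selects the handler:
<Model source={{fbx: 'asset://models/chair.fbx'}} />;
```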
We could support an official first-party decoder, `raw`, which is associated with raw vertex and normal data.
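The exact shape such a `raw` source would take is speculative, but it could be as simple as packed typed arrays:

```js
// Speculative shape for a first-party `raw` source.
const source = {
  raw: {
    vertices: new Float32Array([/* x, y, z, ... */]),
    normals: new Float32Array([/* nx, ny, nz, ... */]),
  },
};

<Model source={source} />;
```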
HOWEVER, keep in mind that every time you send data across the bridge from React to the runtime, there is a cost. Formats where an identifier (like a filepath) is used on the React side, and the vertex data remains on the Runtime side, are always going to be more performant.
With all of that in mind, it may make sense to adopt a model similar to the one proposed, but never implemented, for speeding up StyleSheets. `StyleSheet.create()` would initially pass the data across the bridge, and the returned object would actually be a numeric identifier that's easily passed across the bridge. We do a similar thing with the runtime generation of vector glyph textures like those used in the Video playback controls: we pass the initial information across the bridge once, returning a unique identifier (`glyph://UUID`) that gets passed to an `<Image>` tag. The exact format is hazy in my head, but I could foresee something like the following:
```js
const rawVertices = createRawVertexObject(...);
const m = <Model source={{raw: rawVertices}} />;
// Update the object in a way that bypasses the React reconciliation
// algorithm, but is still tied to the frame rate for synchronization.
rawVertices.doSomething();
```
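Following the `glyph://UUID` pattern above, `createRawVertexObject` could pay the bridge cost once and hand back a cheap handle. A speculative sketch (the `RuntimeBridge` messaging object is invented for illustration):

```js
// Speculative sketch; `RuntimeBridge` is a stand-in for whatever
// message channel connects the React worker to the Runtime.
let nextHandle = 1;

function createRawVertexObject(vertices, normals) {
  const handle = nextHandle++;
  // The full vertex data crosses the bridge exactly once.
  RuntimeBridge.send('registerGeometry', {handle, vertices, normals});
  return {
    handle, // a small integer; cheap to serialize in props
    doSomething(update) {
      // Later updates reference the handle instead of re-sending the
      // full buffers, and are applied on the next frame sync.
      RuntimeBridge.send('updateGeometry', {handle, update});
    },
  };
}
```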
@andrewimm when you talk about "runtime" and "bridge", you're referring to native rendering with the Carmel developer preview and not the WebVR implementation, right?
The Carmel developer preview is not native rendering; it's a web browser that's only capable of drawing WebVR contexts.
Runtime and Bridge are concepts across all platforms. The part of the code that consumes React's abstract tags and turns them into pixels is the Runtime. In the current implementation, all of the code in client.bundle.js handling WebGL and WebVR is the Runtime.
The Bridge is how the two sides communicate asynchronously. In our case, it's the message-passing format between the Web Worker (React code) and the main browser window (Runtime code).
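In browser terms, that Bridge is ordinary Web Worker message passing. A stripped-down illustration (the message shape and `applyOperationsToScene` are invented, not React VR's actual protocol):

```js
// Main browser window (Runtime side): spin up the React worker and
// apply the UI operations it sends over.
const worker = new Worker('react.bundle.js');
worker.onmessage = (event) => {
  applyOperationsToScene(event.data); // illustrative Runtime entry point
};

// Inside the Web Worker (React side): post batched operations back.
postMessage({op: 'createView', tag: 42, props: {}});
```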
@andrewimm "every time you send data across the bridge from React to the runtime, there is a cost" 🔜 could we use SharedArrayBuffer
to mitigate this cost?
For this specific case, yes, though SharedArrayBuffer isn't free. Ideally the spec I described above would be able to determine (under the hood) whether it's supported, and send updates that way rather than through a frame-sync'd message.
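The "determine under the hood" part could be simple feature detection: share memory when the platform allows it, and fall back to per-frame messages otherwise. A rough sketch (the channel shape is invented):

```js
// Rough sketch of choosing a transport for vertex updates.
function createVertexChannel(worker, byteLength) {
  // SharedArrayBuffer requires cross-origin isolation in modern
  // browsers, so availability must be checked at runtime.
  if (typeof SharedArrayBuffer !== 'undefined') {
    const shared = new SharedArrayBuffer(byteLength);
    // Posting a SharedArrayBuffer shares the memory; nothing is copied.
    worker.postMessage({type: 'sharedVertices', buffer: shared});
    return new Float32Array(shared); // both sides view the same bytes
  }
  // Fallback: a local buffer whose contents get posted on frame sync.
  return new Float32Array(byteLength / 4);
}
```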
@andrewimm I'm not up on runtime / JS internals, but sending an ArrayBuffer, if not shared, should cost on the order of a memcpy plus object creation, right? Or is there a lot of other stuff I'm not considering?
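Roughly, yes: a structured-clone postMessage of an ArrayBuffer costs about a copy. It's worth noting that ArrayBuffers can also be listed as transferables, which moves them with no copy at all, at the cost of detaching the sender's reference:

```js
const worker = new Worker('runtime.js'); // illustrative worker script
const buf = new Float32Array(1024 * 3).buffer;

// Structured clone: the buffer's bytes are copied (roughly a memcpy
// plus clone bookkeeping); `buf` stays usable on the sending side.
worker.postMessage({vertices: buf});

// Transfer: zero-copy ownership move; afterwards `buf` is detached
// (byteLength === 0) on the sending side.
worker.postMessage({vertices: buf}, [buf]);
```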
Description
Use case: I would like to manipulate models on the fly in the client to create a smooth experience, splitting models into multiple entries.