You can have a Promise + worker + incremental-based loader (a bit like a mix of both points).
Pass the source URL to the worker script, fetch the resources, and return a struct of transferable objects with the required buffers, structs, even ImageBitmaps; it should be straightforward enough not to need a lot of three.js processing overhead.
Uploading data to the GPU will be blocking regardless, but you can build a queue to distribute the commands across different frames via display.rAF. The commands can be executed one at a time per frame, or you can calculate the average time of the operation and run as many as are "safe" to run in the current frame budget (something similar to requestIdleCallback would be nice, but it's not widely supported, and it's problematic in WebVR sessions). This can also be improved by using bufferSubData, texSubImage2D, etc.
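A rough sketch of such a queue (names like uploadQueue, queueUpload and drainUploads are illustrative, not an existing three.js API), where queued uploads are drained once per frame within a measured time budget:

// Each entry is a closure that issues one blocking GL call
// (texImage2D, bufferData, ...).
var uploadQueue = [];
var averageCost = 2; // ms, running estimate of one upload

function queueUpload( task ) {
	uploadQueue.push( task );
}

// Call once per frame from the vrDisplay.requestAnimationFrame loop,
// passing how many milliseconds are still free in the frame budget.
function drainUploads( budgetMs ) {
	var spent = 0;
	while ( uploadQueue.length > 0 && spent + averageCost < budgetMs ) {
		var start = performance.now();
		uploadQueue.shift()(); // run one blocking upload
		var cost = performance.now() - start;
		averageCost = averageCost * 0.9 + cost * 0.1; // smooth the estimate
		spent += cost;
	}
}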
Support for workers and transferable objects is pretty solid right now, especially in WebVR-capable browsers.
Hi all, I have a prototype available that may be of interest to you in this context. See the following branch:
https://github.com/kaisalmen/WWOBJLoader/tree/Commons
Here the mesh provisioning part has been completely separated from WWOBJLoader2:
https://github.com/kaisalmen/WWOBJLoader/blob/Commons/src/loaders/WWLoaderCommons.js
WWLoaderCommons makes it easy to implement other mesh providers (file format loaders). Basically, it defines how a web worker implementation has to provide mesh data back to the main thread, and it processes/integrates that data into the scene. See the random triangle junk provider, which serves as a tech demonstrator:
https://github.com/kaisalmen/WWOBJLoader/tree/Commons/test/meshspray
https://kaisalmen.de/proto/test/meshspray/main.src.html
Even in the current implementation, WWOBJLoader2 relies on transferable objects (ArrayBuffers/ByteBuffers) to provide the raw BufferGeometry data for the Mesh from worker to main thread. Time-wise, the creation of the Mesh from the provided ByteBuffers is negligible. Whenever a bigger mesh is integrated into the scene, however, the rendering stalls (data copies, scene graph adjustments ... !?). This is always the case independent of the source (correct me if I am wrong).
The "stream" mode of WWOBJLoader2 smooths these stalls, but if a single mesh piece from your OBJ model weighs 0.5 million vertices, then rendering will pause for a longer period of time.
I have opened a new issue to detail what exactly I have done on the aforementioned branch and why: https://github.com/kaisalmen/WWOBJLoader/issues/11 The issue is still a stub and details will follow soon.
To offer some numbers, here's a performance profile of https://threejs.org/examples/webgl_loader_gltf2.html, loading a 13MB model with 2048x2048 textures.
In this case the primary thing blocking the main thread is uploading textures to the GPU, and as far as I know that can't be done from a Web Worker. Either the loader should add textures gradually, or three.js should handle it internally.
For the curious, the final chunk blocking the main thread is addition of an environment cubemap.
The main aim for React VR is not necessarily to have the most optimal loader in terms of wall-clock time, but to avoid sudden and unexpected frame drops while new content loads. Anything we can do to minimize this is beneficial to all, but especially to VR.
Textures are definitely an issue, and an obvious first step would be to optionally load them incrementally - a set of lines at a time for a big texture. As the upload is hidden from client programs it is going to be difficult for them to manage, but I'd be all for this being exposed more openly by the WebGL renderer to take the pressure off three.js.
For the glTF parsing I commonly see blocking of around 500ms in my tests. This is significant, and I'd much prefer an incremental approach for all the loaders (which should also be clonable).
The premise of React VR is to encourage easy dynamic content driven by a web-style workflow so as to attract more developers, and this will put more emphasis on improving dynamic handling. Most of the time we don't know which assets will be required at the beginning of our user-created applications.
@kaisalmen Thanks for the link
In Elation Engine / JanusWeb, we actually do all our model parsing using a pool of worker threads, which works out pretty well. Once the workers have finished loading each model, we serialize it using object.toJSON(), send it to the main thread with postMessage(), and then load it using ObjectLoader.parse(). This removes most of the blocking portions of the loader code - there's still some time spent in ObjectLoader.parse() which could probably be optimized out, but overall interactivity and load speed is drastically improved. Since we're using a pool of workers, we can also parse multiple models in parallel, which is a huge win in complex scenes.
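A stripped-down sketch of that flow (not the actual Elation Engine code; parseModel() is a placeholder for whatever loader runs inside the worker):

// ---- worker.js (sketch) ----
onmessage = function ( e ) {
	// parseModel() stands in for the loader code running inside the worker
	var object = parseModel( e.data );   // returns a THREE.Object3D hierarchy
	postMessage( object.toJSON() );      // plain JSON, structured-cloned to the main thread
};

// ---- main thread (sketch) ----
var worker = new Worker( 'worker.js' );
worker.onmessage = function ( e ) {
	// rebuild the THREE objects from the serialized form
	var object = new THREE.ObjectLoader().parse( e.data );
	scene.add( object );
};
worker.postMessage( modelText );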
On the texture side of things, yeah, I think some changes are needed to three.js's texture uploading functionality. A chunked uploader using texSubImage2D would be ideal; then we could do partial updates of large textures over multiple frames, as mentioned above.
I would be more than happy to collaborate on this change, as it would benefit many projects which use three.js as a base.
I think using texSubImage2D is a good idea.
But I also wonder why WebGL doesn't upload textures asynchronously. Do OpenGL and other libs have the same limitation?
Another thing I'm thinking about is GLSL compilation. Will it drop frames? Or is it fast enough that we don't need to care?
Yes, this is a problem in native OpenGL as well - compiling shaders and uploading image data are synchronous / blocking operations. This is why most game engines recommend or even force you to preload all content before you start the level - it's generally considered too much of a performance hit to load new resources even off a hard drive, and here we're trying to do it asynchronously over the internet... we actually have a more difficult problem than most game devs, and we'll have to resort to using more advanced techniques if we want to be able to stream new content in on the fly.
Uploading textures will be less problematic if we use the new ImageBitmap API in the future. See https://youtu.be/wkDd-x0EkFU?t=82.
BTW: Thanks to @spite, we already have an experimental ImageBitmapLoader in the project.
@Mugen87 actually I'm already doing all my texture loads with ImageBitmap in Elation Engine / JanusWeb - it definitely helps and is worth integrating into the Three.js core, but there are two main expenses involved with using textures in WebGL - image decode time, and image upload time - ImageBitmap only helps with the first.
This does cut the time blocking the CPU by about 50% in my tests, but uploading large textures to the GPU, especially 2048x2048 and up, can easily take a second or more.
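For reference, a minimal sketch of that decode path with createImageBitmap() (assumes an existing material; the blocking texImage2D upload still happens in the frame where the texture is first rendered):

fetch( 'textures/large.jpg' )
	.then( function ( response ) { return response.blob(); } )
	.then( function ( blob ) { return createImageBitmap( blob ); } ) // async decode, off the main thread where supported
	.then( function ( bitmap ) {
		var texture = new THREE.Texture( bitmap );
		texture.needsUpdate = true;      // the actual GPU upload occurs on the next render
		material.map = texture;
		material.needsUpdate = true;
	} );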
It would be worth trying what @jbaicoianu is suggesting. Anyway, if opting for the main-thread alternative, this seems like a perfect match for requestIdleCallback instead of setTimeout.
I agree with you all. I believe the approach should be to load and parse everything in the worker, create the needed objects back on the main thread (if that's very expensive it could be done in several steps), and then add incremental loading to the renderer. For an MVP we could define a maxTexturesUploadPerFrame (infinite by default), and the renderer would take care of uploading from the pool according to that number. In following iterations we could add logic, as @spite commented, to measure the average upload time and automatically upload as many textures as fit in a safe time range before blocking. This could be done initially with each texture as a unit, but then it could be improved to incrementally upload chunks of bigger textures.
requestIdleCallback would be nice, but it's not widely supported, and it's problematic in WebVR sessions
@spite I'm curious about your sentence: what do you mean by problematic?
I have a THREE.UpdatableTexture to incrementally update textures using texSubImage2D, but it needs a bit of tweaking of three.js. The idea is to prepare a PR to add support.
Regarding requestIdleCallback (rIC):
First, it's supported on Chrome and Firefox, and although it can be polyfilled easily, the polyfilled version might defeat the purpose slightly.
Second: the same way vrDisplay.requestAnimationFrame (rAF) needs to be called instead of window.rAF when presenting, the same applies for rIC, as discussed in this crbug. That means that the loader needs to be aware of the current active display at all times, or it will stop firing depending on what's presenting. It's not terribly complicated, it just adds more complexity to the wiring of the loaders (which should ideally just do their job, independently of the presentation state). Another option is to have the part of three.js that runs incremental jobs in the main thread share the current display; I think it's much easier to do now with the latest changes to VR in three.js.
Another consideration: in order to be able to upload one large texture in several steps using texSubImage2D (256x256 or 512x512), we need a WebGL2 context to have offset and clipping features. Otherwise the images have to be pre-clipped via canvas, basically tiled client-side before uploading.
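To illustrate that constraint, a rough sketch of a WebGL1-style tiled upload: each tile is pre-clipped into a small canvas and pushed with a raw texSubImage2D call during idle time (assumes gl, an already allocated and bound texture, and image dimensions that are multiples of the tile size):

function uploadTiled( gl, image, tileSize ) {
	// Pre-clip each tile into a small canvas, since WebGL1's texSubImage2D
	// cannot read a sub-rectangle out of a larger source image.
	var canvas = document.createElement( 'canvas' );
	canvas.width = canvas.height = tileSize;
	var ctx = canvas.getContext( '2d' );

	var tiles = [];
	for ( var y = 0; y < image.height; y += tileSize ) {
		for ( var x = 0; x < image.width; x += tileSize ) {
			tiles.push( { x: x, y: y } );
		}
	}

	function step( deadline ) {
		while ( tiles.length > 0 && deadline.timeRemaining() > 2 ) {
			var t = tiles.shift();
			ctx.drawImage( image, t.x, t.y, tileSize, tileSize, 0, 0, tileSize, tileSize );
			gl.texSubImage2D( gl.TEXTURE_2D, 0, t.x, t.y, gl.RGBA, gl.UNSIGNED_BYTE, canvas );
		}
		if ( tiles.length > 0 ) requestIdleCallback( step );
	}

	requestIdleCallback( step );
}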
@spite Good point, I hadn't thought about rIC not being called when presenting. At first I thought we would need a display.rIC, but I believe .rIC should stay attached to window and be called when both window and display are idle. I don't recall hearing anything about this in the WebVR spec discussions; @kearwood maybe has more information, but it's definitely an issue we should address.
Looking forward to seeing your UpdatableTexture PR! :) Even if it's just a WIP we could move some of the discussion there.
Maybe loaders could become something like this...
THREE.MyLoader = ( function () {

	// parse file and output a plain js object
	function parser( text ) {

		return { 'vertices': new Float32Array() };

	}

	// convert the js object to THREE objects
	function builder( data ) {

		var geometry = new THREE.BufferGeometry();
		geometry.addAttribute( 'position', new THREE.BufferAttribute( data.vertices, 3 ) );
		return geometry;

	}

	function MyLoader( manager ) {}

	MyLoader.prototype = {

		constructor: MyLoader,

		load: function ( url, onLoad, onProgress, onError ) {},

		parse: function ( text ) {

			return builder( parser( text ) );

		},

		parseAsync: function ( text, onParse ) {

			// pipe the parser's own source into a worker created on the fly
			var code = parser.toString() + '\nonmessage = function ( e ) { postMessage( parser( e.data ) ); }';
			var blob = new Blob( [ code ], { type: 'text/plain' } );
			var worker = new Worker( window.URL.createObjectURL( blob ) );

			worker.addEventListener( 'message', function ( e ) {

				onParse( builder( e.data ) );

			} );

			worker.postMessage( text );

		}

	};

	return MyLoader;

} )();
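Usage of such a loader could then look like this (sketch; material is assumed to exist):

// Synchronous path: blocks the main thread while parsing.
var geometry = new THREE.MyLoader().parse( text );
scene.add( new THREE.Mesh( geometry, material ) );

// Asynchronous path: parsing runs in the generated worker,
// only the builder step touches the main thread.
new THREE.MyLoader().parseAsync( text, function ( geometry ) {
	scene.add( new THREE.Mesh( geometry, material ) );
} );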
First proposal release of THREE.UpdatableTexture
Ideally it should be part of any THREE.Texture, but I would explore this approach first.
@mrdoob I see the merit in having the exact same code piped to the worker, it just feels soooo wrong. I wonder what the impact of serialising, blobbing and re-evaluating the script would be; nothing too terrible, but I don't think the browser is optimised for these quirks.
Also, ideally the fetch of the resource itself would happen in the worker. And I think the parser() method in the browser would need an importScripts of three.js itself.
But a single point for defining sync/async loaders would be kick-ass!
@mrdoob the builder function could be completely generic and common to all loaders (WIP: https://github.com/kaisalmen/WWOBJLoader/blob/Commons/src/loaders/support/WWMeshProvider.js#LL215-LL367; Update: not yet isolated in a function). If the input data is constrained to pure js objects without reference to any THREE objects (that's what you have in mind, right?) we could build serializable worker code without the need for imports in the worker (which is what WWOBJLoader does). This is easy for Geometry, but Materials/Shaders (if defined in a file) could then only be created in the builder, and only be described as JSON beforehand by the parser.
A worker should signal every new Mesh and its completion, I think. It could be altered like this:
// parse file and output js objects, signalling each mesh and overall completion
function parser( text, onMeshLoaded, onComplete ) {
	....
}

parse: function ( text ) {

	var node = new THREE.Object3D();
	var onMeshLoaded = function ( data ) {
		node.add( builder( data ) );
	};

	// onComplete (second callback) is only provided in the async case
	parser( text, onMeshLoaded );

	return node;

},
A worker builder util is helpful, plus some generic communication protocol. This does not contradict your idea of using the parser as-is, but it needs some wrapping, I think. Current state of the WWOBJLoader evolution: https://github.com/kaisalmen/WWOBJLoader/blob/Commons/src/loaders/support/WWMeshProvider.js#LL40-LL133, where the front-end calls are report_progress, meshData and complete.
Update 2:
- As for the builder, it could make sense to be able to set some parameters to adjust the behavior of the parser. This also implies configuration parameters should be transferable to the worker independent of parsing (as in WWOBJLoader, btw).
- WWOBJLoader2 now extends OBJLoader and overrides parse. So we have both parsing capabilities, but in different classes. It comes close to the proposal, but it is not in line yet. Some parser code needs to be unified and eventually both classes need to be fused.

That's it for now. Feedback welcome.
@mrdoob I like the idea of composing the worker out of the loader's code on the fly. My current approach just loads the entire combined application js and just uses different entry point from the main thread, definitely not as efficient as having workers composed with just the code they need.
I like the approach of using a trimmed-down transmission format for passing between workers, because it's easy to mark those TypedArrays as transferable when passing back to the main thread. In my current approach I'm using the .toJSON() method in the worker, but then I go through and replace the JS arrays for vertices, UVs, etc. with the appropriate TypedArray type, and mark them as transferable when calling postMessage. This makes the parsing/memory usage a bit lighter in the main thread, at the cost of a bit more processing/memory usage in the worker - it's a fine trade-off to make, but it could be made more efficient by either introducing a new transmission format as you propose, or by modifying .toJSON() to optionally give us TypedArrays instead of JS arrays.
The two downsides I see to this simplified approach are:
@spite Regarding "Also, ideally the fetch of the resource itself would happen in the worker." - this was my thinking when I first implemented the worker-based asset loader for Elation Engine - I had a pool of 4 or 8 workers, and I would pass them jobs as they became available, and then the workers would fetch the files, parse them, and return them to the main thread. However, in practice what this meant was that the downloads would block parsing, and you'd lose the benefits you'd get from pipelining, etc. if you requested them all at once.
Once we realized this, we added another layer to manage all our asset downloads, and then the asset downloader fires events to let us know when assets become available. We then pass these off to the worker pool, using transferrables on the binary file data to get it into the worker efficiently. With this change, the downloads all happen faster even though they're on the main thread, and the parsers get to run full-bore on processing, rather than twiddling their thumbs waiting for data. Overall this turned out to be one of the best optimizations we made in terms of asset load speed.
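The key piece is transferring the downloaded bytes instead of copying them; roughly (nextIdleWorker() is a hypothetical pool helper, not real code from the engine):

fetch( url )
	.then( function ( response ) { return response.arrayBuffer(); } )
	.then( function ( buffer ) {
		var worker = nextIdleWorker(); // hypothetical: pick a free worker from the pool
		// Listing the buffer as a transferable moves ownership to the worker:
		// no copy is made, so the main thread stays responsive.
		worker.postMessage( { url: url, buffer: buffer }, [ buffer ] );
	} );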
On the topic of texture loading, I've built a proof of concept of a new FramebufferTexture class, which comes with a companion FramebufferTextureLoader. This texture type extends WebGLRenderTarget, and its loader can be configured to load textures in chunked tiles of a given size and compose them into the framebuffer using requestIdleCallback().
https://baicoianu.com/~bai/three.js/examples/webgl_texture_framebuffer.html
In this example, just select an image size and a tile size and it'll start the loading process. First we initialize the texture to pure red. We start the download of the images (they're about 10MB, so give it a bit), and when they complete we change the background to blue. At this point we start parsing the image with createImageBitmap(), and when that's done we set up a number of idle callbacks which contain further calls to createImageBitmap() that efficiently split the image into tiles. These tiles are rendered into the framebuffer over a number of frames, and have a significantly lower impact on frame times than doing it all at once.
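The tile-splitting step itself is quite compact; a sketch of the idea (assumes an already decoded ImageBitmap and dimensions that are multiples of the tile size; onTile would draw the tile into the framebuffer texture):

function splitIntoTiles( bitmap, tileSize, onTile ) {
	var pending = [];
	for ( var y = 0; y < bitmap.height; y += tileSize ) {
		for ( var x = 0; x < bitmap.width; x += tileSize ) {
			pending.push( { x: x, y: y } );
		}
	}

	function step() {
		if ( pending.length === 0 ) return;
		var t = pending.shift();
		// The sx/sy/sw/sh overload crops the tile without touching the main image.
		createImageBitmap( bitmap, t.x, t.y, tileSize, tileSize ).then( function ( tile ) {
			onTile( tile, t.x, t.y ); // e.g. render the tile into the framebuffer
			requestIdleCallback( step ); // one tile per idle period
		} );
	}

	requestIdleCallback( step );
}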
NOTE - Firefox currently doesn't seem to implement all versions of createImageBitmap, and is currently throwing an error for me when it tries to split into tiles. As a result, this demo currently only works in Chrome. Does anyone have a reference for a createImageBitmap support roadmap in Firefox?
There's some clean-up I need to do, this prototype is a bit messy, but I'm very happy with the results and once I can figure out a way around the cross-browser problems (canvas fallback, etc), I'm considering using this as the default for all textures in JanusWeb. The fade-in effect is kind of neat too, and we could even get fancy and blit a downsized version first, then progressively load the higher-detail tiles.
Are there any performance or feature-related reasons anyone can think of why it might be a bad idea to have a framebuffer for every texture in the scene, as opposed to a standard texture reference? I couldn't find anything about max. framebuffers per scene, as far as I can tell once a framebuffer has been set up, if you're not rendering to it then it's the same as any other texture reference, but I have this feeling like I'm missing something obvious as to why this would be a really bad idea :)
@jbaicoianu re: Firefox's createImageBitmap, the reason is that they don't support the dictionary parameter, so it doesn't support image orientation or color space conversion. It makes most applications of the API pretty useless. I filed two bugs related to the issue: https://bugzilla.mozilla.org/show_bug.cgi?id=1367251 and https://bugzilla.mozilla.org/show_bug.cgi?id=1335594
@spite that's what I thought too, I'd seen this bug about not supporting the options dictionary - but in this case I'm not even using that, I'm just trying to use the x, y, w, h options. The specific error I'm getting is:
Argument 4 of Window.createImageBitmap '1024' is not a valid value for enumeration ImageBitmapFormat.
Which is confusing, because I don't see any version of createImageBitmap in the spec which takes an ImageBitmapFormat as an argument.
Are there any performance or feature-related reasons anyone can think of why it might be a bad idea to have a framebuffer for every texture in the scene, as opposed to a standard texture reference? I couldn't find anything about max. framebuffers per scene, as far as I can tell once a framebuffer has been set up, if you're not rendering to it then it's the same as any other texture reference, but I have this feeling like I'm missing something obvious as to why this would be a really bad idea :)
@jbaicoianu THREE.WebGLRenderTarget keeps a framebuffer, a texture and a render buffer. When you have the texture assembled, you can delete the framebuffer and the render buffer and only keep the texture. Something like this should do it (not tested):
texture = target.texture;
target.texture = null; // so the webgl texture is not deleted by dispose()
target.dispose();
@wrr that's good to know, thanks. I definitely have to do a pass on memory efficiency on this too - it inevitably crashes at some point if you change the parameters enough, so I know there's some clean-up I'm not doing yet. Any other hints like this would be much appreciated.
@mrdoob and @jbaicoianu I forgot to mention that I like the idea, too.
I have uncluttered the code (reworked init, worker instructions object, replaced rubbish multi-callback handling, common resource description, etc.) of OBJLoader and WWOBJLoader and all examples (code). Both loaders are now ready to be combined. They will be combined according to your blueprint hopefully some time next week, depending on my spare time:
Directed WWOBJLoader2 test:
https://kaisalmen.de/proto/test/wwparallels/main.src.html
Directed use of generic WorkerSupport:
https://kaisalmen.de/proto/test/meshspray/main.src.html
The big zipped OBJ file test:
https://kaisalmen.de/proto/test/wwobjloader2stage/main.src.html
I will update the above examples with newer code when available and let you know.
Update 2017-07-30: OBJLoader2 and WWOBJLoader2 now use identical Parsers. They pass data to a common builder function directly or from the worker.
Update 2017-07-31: WWOBJLoader2 is gone. OBJLoader2 provides parse and parseAsync, load and run (fed by LoaderDirector or manually).
Update 2017-08-09: Moved update to new post.
OBJLoader2 is signature- and behaviour-compatible again with OBJLoader (I broke this during evolution). OBJLoader2 provides parseAsync and load with a useAsync flag in addition. I think it is ready to be called V2.0.0-Beta now. Here you find the current dev status:
https://github.com/kaisalmen/WWOBJLoader/tree/V2.0.0-Beta/src/loaders
I have extracted LoaderSupport classes (independent of OBJ) that serve as utilities and required support tools. They could be re-used for potential other worker-based loaders. All code below I put under the namespace THREE.LoaderSupport to highlight its independence from OBJLoader2:
- Builder: for general mesh building
- WorkerDirector: creates loaders via reflection, processes PrepData in a queue with a configured amount of workers. Used to fully automate loaders (MeshSpray and Parallels demos)
- WorkerSupport: utility class to create workers from existing code and establish a simple communication protocol
- PrepData + ResourceDescriptor: description used for automation or simply for a unified description among examples
- Commons: possible base class for loaders (bundles common parameters)
- Callbacks: (onProgress, onMeshAlter, onLoad) used for automation and direction; LoadedMeshUserOverride is used to provide info back from onMeshAlter (normals addition in the objloader2 test below)
- Validator: null/undefined variable checks

@mrdoob @jbaicoianu OBJLoader2 now wraps a parser as suggested (it is configured with parameters set globally or received by PrepData for run). The Builder receives every single raw mesh and the parser returns the base node, but apart from that it matches the blueprint.
There is still some helper code in OBJLoader2 for serialization of the Parser that is likely not needed.
The Builder needs clean-up, as the contract/parameter object for the buildMeshes function is still heavily influenced by OBJ loading and is therefore still considered under construction.
The code needs some polishing, but then it is ready for feedback, discussion, criticism, etc...
OBJ Loader using run and load: https://kaisalmen.de/proto/test/objloader2/main.src.html
OBJ Loader using run async and parseAsync: https://kaisalmen.de/proto/test/wwobjloader2/main.src.html
Directed use of run async OBJLoader2: https://kaisalmen.de/proto/test/wwparallels/main.src.html
Directed use of generic WorkerSupport: https://kaisalmen.de/proto/test/meshspray/main.src.html
The big zipped OBJ file test: https://kaisalmen.de/proto/test/wwobjloader2stage/main.src.html
Looking good! Are you aware of these changes in OBJLoader? #11871 565c6fd0f3d9b146b9434e5fccfa2345a90a3842
Yes, I need to port this. I proposed some reproducible perf measurements. Will start working on both this weekend. When do you plan to release r87? N-gon support could make it, depending on the date.
@mrdoob et voila: https://github.com/mrdoob/three.js/pull/11928 n-gon support
Status update (code):
The created workers are now able to configure any parser inside the worker via parameters received by a message. WorkerSupport provides a reference worker runner implementation (code) that could be completely replaced by your own code if desired or if it becomes required.
The worker will create and run the parser in the run method of the WorkerRunnerRefImpl (Parser is available inside the worker scope; this.applyProperties calls setters or sets properties of the parser):
WorkerRunnerRefImpl.prototype.run = function ( payload ) {

	if ( payload.cmd === 'run' ) {

		console.log( 'WorkerRunner: Starting Run...' );

		var callbacks = {
			callbackBuilder: function ( payload ) {
				self.postMessage( payload );
			},
			callbackProgress: function ( message ) {
				console.log( 'WorkerRunner: progress: ' + message );
			}
		};

		// Parser is expected to be named as such
		var parser = new Parser();
		this.applyProperties( parser, payload.params );
		this.applyProperties( parser, payload.materials );
		this.applyProperties( parser, callbacks );
		parser.parse( payload.buffers.input );

		console.log( 'WorkerRunner: Run complete!' );

		callbacks.callbackBuilder( {
			cmd: 'complete',
			msg: 'WorkerRunner completed run.'
		} );

	} else {

		console.error( 'WorkerRunner: Received unknown command: ' + payload.cmd );

	}

};
The message from OBJLoader2.parseAsync looks like this:
this.workerSupport.run(
	{
		cmd: 'run',
		params: {
			debug: this.debug,
			materialPerSmoothingGroup: this.materialPerSmoothingGroup
		},
		materials: {
			materialNames: this.materialNames
		},
		buffers: {
			input: content
		}
	},
	[ content.buffer ]
);
The message object is loader-dependent, but the configuration of the Parser in the worker is generic. The code used by the examples linked in the previous post has been updated to the latest code.
I think the evolution of OBJLoader2 and the extraction of support functions has now reached a point where your feedback is required. When all examples have been ported from its repo to the branch above, I will open a PR with a complete summary and then request feedback.
FYI, here is a work-in-progress for having ImageBitmapLoader use a worker as discussed above. Perhaps more interestingly, some hard numbers on the results: https://github.com/mrdoob/three.js/pull/12456
firefox's createImageBitmap, the reason is they don't support the dictionary parameter, so it doesn't support image orientation or color space conversion. it makes most applications of the API pretty useless.
This is unfortunate.
@mrdoob Do you have a plan to switch from ImageLoader to ImageBitmapLoader in TextureLoader, since ImageBitmap should be less blocking to upload as a texture? createImageBitmap() seems to work on Firefox so far if we pass only the first argument. (Perhaps we don't need to pass the second and further arguments via TextureLoader?)
return createImageBitmap( blob );
It's actually important that createImageBitmap() supports the options dictionary. Otherwise you can't change stuff like image orientation (flip-Y) or indicate premultiplied alpha. The thing is you can't use WebGLRenderingContext.pixelStorei for ImageBitmap. From the spec:
If the TexImageSource is an ImageBitmap, then these three parameters (UNPACK_FLIP_Y_WEBGL, UNPACK_PREMULTIPLY_ALPHA_WEBGL, UNPACK_COLORSPACE_CONVERSION_WEBGL) will be ignored. Instead the equivalent ImageBitmapOptions should be used to create an ImageBitmap with the desired format.
So I think we can only switch to ImageBitmapLoader if FF supports the options dictionary. Besides, properties like Texture.premultiplyAlpha and Texture.flipY do not work with ImageBitmap right now. I mean if users set them, they won't affect a texture based on ImageBitmap, which is somewhat unfortunate.
Ah, OK. I've missed that spec.
The importance of the options dictionary is also discussed here:
The bugs on bugzilla (https://bugzilla.mozilla.org/show_bug.cgi?id=1367251, https://bugzilla.mozilla.org/show_bug.cgi?id=1335594) have been there untouched for ... two years now? I didn't think it would take them this bloody long to fix it.
So the problem is that "technically" the feature is supported on FF, but in practice is useless. In order to use it, we could have a path for Chrome that uses it, and another for the other browsers that doesn't. Problem is, since Firefox does have the feature, we'd have to do UA sniffing, which sucks.
The practical solution is performing feature detection: build a 2x2 image using cIB with the flip flag, and then read back and make sure the values are correct.
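A sketch of that detection along the lines described (build a tiny two-row ImageData, ask createImageBitmap to flip it, read back which row ended up on top):

function detectImageBitmapFlip() {
	// Two rows: red on top, green on the bottom.
	var data = new ImageData( new Uint8ClampedArray( [
		255, 0, 0, 255,   255, 0, 0, 255,
		0, 255, 0, 255,   0, 255, 0, 255
	] ), 2, 2 );
	return createImageBitmap( data, { imageOrientation: 'flipY' } ).then( function ( bitmap ) {
		var canvas = document.createElement( 'canvas' );
		canvas.width = canvas.height = 2;
		var ctx = canvas.getContext( '2d' );
		ctx.drawImage( bitmap, 0, 0 );
		// If the flip was honoured, the green row is now on top.
		return ctx.getImageData( 0, 0, 1, 1 ).data[ 1 ] === 255;
	} ).catch( function () {
		return false; // the browser rejected the options dictionary
	} );
}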
About the Firefox bugs, I'm also going to contact them internally. Let's see if we need a workaround after we hear their plan.
The bugs on bugzilla (https://bugzilla.mozilla.org/show_bug.cgi?id=1367251, https://bugzilla.mozilla.org/show_bug.cgi?id=1335594) have been there untouched for ... two years now? I didn't think it would take them this bloody long to fix it.
Yep, sorry for that, I really didn't follow up with it for a while -_-
So the problem is that "technically" the feature is supported on FF, but in practice is useless. In order to use it, we could have a path for Chrome that uses it, and another for the other browsers that doesn't. Problem is, since Firefox does have the feature, we'd have to do UA sniffing, which sucks.
The practical solution is performing feature detection: build a 2x2 image using cIB with the flip flag, and then read back and make sure the values are correct.
Yep, I agree that both solutions really suck and we should try to avoid them, so before digging into any of these let's see if we can unblock it on our side.
I made an ImageBitmap texture-upload performance test. It uploads the texture every 5 seconds.
You can compare regular Image vs ImageBitmap.
https://rawgit.com/takahirox/three.js/ImageBitmapTest/examples/webgl_texture_upload.html (Regular Image) https://rawgit.com/takahirox/three.js/ImageBitmapTest/examples/webgl_texture_upload.html?imagebitmap (ImageBitmap)
On my Windows machine I see:
Browser | 8192x4096 JPG 4.4MB | 2048x2048 PNG 4.5MB |
---|---|---|
Chrome Image | 500ms | 140ms |
Chrome ImageBitmap | 165ms | 35ms |
FireFox Image | 500ms | 40ms |
FireFox ImageBitmap | 500ms | 60ms |
(texture.generateMipmaps is true)
My thoughts:
Even with ImageBitmap, uploading a texture still seems to block for large textures. Maybe we need a partial-uploading technique or something similar for non-blocking uploads.
I guess one solution for this problem might be the usage of a texture compression format and the avoidance of JPG or PNG (and thus ImageBitmap). It would be interesting to see some performance data in this context.
Yes, agreed. But I guess we would probably still see blocking for large textures, especially on low-power devices like mobile. Anyway, let's evaluate the performance first.
Or use scheduled/requestIdleCallback texSubImage2D
rIC = requestIdleCallback?
Yes, I've made a ninja edit.
OK. Yes agreed.
BTW, I'm not familiar with compressed textures yet. Let me confirm my understanding. We can't use a compressed texture with ImageBitmap because compressedTexImage2D doesn't accept an ImageBitmap, correct?
https://developer.mozilla.org/en-US/docs/Web/API/WebGLRenderingContext/compressedTexImage2D
I went back to revisit my old TiledTextureLoader experiments - seems like they're now causing my video driver to crash and restart :(
(edit: actually, it looks like even loading the largest texture (16k x 16k - https://baicoianu.com/~bai/three.js/examples/textures/dotamap1_25.jpg) directly in chrome is what's causing the crash. This used to work just fine, so seems to be some regression in chrome's image handling)
I'd done some experiments using requestIdleCallback, ImageBitmap, and ES6 generators to split a large texture into multiple chunks for uploading to the GPU. I used a framebuffer rather than a regular Texture, because even if you're using texSubImage2D to populate the image data, you still need to preallocate the memory, which requires uploading a bunch of empty data to the GPU, whereas a framebuffer can be created and initialized with a single GL call.
The repository for those changes is still available here https://github.com/jbaicoianu/THREE.TiledTexture/
Some notes from what I remember of the experiments:
My results were similar: there was a trade off between upload speed and jankiness. (BTW I created this https://github.com/spite/THREE.UpdatableTexture).
I think that for the second option to work in WebGL 1, you would actually need two textures, or at least modifiers to the UV coordinates. In WebGL 2 I think it's easier to copy sources that are a different size from the target texture.
As discussed in https://github.com/mrdoob/three.js/issues/11301 one of the main problems that we have in WebVR, although is annoying in non-VR experiences too, is blocking the main thread while loading assets.
With the recent implementation of link traversal in the browser, non-blocking loading is a must to ensure a satisfying user experience. If you jump from one page to another and the target page starts to load assets blocking the main thread, it will block the render function so no frames will be submitted to the headset, and after a short grace period the browser will kick us out of VR. The user then has to take off the headset, click Enter VR again (a user gesture is required to do so), and go back into the experience.
Currently we can see two implementations of non-blocking loading of OBJ files:
(1) Using web workers to parse the OBJ file and then return the data back to the main thread (WWOBJLoader): here the parsing is done concurrently and you could have several workers at the same time. The main drawback is that once you've loaded the data you need to send the payload back to the main thread to reconstruct the THREE object instances, and that part could block the main thread: https://github.com/kaisalmen/WWOBJLoader/blob/master/src/loaders/WWOBJLoader2.js#L312-L423
(2) Main-thread promise with deferred parsing using setTimeout (Oculus React VR): this loader keeps reading lines in small time slots to avoid blocking the main thread, by calling setTimeout: https://github.com/facebook/react-vr/blob/master/ReactVR/js/Loaders/WavefrontOBJ/OBJParser.js#L281-L298 With this approach the loading will be slower, as we're only parsing some lines in each time slot, but the advantage is that once the parsing is completed we'll have the THREE objects ready to use without any additional overhead (a rough sketch of this time-slicing pattern follows below).
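A rough sketch of that time-slicing pattern (not the actual React VR code; parseLine() is a placeholder for the per-line parser):

function parseIncrementally( lines, onDone ) {
	var index = 0;

	function step() {
		var start = performance.now();
		// Parse for at most ~8ms, then yield so a frame can be submitted.
		while ( index < lines.length && performance.now() - start < 8 ) {
			parseLine( lines[ index ++ ] ); // placeholder for the per-line OBJ parser
		}
		if ( index < lines.length ) {
			setTimeout( step, 0 );
		} else {
			onDone();
		}
	}

	step();
}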
Both have their pros and cons. I'm honestly not enough of an expert on web workers to evaluate that implementation, but it's an interesting discussion that would ideally lead to a generic module that could be used to port the loaders to non-blocking versions.
Any suggestions?
/cc @mikearmstrong001 @kaisalmen @delapuente @spite