gkjohnson opened 9 months ago
@gkjohnson I'd like to contribute to the issue you mentioned regarding fetch. We have a lot of b3dm tiles in the scene (each tile is around 3 MB, plus a 4096x4096 KTX texture). While profiling in Chrome, I noticed that a lot of time is spent in the arrayBuffer() call. By performing the fetch and arrayBuffer() calls in a worker and transferring the result back to the main thread (as a Transferable), I no longer see frame rate hiccups during scene traversal (an almost constant 60 fps). The thing is, I don't really have a proper explanation for this. Perhaps the memory allocation for each tile (around 3 MB) is the cause, i.e. parsing the response, allocating memory, and copying bytes into the buffer. I simply don't know. I thought that, since it is a promise (and maybe a function implemented internally in V8), it would be executed on a separate internal V8 thread, apart from the main JS application thread.
```js
downloadQueue.add( tile, downloadTile => {

	if ( downloadTile.__loadIndex !== loadIndex ) {

		return Promise.resolve();

	}

	const uri = this.preprocessURL ? this.preprocessURL( downloadTile.content.uri ) : downloadTile.content.uri;
	return fetch( uri, Object.assign( { signal }, this.fetchOptions ) );

} )
.then( res => {

	if ( tile.__loadIndex !== loadIndex ) {

		return;

	}

	// this is where most of the profiled main-thread time goes
	return res.arrayBuffer();

} )
```
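Moving both the fetch and the arrayBuffer() call into a worker could be sketched roughly like this. This is only an illustration of the idea, not the library's API: the worker file name, the message shape ({ id, uri, fetchOptions }), and the fetchArrayBuffer helper are all made-up names for this example.

```js
// tile-fetch-worker.js (a separate file) -- illustrative, not part of the library
self.onmessage = async e => {

	const { id, uri, fetchOptions } = e.data;

	try {

		const res = await fetch( uri, fetchOptions );
		const buffer = await res.arrayBuffer();

		// listing the buffer as a transferable hands it to the main
		// thread without a structured-clone copy
		self.postMessage( { id, buffer }, [ buffer ] );

	} catch ( error ) {

		self.postMessage( { id, error: error.message } );

	}

};

// main thread: wrap the worker round-trip in a promise
const worker = new Worker( './tile-fetch-worker.js' );
const pending = new Map();
let nextId = 0;

worker.onmessage = e => {

	const { id, buffer, error } = e.data;
	const request = pending.get( id );
	pending.delete( id );

	if ( error ) request.reject( new Error( error ) );
	else request.resolve( buffer );

};

function fetchArrayBuffer( uri, fetchOptions ) {

	return new Promise( ( resolve, reject ) => {

		const id = nextId ++;
		pending.set( id, { resolve, reject } );
		worker.postMessage( { id, uri, fetchOptions } );

	} );

}
```

One caveat: an AbortSignal can't be structured-cloned into a worker, so the cancellation that the snippet above gets from `signal` would have to become an explicit "cancel" message that the worker turns into its own AbortController.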
As for running GLTFLoader in a worker thread, I didn't notice any significant difference.
@Nmzik Thanks for taking a look at this! Would you be able to make a small PR with the change to perform fetches in a WebWorker so I can try it out as well? Then we can figure out what an API change to enable it might look like. Unfortunately we can't enable this by default, since bundlers still choke on WebWorkers and workers can't be started from cross-origin scripts (i.e. via CDNs).
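For what it's worth, one common workaround for the cross-origin restriction (a sketch only, not something the library currently does) is to build the worker from an inline Blob, so the worker script always comes from the page's own origin even when the library is loaded from a CDN:

```js
// Build the worker from a same-origin Blob URL instead of a script URL,
// sidestepping the cross-origin Worker restriction.
const workerSource = `
	self.onmessage = async e => {
		const res = await fetch( e.data.uri );
		const buffer = await res.arrayBuffer();
		self.postMessage( { buffer }, [ buffer ] );
	};
`;

const blobUrl = URL.createObjectURL(
	new Blob( [ workerSource ], { type: 'application/javascript' } )
);
const worker = new Worker( blobUrl );
```

Fetches made inside the worker are still subject to CORS on the tile server, and bundler friendliness remains a separate problem.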
> I simply don't know. I thought, as it is a promise (and maybe a function, implemented in V8 internally), it should be executed on a separate V8 internal thread (aside from the JS main application thread).
Yeah, I figured it would be fairly asynchronous as well, but when I tested it I noticed that at least starting a fetch was causing some stalls, which is why I opened the issue.
It seems that fetches can sometimes take a few milliseconds to run, and likewise parsing and generating data blobs for image bitmaps can be a bit slow; between parsing and loading the data it can sometimes take ~4 ms. Image bitmaps and geometry array buffers are fast to transfer from web workers, though, so this could help address the performance hiccups during loading of complex models.
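The image-bitmap part of that could look something like the sketch below, with decoding done off the main thread and only the transferable ImageBitmap posted back. This is a minimal illustration assuming the worker has already received the raw texture bytes as an ArrayBuffer; createImageBitmap availability and supported formats vary by browser.

```js
// worker: decode image bytes off the main thread
self.onmessage = async e => {

	// wrap the received bytes in a Blob so createImageBitmap can decode them
	const blob = new Blob( [ e.data.buffer ] );
	const bitmap = await createImageBitmap( blob );

	// ImageBitmap is transferable, so this hands the decoded image
	// to the main thread without a copy
	self.postMessage( { bitmap }, [ bitmap ] );

};
```

On the main thread the received bitmap can be used directly, e.g. as the image of a three.js texture, without any further decode cost.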