Hi @j9liu, I started a draft PR addressing most of this issue. It follows your proposed strategy, with a few slight changes:
- `getImageFromTypedArray` directly uses the `ImageData` constructor, rather than the context's `createImageData` method
- The code is structured a little differently than in `GltfTextureLoader`, to more clearly separate WebGL1 vs WebGL2 functionality

Handling the WebGL1 limitation (power-of-two dimension requirements) adds a lot of complexity. I wanted to separate out that complexity so that it can be easily removed whenever we drop WebGL1 support. The main `createTexture` function now looks like this:
```js
const texture = context.webgl2
  ? getTextureAndMips(textureUniform, image, context)
  : getWebGL1Texture(textureUniform, image, context);
```
and all the dimension checks and resizing are handled inside `getWebGL1Texture`.
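In rough outline, the WebGL1 path does something like this (a simplified sketch, with `isPowerOfTwo` inlined for illustration; the draft PR has the full details):

```js
function isPowerOfTwo(value) {
  return value > 0 && (value & (value - 1)) === 0;
}

function getWebGL1Texture(textureUniform, image, context) {
  // WebGL1 can only generate mipmaps (or use repeat wrapping) for
  // textures with power-of-two dimensions
  if (!isPowerOfTwo(image.width) || !isPowerOfTwo(image.height)) {
    image = resizeImageToNextPowerOfTwo(image);
  }
  return getTextureAndMips(textureUniform, image, context);
}
```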
I wrote unit tests for the helper functions (`resizeImageToNextPowerOfTwo` and `getImageFromTypedArray`), but I'm still working on the tests for the main `TextureManager`.
I'd appreciate your feedback on 2 things:

1. Does the overall structure of the new `TextureManager` make sense to you?
2. How should we test both the WebGL1 and WebGL2 code paths? The current `TextureManagerSpec` doesn't do this, and I'm not familiar with how to set up a context for the tests.

Thanks!
Hi @jjhembd,
I'll need a moment to leave more detailed feedback on your PR, but from a glance:
- The `ImageData` constructor isn't supported by Internet Explorer, but we no longer support IE ~~(@ggetz @sanjeetsuhag can you confirm?)~~ confirmed by @sanjeetsuhag, so I think it's okay to use here.
- For testing both WebGL1 and WebGL2 contexts, you'll want to make two different scenes in the spec. The first one can be created normally with `createScene()`, since we use WebGL1 by default. The scene with WebGL2 should be constructed like so:
```js
sceneWithWebgl2 = createScene({
  contextOptions: {
    requestWebgl2: true,
  },
});
```
Then, in any unit test involving WebGL2, we have to check if the context was successfully requested. Otherwise, the unit test will fail. So we should put this at the beginning of the unit test:
```js
if (!sceneWithWebgl2.context.webgl2) {
  return;
}
```
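Putting those pieces together, the spec could be structured roughly like this (a sketch of the usual pattern; `destroyForSpecs` is the standard cleanup helper in our specs):

```js
describe("Scene/ModelExperimental/TextureManager", function () {
  let scene;
  let sceneWithWebgl2;

  beforeAll(function () {
    // WebGL1 by default
    scene = createScene();
    sceneWithWebgl2 = createScene({
      contextOptions: {
        requestWebgl2: true,
      },
    });
  });

  afterAll(function () {
    scene.destroyForSpecs();
    sceneWithWebgl2.destroyForSpecs();
  });

  it("creates a texture with mipmaps in WebGL2", function () {
    // Skip if the browser did not actually give us a WebGL2 context
    if (!sceneWithWebgl2.context.webgl2) {
      return;
    }
    // ...exercise the TextureManager with sceneWithWebgl2.context
  });
});
```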
Currently, `ModelExperimental`'s `TextureManager` doesn't handle mipmapping like `GltfTextureLoader` does. This was started in #10158, but there turned out to be more work involved and the PR has gone inactive. A thorough implementation would do the following:

- Separate the `resizeImageToNextPowerOfTwo` function into its own file under `Source/Core/resizeImageToNextPowerOfTwo.js`. That way, it can be used in both `GltfTextureLoader` and `TextureManager` (and potentially be reused in the future). A sketch of this helper follows this list.
- Handle mipmapping in `TextureManager` in a similar manner to `GltfTextureLoader`, with some differences:
  - `GltfTextureLoader` handles KTX2 compression with `internalFormat`, while `TextureManager` handles `typedArray`, which is an array of numbers that represents the colors of the pixels. We can actually take these pixels and turn them into an image, then resize them, as long as the `PixelDatatype` is `UNSIGNED_BYTE`. So let's create a new file `Source/Core/getImageFromTypedArray.js` containing `function getImageFromTypedArray(typedArray, width, height)`. Then, inside (see the sketch after this list):
    - Create a canvas and call `createImageData` on its 2D context, with the original image width and height (don't convert to power-of-two yet)
    - Set the `ImageData`'s data field directly. This only requires a for loop that sets `imageData.data[i] = typedArray[i]`.
    - Call `putImageData` with the image data we just created, at coordinates (0, 0)
    - Return the `HTMLCanvasElement`
  - `GltfTextureLoader` calls `console.warn` for KTX2 compression, and in the same location in `TextureManager`, we want to `console.warn` for typed arrays. But before that, check if `textureUniform.pixelDatatype === PixelDatatype.UNSIGNED_BYTE`. If so, call `getImageFromTypedArray` and put the resulting image into `resizeImageToNextPowerOfTwo`. Then create a `Texture` from that and proceed as normal. (See the last sketch below.)
- Add unit tests for `resizeImageToNextPowerOfTwo`, `getImageFromTypedArray`, and `TextureManager`.
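For illustration, here's a minimal sketch of what the separated-out `resizeImageToNextPowerOfTwo` could look like, using a 2D canvas to do the resizing (the `nextPowerOfTwo` helper is inlined just to keep the sketch self-contained):

```js
function nextPowerOfTwo(value) {
  // Smallest power of two that is >= value
  let result = 1;
  while (result < value) {
    result *= 2;
  }
  return result;
}

function resizeImageToNextPowerOfTwo(image) {
  const canvas = document.createElement("canvas");
  canvas.width = nextPowerOfTwo(image.width);
  canvas.height = nextPowerOfTwo(image.height);
  const context = canvas.getContext("2d");
  // Stretch the original image to fill the power-of-two canvas
  context.drawImage(
    image,
    0,
    0,
    image.width,
    image.height,
    0,
    0,
    canvas.width,
    canvas.height
  );
  return canvas;
}

export default resizeImageToNextPowerOfTwo;
```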
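And a sketch of `getImageFromTypedArray` following the steps above, assuming the typed array holds RGBA colors with one byte per channel:

```js
function getImageFromTypedArray(typedArray, width, height) {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const context = canvas.getContext("2d");

  // Image data at the original dimensions; don't convert to power-of-two yet
  const imageData = context.createImageData(width, height);

  // Set the ImageData's data field directly from the pixel colors
  const length = typedArray.length;
  for (let i = 0; i < length; i++) {
    imageData.data[i] = typedArray[i];
  }

  // Put the pixels onto the canvas at (0, 0)
  context.putImageData(imageData, 0, 0);

  // The canvas is the HTMLCanvasElement we return as the image
  return canvas;
}

export default getImageFromTypedArray;
```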
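Finally, a rough sketch of the check in `TextureManager` (the wrapper function and the `width`/`height` fields on `textureUniform` are illustrative; `PixelDatatype` is Cesium's existing enum):

```js
// Called where GltfTextureLoader would instead warn about KTX2 compression.
// The caller creates a Texture from the result and proceeds as normal.
function getResizedImageForWebGL1(textureUniform) {
  if (textureUniform.pixelDatatype !== PixelDatatype.UNSIGNED_BYTE) {
    // Can't safely reinterpret the pixels, so warn and skip resizing
    console.warn(
      "Texture requires resizing for mipmaps, but its pixel datatype is not UNSIGNED_BYTE."
    );
    return undefined;
  }
  // Turn the raw pixels into an image, then resize to power-of-two dimensions
  const image = getImageFromTypedArray(
    textureUniform.typedArray,
    textureUniform.width,
    textureUniform.height
  );
  return resizeImageToNextPowerOfTwo(image);
}
```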