jeffpeck10x opened 2 years ago
It appears that the problem is that I need to specify indices, just like in the example. At first it was not clear what indices was, but it appears to be a flat array in groups of three, specifying the vertices of each triangle.
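For instance (an illustrative sketch, not taken from the deck.gl docs):

```js
// Two triangles describing a quad with vertices 0..3.
// Each group of three entries selects one triangle's vertices.
const indices = new Uint32Array([
  0, 1, 2, // first triangle
  0, 2, 3  // second triangle
]);
```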
It seems easier to simply not use a luma buffer here, and instead pass the data as a typed array and rely on deck.gl to build the indices internally. But is any performance lost by doing so?
It would be nice if there were documentation explaining the pros and cons of passing indices and a luma buffer vs. passing a typed array to the SolidPolygonLayer.
It would also be nice if the documentation carried a clear warning about using a buffer without indices for a SolidPolygonLayer. Unless I am still misunderstanding something...
You are correct that getPolygon can be a luma.gl Buffer only if the indices attribute is supplied.

SolidPolygonLayer breaks polygons down into triangles in order to draw them in WebGL. This process can only be done on the CPU. If your polygon vertices are opaque to the CPU (i.e. getPolygon is a WebGL buffer), the layer cannot read them, so it is up to you to also supply the instructions for how they are used in drawing (i.e. indices).
When you pass typed arrays to the attributes, each attribute creates its own copy of a WebGL buffer. If you build your own WebGL buffers, you can share one buffer among multiple attributes/layers, like the interleaving example does. Whether your app should use a typed array or a buffer depends on the raw data format you are dealing with. Generally speaking, uploading data to the GPU is much faster than wrangling it into the right format on the CPU. So if your data is well-formatted and ready for buffer-sharing as is (e.g. streamed from a server that you control), go ahead and create your own Buffer object. If some processing is required on the client side after the data loads, let the built-in functionality of the layer do the heavy lifting for you. For relatively small datasets (tens of thousands of vertices) that do not update often, having duplicates is really not a big deal.
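To illustrate the buffer-sharing case, here is a minimal sketch assuming the luma.gl v8 Buffer API and deck.gl's binary-attribute data format; `gl`, `polygonCount`, `startIndices`, and the arrays are placeholders:

```js
import {Buffer} from '@luma.gl/core';
import {SolidPolygonLayer, PathLayer} from '@deck.gl/layers';

// Upload the vertex data once; both layers below reference the same Buffer.
const positions = new Float32Array([/* x0, y0, z0, x1, y1, z1, ... */]);
const positionBuffer = new Buffer(gl, {data: positions});

const fills = new SolidPolygonLayer({
  id: 'fills',
  data: {
    length: polygonCount,
    startIndices, // first vertex of each polygon
    attributes: {
      getPolygon: {buffer: positionBuffer, size: 3},
      // Required here: the layer cannot read the Buffer to triangulate,
      // so the triangle list must be supplied up front.
      indices: new Uint32Array([/* vertex indices, three per triangle */])
    }
  },
  _normalize: false
});

const outlines = new PathLayer({
  id: 'outlines',
  data: {
    length: polygonCount,
    startIndices,
    attributes: {
      getPath: {buffer: positionBuffer, size: 3}
    }
  },
  _pathType: 'open'
});
```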
Thank you for explaining that, @Pessimistress.

I understand that it is technically possible to do getAccessorFromBuffer(data.attributes.getPolygon.buffer.getData()) if data.attributes.getPolygon.buffer is a luma buffer. Obviously calling .getData is not efficient, but it shouldn't be any less efficient than taking a typed array and turning it into a luma buffer. Right?
So, wouldn't it make sense for the SolidPolygonLayer to check whether data.attributes.getPolygon is a luma buffer and pull the data out of the buffer to triangulate? The advantage would be the ability to consistently pass data to all layers in the same variety of formats (be they accessors, typed arrays, or luma buffers). An additional advantage would arise when multiple layers share the same data (say a SolidPolygonLayer and a PathLayer): you could pass a single luma buffer to both of them, and .getData() would only need to be called once to build the triangles. This is what you wrote above.
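Roughly what I'm imagining the layer could do (purely hypothetical, not existing deck.gl code):

```js
// Hypothetical fallback inside the layer — not actual deck.gl internals.
const polygonAttribute = data.attributes.getPolygon;
if (polygonAttribute.buffer && !data.attributes.indices) {
  // Read the vertices back to the CPU (WebGL2 only) so that the
  // tesselator can build the triangle indices as usual.
  const value = polygonAttribute.buffer.getData();
  // ...then triangulate `value` exactly as if a typed array had been passed
}
```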
Alternatively, it would be nice if deck.gl provided a way to build the indices in the same way the PathTesselator does, either via documentation or some exported util function.
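For what it's worth, I believe deck.gl's polygon tesselation uses the earcut library under the hood, so building the indices by hand today might look like this (a sketch for a single 2D polygon with made-up coordinates; multi-polygon data would need per-polygon index offsets):

```js
import earcut from 'earcut';

// Flat [x0, y0, x1, y1, ...] ring for one polygon (hypothetical data)
const positions = [0, 0, 10, 0, 10, 10, 0, 10];

// earcut(vertices, holeIndices, dimensions) returns a flat array of
// triangle indices, three entries per triangle.
const indices = new Uint32Array(earcut(positions, null, 2));
```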
Regardless, it should definitely be noted somewhere, as the deck.gl documentation currently speaks of the benefits of passing a buffer directly but makes little mention of the limitations. Even a single sentence in the SolidPolygonLayer documentation would go a long way: "Note: if you pass the polygon data as a luma buffer, you must supply a list of indices", with a link to the Wikipedia page on polygon triangulation.
Reading directly from a buffer is only supported in WebGL2, so it's not a reliable feature if you are building production apps. Also, it doesn't make sense to read the data back from the GPU if you had it on the CPU to begin with.
You should try passing both buffer: <Buffer> and value: <TypedArray> to your getPolygon attribute. I haven't done this lately but it's supposed to work.
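A sketch of that suggestion (the attribute shape is assumed; positionBuffer and positions are placeholders for the same data in both forms):

```js
attributes: {
  getPolygon: {
    buffer: positionBuffer, // GPU copy, shareable with other layers
    value: positions,       // CPU-readable typed array for triangulation
    size: 3
  }
}
```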
Totally agree with the proposed documentation changes though.
> You should try passing both buffer: <Buffer> and value: <TypedArray> to your getPolygon attribute. I haven't done this lately but it's supposed to work.
ooh, that is clever.
Ok, thank you for explaining all of that. I didn't realize that .getData() might not be compatible with older WebGL implementations.
I concur that it doesn't make sense to read the data back out of the buffer after it has been loaded there, but I was considering the case of the same data going to two different layers, where it might be easier to just copy the data out of the buffer when needed. Still, your suggestion to just pass along the original typed array is a fabulously straightforward way to handle that.
I will leave this open for the documentation update.
Description
I have a composite layer whose renderLayers method is being implemented like this (as a contrived example):
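(The original snippet is not preserved in this transcript; the following is a reconstruction of its shape, with placeholder names.)

```js
import {CompositeLayer} from '@deck.gl/core';
import {SolidPolygonLayer} from '@deck.gl/layers';

class MyCompositeLayer extends CompositeLayer {
  renderLayers() {
    const {positionBuffer, length, startIndices} = this.props;
    return new SolidPolygonLayer({
      id: `${this.props.id}-fill`,
      data: {
        length,
        startIndices,
        attributes: {
          // getPolygon supplied as a luma.gl Buffer — the case that fails
          getPolygon: {buffer: positionBuffer, size: 3}
        }
      },
      _normalize: false,
      getFillColor: [80, 120, 200]
    });
  }
}
```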
Instead of rendering the polygons, I get this error (full trace below):
If I just change this:
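(Reconstruction, matching the sketch above; the original snippet was not preserved:)

```js
getPolygon: {buffer: positionBuffer, size: 3}
```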
to this:
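(Likewise reconstructed:)

```js
getPolygon: {value: positions, size: 3} // the same data as a plain typed array
```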
everything works!
The reason that I am expecting this to work with a Buffer is that this is what is demonstrated in this example: https://github.com/visgl/deck.gl/blob/master/examples/experimental/interleaved-buffer/app.js

Expected Behavior
The data should render, regardless of whether it is passed as a typed array or as a luma buffer.
Repro Steps
I can provide a working demo if necessary.
Environment
Logs