wwwtyro / candygraph

Fast by default, flexible 2D plotting library.

what has the biggest performance impact? #25

Closed 0xkalle closed 2 years ago

0xkalle commented 2 years ago

Hey,

sorry for opening an issue for a simple question.

I'm not able to fully understand which actions have the biggest performance impact. I've gone through the tutorials and the docs and tried what I understood, so maybe I'm already on the right track:

What has the biggest performance impact?

From what I can see, the most important thing is, as mentioned in the last tutorial step, to reuse the data as much as possible.

1) But does that mean that having a bigger static dataset (where a large portion of the points isn't even visible) and never needing to copy it again is better than having a smaller dataset that needs to be refreshed often? (Is there something like occlusion culling happening in WebGL in the background?)

2) Should I think about splitting my data into static and variable datasets, so that I reduce the part I have to copy over and over and retain the static parts?

I tried to benchmark these cases, but it's hard to get a reliable result.

Best, Kalle

wwwtyro commented 2 years ago

But does that mean that having a bigger static dataset (where a large portion of the points isn't even visible) and never needing to copy it again is better than having a smaller dataset that needs to be refreshed often? (Is there something like occlusion culling happening in WebGL in the background?)

WebGL will not execute the fragment shader on offscreen fragments, but it's not entirely cost-free to rely on that. In fact, applications that are likely to benefit from CandyGraph will tend to spend more cycles on vertices than fragments, so there may not be much in the way of savings at all. Another option is to chunk your data into static datasets and then only render the datasets that intersect the region you're plotting.
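Here's a minimal sketch of that chunking idea, assuming x-sorted data (e.g. a time series). The `Chunk` shape and the `drawChunk` callback are placeholders for however you create and render your datasets and primitives in your own code, not CandyGraph API:

```ts
// Split a large static series into fixed-size chunks, each uploaded once,
// and only draw the chunks that overlap the visible x-range.
interface Chunk {
  xMin: number;     // smallest x in this chunk (data assumed sorted by x)
  xMax: number;     // largest x in this chunk
  xs: Float32Array; // static data, uploaded once and reused every frame
  ys: Float32Array;
}

function makeChunks(xs: Float32Array, ys: Float32Array, chunkSize: number): Chunk[] {
  const chunks: Chunk[] = [];
  for (let i = 0; i < xs.length; i += chunkSize) {
    const cx = xs.subarray(i, i + chunkSize);
    const cy = ys.subarray(i, i + chunkSize);
    chunks.push({ xMin: cx[0], xMax: cx[cx.length - 1], xs: cx, ys: cy });
  }
  return chunks;
}

// One draw call per visible chunk; offscreen chunks are skipped entirely.
function drawVisible(
  chunks: Chunk[],
  viewXMin: number,
  viewXMax: number,
  drawChunk: (c: Chunk) => void // placeholder for your render call
) {
  for (const chunk of chunks) {
    if (chunk.xMax >= viewXMin && chunk.xMin <= viewXMax) {
      drawChunk(chunk);
    }
  }
}
```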

Should I think about splitting my data into static and variable datasets, so that I reduce the part I have to copy over and over and retain the static parts?

Yes, that would likely be helpful if you find yourself hitting some bottleneck. The only caveat I would add is that you wouldn't want to make it so fine-grained that it creates too many draw calls (i.e., you'll start having perf issues if you have ~10K datasets).
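As a rough illustration of that split, assuming a large historical portion that never changes plus a small live tail: the upload callbacks below are placeholders rather than CandyGraph calls, and the point is simply that only the small buffer gets re-copied.

```ts
// Keep the static part of a series in buffers uploaded once, and re-upload
// only the small live tail whenever new samples arrive.
type Upload = (xs: Float32Array, ys: Float32Array) => void;

class SplitSeries {
  private liveXs: number[] = [];
  private liveYs: number[] = [];

  constructor(
    staticXs: Float32Array,
    staticYs: Float32Array,
    private uploadStatic: Upload, // placeholder: called exactly once
    private uploadLive: Upload    // placeholder: called only when the tail changes
  ) {
    this.uploadStatic(staticXs, staticYs); // one copy, then reused every frame
  }

  // Append a new sample; only the small live buffer is re-copied.
  push(x: number, y: number) {
    this.liveXs.push(x);
    this.liveYs.push(y);
    this.uploadLive(Float32Array.from(this.liveXs), Float32Array.from(this.liveYs));
  }
}
```

Two datasets (one static, one live) keeps the per-frame copy small without exploding the number of draw calls.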

Hope that helps :slightly_smiling_face:

0xkalle commented 2 years ago

Thank you very much!