leeoniya / uPlot

📈 A small, fast chart for time series, lines, areas, ohlc & bars
MIT License

Set data array read position in options to enable circular buffered data #947

Closed · RedShift1 closed this issue 7 months ago

RedShift1 commented 7 months ago

Currently, data is read starting from index 0 (https://github.com/leeoniya/uPlot/blob/e579947241a48c401a7a2c96f1815f44fd19173a/src/paths/linear.js#L60).

Simplified like so:

for(let i = 0, len = arr.length; i < len; ++i)
    plotPoint(arr[i])

In the case of streaming time series data, you end up with constant array.push() and array.slice() calls to keep a fixed-size array. With millisecond-level data these operations become significant. A solution to reduce memory and CPU usage is to store the data in a circular buffer (as described here: https://en.wikipedia.org/wiki/Circular_buffer). With this method no memory allocations happen at runtime.
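For example, a minimal fixed-size buffer could look like this (the CircBuf name and shape are just for illustration, not something uPlot provides):

class CircBuf {
    constructor(size) {
        this.buf  = new Float64Array(size); // allocated once, never resized
        this.head = 0;                      // index of the oldest sample
        this.len  = 0;                      // number of valid samples
    }

    push(v) {
        const size = this.buf.length;

        this.buf[(this.head + this.len) % size] = v;

        if (this.len < size)
            this.len++;                         // still filling up
        else
            this.head = (this.head + 1) % size; // full: overwrite the oldest sample
    }

    at(i) {
        // i-th sample in chronological order
        return this.buf[(this.head + i) % this.buf.length];
    }
}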

However, when you need the data in order, you have to specify the offset at which to start reading the array. In that case the algorithm for plotting the points would become something like this:

const offset = 2;

for(let i = 0, len = arr.length; i < len; ++i)
    plotPoint(arr[(offset + i) % len])

Perhaps an option can be added to set this data offset?
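Purely to illustrate the idea, it could be a single extra option (dataOffset below is a placeholder name, it does not exist in uPlot today):

const opts = {
    width: 800,
    height: 400,
    series: [
        {},
        { stroke: "red" },
    ],
    // hypothetical: tell uPlot at which index the oldest sample lives
    // (buf is the circular buffer instance from the sketch above)
    dataOffset: () => buf.head,
};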

leeoniya commented 7 months ago

i've worked with circular buffers before.

there are a lot of places in the codebase that expect the arrays to be 0-indexed, so this isn't just a case of switching out the loop in one or two places -- it would need to be done everywhere. additionally, you lose the ability to work with these arrays using native functions, as well as the flexibility of variable-length arrays across data updates, etc. then you have to build a circular buffer abstraction to do all the same stuff you get for free with plain arrays, and this abstraction is not free; you will pay for it in CPU time in exchange for the memory savings.

i don't think the juice is worth the squeeze here. i'm sure there are cases where this would be helpful, like millions of datapoints being updated frequently, but i don't think it's worth complicating everything just for those extreme cases.

https://github.com/leeoniya/uPlot?tab=readme-ov-file#unclog-your-rendering-pipeline shows a pretty good stress test, streaming 3600 datapoints every 16ms (60fps). you can see that the function which generates the data via slice() + concat() is mostly a rounding error in the profile. i'm interested to see your specific use case and what savings you expect the proposed change to yield.
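roughly, the data generation pattern being measured there looks like this (simplified stand-in, not the actual demo code):

const LEN = 3600;

let data = [
    Array.from({length: LEN}, (_, i) => Date.now() / 1e3 - (LEN - i)), // xs (unix seconds)
    Array.from({length: LEN}, () => Math.random()),                    // ys
];

setInterval(() => {
    // drop the oldest point, append the newest; arrays stay fixed-length
    data = [
        data[0].slice(1).concat(Date.now() / 1e3),
        data[1].slice(1).concat(Math.random()),
    ];

    // u: an existing uPlot instance created elsewhere via new uPlot(opts, data, el)
    u.setData(data);
}, 16);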

[attached screenshot: profiler capture from the stress test]