With long windows (e.g., years) and a large number of events, we may hit performance issues (memory usage and latency) in both the online and offline cases. Several optimizations are possible, e.g. precomputing fixed-size windowed aggregations and reusing them, or storing data on disk instead of in memory. The precomputation technique is called 'tiling'.
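A minimal sketch of the tiling idea (all names here are hypothetical, not from Tecton or Fennel): raw events are pre-aggregated into fixed-size tiles (hourly sums below), and a long-window aggregate is then served by combining a handful of tiles instead of rescanning every raw event.

```python
from collections import defaultdict
from datetime import datetime, timedelta

TILE = timedelta(hours=1)  # fixed tile size; real systems often keep several sizes

def tile_key(ts: datetime) -> datetime:
    # Truncate a timestamp to the start of its hourly tile.
    return ts.replace(minute=0, second=0, microsecond=0)

def build_tiles(events):
    # events: iterable of (timestamp, value) pairs.
    # Tiles can be built once (batch/offline) and reused across queries.
    tiles = defaultdict(float)
    for ts, value in events:
        tiles[tile_key(ts)] += value
    return tiles

def window_sum(tiles, start: datetime, end: datetime) -> float:
    # Sum over [start, end), assuming tile-aligned bounds for simplicity;
    # a real implementation also merges partial tiles at the window edges
    # (the "head" and "tail" of the window) from raw events.
    total = 0.0
    t = tile_key(start)
    while t < end:
        total += tiles.get(t, 0.0)
        t += TILE
    return total

events = [
    (datetime(2024, 1, 1, 0, 10), 2.0),
    (datetime(2024, 1, 1, 0, 50), 3.0),
    (datetime(2024, 1, 1, 2, 5), 4.0),
]
tiles = build_tiles(events)
print(window_sum(tiles, datetime(2024, 1, 1, 0), datetime(2024, 1, 1, 3)))  # 9.0
```

The payoff: a 1-year sum touches ~8,760 hourly tiles instead of every raw event, and tiles for closed hours never change, so they can be cached or stored on disk.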
Tecton example: https://www.tecton.ai/blog/real-time-aggregation-features-for-machine-learning-part-1/
Fennel also mentions this in their blog (TODO: find the link)