Closed: elimelt closed this issue 1 year ago
The WPM is capped for small data sets/small snippets because the maximum reportable WPM will always be conversion_factor * num_chars. This is essentially unavoidable without collecting and partitioning the data at a granularity finer than 1 second.
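A minimal sketch of why the cap exists, assuming per-second buckets and the usual 5-characters-per-word convention (the names `CONVERSION_FACTOR` and `max_wpm_for_bucket` are illustrative, not the project's actual identifiers):

```python
# With data bucketed at 1-second granularity, the fastest rate a single
# bucket can report is num_chars chars/sec, so WPM is hard-capped at
# conversion_factor * num_chars regardless of the true typing speed.
CONVERSION_FACTOR = 60 / 5  # 60 sec/min, 5 chars per "word"

def max_wpm_for_bucket(num_chars: int) -> float:
    """Upper bound on WPM computable from one 1-second bucket."""
    return CONVERSION_FACTOR * num_chars

print(max_wpm_for_bucket(8))  # 96.0
```

So a one-second snippet containing 8 characters can never report more than 96 WPM, no matter how fast it was actually typed.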
I am still going to size the window dynamically, but there will always be a theoretical cap on the WPM result due to the limited precision of our measurement.
Currently the window size in smoothen is a constant, which makes it perform poorly on very large and very small data sets. We should resize the window based on the amount of data in the performance summary so that the window itself doesn't impose a hard limit on the resulting WPM.
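One way the dynamic sizing could look, sketched as a moving average whose window scales with the input length (the real `smoothen` signature and scaling heuristic may differ; the 10% ratio and clamp bounds here are assumptions):

```python
def smoothen(samples, min_window=1, max_window=10):
    """Moving average over per-second samples with a data-dependent window.

    The window grows with the amount of data (clamped to
    [min_window, max_window]) so short snippets aren't over-smoothed
    and long sessions aren't under-smoothed.
    """
    if not samples:
        return []
    # Scale the window to ~10% of the data, within the clamp bounds.
    window = max(min_window, min(max_window, len(samples) // 10))
    smoothed = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed
```

For a 5-sample snippet this degrades to a window of 1 (no smoothing), while a 200-sample session gets the full 10-sample window, so the constant-window failure mode on both extremes goes away.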