hujiajie closed this 3 years ago
Here is the implementation using the nanosecond as our time measurement unit. Compared with the other alternative proposals, it is worth pointing out the following shortcomings:
- Inside `FPSTimer`, the raw input is already affected by loss of precision.
- `std::chrono::duration_cast()` is used for casting among seconds, nanoseconds, ticks, etc. To illustrate how error-prone it could be:

```cpp
int64_t frequency = 200000000LL; // 200 MHz
int64_t tick;
int64_t nanosecond;
tick = 100000000LL; // 100M ticks, or 500,000,000 nanoseconds
nanosecond = tick / frequency * 1000000000; // BAD, the result is 0
tick = 10000000000LL;
nanosecond = tick * 1000000000 / frequency; // BAD, tick * 1000000000 overflows
// In practice it's not a corner case!
// The libc++ way:
nanosecond = tick / frequency * 1000000000 + tick % frequency * 1000000000 / frequency;
```
The second implementation is based on `std::chrono::duration` and aims to mitigate these concerns. The canonical unit is now the second; however, the histogram probably still stays imperfect.
The third path uses `std::chrono::duration` too, and the idea is:

- `std::chrono::duration` is interpreted as milliseconds inside `FPSTimer`
- `std::chrono::duration` is interpreted as seconds elsewhere
- the conversion happens at the `FPSTimer` API boundary

While it sounds like histogram plotting won't be hurt this way anymore, please do read the code, as talk is always cheap.
`clock()` returns processor time, which is nonsense for FPS calculation, and it also causes inconsistent fish speed on different backends.
Time-duration arithmetic is now primarily performed in an integral/fixed-point-like manner, which arguably beats floats from both precision and performance perspectives.
`std::chrono` also brings the benefit of sharing the same interface across OSes.