reasonml / reason-native

Testing, printing, coloring, and other tools to effectively write native Reason code.
https://reason-native.com/

Rely: Performance testing? #34

Open bryphe opened 5 years ago

bryphe commented 5 years ago

A key value proposition of native Reason is performance - and the tooling can help significantly in that regard.

I think that rely could serve this by providing facilities for performance tests. There are two main use cases:

Without proper benchmarking in place, it's really easy for the performance of features to atrophy: it's tough to measure all the time, and easy to make a change that regresses performance.

rely could help with this by providing a way to specify performance tests. These would have specific goals (e.g., < 1ms execution time, no allocations).
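For illustration, a hedged sketch of what a goal-based check could look like today, hand-rolled inside an ordinary rely test (assuming a `TestFramework` module configured via `Rely.Make`, and using `Unix.gettimeofday` plus `Gc.minor_words` rather than any built-in matcher; `workUnderTest` is a made-up stand-in):

```reason
/* Sketch only, not an existing rely feature: goal-based performance
   checks written with the current describe/test/expect API.
   `TestFramework` is assumed to be the project's configured Rely module. */
open TestFramework;

/* Stand-in for the code being benchmarked: an allocation-free loop. */
let rec sum = (acc, i) => i > 10_000 ? acc : sum(acc + i, i + 1);
let workUnderTest = () => sum(0, 1);

describe("performance goals", ({test}) => {
  test("runs in under 1ms without allocating", ({expect}) => {
    /* Allocation goal: count minor-heap words around the work. A few
       words of slack absorb float boxing from the measurement itself. */
    let wordsBefore = Gc.minor_words();
    let _ = workUnderTest();
    let wordsAllocated = Gc.minor_words() -. wordsBefore;
    expect.bool(wordsAllocated < 16.0).toBeTrue();

    /* Timing goal: < 1ms wall-clock execution time. */
    let start = Unix.gettimeofday();
    let _ = workUnderTest();
    let elapsedMs = (Unix.gettimeofday() -. start) *. 1000.0;
    expect.bool(elapsedMs < 1.0).toBeTrue();
  });
});
```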

In addition, like snapshot tests, we could output 'performance snapshots': store some data about each test execution and cache it on CI. Then we could compare runs - were any tests slower than previous runs? Is any test suddenly allocating more memory? If so, we could flag it as a failure and catch a potential performance regression.
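To make that concrete, a sketch of what a stored snapshot and the build-over-build comparison might look like; the record shape, field names, and 20% noise tolerance are all assumptions, not an existing rely format:

```reason
/* Hypothetical shape for a cached performance snapshot; none of these
   field names exist in rely today. */
type perfSnapshot = {
  testName: string,
  medianMs: float,
  allocatedWords: float,
};

/* Flag a regression when the current run exceeds the CI baseline by
   more than a noise tolerance (an assumed 20% here). */
let isRegression = (~tolerance=1.2, ~baseline, ~current) =>
  current.medianMs > baseline.medianMs *. tolerance
  || current.allocatedWords > baseline.allocatedWords *. tolerance;
```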

This would be a neat way to do the kind of in-depth benchmarking @jordwalke did for the flex library (https://github.com/jordwalke/flex): build-over-build, across platforms.

There'd need to be some tuning to minimize noise, but I think that with the combination of the two - performance goals and build-over-build performance snapshots - rely could go a long way toward helping both build performant apps and keep them performant.

Do you think this would be useful? And if so - would it make sense for this to be part of rely, or perhaps a separate layer on top of rely?

bandersongit commented 5 years ago

I think this definitely makes sense in rely. At a minimum, this seems fairly straightforward to implement as a custom matcher once we expose that API. I could see something like `expect.ext.fn(() => {...}).toTakeLessThanInMs(1000)` being pretty straightforward to implement (and similarly for ensuring no allocations).
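As a rough illustration of the core such a matcher would wrap (the plumbing of rely's actual matcher API is omitted, and the failure-message shape is invented):

```reason
/* Sketch of the measurement logic only: run the thunk, time it, and
   report pass/fail. Not rely's matcher API. */
let checkTakesLessThanMs = (thunk, limitMs) => {
  let start = Unix.gettimeofday();
  thunk();
  let elapsedMs = (Unix.gettimeofday() -. start) *. 1000.0;
  elapsedMs < limitMs
    ? Ok()
    : Error(
        Printf.sprintf(
          "Expected execution under %.1fms, but it took %.1fms",
          limitMs,
          elapsedMs,
        ),
      );
};
```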

In terms of first-class support, I have been scoping out the work of separating out the code for custom reporters and allowing tests to be run async/in parallel. Based on what Jest does, I think it is very probable that we will end up baking timing data into the internal TestResult and the public Reporters API. I'm taking a fair bit of inspiration from https://github.com/facebook/jest/blob/master/types/TestResult.js#L151 for this task.
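For a sense of what "baking in timing data" could mean, a hypothetical fragment loosely mirroring Jest's TestResult.duration; none of these field names are rely's actual internals:

```reason
/* Hypothetical: timing data attached to the per-test result record. */
type testStatus =
  | Passed
  | Failed(string)
  | Skipped;

type testResult = {
  title: string,
  status: testStatus,
  durationMs: option(float), /* None until timing support lands */
};
```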

Once we have that timing data built in, I could definitely see adding some utilities for benchmarking execution time. I think exposing additional test methods or higher-order functions is probably the most idiomatic way to accomplish this on a per-test basis. Alternatively, I could see using a reporter or run parameters to warn about significant performance regressions, or to ensure that no test takes longer than a particular time.
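A sketch of that reporter-side variant, assuming per-test timing is exposed as in the testResult fragment above; the hook shape and 500ms default budget are invented for illustration:

```reason
/* Hypothetical reporter-side check: warn when any test exceeds a
   global time budget. Reuses the testResult sketch above. */
let checkBudget = (~maxMs=500.0, result: testResult) =>
  switch (result.durationMs) {
  | Some(ms) when ms > maxMs =>
    Printf.printf(
      "Performance budget exceeded: %s took %.1fms (limit %.1fms)\n",
      result.title,
      ms,
      maxMs,
    )
  | _ => ()
  };
```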

I could also see extending `expect` with first-class support that doesn't take a thunk as an argument, but I am not sure what users' expectations would be, and there could be some weirdness with multiple calls within a function.

kyldvs commented 5 years ago

I think this would be great to have. Two things will be important to take into account, though: