novoda / merlin

Observes network connection status & gives callbacks
543 stars · 79 forks

Drop fest support #161

Closed Mecharyry closed 6 years ago

Mecharyry commented 6 years ago

Problem

FEST is outdated and no longer actively maintained 😬

Solution

Use Truth assertions instead. https://github.com/google/truth

Remove Mockito annotations from the codebase; the runtime annotation processing makes them slower than just instantiating mocks directly when declaring fields.
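For illustration: the annotation route requires Mockito to reflectively scan the test class for annotated fields at runtime (via `MockitoAnnotations.initMocks()` or a `MockitoJUnit.rule()` rule), while an inline `Mockito.mock(...)` call is just a plain field initialiser. Below is a minimal, self-contained sketch of that difference; the `@Mock` annotation, `initMocks` method, and class names here are simplified stand-ins, not Mockito's real implementation:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class InitMocksSketch {

    // Simplified stand-in for Mockito's @Mock annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Mock {}

    // Annotation style (what this PR removes): the field cannot be final,
    // because it is injected reflectively after construction.
    static class AnnotatedStyleTest {
        @Mock
        Object networkCallback;
    }

    // Inline style (what this PR adopts): a plain final field initialiser,
    // no reflective scan needed.
    static class InlineStyleTest {
        private final Object networkCallback = new Object(); // Mockito.mock(...) in real code

        Object callback() {
            return networkCallback;
        }
    }

    // Rough analogue of MockitoAnnotations.initMocks(this): walk every
    // declared field looking for @Mock and inject a fresh instance.
    static int initMocks(Object testInstance) {
        int injected = 0;
        for (Field field : testInstance.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(Mock.class)) {
                field.setAccessible(true);
                try {
                    field.set(testInstance, new Object()); // Mockito would create a mock here
                } catch (IllegalAccessException e) {
                    throw new IllegalStateException(e);
                }
                injected++;
            }
        }
        return injected;
    }

    public static void main(String[] args) {
        AnnotatedStyleTest annotated = new AnnotatedStyleTest();
        System.out.println("fields injected reflectively: " + initMocks(annotated));
        System.out.println("annotated field set: " + (annotated.networkCallback != null));
        System.out.println("inline field set: " + (new InlineStyleTest().callback() != null));
    }
}
```

Whether that reflective scan actually costs anything measurable is exactly what the benchmarks later in this thread examine.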

Screen Capture

Just running the core batch of tests between master and this branch.

[Before / after screenshots of the test runs: screen shot 2018-02-14 at 16 11 15 (before) and screen shot 2018-02-14 at 16 10 46 (after)]

Paired with

Nobody.

Mecharyry commented 6 years ago

@rock3r we started doing this on an internal project; I'm sure I can find the numbers we measured there. On this project the impact is negligible because of the small number of tests. I prefer this approach because it gives us private final fields, so we enforce some of our coding standards in tests rather than just forgetting about them.

Mecharyry commented 6 years ago

@rock3r updated the description with the time difference. It is quite significant, actually 😄

rock3r commented 6 years ago

Performance testing ✨

Tests run with `./gradlew clean --daemon && ./gradlew test --daemon --scan`.

All runs were made from the command line with Gradle 4.4.1 (Gradle 4.5 doesn't work; I didn't check what's broken), build scan plugin 1.10.3, and Android Gradle plugin 3.0.1 on my 15" MBP. Other stuff was running, but the system load was pretty much constant across all runs.

Before (master)

222 tests executed in 3 projects

|            | Run 1         | Run 2         | Run 3         | Mean     | Standard deviation |
|------------|---------------|---------------|---------------|----------|--------------------|
| Total time | 14.602s       | 14.199s       | 13.618s       | 14.1397s | 0.494675s          |
| Run test   | 8.935s        | 8.567s        | 8.464s        | 8.6553s  | 0.247613s          |
| Scan URL   | tc5sv7cezbuqi | gazcf42zhvrwe | xnov7knnasz54 | —        | —                  |

Before, only one module (merlin-rxjava2)

12 tests executed in 1 project

|            | Run 1         | Run 2         | Run 3         | Mean    | Standard deviation |
|------------|---------------|---------------|---------------|---------|--------------------|
| Total time | 3.526s        | 3.327s        | 3.256s        | 3.3697s | 0.139965s          |
| Run test   | 2.643s        | 2.431s        | 2.299s        | 2.4577s | 0.173543s          |
| Scan URL   | 46eu3gyws5ktg | eidzuvi3pc2zs | 2b26syundy6cu | —       | —                  |

After (drop-fest-support)

228 tests executed in 3 projects

|            | Run 1         | Run 2         | Run 3         | Mean     | Standard deviation |
|------------|---------------|---------------|---------------|----------|--------------------|
| Total time | 13.084s       | 12.825s       | 13.350s       | 13.0863s | 0.262507s          |
| Run test   | 8.412s        | 8.336s        | 8.431s        | 8.3930s  | 0.050269s          |
| Scan URL   | iecrppzjsjoda | odurn7n6p3bg4 | aqtyvp3ltidhm | —        | —                  |

After, only one module (merlin-rxjava2)

12 tests executed in 1 project

|            | Run 1         | Run 2         | Run 3         | Mean    | Standard deviation |
|------------|---------------|---------------|---------------|---------|--------------------|
| Total time | 4.226s        | 3.491s        | 3.243s        | 3.6533s | 0.511210s          |
| Run test   | 2.516s        | 2.620s        | 2.347s        | 2.4943s | 0.137783s          |
| Scan URL   | jqxvacirfl4lq | geejakag7kboq | a4pzvxjnflz56 | —       | —                  |

Analysis and conclusions

OK, time to do some stats (and try not to screw them up too badly, since I'm no statistics expert).
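For transparency, the Mean and Standard deviation columns in the tables above can be reproduced with an arithmetic mean and a sample (n - 1) standard deviation. A quick sketch, using the "Before (master)" total times as input (the class and method names are just illustrative):

```java
import java.util.Arrays;

public class RunStats {

    // Arithmetic mean of the measured durations, in seconds.
    static double mean(double[] samples) {
        return Arrays.stream(samples).average().orElse(Double.NaN);
    }

    // Sample standard deviation (n - 1 denominator), matching the tables above.
    static double stdDev(double[] samples) {
        double m = mean(samples);
        double sumOfSquares = Arrays.stream(samples)
                .map(x -> (x - m) * (x - m))
                .sum();
        return Math.sqrt(sumOfSquares / (samples.length - 1));
    }

    public static void main(String[] args) {
        // "Before (master)" total times from the tables above.
        double[] totalTimes = {14.602, 14.199, 13.618};
        System.out.printf("mean=%.4fs stddev=%.6fs%n",
                mean(totalTimes), stdDev(totalTimes));
    }
}
```

Running this prints a mean of 14.1397s and a standard deviation of about 0.4947s, matching the first table.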

As the numbers show, there is a ~260ms decrease between the before and after mean test run durations, which is a negligible fraction of the total Gradle run time (<2%). The total Gradle run duration decreased, on average, by 1.05 seconds, or about 7%. Given how small the differences are, and how few measurements are in the dataset, the difference is not significant enough to rule out any number of other causes.

Running tests for only one module (merlin-rxjava2) shows a mean increase in test run duration of ~37ms, which once more shows how limited these measurements are, but also that, once again, there is no significant gain in test runtime. The total runtime actually showed a mean increase of ~284ms, again rather small at around 8%. It's hard to infer a linear correlation between the number of modules tested and the change in total runtime across the before/after cases; on the other hand, further measurements might reveal a stronger correlation than what is visible here.

There does not seem to be a linear correlation at all between the number of tests run and the total Gradle run duration decrease; a linear projection would tell us that it should take ~440ms to run 12 tests given it takes 8.393s to run 228 (or 222, on master). A single test took a mean of 39ms to run for the whole project on master, and 205ms for just the merlin-rxjava2 module (likely due to Gradle and JUnit overheads). On the current branch the means are respectively 37ms and 208ms, extremely close to the "before" case and well within "random fluctuations" territory.
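The per-test figures above are simple divisions of the mean "Run test" durations by the test counts. A quick sketch reproducing them, with the inputs taken from the tables above (the class and method names are just illustrative):

```java
public class PerTestTime {

    // Mean "Run test" duration divided by the number of tests, in milliseconds.
    static double perTestMs(double meanRunTestSeconds, int testCount) {
        return meanRunTestSeconds * 1000.0 / testCount;
    }

    public static void main(String[] args) {
        System.out.printf("before, all modules:    %.0f ms/test%n", perTestMs(8.6553, 222));
        System.out.printf("before, merlin-rxjava2: %.0f ms/test%n", perTestMs(2.4577, 12));
        System.out.printf("after, all modules:     %.0f ms/test%n", perTestMs(8.3930, 228));
        System.out.printf("after, merlin-rxjava2:  %.0f ms/test%n", perTestMs(2.4943, 12));

        // Linear projection: at the whole-project per-test rate, 12 tests
        // should take roughly 440 ms, far less than the ~2.5 s observed,
        // which points at fixed Gradle/JUnit per-module overheads.
        System.out.printf("projected 12 tests:     %.0f ms%n", perTestMs(8.3930, 228) * 12);
    }
}
```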

I would go out on a limb and say that dropping the `@Rule` in favour of inlined calls to `mock()` does not improve build/test run times in any meaningful way.

Hutch4 commented 6 years ago

"I'd prioritise legibility."

I don't think the Mockito annotations add any readability, so is all this perf talk even needed?

I say merge it, and have a deeper discussion about how to properly execute a perf analysis later.

rock3r commented 6 years ago

I've chatted with @Mecharyry and he prefers having final fields. The numbers only conclusively proved that this doesn't make anything considerably faster or slower, despite the claims of the article that started it all. Besides, legibility is not much better or worse either way; it boils down to personal preference, so I respect his. SHIPIT :shipit:

tasomaniac commented 6 years ago

@rock3r amazing, detailed investigation. I personally don't think it's worth that much investigation; it's all personal preference, really. But I would be curious to try this on a bigger project where we have thousands of tests and the duration is measured in minutes, not seconds.

Not related to this project, but on style: this comes in very handy with Kotlin, because otherwise you would need a lateinit var. Not only is that ugly, but every usage of such a mocked instance gets underlined by the IDE (because it is a var).