python / pyperformance

Python Performance Benchmark Suite
http://pyperformance.readthedocs.io/
MIT License

Fix #214: Always upload, even when some benchmarks fail #231

Closed mdboom closed 2 years ago

mdboom commented 2 years ago

Currently, if a single benchmark fails, none of the results are uploaded for that entire compile command run. With this change, results are always uploaded as long as only individual benchmarks fail. If the compilation fails, or building the "global" venv fails, we of course still skip the upload.

This changes the reporting for compile_all: since uploading now almost always happens, there's no reason to report it as a special case. Instead, we note when the run's failure was caused by a benchmark failing, and report that some benchmarks didn't work for that run.
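The upload policy described above can be sketched as a small decision function. This is an illustrative sketch only, not pyperformance's actual code; the function name, parameters, and messages are all hypothetical:

```python
# Hypothetical sketch of the upload policy described in this PR:
# upload unless compilation or "global" venv creation failed;
# individual benchmark failures are reported but do not block the upload.

def should_upload(compile_ok, venv_ok, failed_benchmarks):
    """Return (upload, message) for a hypothetical compile run.

    compile_ok        -- whether CPython compiled successfully
    venv_ok           -- whether the "global" venv was built successfully
    failed_benchmarks -- names of individual benchmarks that failed
    """
    if not compile_ok:
        return False, "compilation failed; nothing to upload"
    if not venv_ok:
        return False, "global venv creation failed; nothing to upload"
    if failed_benchmarks:
        # Partial results are still worth uploading (the fix in this PR).
        return True, "uploading partial results (%d benchmark(s) failed)" % (
            len(failed_benchmarks),
        )
    return True, "uploading complete results"


# Example: one benchmark failed, but the run is still uploaded.
upload, message = should_upload(True, True, ["hypothetical_bench"])
```

Under this policy, only the first two conditions (compile or venv failure) suppress the upload, matching the behavior change described above.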

Testing this is really tricky: we currently have no tests at all that capture and check the output of compile or compile_all. I'm not sure it's worth the effort to clean that up as part of this specific PR.

I did manually check the following cases though:

Any other important use cases to test?

pablogsal commented 2 years ago

I think this covers all cases we care about. If we discover more failure modes we can fix them afterwards.

Thanks for the fix @mdboom. Great work 🤘