mdboom / pyperf-codspeed

Pyperf plugin to create CodSpeed benchmarks
https://codspeed.io
MIT License

AA Test #1

Closed — mdboom closed this 3 months ago

codspeed-hq[bot] commented 3 months ago

CodSpeed Performance Report

Merging #1 will not alter performance

Comparing aa-test (1ab1915) with main (e605df7)

Summary

✅ 1 untouched benchmarks

adriencaccia commented 3 months ago

Hey @mdboom, in order for the benchmarks to be registered, you need to call lib.dump_stats_at(uri), where uri is a string containing the benchmark's URI, of the form path/to/definition/file::benchmark-name, for example tests/benchmarks/test_bench_fibo.py::test_iterative_fibo_10.

You can try to call it here: https://github.com/mdboom/pyperformance/commit/9234f4d0a75000c1f1b11e187a133ae4949583c6#diff-ef2cbcccccf978441b4d1bb0eeb21883285cc07d766b546ec7ca5d0942f8755eR637
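A minimal sketch of what building that URI and making the call could look like. The make_uri helper is a hypothetical name for illustration; codspeed here stands for the loaded instrumentation library handle, and the encoding detail matches the bytes argument seen later in this thread:

```python
# Hypothetical helper for building a CodSpeed benchmark URI of the form
# path/to/definition/file::benchmark-name (the real call site lives in
# pyperformance's run loop, not in this sketch).
def make_uri(definition_file: str, benchmark_name: str) -> str:
    """Join a definition file path and a benchmark name into a CodSpeed URI."""
    return f"{definition_file}::{benchmark_name}"

uri = make_uri("tests/benchmarks/test_bench_fibo.py", "test_iterative_fibo_10")

# The instrumentation library appears to take ASCII bytes (see the
# dump_stats_at("deltablue".encode("ascii")) example later in the thread),
# so the actual registration call would look roughly like:
#     codspeed.dump_stats_at(uri.encode("ascii"))
```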

mdboom commented 3 months ago

Thanks! Just noticed that and trying again ;)

mdboom commented 3 months ago

@adriencaccia: So I think I have results published from this branch correctly, but it's not finding the baseline results from main. I manually triggered them here, but is there something more I need to do?

adriencaccia commented 3 months ago

> @adriencaccia: So I think I have results published from this branch correctly, but it's not finding the baseline results from main. I manually triggered them here, but is there something more I need to do?

There was an error on our backend during the processing of both the run on master and this pull request, so the runs are not persisted on our end. I tracked down the bug: it is caused by multiple codspeed.dump_stats_at("deltablue".encode("ascii")) calls happening during a single run.

Can you ensure that this line is called exactly once with the same argument during the run?

In our integrations, we always made sure to have a single dump_stats_at call for each URI, so we did not handle the case of multiple calls.
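One way to satisfy the once-per-URI requirement is a small guard around the call. This is a sketch under my own assumptions: the wrapper name dump_stats_once and the module-level set are illustrative, not part of the CodSpeed API; only dump_stats_at itself comes from the thread:

```python
# Guard so dump_stats_at is invoked at most once per URI during a run.
# `codspeed_lib` stands for the loaded instrumentation library handle.
_dumped_uris = set()

def dump_stats_once(codspeed_lib, uri):
    """Call dump_stats_at for `uri` only if it has not been dumped yet.

    Returns True if the call was made, False if it was skipped as a duplicate.
    """
    encoded = uri.encode("ascii")
    if encoded in _dumped_uris:
        return False
    _dumped_uris.add(encoded)
    codspeed_lib.dump_stats_at(encoded)
    return True
```

With this in place, repeated invocations for the same benchmark (e.g. "deltablue") within one run collapse to a single dump_stats_at call.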

mdboom commented 3 months ago

> In our integrations, we always made sure to have a single dump_stats_at for each uri, so we did not handle the case with multiple calls

Yes, that should be easily possible with the "hack" I'm testing now. In general, pyperf can run benchmarks in a subprocess, so (maybe) that will be more difficult. But I'll just keep that in mind for when I get there.