Closed tompollard closed 5 days ago
We currently have a GitHub Action that runs benchmarks when a format is added or edited.
The action errors out when base formats are edited, because the "Run benchmark tests" step is skipped but the "Concatenate results" step still looks for `benchmark_results_*.txt` files to report. See for example: https://github.com/chorus-ai/chorus_waveform/actions/runs/11615143978/job/32344989477
This pull request fixes the problem by wrapping the `cat benchmark_results_*.txt` step in an if clause. If the files don't exist, we now report "No benchmarks were run." instead of raising an error.
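The guarded step can be sketched roughly as below. This is an illustrative sketch only, not copied from the actual workflow file: it uses `ls` with the glob to test whether any result files exist before concatenating them.

```shell
#!/bin/sh
# Sketch of the guarded "Concatenate results" step (names are illustrative).
# If the glob matches nothing, ls exits non-zero and we print a message
# instead of letting cat fail on a missing file.
if ls benchmark_results_*.txt >/dev/null 2>&1; then
  cat benchmark_results_*.txt
else
  echo "No benchmarks were run."
fi
```

In a workflow run where the benchmark step was skipped, no `benchmark_results_*.txt` files exist, so the step prints "No benchmarks were run." and exits successfully rather than erroring out.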
Thanks @tompollard, this looks good to me!