bencherdev / bencher

🐰 Bencher - Continuous Benchmarking
https://bencher.dev

GitHub Actions CI run yields "400 Bad Request" #513

Closed · clinssen closed this issue 1 month ago

clinssen commented 2 months ago

Hi, new bencher user here, hope it's OK to post an issue about this!

I have added Bencher to our GitHub Actions CI run. The benchmark itself runs, but I'm getting the following error afterwards:

Failed to create new report: Error processing request:
Status: 400 Bad Request

For the full log, please see: log.txt

For the GitHub Actions file, please see: https://github.com/nest/nestml/blob/c83d809d3b0e5148080c146bee2ce9fe796d6d84/.github/workflows/continuous_benchmarking.yml#L70

Much obliged!

epompeii commented 2 months ago

@clinssen thank you for trying out Bencher, and I'm sorry you're running into trouble.

I would recommend removing all of the lines in the script before bencher run here: https://github.com/nest/nestml/blob/master/.github/workflows/continuous_benchmarking.yml#L74

Instead, include everything you need as part of the benchmark command itself, as you already do for some of the environment variables.
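
For example, something roughly along these lines (the flags shown here are illustrative rather than a complete working invocation; adjust them to your project and test paths):

    LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${{ env.NEST_INSTALL }}/lib/nest bencher run \
      --project nestml \
      --token '${{ secrets.BENCHER_API_TOKEN }}' \
      --adapter python_pytest \
      --file results.json \
      'python3 -m pytest --benchmark-json results.json <path to your benchmark test file>'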

clinssen commented 1 month ago

This was entirely my fault; I should have read the manual more carefully!

For future reference, what worked is the following invocation:

          LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${{ env.NEST_INSTALL }}/lib/nest bencher run \
          --project nestml \
          --token '${{ secrets.BENCHER_API_TOKEN }}' \
          --branch '${{ github.event.number }}/merge' \
          --branch-start-point '${{ github.base_ref }}' \
          --branch-start-point-hash '${{ github.event.pull_request.base.sha }}' \
          --branch-reset \
          --github-actions "${{ secrets.GITHUB_TOKEN }}" \
          --testbed ubuntu-latest \
          --adapter python_pytest \
          --file results.json \
          --err \
          'LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${{ env.NEST_INSTALL }}/lib/nest python3 -m pytest --benchmark-json results.json -s $GITHUB_WORKSPACE/tests/nest_continuous_benchmarking/test_nest_continuous_benchmarking.py'

making sure that pytest-benchmark has been installed via pip, and writing a dedicated, decorated test script (you can't just use existing tests unmodified!):

import pytest

class TestStuff:
    @pytest.mark.benchmark
    def test_foo(self, benchmark):
        # benchmark() calls _test_foo repeatedly and records its timings
        return benchmark(self._test_foo)

    def _test_foo(self):
        # regular test code here
        pass
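
To sanity-check this locally before wiring it into CI, roughly the following should produce the results.json that bencher run picks up (the test path is the one from the invocation above):

    pip install pytest pytest-benchmark
    python3 -m pytest --benchmark-json results.json -s tests/nest_continuous_benchmarking/test_nest_continuous_benchmarking.py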

Much obliged and sorry for the noise!