openshift-helm-charts / development


Explore using a matrix in GitHub actions to call individual behave tags during E2E testing #327

Open komish opened 6 months ago

komish commented 6 months ago

One of the biggest challenges we run into with E2E testing is transient errors (e.g. GitHub rate limits). A single failing test case forces us to re-run the entire E2E workflow because we currently invoke all BDD tests in one behave call. For context, this happens during the Workflow Test execution, where a release is taking place and the full behave tag is in use.
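For illustration, the single full-suite invocation amounts to something like the following (a sketch; the flags mirror the Test CI Workflow step in the matrix workflow further down this thread):

ve1/bin/behave tests/functional/behave_features/ --tags=full --logging-level=WARNING --no-capture --no-color

A transient failure anywhere in that one call fails the whole run.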

Explore using GitHub's matrix functionality to run the full test suite as individual jobs. Also explore concurrency, keeping in mind that we don't want the cluster to fall over because multiple E2E test suites are running at once.
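One knob worth noting here (an assumption on my part, not something currently wired up): GitHub's top-level concurrency keyword can queue whole workflow runs against the shared cluster, while strategy.max-parallel bounds how many matrix jobs run at once within a single run. A minimal sketch, with a hypothetical group name:

# Hypothetical: allow only one E2E workflow run at a time against the shared cluster.
concurrency:
  group: e2e-shared-cluster   # hypothetical group name
  cancel-in-progress: false   # queue new runs instead of cancelling in-flight ones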

komish commented 2 months ago

I reconfigured testing for #359 to shorten the feedback loop, which allowed some basic testing of matrix-based execution. Here's the workflow file I used.

name: "Matrix-based Testing"

on:
  workflow_dispatch:

jobs:
  get-features:
    runs-on: ubuntu-latest
    outputs:
      features: ${{ steps.find-features.outputs.features }}
    steps:
      - uses: actions/checkout@v4
      - name: find features
        id: find-features
        run: |
          cd tests/functional/behave_features
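          # Emit a compact JSON array of feature file paths (e.g.
          # ["a.feature","b.feature"]): jq -R reads raw lines, -s slurps
          # them into one string, and split("\n")[:-1] drops the empty
          # element left by the trailing newline.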
          echo features=$(find . -name '*.feature' | sed -e 's%\./%%g' | jq -R -s -c 'split("\n")[:-1]') | tee -a $GITHUB_OUTPUT
  run-tests:
    runs-on: ubuntu-latest
    needs: [get-features]
    strategy:
      fail-fast: false
      max-parallel: 4
      matrix:
        feature-file: ${{ fromJson(needs.get-features.outputs.features) }}
    steps:
      - name: Checkout Base Branch
        uses: actions/checkout@v4
        with:
          token: ${{ secrets.BOT_TOKEN }}

      - name: Set up Python 3.x Part 1
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"

      - name: Set up Python scripts
        run: |
          # set up python scripts
          echo "set up python script in $PWD"
          python3 -m venv ve1
          cd scripts
          ../ve1/bin/pip3 install -r requirements.txt
          ../ve1/bin/pip3 install .
          cd ..
      - name: Test CI Workflow
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          BOT_NAME: ${{ secrets.BOT_NAME }}
          BOT_TOKEN: ${{ secrets.BOT_TOKEN }}
          PR_BODY: "Triggered by ${{ github.event.sender.html_url }} from ${{ github.event.repository.html_url }}"
        run: |
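          # --include restricts behave to the single feature file assigned to
          # this matrix job; --tags=full still applies within that file.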
          ve1/bin/behave tests/functional/behave_features/ --include ${{ matrix.feature-file }} --tags=full --logging-level=WARNING --no-capture --no-color

This isn't exactly how we'd want to run this testing, since I was running it against a branch (main in my case) via workflow_dispatch, but it gives an example of how this might work.
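To reproduce a single matrix leg locally, the equivalent call is just the per-file invocation (the feature file name below is hypothetical):

ve1/bin/behave tests/functional/behave_features/ --include some.feature --tags=full --no-capture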

Notable things: