Looked into this a bit and discussed with @asteel-gsa. It might be possible for us to use path filters (which we already use elsewhere) to determine when certain tests should run. Since the audit tests are the slow ones, here's an example targeting them:
```yaml
with:
  filters: |
    audit:
      - './backend/audit/**'
    # ... filters for other modules

- name: Run Audit Tests
  if: ${{ needs.check-for-changes.outputs.audit == 'true' }}
  working-directory: ./backend
  run: docker compose -f docker-compose.yml run web bash -c 'coverage run --parallel-mode --concurrency=multiprocessing manage.py test audit --parallel && coverage combine && coverage report -m --fail-under=85 && coverage xml -o coverage.xml'
  # Notice above: only "manage.py test audit" is run

# ... test steps for other modules
```
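For completeness, the filters above would live in an upstream job that the test steps depend on. Here's a rough sketch of what that job could look like, assuming the dorny/paths-filter action (the action choice and exact step layout are assumptions on my part; the `check-for-changes` job name just matches the `if:` condition above):

```yaml
# Sketch only -- assumes dorny/paths-filter; swap in whatever filter
# mechanism we already use elsewhere.
check-for-changes:
  runs-on: ubuntu-latest
  outputs:
    # Exposed as needs.check-for-changes.outputs.audit ('true'/'false')
    audit: ${{ steps.filter.outputs.audit }}
  steps:
    - uses: actions/checkout@v4
    - uses: dorny/paths-filter@v2
      id: filter
      with:
        filters: |
          audit:
            - './backend/audit/**'
```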
This would basically have to be set up for every module. Unfortunately, there are complications because some modules are interconnected. For example, a number of modules, including audit, import dissemination.models, which means any change to dissemination should also trigger all of those modules' tests. There are probably other examples, and every time a new interconnection is introduced these workflows would have to be updated (see the sketch below). On my last contract we tried to be too selective about which unit tests we ran, and it caused some big headaches, so I'm a bit hesitant here. Thoughts, @danswick?
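If we did go this route, one way to encode a known dependency like the dissemination one is to widen the filter so audit tests also run on dissemination changes (a sketch; the dissemination path is my assumption):

```yaml
filters: |
  audit:
    # Run audit tests when audit itself changes...
    - './backend/audit/**'
    # ...and when a module audit depends on (dissemination.models) changes.
    - './backend/dissemination/**'
```

Of course, every one of those dependency edges would have to be discovered and maintained by hand, which is exactly the maintenance burden described above.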
Without a clear path forward, I'm inclined to close this as unplanned and come back to it later or from another angle, especially since @sambodeme has been putting some thought into running workbook validation tests using the IR instead of the full workbooks, which could be a significant improvement.
Running the full test suite takes >17 minutes. Most of that time is spent running a large number of spreadsheets through a full set of validations. Until we can optimize those tests, we should break the validation tests out and run them only when the commits being tested include relevant changes (or possibly only when merging into specific branches).
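One possible way to break the validation tests out would be Django's test tagging (a sketch only; the `validation` tag name and test class are illustrative, not our actual layout):

```python
# Sketch: tag the slow, spreadsheet-heavy tests so CI can skip or target them.
from django.test import TestCase, tag


@tag("validation")  # hypothetical tag name
class WorkbookValidationTests(TestCase):
    def test_full_workbook_passes_validations(self):
        ...
```

CI could then run `manage.py test --exclude-tag=validation` on every push, and `manage.py test --tag=validation` only when the path filter matches (or on merges into specific branches).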