I should probably find a better way to test these tasks, either with an e2e test or by extending the testing framework. The testing framework currently doesn't let you use the API to set up a test scenario for the CLI scripts.
What has been done to verify that this works as intended?
Why is this the best possible solution? Were any other approaches considered?
How does this change affect users? Describe intentional changes to behavior and behavior that could have accidentally been affected by code changes. In other words, what are the regression risks?
Does this change require updates to the API documentation? If so, please update docs/api.yaml as part of this PR.
Before submitting this PR, please make sure you have:
- [ ] run `make test` and confirmed all checks still pass, OR confirmed that the CircleCI build passes
- [ ] verified that any code from external sources is properly credited in comments, or that everything is internally sourced
I noticed the same issue I had before come up in a slightly different place: https://staging.getodk.cloud/#/projects/21/forms/cactus_monitoring/submissions/uuid%3A7f406fd6-a7ab-4b7d-9cde-b91e465e56fc