What is this
We have an acceptance test that compares the metadata we're generating with a fixture of expected metadata.
It works well, but on failure the feedback just prints the contents of a JSON file, which in some cases is quite big, making it hard to drill down to where the failure is.
This task is to see if we can get more specific feedback from it.
What to do
This line in the feature file: https://github.com/ONSdigital/dp-data-pipelines/blob/c1344150d10dee24ef509dcf247d1b28b0edd7d9/features/data_ingress_v1.feature#L21 calls this step function: https://github.com/ONSdigital/dp-data-pipelines/blob/c1344150d10dee24ef509dcf247d1b28b0edd7d9/features/steps/data.py#L111
We need more specific feedback than just printing the dict.
Take some time to investigate the options. One (possible) option I stumbled across is https://github.com/inveniosoftware/dictdiffer, but there might be others.
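To illustrate the kind of feedback we're after, here is a minimal, hand-rolled sketch of a recursive diff that reports only the paths that differ, rather than dumping both dicts. The `dict_diff` function and the sample payloads are illustrative only (not taken from the pipeline code); a library like dictdiffer would give us something similar out of the box.

```python
def dict_diff(expected, actual, path=""):
    """Recursively compare two JSON-like structures and yield a
    human-readable description of each difference found."""
    if isinstance(expected, dict) and isinstance(actual, dict):
        # Walk the union of keys so missing and unexpected keys both surface.
        for key in expected.keys() | actual.keys():
            here = f"{path}.{key}" if path else key
            if key not in actual:
                yield f"missing key: {here} (expected {expected[key]!r})"
            elif key not in expected:
                yield f"unexpected key: {here} (got {actual[key]!r})"
            else:
                yield from dict_diff(expected[key], actual[key], here)
    elif isinstance(expected, list) and isinstance(actual, list):
        if len(expected) != len(actual):
            yield f"length mismatch at {path}: expected {len(expected)}, got {len(actual)}"
        for i, (e, a) in enumerate(zip(expected, actual)):
            yield from dict_diff(e, a, f"{path}[{i}]")
    elif expected != actual:
        yield f"value mismatch at {path}: expected {expected!r}, got {actual!r}"

# Hypothetical metadata payloads, for illustration only.
expected = {"title": "cpih", "dimensions": [{"name": "time"}]}
actual = {"title": "cpih", "dimensions": [{"name": "Time"}]}
problems = list(dict_diff(expected, actual))
```

The step function could then do `assert not problems, "\n".join(problems)`, so a failure prints a short list of specific mismatches instead of the full fixture.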
Note - use
make feature
to run the acceptance tests.
Acceptance Criteria