singer-io / singer-python

Writes the Singer format from Python
https://singer.io
Apache License 2.0

Fix tests for get_standard_metadata #134

Closed luandy64 closed 4 years ago

luandy64 commented 4 years ago

Description of change

This duplicates the changes in #132, but I added a docstring to explain the rationale behind the test, and wrapped each case in the self.subTest() context manager so that examining a failing test is easier.
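
A minimal sketch (not the PR's actual test code) of the subTest() pattern used here, with hypothetical fixture data standing in for the real cases in tests/test_metadata.py:

```python
import unittest


class TestStandardMetadataSketch(unittest.TestCase):
    def test_standard_metadata(self):
        # Hypothetical (test_number, expected, actual) triples standing in
        # for the real fixtures built in tests/test_metadata.py
        test_cases = [
            (1, {'inclusion': 'automatic'}, {'inclusion': 'automatic'}),
            (2, {'inclusion': 'available'}, {'inclusion': 'available'}),
        ]
        for test_number, expected_value, actual_value in test_cases:
            # Each case runs in its own context: a failure is reported with
            # "(test_number=N)" in the header and the remaining cases still run
            with self.subTest(test_number=test_number):
                self.assertDictEqual(expected_value, actual_value)


if __name__ == '__main__':
    unittest.main()
```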

Example test run:

$ python -m unittest tests/test_metadata.py
..
======================================================================
FAIL: test_standard_metadata (tests.test_metadata.TestStandardMetadata) (test_number=10)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/code/singer-python/tests/test_metadata.py", line 321, in test_standard_metadata
    self.assertDictEqual(expected_value, actual_value)
AssertionError: {(): {'inclusion': 'foo', 'forced-replication-method': 'INCREMENTAL'}} != {(): {'forced-replication-method': 'INCREMENTAL'}}
- {(): {'forced-replication-method': 'INCREMENTAL', 'inclusion': 'foo'}}
?                                                 --------------------

+ {(): {'forced-replication-method': 'INCREMENTAL'}}

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (failures=1)

Because the test is divided into subtests now, we get an error message like this for each failing subtest, and the (test_number=10) in the first line tells you which input is failing. Just grep for test_number=10 in the test file.

Perhaps a better approach would be to move the constants into module-level globals and break the one massive test into 16 individual tests (see the sketch below), but @Jude188 already made all of the adjustments, so I left it as is.
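
For reference, that alternative might look roughly like this (hypothetical names, not code from this PR): module-level constants shared across small, individually named tests instead of one loop over all the cases.

```python
import unittest

# Would live at module scope so multiple tests can reuse it
EXPECTED_INCREMENTAL = {(): {'forced-replication-method': 'INCREMENTAL'}}


class TestStandardMetadataSplit(unittest.TestCase):
    def test_replication_method_only(self):
        # One named test per input, rather than one subtest per loop iteration
        actual = {(): {'forced-replication-method': 'INCREMENTAL'}}
        self.assertDictEqual(EXPECTED_INCREMENTAL, actual)


if __name__ == '__main__':
    unittest.main()
```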

Manual QA steps

Risks

Rollback steps