_Originally posted by @jawrainey in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion_
This also raises an interesting question worth discussing around writing to/from files in tests. Arguments against this tend to come from projects that have many tests, where writing to disk slows down the test suite (see here). This is unlikely to impact us 👍
The alternative is creating temporary files (as you've done below) or mocking with unittest.mock. I think our current approach (reading/writing files) is fine (more functional testing than unit testing, though?) and there's no need for mocks, as they'd introduce complexity that isn't worth the effort imho.
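For reference, a minimal sketch of what the mocking route could look like, assuming the code under test writes its output through the built-in `open` (the real write path, e.g. via python-docx's save, may well differ):

```python
from unittest import mock


def test_convert_without_touching_disk():
    # Minimal sketch: patch the built-in open so nothing is written to disk.
    # Whether convert writes via open() at all is an assumption here.
    with mock.patch("builtins.open", mock.mock_open()) as mocked_open:
        pass  # ... call the code under test here ...
    # Example assertion once the call is in place:
    # mocked_open.assert_called()
```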
_Originally posted by @davidverweij in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion_
... the cleanup is only needed for the standard output, as the `tmp_path` and `tmpdir` fixtures are cleaned up automatically - their base directories are kept for at most the last 3 test sessions (see here). So I do think we might not need the fixture.
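For context, a minimal sketch of how `tmp_path` keeps a test self-cleaning without an explicit fixture of our own:

```python
def test_writes_into_tmp_path(tmp_path):
    # pytest injects a fresh pathlib.Path per test; old base directories
    # are pruned automatically after a few sessions, so no teardown is needed.
    output = tmp_path / "example.docx"
    output.write_bytes(b"placeholder")
    assert output.exists()
```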
Regarding unit tests vs functional tests: yes, I would like to test whether the output is correct (e.g. the naming of the .docx) but didn't get to it, so that would be more of a unit test, don't you think?
_Originally posted by @jawrainey in https://github.com/_render_node/MDIzOlB1bGxSZXF1ZXN0UmV2aWV3VGhyZWFkMjYyODYxNDMwOnYy/pull_request_review_threads/discussion_
I was thinking that when we test the library by invoking `convert` directly, it would also create the /output/<files> and need the same cleanup. We can sort that out once we write tests for the library, but imho in a different issue, since you have set the infrastructure in place here so we can progress type annotations etc. alongside testing.
Yes, you're right about it being a unit test, and great point about checking the output names of files. We should unit test `create_unique_name` specifically to verify naming, but of course we might also want to test the default arguments/logic of the `convert` method.
Note that we still need to unit test whether the output actually exists at the intended location (currently we only check that the program exits gracefully, which is not the same imho).
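A hedged sketch of what such a test could look like, assuming the `convert` signature from the parametrize example below and assuming at least one `.docx` is written into the given path (the exact naming would come from `create_unique_name`):

```python
from pathlib import Path

import csv2docx as c2d  # adjust to the project's actual import path


def test_convert_writes_docx_to_output_path(tmp_path: Path) -> None:
    c2d.convert(
        "tests/data/example.csv",
        "tests/data/example.docx",
        "NAME",
        str(tmp_path),
        ";",
    )
    # Assumption: at least one .docx ends up in the requested output path.
    assert list(tmp_path.glob("*.docx"))
```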
An alternative to changing options in each test is using parametrizing fixtures. Open to debate as to which approach to implement 👍
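For comparison, a minimal sketch of the parametrized-fixture route (values are illustrative; each test using the fixture runs once per parameter):

```python
import pytest


@pytest.fixture(params=[";", ","])
def delimiter(request):
    # request.param holds the current parameter value.
    return request.param


def test_delimiter_is_a_single_character(delimiter):
    assert len(delimiter) == 1
```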
Edited:
re: parametrizing -- I looked into this and it is not as simple to implement for the CLI, as `pytest.mark.parametrize` does not accept fixtures. See this PyCon 2020 video for an overview of parameterized testing. As an example, here's what it might look like for testing the library under successful conditions:
import pytest

import csv2docx as c2d  # adjust to the project's actual import path

test_cases = [
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "output", ";"),
    ("tests/data/example.csv", "tests/data/example.docx", "NAME", "nested/folder", ";"),
]


@pytest.mark.parametrize("data, template, name, path, delimiter", test_cases)
def test_library_convert_success(data, template, name, path, delimiter):
    result = c2d.convert(data, template, name, path, delimiter)
    # Note: not sure what to assert, as the convert success case returns None,
    # which makes this comparison feel un-pythonic.
    assert result is None
Originally posted by @jawrainey in https://github.com/davidverweij/csv2docx/pull/28
Shall we keep this open and tweak it to focus on unit testing of the library changes?
I am exploring pytest-mock to test whether the code handles empty rows in the .csv and the like. It might be worth applying to more cases, especially regarding:
# Note: not sure what to assert as convert success case returns None.
I've looked at this blog post on How to Unit Test Functions Without Return Statements in Python and at the Hypermodern Python blog.
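Following those posts, a hedged sketch of asserting on the side effect (the written files) instead of the `None` return, reusing the `convert` signature from above with an illustrative CSV containing an empty row; whether `convert` skips empty rows, and the one-docx-per-row naming, are assumptions:

```python
import csv2docx as c2d  # adjust to the project's actual import path


def test_convert_skips_empty_rows(tmp_path):
    # Illustrative CSV: one empty row, one valid row.
    data = tmp_path / "empty_row.csv"
    data.write_text("NAME;EMAIL\n;\nAlice;alice@example.com\n")
    c2d.convert(str(data), "tests/data/example.docx", "NAME", str(tmp_path), ";")
    # Assumption: one .docx per valid row, none for the empty one.
    assert len(list(tmp_path.glob("*.docx"))) == 1
```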
-- edit
RE:
> as `pytest.mark.parametrize` does not accept fixtures.
I've rewritten the options fixture as a factory (a fixture that returns a function), which allows passing arguments to it from within a test, and it now looks like this:
from pathlib import Path
from typing import Callable, List

import pytest
from click.testing import CliRunner

from csv2docx import cli  # adjust to the project's actual import path


@pytest.fixture
def options_gen(tmp_path: Path) -> Callable:
    def _options_gen(**kwargs) -> List[str]:
        default = {
            "template": "tests/data/example.docx",
            "data": "tests/data/example.csv",
            "name": "NAME",
            "path": str(tmp_path.resolve()),
        }
        # override default values if provided
        for key, value in kwargs.items():
            if key in default:
                default[key] = value
        # flatten the dict into a sequential list, prefixing keys with '--' for CLI options
        return [x for key, value in default.items() for x in ("--" + key, value)]

    return _options_gen


# `runner` is assumed to be a CliRunner fixture defined elsewhere (e.g. conftest.py)
def test_name_not_found(runner: CliRunner, options_gen: Callable) -> None:
    result = runner.invoke(cli.main, options_gen(name="notNAME"))
    assert result.exit_code == 0
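As a usage note, the factory also makes overriding several options in one call straightforward, e.g. (whether the CLI creates nested output folders is an assumption, mirroring the nested/folder case above):

```python
def test_output_in_nested_folder(runner: CliRunner, options_gen: Callable, tmp_path: Path) -> None:
    nested = tmp_path / "nested" / "folder"
    result = runner.invoke(cli.main, options_gen(path=str(nested)))
    assert result.exit_code == 0
```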
Unit testing infrastructure is now in place. This issue should now focus on creating unit tests for the library and the edge cases outlined in the discussion above.