standardebooks / tools

The Standard Ebooks toolset for producing our ebook files.

Lint tests #678

Closed acabal closed 6 months ago

acabal commented 6 months ago

I'm trying to add a test for the new lint check t-074. I created the directories under ./tests/lint/typography/t-074, put a test file in there, and ran ~/.local/pipx/venvs/standardebooks/bin/pytest tests --save-golden-files. The command creates ./tests/lint/typography/t-074/golden/t-074-out.txt but the file is empty. I also notice that the file ./tests/lint/typography/t-001-out.txt is empty. Is this correct behavior? Shouldn't the output show the lint error we're testing for?

acabal commented 6 months ago

See 7cd7c4d2

acabal commented 6 months ago

Also, unrelated: in that commit, running --save-golden-files updated the help-message output on my machine, but it doesn't seem to match whatever the GitHub VM is running. Do you know what's going on here?

vr8hub commented 6 months ago

I’m out and about, so just a quick note. For the t-074, remember that you have to build the directory tree, so the file goes in t-074/in/src/epub/text/chapter-1.xhtml. (See the testing README.)
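To make the layout concrete, here's a sketch of the tree described above, built with pathlib (the file contents are illustrative placeholders; the testing README is authoritative):

```python
from pathlib import Path

# Sketch of the expected test tree for a new lint check, e.g. t-074.
# The input file goes under in/src/epub/text/, mirroring an ebook repo.
root = Path("tests/lint/typography/t-074")
text_dir = root / "in" / "src" / "epub" / "text"
text_dir.mkdir(parents=True, exist_ok=True)

# Placeholder content; a real test file would trigger the lint check.
(text_dir / "chapter-1.xhtml").write_text("<html/>")

# Running pytest with --save-golden-files then writes the expected output to:
#   tests/lint/typography/t-074/golden/t-074-out.txt
```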

Re the help, I don’t have a guess, I’ll look at it when I get back.

acabal commented 6 months ago

That did it. It turns out the t-001 test that was already in there was not structured correctly, and I had used it as an example.

Maybe you can look at the difference in the help command when you have a minute.

vr8hub commented 6 months ago

Oops, sorry about that, I must have missed that one when I fixed the rest.

For the help issue, when you say "in that commit running --save-golden-files," did you run --save-golden-files on all the tests, or just t-074? If the former, please don't do that. :) We only want to save golden files for a new test, or for an existing test when something in the code changes.

Step 7 in the readme says "Run the test…" with the option, and then refers to the section below for running a single test. We don't want to indiscriminately write over all existing golden files every time we add a new test; we might easily cover up a problem introduced by the code being tested in the new test. I already had some changes pending for the README, I've added to that section to emphasize that.

This is a perfect example of that: the golden file you overwrote is wrong. :) As to what's happening, I don't have a clue, but your local se is outputting a different help message than GitHub's, and my local output also agrees with GitHub.

In looking at the code, there is only a single ARGS argument for the parser, while yours is showing nested ones. Yours is also outputting "optional arguments:" where GitHub shows "options:"; again, I don't know why. A difference in the version of argparse? A difference in the Python version? (I'm running 3.11.8; I don't know what GitHub has.)
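For what it's worth, the heading difference is reproducible with argparse alone; Python 3.10 renamed the "optional arguments:" section to "options:". A minimal check:

```python
import argparse
import sys

# Build a trivial parser and inspect its generated help text.
parser = argparse.ArgumentParser(prog="se")
parser.add_argument("-v", "--verbose", action="store_true")
help_text = parser.format_help()

# Python 3.10 changed the section heading: "optional arguments:" -> "options:"
expected = "options:" if sys.version_info >= (3, 10) else "optional arguments:"
print(expected in help_text)  # → True
```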

(As an aside, I actually don't understand how se --help is getting the help output, i.e. the "-h, --help, show this help message and exit" part and the "[-h]" in the usage line at the top. Help isn't one of the arguments added to argparse in main.py, and although I see the line of code that does main_args.append(arg) if the arg starts with a dash, I don't see how that determines the message, etc.)
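A small sketch that may explain the aside: ArgumentParser registers -h/--help itself by default (add_help=True), so it never has to appear in main.py's argument list:

```python
import argparse

# argparse adds -h/--help automatically unless add_help=False is passed,
# which is also why "[-h]" appears in the generated usage line.
p = argparse.ArgumentParser(prog="demo")
print("-h" in p.format_help())  # → True

# With add_help=False, no help option is generated at all.
p2 = argparse.ArgumentParser(prog="demo", add_help=False)
print("--help" in p2.format_help())  # → False
```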

Regardless, the prior version of the golden file should be restored for that test, because it is what GitHub is expecting; the tests won't pass until that is fixed.

acabal commented 6 months ago

OK. In my mind, if running pytest tests/some-test --save-golden-files saves the golden files for a specific test, then pytest tests --save-golden-files should save the golden files for all tests without screwing everything up. Or maybe that's something we should check for when processing that flag, returning an error if it's invoked the wrong way. If I did it the wrong way on my first try, I guarantee someone else will also screw things up down the line unless we make it impossible to screw up.
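A minimal sketch of the guard suggested here (the function name and wiring are hypothetical, not the toolset's actual code): refuse to regenerate golden files when more than one test is selected.

```python
# Hypothetical guard: error out if --save-golden-files would overwrite
# golden files for more than one test at once.
def check_save_golden_scope(save_golden: bool, collected_tests: list) -> None:
    if save_golden and len(collected_tests) > 1:
        raise ValueError(
            "--save-golden-files overwrites expected output; run it against "
            "a single test, e.g. `pytest tests/some-test --save-golden-files`"
        )

# A single selected test is fine; a broad run with the flag is rejected.
check_save_golden_scope(True, ["t-074"])
```

In pytest this could live in a conftest.py hook that inspects the collected items before the session runs.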

I believe argparse itself inserts the --help option and the corresponding documentation, so the difference in the test output is probably due to a difference in Python versions. I'm using 3.8.10; what are you using?

vr8hub commented 6 months ago

That is what pytest tests --save-golden-files does; the point is we don't ever want to run it on all the tests. We don't want to blanket-overwrite every golden file just because we added a single test. There might be a way to detect that in pytest, but part of the way pytest is structured is that it doesn't matter whether a single test or all tests are being run; they all run the same way.

I didn't add --save-golden-files; it was already there, and it is definitely convenient. But the way I test new tests is: I keep a copy of the test ebook in a directory, do my testing against it, and copy the lint output from that into the golden directory. Since I'm working my way through lint, that's not a big deal, because it was a one-time setup; for someone creating a single test, it's a bit more work. All of that to say, I'm fine with getting rid of --save-golden-files if we don't want to take the risk, but it's easy enough to check PRs (whether submitted or approved) to ensure no golden files are being touched for tests that aren't being submitted.

As I said above, "(I'm running 3.11.8; I don't know what GitHub has.)" — I'm on 3.11.8. Since I just submitted a PR, I looked at the (failed) test run there, and GitHub says it's running 3.12.2. I also ssh'd to the SE server; it's running 3.10.12, and its output agrees with 3.11/3.12. So at least 3.10/3.11/3.12's output is the same.

Did the message change between 3.8 and 3.10? I don't know, and I don't have a way to tell. Maybe you could install one of 3.10/3.11/3.12 in a venv and see if the message is different there?

Regardless, that golden file change needs to be reverted, so our tests pass going forward.

acabal commented 6 months ago

OK, I guess the message did change between 3.8 and some later version. I've already reverted the test. Thanks!