PhilterPaper / Perl-PDF-Builder

Extended version of the popular PDF::API2 Perl-based PDF library for creating, reading, and modifying PDF documents
https://www.catskilltech.com/FreeSW/product/PDF%2DBuilder/title/PDF%3A%3ABuilder/freeSW_full

Full Windows test #163

Closed carygravel closed 3 years ago

carygravel commented 3 years ago

Finally got everything working in a Windows runner. It is a little brittle, because choco doesn't seem to update the PATH variable properly, so I had to do it manually. That means every time choco updates its ImageMagick or Ghostscript versions, the path will have to be adapted. However, the test should then fail immediately, which at least highlights the fact that the workflow needs updating.

I've added some diagnostic messages so that it is clear whether or not Perl has found ImageMagick and Ghostscript.
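Such a diagnostic step might look roughly like the following shell sketch (the `check_prereq` helper name is my own, not from the actual workflow):

```shell
# Hypothetical helper (not from the repo's workflow): report whether a
# required external tool is reachable on the PATH, and where it lives.
check_prereq() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found $1 at $(command -v "$1")"
  else
    echo "WARNING: $1 not found in PATH"
  fi
}

# Example usage for the two external prereqs discussed here:
check_prereq magick
check_prereq gs
```

Printing the resolved location, not just found/not-found, makes stale-PATH problems like the one above obvious in the CI log.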

PhilterPaper commented 3 years ago

This bit me tonight (ImageMagick is in a different place). This has got to be automated in some manner. I searched for the correct new path -- your PATH update says 7.0.11, while the log of the install step says 7.1.0. GS 9.54.0 still seems to be OK. I changed the PATH update to 7.1.0 for Full Windows in test.yml, and it seems to work, but there has got to be a better way. Could a Perl script (say) be used to update the PATH, looking around in the process to see where the various prereqs are now stored, so that this never has to be done manually again?

How does Full Ubuntu take only 45sec, while Full Windows takes almost 22 minutes? Is "Full" Ubuntu covering as much? Is it all the extra crap that Windows has to install in order to run the tests? I can't believe there's this much difference!

carygravel commented 3 years ago

Certainly you could work up a script to look under C:\Program Files\ImageMagick- and add it to the path. I don't have a Windows machine, though, and the debugging cycle takes too long for me to develop it using only the GitHub CI.
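A minimal sketch of that idea in shell (the `find_imagemagick` helper and the paths are illustrative, not what the workflow currently does; on a real runner the result would be appended to `$GITHUB_PATH`):

```shell
# Hypothetical sketch: pick the newest versioned ImageMagick directory
# under a base path, so the workflow no longer hard-codes 7.0.11/7.1.0.
find_imagemagick() {
  base="$1"
  # sort -V orders version suffixes correctly (7.0.11 sorts before 7.1.0)
  ls -d "$base"/ImageMagick-* 2>/dev/null | sort -V | tail -n 1
}

# In a workflow step this would be something like:
#   find_imagemagick '/c/Program Files' >> "$GITHUB_PATH"
```

Because the newest install directory is discovered at run time, a choco version bump no longer requires editing test.yml by hand.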

The reason Ubuntu takes 45 seconds versus Windows' 22 minutes is that Ubuntu, like most Linux distributions, provides ready-made packages, plus the fact that the base image has ImageMagick pre-installed. So the Linux runner doesn't need to touch or run CPAN at all, and doesn't need to compile anything.

The Windows runner, by contrast, spends much more time installing the non-Perl stuff (choco seems slower than apt). But more importantly, cpan is way slower than apt. And you are right, Windows has to install more stuff, because Ubuntu has all the development tools there by default.
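To make the contrast concrete, the two install paths look roughly like this (package and module names are illustrative guesses, not necessarily the exact ones this workflow installs):

```shell
# Ubuntu: prebuilt binary packages, installed in seconds
sudo apt-get install -y ghostscript libgraphics-tiff-perl

# Windows: choco fetches the external tools, then cpan downloads,
# compiles, and tests each Perl module from source
choco install -y imagemagick ghostscript
cpan Graphics::TIFF Image::PNG::Libpng
```

The compile-and-test step in cpan is where most of the extra wall-clock time goes.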

PhilterPaper commented 3 years ago

Can I assume that all the "missing from Windows" stuff you add in the CI is verified to be needed for build and test, and you didn't just casually add any of it "just to be safe"? I'm concerned that GitHub is going to be unhappy with me for having so much load on their system every time I push files or pull a PR. Do you know if there is some limit? I suppose that the alternative is to start unloading some of the CI builds and tests, and the prereqs for those, to lighten the thing. How much is really necessary to test each time there is repository activity, so long as we're testing everything at some point? Especially, if it's all being tested under Ubuntu CI, how much is really necessary under Windows? Anyway, something to contemplate.

For that matter, is there too much in the t/ tests? I'm not sure that it's really necessary to exhaustively test every edge and corner case, and every possible ancient regression/bug fix, at each installation/upgrade, so long as it's done systematically at some point in the release cycle. By the way, one can provide an xt/ test directory that doesn't normally get run, but could hold those more esoteric tests (instead of cluttering up t/). Another thing to consider.
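For reference, the usual CPAN convention for that split looks something like this (a sketch of the convention, not this repo's current layout):

```shell
# t/   core tests  - run by `make test`, installs, and CI on every push
# xt/  author/release tests (edge cases, old regression checks) -
#      skipped by default, run explicitly before a release with:
prove -l xt/
```

Keeping the esoteric tests runnable, just not automatic, addresses the concern below about tests that only exist if someone remembers to run them.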

carygravel commented 3 years ago

Obviously, I want the tests to run as quickly as possible. I only added what I thought was necessary, but hey - have a play and see if you can speed things up.

This says you get 2000 minutes a month for free. That's quite a lot of debugging.

For me, the only tests I would remove are redundant ones. Otherwise, it is always my goal to get as close as possible to 100% coverage. The only bad tests are those that don't run automatically - because those get forgotten and allow regressions.