Closed hiddewie closed 3 years ago
I'm happy to do the upgrade (and keep Node 8 for the moment), but this exposes a problem with the code, since the tests don't pass on Node 12 and 14 (technically 11+).
The root cause appears to be the specificity sorting, which is inconsistent when specificities are identical. This leads to different output on newer versions of Node - see https://github.com/nodejs/node/issues/24294 for more details.
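To illustrate the underlying issue (this is a hedged, self-contained sketch, not the project's actual data): when a comparator reports a tie, the relative order of the tied elements is left to the sort implementation. Before V8 7.0 (Node < 11), `Array.prototype.sort` was not guaranteed stable for arrays longer than ten elements, so ties could come back in a different order across Node versions.

```javascript
// Illustrative only: the objects and field names here are made up for the
// example. Sorting only by specificity leaves 'roads' vs 'labels' as a tie.
const rules = [
  { name: 'water', specificity: 2 },
  { name: 'roads', specificity: 1 },
  { name: 'labels', specificity: 1 }, // ties with 'roads'
];

const sorted = [...rules].sort((a, b) => a.specificity - b.specificity);

// On Node >= 11 the sort is stable, so insertion order breaks the tie.
// On older Node versions (for larger arrays), the tied order was not
// guaranteed, which is how the rendered output could differ.
console.log(sorted.map((r) => r.name)); // [ 'roads', 'labels', 'water' ]
```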
Without getting too deep into that topic, I wonder which is better: adding more Node versions (knowing that the test suite fails on them) or fixing the code first? I'm not very keen on making a change that will lead to persistently failing tests.
@gravitystorm Yes. Thanks for the pointers to the Node issue.
It would definitely be better to solve the failing tests first. I tried to implement stable sorting on definitions that have equal specificity, based on the elements in the definition, and, if those are equal, based on the zoom level of the definition. I hope this will give stable results for all tested Node versions.
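A minimal sketch of that tie-breaking idea. The field names (`specificity`, `elements`, `zoom`) and the comparison details are illustrative assumptions, not the project's actual definition shape:

```javascript
// Hypothetical comparator: when specificities are equal, fall back to the
// definition's elements, then to its zoom level, so the result no longer
// depends on the sort implementation's handling of ties.
function compareDefinitions(a, b) {
  if (a.specificity !== b.specificity) {
    // Higher specificity first (descending).
    return b.specificity - a.specificity;
  }
  // Tie-break 1: lexicographic comparison of the definition's elements.
  const elementOrder = String(a.elements).localeCompare(String(b.elements));
  if (elementOrder !== 0) return elementOrder;
  // Tie-break 2: lower zoom level first.
  return a.zoom - b.zoom;
}

const defs = [
  { specificity: 1, elements: '#water', zoom: 10 },
  { specificity: 1, elements: '#roads', zoom: 4 },
  { specificity: 2, elements: '#labels', zoom: 4 },
];

defs.sort(compareDefinitions);
console.log(defs.map((d) => d.elements)); // [ '#labels', '#roads', '#water' ]
```

Because the comparator itself is now a total order over the definitions, stable and unstable sorts produce the same result.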
Slightly off topic: most of the test fixtures (all the `.result` files) do not match the actual XML output of the test project files. Semantically they describe the same structure, but the spacing and CDATA content of the tags differ. This makes it much harder to keep the diff small when regenerating the `.result` files.
> I tried to implement stable sorting on definitions that have equal specificity, based on the elements in the definition, and if those are equal based on the zoom level of the definition. I hope this will give stable results for all tested Node versions.
Great, thanks! This is the kind of approach I was playing with last week, so I'm happy to merge as-is. I realise there is a very narrow range of situations where this new sorting order might change the rendered output for people who (accidentally) relied on the previous sorting behaviour and haven't changed Node versions, but at least this will be more reliable going forward.
> Slightly off topic: most of the test fixtures (all the `.result` files) do not match the actual XML output of the test project files. Semantically they describe the same structure, but the spacing and CDATA content of the tags differ. This makes it much harder to keep the diff small when regenerating the `.result` files.
I don't know the background of how the `.result` files were originally generated, but perhaps Mapnik's behaviour has changed over the years to favour CDATA blocks and the test files haven't been updated since. Or something similar to that, at least.
Thanks again for your work on this!
The CI configs for Travis and AppVeyor have been upgraded. The Node versions were outdated, and current/modern versions were not being tested.
See https://nodejs.org/en/about/releases/
Node 8 is end of life (unmaintained), but I left it in for some compatibility. It should be fine to remove it if the maintainers agree.
Node 10 is currently in maintenance mode.
Node 12 and 14 are the active and current versions. The Travis matrix also includes versions 11 and 13 (non-LTS).