extrowerk opened this issue 4 years ago
I am not sure anybody has bothered running these since I last did. These tests failing is more of an issue for a submitter/contributor than for a regular user, since it indicates that the resulting images differ across platforms - which should not be the case. But it is still possible there's a bug which changes the results. Are the resulting incorrect sizes/hashes always the same?
FYI: because of the use of FLAC, the output is unfortunately not guaranteed to be identical, since the FLAC algorithm is not stable across different versions and compiler flags. We tried to mitigate this with some adjustments, so at least the official source package produces reproducible images.
Also the metadata checks obviously need some proper diffing.
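As a rough sketch of what that diffing could look like, assuming `chdman info -i` is what dumps the metadata (the expected-output file path is illustrative):

```python
import difflib
import subprocess

def diff_chd_info(chd_path, expected_info_path):
    """Diff the `chdman info` output of a CHD against a stored expectation."""
    # `chdman info -i <file>` prints the header fields and metadata to stdout.
    result = subprocess.run(
        ["chdman", "info", "-i", chd_path],
        capture_output=True, text=True, check=True,
    )
    with open(expected_info_path) as f:
        expected = f.read().splitlines()
    actual = result.stdout.splitlines()
    # A unified diff pinpoints which metadata field changed, instead of
    # a bare "hashes differ" failure.
    return list(difflib.unified_diff(
        expected, actual, fromfile="expected", tofile="actual", lineterm=""))
```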
@firewave it sounds like the whole package wasn't designed with unit testing in mind at all, as also claimed in #7716; among other things it badly fails the repeatability requirement.
@angelosa
> @firewave it sounds like the whole package wasn't designed with unit testing in mind at all, as also claimed in #7716; among other things it badly fails the repeatability requirement.
I wrote this ages ago, when I had no clue about pytest at all. I just wanted to make sure that chdman is working correctly, since it is an important part of MAME, and back then I was also planning to use the format to back up all my own discs. It is easily extendable, so it's not too shabby, and nobody else seemed to care about it (either about the testing approach or about whether the tool actually works fine - besides hangs).
> - Is it fixable without depending on FLAC?
At least partially, for the tests which don't use that compression method. But since the FLAC algorithm is unstable by design (producing different results depending on compiler optimizations) and it is an integral part of the CHD format, probably not entirely. It's also quite possible the output changed simply because a different FLAC version is being used now, and the expected output data just needs to be updated - hard to tell.
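One partial workaround would be to hash the decompressed payload instead of the CHD file itself, since only the compressed bytes depend on the FLAC build. A minimal sketch, assuming `chdman extractraw` round-trips the data (for CD images `extractcd` would be needed instead; paths are illustrative):

```python
import hashlib
import os
import subprocess
import tempfile

def content_sha1(chd_path):
    """SHA1 of the decompressed payload rather than the compressed CHD.

    The compressed bytes vary with the FLAC version/flags used to build
    chdman, but the decompressed data should be bit-identical.
    """
    with tempfile.TemporaryDirectory() as tmp:
        raw = os.path.join(tmp, "out.raw")
        subprocess.run(
            ["chdman", "extractraw", "-i", chd_path, "-o", raw],
            check=True, capture_output=True,
        )
        h = hashlib.sha1()
        with open(raw, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()
```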
> - Is it really necessary to compare with SHA1 instead of some alternative that scales better? Totally not a fan of UUID-like checks, even less when you have to compose your own function and check with the same function whether a v4 CHD is equal to a v5 ...
I possibly wasn't aware of any alternative at the time; I hadn't used much Python back then.
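For what it's worth, with Python's `hashlib` the digest is a one-line swap, so the checks aren't inherently tied to SHA1. A sketch (the helper name is made up):

```python
import hashlib

def file_digest(path, algorithm="sha1"):
    """Hash a file with any algorithm hashlib supports (sha1, sha256, ...)."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# file_digest("test.chd")            -> SHA1, as the tests do today
# file_digest("test.chd", "sha256")  -> drop-in alternative
```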
> - I'm honestly lost by the lack of any documentation whatsoever (to say the least) about what the individual tests are supposed to achieve;
They run various combinations of input data through the various chdman commands and check for expected, reproducible results. I basically tried to generate a matrix of the possible combinations to test, to flush out issues.
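In pytest terms that matrix could be written with `parametrize`, roughly like this (the codec names are real `chdman` CD codecs; the input files and expected-hash table are illustrative):

```python
import itertools
import subprocess

import pytest

# Cross product of codecs and inputs under test; extending either list
# grows the matrix automatically. (The .cue inputs are illustrative.)
CODECS = ["cdlz", "cdzl", "cdfl"]
INPUTS = ["simple.cue", "multitrack.cue"]

@pytest.mark.parametrize("codec,cue", itertools.product(CODECS, INPUTS))
def test_createcd_reproducible(codec, cue, tmp_path):
    out = tmp_path / f"{cue}.{codec}.chd"
    subprocess.run(
        ["chdman", "createcd", "-i", cue, "-o", str(out), "-c", codec],
        check=True, capture_output=True,
    )
    assert out.exists()
    # The real check would compare against SHA1s recorded on a reference
    # build, e.g.: assert file_digest(out) == EXPECTED[(codec, cue)]
```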
> - A unit test shouldn't dump a wall of text to stdout unless explicitly requested, and only if something actually failed.
Feel free to change it.
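One way to cut the noise, as a sketch: capture the tool's output and attach it to the assertion message, so pytest only prints it when a test actually fails:

```python
import subprocess

def run_chdman(args):
    """Run chdman quietly; surface its output only when it fails."""
    result = subprocess.run(["chdman", *args],
                            capture_output=True, text=True)
    # pytest prints the assertion message only on failure, so the
    # wall of text stays hidden for passing tests.
    assert result.returncode == 0, (
        f"chdman {' '.join(args)} failed:\n{result.stdout}\n{result.stderr}"
    )
    return result
```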
Current HaikuPorts recipe: https://github.com/haikuports/haikuports/pull/5365
Test results: