santiagopagani closed this issue 1 year ago
Thanks for the bug report, I will check it soon. I don't like the idea of involving a counter in the calculated hash, mainly because it is not reproducible: if the amount of data changes but the title stays the same, the ID changes.
Maybe we should instead make the hash calculation more configurable. I would start by allowing the selection of the hash length, which may already be enough to get unique IDs again.
But a check should also be added to figure out whether a calculated hash was already used before.
Added a PR which allows raising the length of the calculated test suite and test case IDs.
I can confirm that there is an ID conflict for your provided data when the test suite ID length is 3. Setting it to 4 already fixes this; 5, to be on the safe side :)
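To put numbers on the length choice: with N hex characters there are 16^N possible IDs, and the birthday bound approximates the chance that at least two suite titles collide. A quick sketch (`collision_probability` is an illustrative helper, not part of the extension; the 60-suite figure comes from the GTEST use case described below):

```python
import math

def collision_probability(n_suites, hash_length_hex):
    """Birthday-bound approximation: probability that at least two of
    n_suites distinct titles share the same truncated hex hash."""
    id_space = 16 ** hash_length_hex
    return 1.0 - math.exp(-n_suites * (n_suites - 1) / (2 * id_space))

for length in (3, 4, 5):
    print(f"length {length}: {collision_probability(60, length):.1%}")
```

For 60 suites this comes out to roughly 35% at length 3, under 3% at length 4, and well under 1% at length 5, which matches the observation that raising the length resolves the conflict.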
We are currently using this extension (locked to version 0.3.6 until issue 40 gets fixed) to link GTEST results.
When using GTEST with typed tests, it is very common to have xml files with many suites (e.g., 60 suites or more, depending on the number of test cases and tested types in the xml). For this use case, we have experienced sphinx-build compilation errors about repeated IDs. The titles of the suites are different, but somehow they get the same hash, and hence the problem.
Here is an example xml file that generates this problem:
The xml is then used in Sphinx-Needs via the following directive:
I have done some debugging by printing information in the test-reports extension, and I can see the problem as such:
where we see the ID
TEST_UNIT_RESULT_VFC_HASH_8D8
being repeated. This actually raises the question of whether a hash is an overkill solution to this problem, one which is not even sufficient in this case. Namely, if the user already needs to specify a unique ID for the test file name, why not simply use an incremental counter for each suite, and another incremental counter for the test cases? We can then guarantee that the ID will never be repeated.
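The counter idea could look like this (a minimal sketch; `make_id_factory` and the `_SUITE_`/`_CASE_` name pattern are hypothetical, not the extension's actual API):

```python
from itertools import count

def make_id_factory(file_id):
    """Per-file incremental IDs: suites and test cases each get their
    own counter, so uniqueness within the file holds by construction."""
    suite_counter = count(1)
    case_counter = count(1)

    def suite_id():
        return f"{file_id}_SUITE_{next(suite_counter)}"

    def case_id():
        return f"{file_id}_CASE_{next(case_counter)}"

    return suite_id, case_id
```

With the file ID above, `suite_id()` would yield `TEST_UNIT_RESULT_VFC_SUITE_1`, `TEST_UNIT_RESULT_VFC_SUITE_2`, and so on, with no collisions possible.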
I assume that the hash was chosen because as long as the title of the suite does not change, it does not matter if you re-order it in the xml, you always get the same ID, whereas using an incremental counter means that adding a new suite or test results in a different ID.
If a hash is the preferred solution, then we either need a longer hash for the suites, or some mechanism to make sure that an ID does not get repeated. For example, while parsing a file and selecting the hashed IDs of both suites and test cases, the extension could keep a list of the IDs selected so far for this file; if it produces a hashed ID which has by chance already been selected, it could simply increase the value by one and check again until an unused ID is found, or another similar solution. In this case one should verify that we have not made a complete loop through the limit of encodable values for the chosen length of the hash, so that we do not enter an infinite loop. We could then simply issue an error that the number of supported suites for a given file has been reached, or otherwise increase the length of the hash.
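The probing scheme described above might be sketched like this (assuming, for illustration only, that IDs come from a truncated md5 of the title; `unique_hash_id` is a hypothetical helper, not the extension's code):

```python
import hashlib

def unique_hash_id(title, used_ids, length=3):
    """Truncated-hash ID with linear probing: on a collision, step
    through the value space until a free ID is found, and fail loudly
    once every encodable value has been tried."""
    id_space = 16 ** length
    start = int(hashlib.md5(title.encode("utf-8")).hexdigest()[:length], 16)
    for offset in range(id_space):
        candidate = format((start + offset) % id_space, f"0{length}X")
        if candidate not in used_ids:
            used_ids.add(candidate)
            return candidate
    raise RuntimeError(
        f"all {id_space} IDs of length {length} are used; increase the hash length"
    )
```

An unchanged title still maps to its base hash as long as no earlier entry has taken that slot, so the stability-under-reordering property is mostly preserved, though two colliding titles can swap IDs if their parse order changes.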