Closed: nishihatapalmer closed 1 year ago
Maybe some tests are modifying the text and leaving it dirty
Fixed in #65.
While the same random text was generated from the seed each time, some algorithms modify the text buffer beyond the end of the text (e.g. to place a pattern sentinel guard).
This causes algorithms that erroneously read out of bounds to give different results depending on which algorithm ran before them, and how that algorithm modified the text buffer.
The solution is simply to zero out the entire text buffer before each algorithm run. This doesn't fix the buggy algorithms, of course; it just guarantees they give the same failures for the same random seed, no matter which algorithms ran before.
If I run a test on a single algorithm with a fixed random seed, and then run a test on a set of algorithms that includes that one with the same seed, I see different failures for that algorithm.
This suggests the random state is not being reset identically for each algorithm. However, the code appears to do exactly that. It is unclear why each algorithm run is not repeated identically given the same starting point.