One of the tests is currently failing due to drift in dependencies or compilers that I have been unable to track down. This also appears to be system-dependent, as the tests still pass on a few setups I have tried. There seems to be enough variance in the answers that comparing against a single set of answer files is not the best approach. The only solution I can think of is moving to a setup where answers are generated on the fly by reverting to a gold-standard changeset. I am open to other suggestions.
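If the underlying problem is bit-exact comparison against stored answer files, one lighter-weight alternative (or complement) to regenerating answers on the fly would be comparing within a tolerance. A minimal sketch, assuming the answers are numeric arrays; the function name and tolerance values here are hypothetical, not part of the existing test suite:

```python
import numpy as np

def answers_match(computed, gold, rtol=1e-6, atol=1e-12):
    # Element-wise comparison within relative/absolute tolerance,
    # rather than requiring bit-for-bit identical answer files.
    return np.allclose(computed, gold, rtol=rtol, atol=atol)

# Small cross-compiler drift passes; a real regression does not.
print(answers_match([1.0, 2.0], [1.0, 2.0 + 1e-9]))  # True
print(answers_match([1.0, 2.0], [1.0, 2.1]))         # False
```

The hard part would be choosing tolerances loose enough to absorb compiler and dependency drift but tight enough to still catch real regressions, which may not be possible for every test.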