We currently have a mix of direct and indirect (visual) comparisons between the benchmark and new simulation outputs. It would be good to add a relative performance threshold to each validation relationship so that the new output can be flagged as passing or failing a high-level test; see the sketch below.
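As a rough sketch of what such a threshold check could look like (assuming the benchmark and new outputs are available as NumPy arrays; the function name and the 5% tolerance are placeholders, not an agreed-upon spec):

```python
import numpy as np

def passes_relative_threshold(benchmark, new_output, rel_tol=0.05):
    """Return True if the new simulation output is within a relative
    tolerance of the benchmark output, element-wise."""
    benchmark = np.asarray(benchmark, dtype=float)
    new_output = np.asarray(new_output, dtype=float)
    # Guard against division by zero where the benchmark value is exactly 0.
    denom = np.where(np.abs(benchmark) > 0, np.abs(benchmark), 1.0)
    rel_diff = np.abs(new_output - benchmark) / denom
    return bool(np.all(rel_diff <= rel_tol))

# Example usage for one validation relationship:
# ok = passes_relative_threshold(benchmark_series, new_series, rel_tol=0.05)
# print("PASS" if ok else "FAIL")
```

The per-relationship tolerance could live in a small config table so each validation comparison can declare its own pass/fail threshold rather than sharing a single global value.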