Open richkadel opened 3 years ago
I suggest ignoring "ignore" when "--bless" is specified.
Rather than doing this, I think it makes more sense not to bless the output of ignored tests (i.e. leave the outputs the same as they were before). I do agree that x.py should warn either way, though.
"not to bless the output of ignored tests" is how it works today (I'm fairly sure). The tests don't run at all, so there is no opportunity to bless them.
I thought about how to do a warning while still ignoring them, but since ignored tests don't run, how would we know whether an ignored test is one that cares about `--bless`? What part of `x.py test` would know to warn, unless we always warn if any test in the target list is ignored? (For example, running `x.py test --bless` without any other arguments would always warn, but it would not be specific about which ignored tests will fail to bless.)
Ignoring ignored tests on `--bless` is not that much better, I grant you. At least it forces the issue. But the only better way, I think, is to have some way of determining whether a given test supports `--bless`, and I don't know how many variants there are. (For Makefile tests, we'd have to grep the Makefile for `RUSTC_BLESS_TEST`, for example — see the rough sketch below — but `mir-opt` tests work differently.)
It may be worth doing it that way, but it's still less than optimal, since many changes break only a small set of blessed results. And I can't think of any way to figure out which results would be affected without the developer enabling every blessable test's required configurations.
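For what it's worth, a rough sketch of the Makefile check mentioned above (the directory path is from memory and may not match the current tree):

```sh
# Hypothetical sketch: list Makefile-based tests that mention RUSTC_BLESS_TEST,
# i.e. the run-make tests that appear to support --bless.
# The directory name is an assumption and may differ in the actual tree.
grep -rl --include=Makefile 'RUSTC_BLESS_TEST' src/test/run-make-fulldeps
```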
"not to bless the output of ignored tests" is how it works today (I'm fairly sure). The tests don't run at all, so there is no opportunity to bless them.
From what I can tell on https://github.com/rust-lang/rust/pull/82076, it's deleting the test outputs, not just ignoring them.
Yes, that seems to be a different issue. But the first time you ran the test with `--bless` you said it didn't fail, and Eric suggested you needed `profiler = true`. I assumed that was one of your problems. But regardless, I'm fairly sure the issue described here would be a problem for anyone who did not have `profiler = true`, or for whom the test was disabled for any other reason (such as trying to bless from `windows-gnu`, which is currently disabled as well).
Some `ignore` directives in tests (such as `needs-profiler-support`) direct `compiletest` to skip over tests when certain conditions are not present. If a compiler change also requires updating "blessed" (expected) test results, those results need to be rebuilt regardless of the developer's local configuration choices.

For instance, tests that include `needs-profiler-support` will be ignored if the developer has not enabled `profiler = true`. When the developer runs `x.py test --bless`, the tests in question will not run, and updated results will not be generated, but the `x.py test` run will appear to have succeeded. For tests that only run under a full `bors` CI run, the test will eventually fail (perhaps hours or days later), with potentially mysterious results (to the developer), since everything passed locally.

I suggest ignoring "ignore" when `--bless` is specified.
If the developer is unable to recreate all of the required configurations, they should at least get feedback, via error messages, about the tests that failed; they can then either adjust their environment to compensate, or visually verify that their changes should not affect the failing tests (and therefore don't require re-blessing them).
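To make the scenario concrete, here is roughly what the relevant pieces look like (directive and config syntax are from memory, so treat the exact spelling as an assumption). The coverage tests carry a compiletest header such as:

```
// needs-profiler-support
```

and they only run locally when `config.toml` enables the profiler runtime:

```toml
[build]
profiler = true
```

Without that setting, `x.py test --bless` skips those tests, so their expected output is never regenerated even though the run reports success.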
cc: @jyn514