Open ddunbar opened 8 years ago
The way I have been organizing my own tests in Swift projects is to keep performance tests in their own module. This allows a lot of flexibility in Xcode to control when they are run (via schemes, etc) and helps with @testable-related issues when wanting to run tests on release builds. However, it is also an unfortunate model because of all the extra overhead of defining additional test suites and separate tests from each other.
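For concreteness, here's a minimal sketch of that layout with hypothetical package and target names: correctness tests and performance tests are declared as separate test targets, so tooling (schemes, CI scripts, etc.) can choose which to build and run.

```swift
// swift-tools-version:5.5
// Hypothetical manifest illustrating the "performance tests in their own module" layout.
import PackageDescription

let package = Package(
    name: "MyLibrary",
    targets: [
        .target(name: "MyLibrary"),
        // Correctness tests: may import the library with @testable.
        .testTarget(name: "MyLibraryTests", dependencies: ["MyLibrary"]),
        // Performance tests: kept in their own module so they can be built
        // and run (or skipped) independently, e.g. only against release builds.
        .testTarget(name: "MyLibraryPerformanceTests", dependencies: ["MyLibrary"]),
    ]
)
```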
The two straightforward options I see are:

1. Establish a convention that performance tests live in their own test modules, which are always built without `-enable-testability`. This would encourage users to "do the right thing" (in my opinion) at the cost of a lot of inflexibility, and some people might not care if their performance tests run in a true release mode.
2. Give users direct control over whether a test module is built with `-enable-testability`. This would allow performance sensitive users to carve up additional test modules that can run in full release mode, without forcing everyone into this model.

An additional benefit of #1 is that it clearly separates the tests, which makes it easy for us to have additional behavioral defaults over when performance tests are run. That will be harder for us to manage if they are just mixed in with all the other tests; we would need XCTest to help us out, and even then it wouldn't be an ideal workflow (XCTest performance tests are phrased in a way that they can be a combination of correctness tests and performance test metrics, so XCTest can't know a test is a performance test until the `measure` API is called).
#2 seems like the right general tradeoff for SwiftPM, except for this problem where we won't be able to easily skip performance tests in certain modes. We could potentially solve that with a different, orthogonal feature (i.e. Swift build defines for particular testing modes).
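As a rough sketch of the "build defines" idea with today's tooling (the `PERF_TESTS` condition, test class, and workload below are hypothetical), a test body can be gated on a compilation condition that is only set when the package is built with something like `swift test -c release -Xswiftc -DPERF_TESTS`:

```swift
import XCTest

final class ParsingPerformanceTests: XCTestCase {
    func testSplitPerformance() {
        #if PERF_TESTS
        let input = String(repeating: "a,b,c\n", count: 50_000)
        // XCTest only learns this is a performance test once `measure`
        // actually runs.
        measure {
            _ = input.split(separator: "\n").map { $0.split(separator: ",") }
        }
        #endif
    }
}
```

Without the define, the test still exists but compiles to an empty body, which approximates the "skip performance tests in certain modes" behavior described above.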
@modocache @briancroom I am curious if either of you have an opinion here?
I've never written any performance tests, so no opinions here 🙂 I'll ask people who have, though!
Maybe we can do both? A convention-based approach to detecting performance test targets, complemented by the ability to e.g. turn testability on or off manually, seems like it would fit in well with the ideas described in the SwiftPM docs.
One particular behavioral default we'll likely want to change for performance tests is test parallelization, which should be disabled for them.
I have a strong preference that all tests should just be runnable in release mode in the same way. The reasons are:

- software is usually run in `release` mode, so it should be tested in `release` mode too (on top of `debug`). The code might not even be the same (`precondition` vs. `assert`, and also `if _isDebugAssertConfiguration()`)
- `release` mode might reveal extra bugs, especially under the address, UB, and thread sanitizers

In other words, we definitely don't want to run only "performance tests" in release mode, but all tests.
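To make the debug/release difference concrete, here's a small illustrative function (not from any real project) where the two configurations execute different checks; the `_isDebugAssertConfiguration()` mentioned above is an underscored, internal standard-library hook for the same distinction:

```swift
func element(at index: Int, in array: [Int]) -> Int {
    // Compiled out entirely in release (-O) builds:
    assert(index >= 0, "negative index; only checked in debug builds")
    // Still checked, and still traps, in release builds:
    precondition(index < array.count, "index out of range")
    return array[index]
}
```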
@weissi I agree 110%. We are unfortunately rather stymied by the design of @testable. There has been some discussion about how we can change the testable model to better support the features we would like, but no one is actively working on it.
Maybe we should call this bug "Improve support for running tests in release mode".
I'll chip in. I agree with @weissi in general.
I use the release configuration, like this:
`swift test -c release -Xswiftc -enable-testing`
but something weird is happening that I can't explain: not all tests run. I have 2 performance tests in the test suite; both run, but only one actually measures the time. Both tests are rather similar and use `XCTest.measure(...)`,
e.g. here are the sources:
If I run with `-c debug`, all tests run.
Update: The test failed, but it was not reported in any way.
¯\\_(ツ)_/¯
Xcode 9.2, Swift 4.0.3
Additional Detail from JIRA
| | |
|------------------|-----------------|
|Votes | 3 |
|Component/s | Package Manager |
|Labels | New Feature |
|Assignee | None |
|Priority | Medium |

md5: 946faef0bc85a761be1928a00362082b

is duplicated by:
- `@testable` doesn't work

relates to:
Issue Description:
We should have good support in SwiftPM for performance tests in general, and XCTest's performance tests in particular.
Performance tests often have slightly different requirements from correctness tests:
1. Performance tests usually need to run against release builds, which conflicts with use of the `@testable` API.
2. Performance tests frequently take longer to run than correctness tests. This can make it useful to devise separate workflows that make it easy to partition them out to only run in CI, etc. (see the sketch after this list).
3. Performance test data is much richer than correctness data, which is relatively easy to present. Presenting performance test data well is incredibly difficult. It would be nice to provide some support for this, or find a good way to delegate this to another tool.
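As a sketch of the partitioning workflow mentioned in point 2 (the `RUN_PERF_TESTS` environment variable and the test contents are hypothetical; `XCTSkipUnless` and `measure` are standard XCTest APIs, the former only in newer XCTest versions), a performance test can skip itself unless explicitly enabled, e.g. in a dedicated CI job:

```swift
import Foundation
import XCTest

final class SortingPerformanceTests: XCTestCase {
    func testSortPerformance() throws {
        // Only run when explicitly requested, e.g.:
        //   RUN_PERF_TESTS=1 swift test -c release
        try XCTSkipUnless(
            ProcessInfo.processInfo.environment["RUN_PERF_TESTS"] == "1",
            "Set RUN_PERF_TESTS=1 to run performance tests"
        )

        let input = (0..<100_000).shuffled()
        measure {
            _ = input.sorted()
        }
    }
}
```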