Closed — mdamone closed this issue 8 years ago.
Yes, this change was made as an enhancement to the Expectations API.
In this case, a better version of the test can be written as follows:
@Test
public void simplifiedVersion() {
    new Expectations() {{
        dependency2.getValue(); result = 1;
        dependency2.getName(); result = "dependency2";
    }};

    String result = unit.doSomething(asList(dependency1, dependency2));

    assertEquals("dependency2", result);
}
That said, in this case it looks like a value object is being mocked, which is bad practice. So an even better version of the test would be the following (assuming that DependencyAbc has the necessary properties, and constructors or setter methods):
@Test
public void bestVersion() {
    DependencyAbc dep1 = new DependencyAbc();
    DependencyAbc dep2 = new DependencyAbc(1, "dependency2");

    String result = unit.doSomething(asList(dep1, dep2));

    assertEquals("dependency2", result);
}
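For illustration, here is a minimal, runnable sketch of what the value-object approach assumes. The real DependencyAbc and unit.doSomething(...) are not shown in this thread, so every name below, and the selection logic (return the name of the first dependency with a positive value), are inferred from the discussion, not authoritative:

```java
import java.util.Arrays;
import java.util.List;

public class ValueObjectSketch {
    // Hypothetical value object; the real DependencyAbc is not shown in the thread.
    static final class DependencyAbc {
        private final int value;
        private final String name;

        DependencyAbc() { this(0, null); }

        DependencyAbc(int value, String name) {
            this.value = value;
            this.name = name;
        }

        int getValue() { return value; }
        String getName() { return name; }
    }

    // Hypothetical stand-in for unit.doSomething(...): returns the name of the
    // first dependency whose value is positive (inferred from the discussion).
    static String doSomething(List<DependencyAbc> dependencies) {
        for (DependencyAbc dep : dependencies) {
            if (dep.getValue() > 0) {
                return dep.getName();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        DependencyAbc dep1 = new DependencyAbc();
        DependencyAbc dep2 = new DependencyAbc(1, "dependency2");
        System.out.println(doSomething(Arrays.asList(dep1, dep2)));
    }
}
```

With plain value objects like this, no mocking library is involved at all, which is the point of the "bestVersion" test.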
Are you saying minTimes = 0 is not going to be supported anymore?
I don't understand how your simplifiedVersion() is going to work. dependency1 is never mocked, so it might not produce the expected result (it would in my example, but what if the logic is changed to if (dependency.getValue() >= 0)? Then you would have to add dependency1.getValue(); result = -1;).
Sorry if you are getting caught up in this example, but it really is just an "example". My real question (or point) is that minTimes = 0 does not work. If it is not supported, shouldn't that be documented somewhere? I don't see which change in 1.26 applies to this. Maybe:
Deprecated all three attributes from the @Mock annotation (invocations, minInvocations, and maxInvocations). Use of said attributes is indicative of a behavior-oriented test that would be better written with the Expectations API.
But I am using the Expectations API, right? And right now the javadoc for minTimes still says: "Zero or a negative value implies there is no lower limit." Is this going to be changed and/or somehow prevented in the future? To me, this seems like a regression in functionality.
The reason I like "setup"-type Expectations is that I can write a single "helper" Expectations block and then re-use it for each test (and multiple times for different Injectables) without having to specifically enumerate the exact cases. Maybe you are suggesting that this is bad practice, but it really makes test creation easier and makes the tests easier to read. Without this, I feel test creation can be tedious, and the Expectations can "distract" from the real meat of the test case.
Let me know if you feel my use case is still not valid or if you have a better solution for me.
Thanks for your feedback.
BTW, if I change my helper DependencyExpectations to:
final class DependencyExpectations extends Expectations {
    public DependencyExpectations(DependencyAbc dependency, int value, String name) {
        super(dependency);
        dependency.getValue(); result = value; minTimes = 0;
        dependency.getName(); result = name; minTimes = 0;
    }
}
Then this works as I would expect (at least in version 1.26), which might do for my current test case. Except that, with the partial mocking, I am having problems with other expectations on the dependency, so I will have to work around those problems or get them fixed in the dependent library.
But either way, I would still like to know if minTimes = 0 is something that will not be supported in the future.
"minTimes = 0" is still supported, but only from expectations recorded in a test setup method (ie, @Before or @BeforeMethod).
The change in "minTimes" was not mentioned in the API documentation, indeed; I will add a mention about this (which is just the case where it's set to 0 from a test method - otherwise nothing changed).
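As a sketch of the still-supported form (this requires the JMockit library on the classpath and JUnit 4; the Collaborator type and all method names below are hypothetical, chosen only to mirror the examples in this thread):

```java
// Sketch only, not runnable without JMockit; all names are hypothetical.
public class SetupExpectationsTest {
    @Mocked Collaborator collaborator;

    @Before
    public void recordCommonExpectations() {
        new Expectations() {{
            // minTimes = 0 is accepted here because a setup method records
            // expectations shared by several tests, and any single test may
            // not exercise every one of them.
            collaborator.getValue(); result = 1; minTimes = 0;
            collaborator.getName(); result = "abc"; minTimes = 0;
        }};
    }

    @Test
    public void usesOnlyOneOfTheRecordings() {
        // This test only triggers getName(); getValue() going unused is fine,
        // which is exactly what minTimes = 0 in the setup method allows.
        assertEquals("abc", collaborator.getName());
    }
}
```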
The rationale for this change was to prevent misuse of the API. As far as I can see, any test that does "minTimes = 0" can be made better by not doing it. In general, indirect recording of expectations (through a supposedly reusable named subclass) only leads to bad, harder to understand and often more verbose tests. As in the "simplifiedVersion" example (which obviously does work for this case).
You say "it really makes the test creation easier and makes them easier to read". However, can you show an example test that actually supports this claim?
After NonStrictExpectations was removed, we changed all our test code to Expectations with minTimes = 0, as we thought this was the way to go.
Suppose there are multiple test methods in one test class (TestNG). There might be one @BeforeMethod for them all, but some of the methods need specific recordings. For instance, if they use a DataProvider, minTimes = 0 is needed. We have Expectations blocks directly in those test methods.
Are you saying we must extract this block from the test method into a separate @BeforeMethod and tie it to the test method through a specific test group? I must say I don't see this as an improvement. Thank you.
Hmm, I am trying to understand your case, but failing...
If you have an expectation block in a particular test method, then it's only there because you expect the code being tested to make at least one call matching the recorded expectation. That's why "minTimes = 1" is the default, and also why "minTimes = 0" is not allowed from a test method (it is allowed from a @BeforeMethod).
So, I don't see any need to "extract this block from the test method". It should work fine as is (i.e., using the default minTimes).
No, I don't expect the code being tested to make any call to the collaborator recorded in the expectation. That was the reason to use NonStrictExpectations in the first place. There is a TestNG DataProvider for the test method, providing input data for the method under test. Like this:
@Test(dataProvider = "dataProvider")
public void test(Data input) {
    new Expectations() {{
        collaborator.find("x"); result = xxx; minTimes = 0;
        collaborator.find("y"); result = yyy; minTimes = 0;
    }};

    Output output = codeUnderTest.call(input);

    // ...assert output contents
}
With this change, it is no longer possible to have one expectation block covering all of these input parameters, because the methods of the collaborator recorded in the expectations block (in our case, a codebook manager) are called in a different order, multiple times, or not at all, all depending on the DataProvider input parameters. All we want to record is: if a particular method gets called, the expected output is returned from the collaborator (codebook); the result of the method under test is asserted later.
Why have you decided to restrict minTimes=0 usage to @BeforeMethod only? What is the difference between that and recording in test method? That sounds like a serious limitation of functionality. If this is misuse of the API as you say in previous comment, how would you construct this test? Thank you.
I didn't consider cases like this, but "minTimes = 0" can be made to be allowed when a dataProvider is used.
Would it be possible to show a more complete test, though? As the test method only receives input (and doesn't use it in the expectations), how does it know which asserts to execute? I always thought that data-driven tests would exercise a single path with different data sets, but here the test obviously goes through different paths based on each input set.
After some searching & thinking about the question, here are my conclusions:
This is a case of conditional test logic, a known bad testing practice. Also, see the answers to this StackExchange question.
More specifically, the issue becomes clearer if the test is re-written as follows:
@Test(dataProvider = "dataProvider")
public void testDifferentScenarios(Data input) {
    if (isScenarioA(input)) {
        new Expectations() {{ collaborator.find("x"); result = xxx; }};
    }
    else { // scenario B
        new Expectations() {{ collaborator.find("y"); result = yyy; }};
    }

    Output output = codeUnderTest.call(input);

    // ...assert output contents
}
The test above should be separated into two tests, one for each path/scenario:
@Test(dataProvider = "dataProviderA")
public void testOneScenario(Data input) {
    new Expectations() {{ collaborator.find("x"); result = xxx; }};

    Output output = codeUnderTest.call(input);

    // ...assert output contents for scenario A
}

@Test(dataProvider = "dataProviderB")
public void testAnotherScenario(Data input) {
    new Expectations() {{ collaborator.find("y"); result = yyy; }};

    Output output = codeUnderTest.call(input);

    // ...assert output contents for scenario B
}
Here is a hopefully clearer example:
@Test(dataProvider = "dataProvider")
public void test(String inputXml, Output expected) {
    new Expectations() {{
        collaborator.find("x"); result = codebookX; minTimes = 0;
        collaborator.find("y"); result = codebookY; minTimes = 0;
    }};

    // use inputXml to mock ws template - RequestMatchers.payload()...
    Output actual = codeUnderTest.call();

    ReflectionAssert.assertReflectionEquals(expected, actual);
}

@DataProvider
private Object[][] dataProvider() {
    return new Object[][] {
        {"full.xml", newFullExpected()},
        {"empty.xml", newEmptyExpected()}
    };
}
In our case, the input is XML (Spring WS testing with RequestMatchers.payload()) and the expected output is a domain object. The actual domain object is compared to the expected one using the convenient org.unitils unitils-core library. So basically we test correct mapping. There is no need to assert exact executions of the collaborator (codebook manager): its input can be found in the XML, and its output is asserted through the domain object. As you can see, there are no branches or conditions; it is very simple. In the DataProvider you can easily pair the input XML with the expected domain object.
Without minTimes = 0 support, we would have to split these tests into separate test methods with strict expectation blocks for each of them, which results in more verbose (and unnecessary) tests, or extract the expectations block into a @BeforeMethod, which doesn't seem like an improvement. Thank you.
Thanks. But this only confirms my previous comment: the test relies on conditional logic, even if it is implicit. So, the ideal solution would be to separate it in two tests, one for each scenario being tested, where each scenario uses a different expectation: find("x") or find("y").
Alternatively, the argument to "find(String)" and its corresponding result could come from the data provider:
@Test(dataProvider = "dataProvider")
public void test(String whatToFind, CodeBook codebookFound, String inputXml, Output expected) {
    new Expectations() {{ collaborator.find(whatToFind); result = codebookFound; }};

    Output actual = codeUnderTest.call();

    ReflectionAssert.assertReflectionEquals(expected, actual);
}

@DataProvider
private Object[][] dataProvider() {
    return new Object[][] { // or vice-versa
        {"x", codebookX, "full.xml", newFullExpected()},
        {"y", codebookY, "empty.xml", newEmptyExpected()}
    };
}
Well, could you provide an example of valid minTimes = 0 usage in a @BeforeMethod where it would not be misuse of the API or conditional logic like in this example? I am trying to understand the difference between the two approaches and why it is still allowed in a @BeforeMethod. We were placing expectation blocks in both places, depending on whether they were relevant for all test methods or just some of them. Thank you.
The reason it is allowed in a @BeforeMethod is that expectations recorded there are always meant to be used by multiple tests, and it's reasonable to expect that one or another expectation may not actually be needed by one or another of the tests in the test class. This, however, is not a reasonable thing to expect when you record expectations specifically for a single test, because the developer writing the test is supposed to know what's being tested, and therefore to know whether each given expectation is needed or not.
Or, to put it more simply: expectations in a "before" method are written for a set of tests, while expectations in a test method are for a single test; in the first case, the developer doesn't know that each expectation is needed in each test, while in the second case he/she does.
In the case of your example test, the problem is that it wants to record expectations without knowing whether they are needed or not. Consider: how would somebody else understand the test when seeing it for the first time? Having an implicit conditional for each expectation (represented by the "minTimes = 0" constraint, which says "this expectation may be needed or not, we don't know") adds unnecessary complexity that can be avoided.
With JMockit version 1.26 (the latest version), minTimes does not seem to work the same as before. When a method call in an Expectations block has minTimes set to 0, a test which does not call that method will fail with a MissingInvocation error for that method call.
Here is an example test case that fails in 1.26, but passes in 1.25.
MinTimesTest.txt
This used to be taken care of by NonStrictExpectations. Then that class was removed, and I think it was intended to be replaced with a regular Expectations block using minTimes = 0 for each method. If that is not the case, then what is the recommended way to have "optional" method expectations?