Seanmatthews opened this issue 2 months ago
Expanding on this: it really comes down to the #[values()] feature creating every combination of inputs. That gives great generation power, but it precludes associating test inputs and expected values in a 1:1 fashion.
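For reference, a minimal sketch of how #[values] behaves today (the function name, argument names, and body are illustrative):

use rstest::rstest;

// With today's #[values], rstest generates the full cartesian product:
// ("a", 1), ("a", 2), ("b", 1), ("b", 2): four cases rather than two.
#[rstest]
fn every_combination(#[values("a", "b")] val: &str, #[values(1, 2)] num: i32) {
    println!("{val} {num}");
}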
How would you feel about a #[values_assoc()] feature, which would require the same number of cases for each associated argument and pair them in a 1:1 fashion? Example:
fn test(#[values_assoc("a", "b")] val: &str, #[values_assoc(1, 2)] num: i32) {}
would generate test cases
test("a", 1)
test("b", 2)
Is this a feature for which you'd welcome a PR?
Sorry, but I cannot understand why this is not equivalent to
#[rstest]
fn test(#[values(("a", 1), ("b", 2))] vals: (&str, i32)) {}
or
#[rstest]
#[case("a", 1)]
#[case("b", 2)]
fn test(#[case] val: &str, #[case] num: i32) {}
I get your point about how this feature could work with rstest_reuse, but maybe it would be better to find a syntax that enriches the apply macro and attaches additional parameters to templates.
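For context, a rough sketch of how a shared case list currently looks with rstest_reuse's template and apply macros (the names and assertions are illustrative); the idea above is about extending the apply side of this pattern:

use rstest::rstest;
use rstest_reuse::{self, *};

// The template holds the shared case list.
#[template]
#[rstest]
#[case("a", 1)]
#[case("b", 2)]
fn shared_cases(#[case] val: &str, #[case] num: i32) {}

// Applying the template expands it into an ordinary rstest with those cases.
#[apply(shared_cases)]
fn uses_shared_cases(val: &str, num: i32) {
    assert!(!val.is_empty());
    assert!(num > 0);
}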
I guess this ticket is essentially the same as #215
I can't find a way to do this with either rstest or rstest_reuse, and it would be very useful for my case. I have a set of predefined instantiations of a struct against which I'd like to test different operations. While each of these test functions (one per operation) takes the same list of struct data, the expected output of the operation differs. Is there a way to combine the test data with different expected results for each test? The closest I got was with something like this:
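The original snippet was not reproduced here; a plausible reconstruction, assuming two independent #[values] arguments (the names and the equality check are guesses), is:

use rstest::rstest;

// Hypothetical reconstruction: `pair`, `expected`, and the assertion are assumptions.
#[rstest]
fn operation_works(
    #[values((2, 2), (2, 3))] pair: (i32, i32),
    #[values(true, false)] expected: bool,
) {
    let (a, b) = pair;
    // Because #[values] lists combine independently, the mismatched
    // combinations (2, 2, false) and (2, 3, true) are generated too,
    // and those cases fail.
    assert_eq!(a == b, expected);
}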
However, this generates every combination of the input and the expected value: (2, 2, true), (2, 2, false), (2, 3, true), (2, 3, false). How can I do something like the above that will result in only the cases (2, 2, true) and (2, 3, false)?
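For reference, the #[case] form shown in the earlier reply expresses that 1:1 association directly (the names and the equality check are illustrative):

use rstest::rstest;

// Each #[case] line pins one input to one expected value, so only
// (2, 2, true) and (2, 3, false) are generated.
#[rstest]
#[case((2, 2), true)]
#[case((2, 3), false)]
fn operation_works(#[case] pair: (i32, i32), #[case] expected: bool) {
    let (a, b) = pair;
    assert_eq!(a == b, expected);
}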