sgbaird opened this issue 2 years ago
Maybe I could pass a tuple of (fixture, attr, check_value) with indirect=False. Assuming that works, will I be losing the benefit of using fixtures in the first place?
Not working either, as the fixture is left as a callable inside the test function.
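For illustration only, here is a minimal sketch with hypothetical names (not the code from this issue) of why that happens: with indirect=False the objects in the parametrize list are passed to the test verbatim, so referencing a fixture function there hands the test the function itself rather than the value it returns.

```python
import pytest

@pytest.fixture
def model_a():
    return object()  # stand-in for the real class instance

# indirect=False (the default): each entry below reaches the test as-is.
@pytest.mark.parametrize(
    "fix, attr, check_value",
    [(model_a, "some_attr", 42)],
)
def test_attr(fix, attr, check_value):
    assert callable(fix)  # `fix` is the fixture function, not its return value
```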
Thanks @sgbaird for your question.
A simple way to tackle the issue would be to invert the problem: first create a parametrized fixture that returns a pair of objects (so it returns the "zip" directly), and then define your two "independent" fixtures so that they depend on it and each take only the first or second element.
Would that solve your problem?
See also #284
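A minimal sketch of that inversion, written in plain pytest with hypothetical names (the same idea should carry over to pytest-cases' @fixture and @parametrize):

```python
import pytest

# Matched pairs, i.e. the "zip" built up front; all names here are made up.
PAIRS = [
    ("a1", "b1"),
    ("a2", "b2"),
]

@pytest.fixture(params=PAIRS)
def pair(request):
    # The single parametrized fixture that returns the pair directly.
    return request.param

@pytest.fixture
def first_obj(pair):
    return pair[0]

@pytest.fixture
def second_obj(pair):
    return pair[1]

def test_both(first_obj, second_obj):
    # Runs once per pair rather than over the cross-product, and both
    # fixtures always come from the same pair.
    assert (first_obj, second_obj) in PAIRS
```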
I have two fixtures that each return an instance of a different class, and I want to compare each instance's attributes against expected values for a fixed set of inputs. The inputs are the same for both fixtures, but the list of attributes to check, and the values they should have, differ between the two classes.
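For concreteness, a rough sketch of that setup; the classes, attribute names, and expected values below are purely illustrative, not taken from the issue:

```python
import pytest

# Hypothetical stand-ins for the two different classes.
class ModelA:
    def __init__(self, x, y):
        self.mean = (x + y) / 2
        self.span = abs(x - y)

class ModelB:
    def __init__(self, x, y):
        self.total = x + y
        self.count = 2

SHARED_INPUTS = {"x": 1.0, "y": 2.0}  # the same fixed inputs for both fixtures

@pytest.fixture
def model_a():
    return ModelA(**SHARED_INPUTS)

@pytest.fixture
def model_b():
    return ModelB(**SHARED_INPUTS)

# Each class has its own list of (attribute, expected value) pairs to check.
MODEL_A_CHECKS = [("mean", 1.5), ("span", 1.0)]
MODEL_B_CHECKS = [("total", 3.0), ("count", 2)]
```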
This question is pretty similar:
In my case, since I'd like to "loop" through fixtures, it seemed like I'd need to either use pytest-cases or some custom workaround. I had trouble getting this kind of behavior using two @parametrize decorators, so I went with the solution mentioned above of creating a flat list of the combinations. I set indirect=True so that I evaluate it list-wise, but this throws an error:
Here's the function I mocked up for this use-case:
Maybe I could pass a tuple of (fixture, attr, check_value) with indirect=False. Assuming that works, will I be losing the benefit of using fixtures in the first place?
How would you suggest dealing with this situation? I've spent a long time searching and messing around, so if you have a suggestion or a canonical answer I think I'll go with that.
Related: #150