Open nicoddemus opened 6 years ago
as far as i can tell pytest.fixture_request('default_context')
in the proposal roughly matches the lazy fixture as far as specification/declaration goes
what i can also tell is that lazy_fixtures is semantically broken due to the evaluation mechanism being foreign to the fixture mechanism (as such, dependent fixtures may get the wrong values)
additionally lazy_fixtures is unaware of the potential parametrization of requested fixtures, which should be noted as a shortcoming
as such i consider lazy_fixtures a good initial prototype; before including it, however, i feel the need to enhance it in various ways
one part can be solved by the new fixture setup hooks, which can be used to map from a request to the desired fixture values
the other part may need a design discussion and perhaps needs to be deferred, as parametrization of the dependent fixtures needs to be taken care of either by explicit declaration of the parameter at the request or by extending collection/parametrization
i propose initially bailing out if a requested fixture is parametrized, and later on deciding a more nuanced mechanism
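For readers unfamiliar with the mechanism under discussion, here is a minimal plain-Python sketch of the lazy-fixture idea: a sentinel object stands in for a fixture value inside a parameter list and is only resolved at setup time. The names (`LazyFixture`, `resolve`, the dict-based "registry") are illustrative, not pytest's or the plugin's actual internals:

```python
# Toy model of the lazy-fixture idea: a sentinel that defers fixture
# resolution until "setup time".  All names here are hypothetical; this
# is not the pytest-lazy-fixture implementation.

class LazyFixture:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return f"lazy_fixture({self.name!r})"


def resolve(value, fixtures):
    """Replace a LazyFixture sentinel with the fixture's value."""
    if isinstance(value, LazyFixture):
        return fixtures[value.name]()
    return value


# A toy "fixture registry"; in pytest these would be @pytest.fixture functions.
fixtures = {"bar_fixture": lambda: 42}

params = [1, LazyFixture("bar_fixture"), 3]
resolved = [resolve(p, fixtures) for p in params]
print(resolved)  # -> [1, 42, 3]
```

The shortcoming noted above falls out of this model: because resolution happens outside the fixture system, a fixture declared with `params=[...]` has no single value for `resolve` to return, and nothing here is aware of that.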
Thanks @RonnyPfannschmidt!
In light of the technical details highlighted by @RonnyPfannschmidt, I propose that we work with @TvoroG to solve those issues while documenting pytest-lazy-fixture as the recommended solution to the problem (IOW the 2nd suggestion from the original post).
Hi! Sorry for the late response :)
I don't quite understand: you want to solve lazy_fixture parametrization first? For example, to support this:
@pytest.mark.parametrize('bar', [
    lazy_fixture('bar_fixture', params=[4, 8, 15]),
])
def test_foo(bar):
    ...
@TvoroG no, the parametrization is pretty hard to solve, and not necessarily correct that way
imagine a local override of a requested parametrized fixture that changes the parameter counts/values, or even goes from parametrized to un-parametrized
what i would like to see solved is awareness of parametrization in such cases, plus an early bail out and good errors, because this one will bite hard otherwise
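The "early bail out + good errors" behaviour asked for here could look something like the following sketch (pure Python, with a toy registry standing in for pytest's fixture definitions; the class, helper, and error message are hypothetical, not pytest internals):

```python
# Sketch of bailing out early when a lazily requested fixture turns out
# to be parametrized, instead of silently picking one value.  All names
# here are hypothetical -- a toy model, not pytest's implementation.

class FixtureDef:
    def __init__(self, name, params=None):
        self.name = name
        self.params = params  # None means "not parametrized"


def check_lazy_request(name, registry):
    """Refuse to expand a lazy request for a parametrized fixture."""
    fixturedef = registry[name]
    if fixturedef.params is not None:
        raise ValueError(
            f"lazy_fixture({name!r}): fixture is parametrized with "
            f"{fixturedef.params}; expanding a parametrized fixture is "
            f"not supported -- request a concrete parameter explicitly"
        )
    return fixturedef


registry = {
    "moarfun": FixtureDef("moarfun", params=["a", "b", "c"]),
    "plain": FixtureDef("plain"),
}

check_lazy_request("plain", registry)  # fine, not parametrized
try:
    check_lazy_request("moarfun", registry)
except ValueError as err:
    print(err)  # clear, early error instead of a wrong value later
```

The design choice being sketched: failing loudly at collection/setup time with an actionable message, rather than letting dependent fixtures receive wrong values as described above.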
@RonnyPfannschmidt I didn't quite get that, can you provide a minimal example that shows the problem (preferably using pytest-lazy-fixture)?
@fixture(params=["a", "b", "c"])
def moarfun(request):
    return request.param

@fixture(params=[1, 2, 3])
def whops(request, moarfun):
    return request.param

@pytest.mark.parametrize('a', [
    lazy_fixture('whops'),                       # not clear how to expand
    lazy_fixture_ex('whops', params=[1, 2, 3]),  # ditto
    lazy_fixture_ex2('whops', param=1),  # explicit expansion, still unaware of indirect things
    lazy_fixture_ex2('whops', param=2),
    lazy_fixture_ex2('whops', param=3),
])
def test(a):
    pass
Is anyone working on it?
@rajibmitra until pytest_lazy_fixtures is actually up to the task, merging it into the core is taking on technical debt with high interest and low payback probability - so no
Hello, is there any progress?
Nope
For what it's worth, a +1 for this as an integration.
Without popping the hood on either, the role pytest_lazy_fixture tries to fill seems to have huge benefits to the community. There seem to be lots of questions on the topic and no official solutions.
A very old and common question is "how can I use fixtures for parametrization?" (#349). During the 2016 sprint we had a discussion which resulted in a proposal, but neither of the approaches proposed there was implemented in the core.
Meanwhile @TvoroG went ahead and implemented the 2nd approach in an external plugin, pytest-lazy-fixture, which seems to work well in practice.
So how about we integrate that into the core? @TvoroG, is that something you would be willing to contribute?
Alternatively, we do not integrate it into the core right now, but officially declare that using pytest-lazy-fixture is the recommended way to use fixtures for parametrization (just as pytest-xdist is the official plugin to use for distributed testing). This way we can decide later whether to merge it (if ever) depending on user feedback. This will probably also increase the visibility of the plugin, giving us a chance of seeing the plugin in the wild.

I think the latter is "zero effort/high reward" because we can just add it to the docs, close the relevant issues and let users try it out immediately; the plugin doesn't seem to be very well known at the moment. I was just asked this question again by a coworker, which motivated me to write this proposal/question.
Thoughts?