Open win5923 opened 1 year ago
Hi, @win5923 I'll take a look today or tomorrow.
@delatrie If you need more information, please let me know. Thanks!
Hi, @win5923 ! Some updates regarding this issue.
I confirm there is a bug in allure-pytest related to how a dynamic parameter updates an existing parameter specified by pytest (or one of its plugins).
Until this is fixed, I advise you to introduce a new parameter instead of overwriting an existing one in your `setup` fixture:
```python
import allure
import pytest
from playwright.sync_api import Page


@pytest.fixture(scope="function")
def setup(page: Page, pytestconfig):
    page.set_viewport_size({"width": 1920, "height": 1080})
    page.goto("https://google.com")
    browser = pytestconfig.getoption("--browser")
    browser_channel = pytestconfig.getoption("--browser-channel")
    if browser_channel is not None:
        allure.dynamic.feature(browser_channel)
        allure.dynamic.tag(browser_channel)
        allure.dynamic.parameter("browser_channel", browser_channel)  # Note the parameter name here
    else:
        allure.dynamic.feature(browser[0])
        allure.dynamic.tag(browser[0])
        allure.dynamic.parameter("browser_name", browser[0])
    yield page
```
That will do the trick.

In the example above, the `browser_name` parameter is already added to the test by pytest-playwright. It is originally set to the value `"chromium"` when you run the test with `--browser-channel=chrome` or `--browser-channel=msedge`. We use original values (as opposed to their serialized counterparts that appear in the report) to calculate the `historyId` (the thing that determines whether two tests are retries or not), because serialized values might be indistinguishable from each other (e.g., byte arrays). The bug appears because, when handling a dynamic parameter added via `allure.dynamic.parameter`, we incorrectly update only the serialized value, leaving the original one intact.
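To illustrate why the original values matter, here is a simplified, hypothetical sketch of such a history-ID calculation (not allure-pytest's actual implementation): hash the test name together with the original parameter values. If a dynamic update changes only the serialized value, the hash stays the same and the runs are folded into retries of one test.

```python
import hashlib


def history_id(test_name, parameters):
    # Hypothetical sketch: hash the test name plus the *original*
    # (pre-serialization) parameter values.
    digest = hashlib.md5(test_name.encode())
    for name, original_value in sorted(parameters.items()):
        digest.update(f"{name}={original_value!r}".encode())
    return digest.hexdigest()


# Two runs whose original values are both "chromium" collide,
# even if their serialized values in the report were to differ:
run1 = history_id("test_login", {"browser_name": "chromium"})
run2 = history_id("test_login", {"browser_name": "chromium"})
assert run1 == run2

# Introducing a new parameter changes the hash, so the runs
# are reported as separate tests:
run3 = history_id("test_login", {"browser_name": "chromium",
                                 "browser_channel": "msedge"})
assert run3 != run1
```

This is why adding a `browser_channel` parameter works around the bug: it changes the original-values input to the ID, whereas overwriting `browser_name` dynamically (with the bug present) does not.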
Given the following example:
```python
import allure
import pytest
from time import perf_counter


@pytest.mark.parametrize("a", ["old", "old"])
def test_issue752_reproduction(a):
    allure.dynamic.parameter("a", perf_counter())
```
If we run pytest, we get two result files with the same `historyId`. If we generate and open the report from these files, we see one test with one retry instead of two tests.
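To confirm the collision without generating a report, one can count the `historyId` fields in the generated result files. A small helper (assuming the default `allure-results` directory and the standard `*-result.json` file layout) might look like:

```python
import json
from collections import Counter
from pathlib import Path


def history_id_counts(results_dir="allure-results"):
    # Count how many result files share each historyId; a count > 1
    # means the report will fold those runs into retries of one test.
    counts = Counter()
    for path in Path(results_dir).glob("*-result.json"):
        counts[json.loads(path.read_text(encoding="utf-8")).get("historyId")] += 1
    return counts
```

Running this after the reproduction above should show a single `historyId` with a count of 2.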
Hi, @delatrie Do you have any examples? I'm not quite sure how to add `browser_channel` from `conftest.py` to the Allure Parameters, since I specify a specific browser for each test run, as I mentioned above. Thanks in advance.
In your `conftest.py`, just replace this line:

```python
allure.dynamic.parameter("browser_name", browser_channel)
```

with this one:

```python
allure.dynamic.parameter("browser_channel", browser_channel)
```
Now, when you run your tests with `--browser=firefox`, each test will receive `browser_name="firefox"` as its parameter. When you run them with `--browser-channel=chrome` or `--browser-channel=msedge`, each test will receive `browser_name="chromium"` and an additional parameter `browser_channel` with the value `"chrome"` or `"msedge"`.
Did that help?
Yes, it works. Thanks!
You're welcome!
Hi, I am conducting login tests on different browsers using pytest-playwright. I am using `@pytest.mark.parametrize` to run tests with different emails and passwords. However, I noticed that in the Allure report, my tests for Edge and Chrome are grouped together as the same test case: the test that runs first becomes a retry of the subsequent test case. I want them to appear as separate tests. What I would want it to look like:

I've tried using `allure.dynamic.tag` and `allure.dynamic.parameter`, but they didn't help. Thanks in advance.

I'm submitting a ...
What is the current behavior?
If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
test_login.py
conftest.py
I am testing different browsers using the following command:
What is the expected behavior?
The Edge test case and the Chrome test case need to be separate.
What is the motivation / use case for changing the behavior?
Please tell us about your environment: