reportportal / agent-python-pytest

Framework integration with PyTest
Apache License 2.0

Skipped tests mark launch status as Failed #269

Closed cr-idanhaim closed 2 years ago

cr-idanhaim commented 3 years ago



Describe the bug Currently, when I have at least 1 skipped test, the whole launch status is marked as Failed, but I would expect the status to be Passed (provided, of course, there are no failed tests at all). For example: let's assume I have 10 tests, of which 8 passed and 2 were skipped. The status is Failed, but it should be Passed.

Steps to Reproduce:

  1. Install pytest agent
  2. Run tests and skip some of them (see attached screenshot)

Expected behavior The launch should be marked as Passed

Actual behavior The launch is marked as Failed

Package versions: pytest 6.2.4; reportportal-client 5.0.10; pytest-reportportal 5.0.9

Server version: API Service: 5.3.5; Index Service: 5.0.10; Authorization Service: 5.3.5; Service UI: 5.3.5;

Additional context Similar issue: https://github.com/reportportal/reportportal/issues/451

iivanou commented 3 years ago

@cr-idanhaim Please provide debug logs and the pytest output if possible. Thanks.

cr-idanhaim commented 3 years ago

@iivanou Where can I find the debug logs? Could you send full paths to the relevant logs?

cr-idanhaim commented 3 years ago

@iivanou Any update on my questions?

iivanou commented 3 years ago

> @iivanou Where can I find the debug logs? Could you send full paths to the relevant logs?

It depends on how you run pytest. By default, pytest saves logs in the root directory where the framework is invoked. Debug logging is disabled by default; add --log-level=DEBUG to the command line to enable it.
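For example, something like `pytest --log-level=DEBUG tests/`, where the `tests/` path is just a placeholder for your test location.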

cr-idanhaim commented 3 years ago

As we discussed in the Slack channel, I uploaded the log file you requested and a screenshot (attached).

HardNorth commented 3 years ago

As per the Slack discussion, I had no luck reproducing your issue. I implemented a special suite which skips tests in different ways and still got nothing: https://github.com/reportportal/examples-python/blob/master/pytest/tests/test_skipped.py
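For reference, the suite covers skip variants along these lines (a sketch, not the exact file contents; see the link above for the real tests):

```python
import pytest

# Skip unconditionally via a marker
@pytest.mark.skip(reason='skipped via marker')
def test_skip_marker():
    assert False

# Skip conditionally
@pytest.mark.skipif(True, reason='skipped via skipif')
def test_skipif():
    assert False

# Skip imperatively from inside the test body
def test_imperative_skip():
    pytest.skip('skipped imperatively')
```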

Could you please ensure that your extensions and fixtures do not fail tests?

PS: I use Report Portal 5.4.0 and pytest-reportportal 5.0.10

HardNorth commented 3 years ago

Closing as "Cannot reproduce"

cr-NirKlieman commented 3 years ago

@HardNorth

I reproduced the bug with an inner-skip test:

```python
import pytest

def test_pass_to_show_in_report():
    assert True is True

@pytest.mark.skip(reason='no way of currently testing this')
def test_the_unknown():
    assert True is False

def test_inner_skip():
    pytest.skip("inner skip")
```

When running, only the first and third tests are executed, but the final result of the launch in the ReportPortal web UI is Failed:

(screenshot attached)

I also use pytest-reportportal 5.0.10

HardNorth commented 3 years ago

OK, one more try

HardNorth commented 3 years ago

Nope, still works fine (screenshot attached).

HardNorth commented 3 years ago

This is my new test: https://github.com/reportportal/examples-python/blob/master/pytest/tests/test_skipped.py#L24

HardNorth commented 3 years ago

@cr-NirKlieman Two questions:

  1. I can't recall such a UI view in recent versions of RP. That might be a backend bug if you use an old version of RP, or a bug in your own modifications. Could you please try my suite on a fresh, unmodified RP installation? We have a demo instance you can try: https://demo.reportportal.io/
  2. Could you please ensure you use the latest version of the client module: reportportal-client 5.0.12?

cr-NirKlieman commented 3 years ago

@HardNorth

I reproduced the bug with a new RP UI (taken from here) on my local computer. It has the same versions as your demo (5.5.0 etc.).

The other component versions are:

pytest-reportportal==5.0.10
reportportal-client==5.0.12

The pytest.ini configuration is:

rp_project = probe_tests
rp_endpoint = http://127.0.0.1:8080
rp_ignore_errors = True
rp_hierarchy_dirs = False
rp_hierarchy_module = False
rp_hierarchy_class = False
rp_hierarchy_parametrize = True
rp_is_skipped_an_issue = False

And it still reports Failed when at least one test is skipped, although the others passed.

Thanks

(screenshot attached)

iivanou commented 3 years ago

Can you share the pytest debug logs as well? I'd like to see how exactly the test items are being reported. Thanks.

HardNorth commented 3 years ago

@cr-NirKlieman Ahh, I've got it, it's a widget. Tried again on the latest version of RP, still nothing (screenshot attached).

Probably it depends on the Python version; which one do you use?

cr-NirKlieman commented 3 years ago

@HardNorth We use Python 3.8.6

@iivanou Here are the pytest debug logs:

============================= test session starts =============================
collecting ... collected 2 items

[18-08-2021 13:16:11] DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info: Case1 start
test_user_info.py::test_pass[BB_Linux_Ubuntu_20_10-1-None] [18-08-2021 13:16:11] DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info: Case1 end
Case1 start

-------------------------------- live log call --------------------------------
DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info:test_user_info.py:118 Case1 start
Case1 end
DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info:test_user_info.py:120 Case1 end
PASSED[18-08-2021 13:16:11] DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info: Case2 start

test_user_info.py::test_inner_skip[BB_Linux_Ubuntu_20_10-1-None] Case2 start

-------------------------------- live log call --------------------------------
DEBUG    test_linux_and_mac_collection.tests.collection.linux.test_user_info:test_user_info.py:124 Case2 start
SKIPPED (inner skip message)
Skipped: inner skip message

================== 1 passed, 1 skipped, 9 warnings in 0.15s ===================

Process finished with exit code 0

The tested code is:

```python
import pytest

# rp_logger: logging fixture (ReportPortal-aware logger)
def test_pass(rp_logger):
    rp_logger.debug("Case1 start")
    pass
    rp_logger.debug("Case1 end")

def test_inner_skip(rp_logger):
    rp_logger.debug("Case2 start")
    pytest.skip("inner skip message")
    rp_logger.debug("Case2 end")
```

iivanou commented 3 years ago

I do not see any logs from the agent and the listener...

cr-NirKlieman commented 3 years ago

Here are the logs of the agent:

Test 1 (passed)

send: b'POST /api/v2/probe_tests/item HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 822\r\nContent-Type: application/json\r\n\r\n'
send: b'{"name": ":\\\\test_linux_and_mac_collection\\\\tests\\\\collection\\\\linux\\\\test_user_info.py::test_pass[BB_Linux_Ubuntu_20_10-1-None]", "description": null, "attributes": [{"key": "os.bits", "value": "64"}, {"key": "os.type", "value": "Linux"}, {"key": "template.name", "value": "BB_Linux_Ubuntu_20_10"}, {"key": "module.name", "value": "test_pass"}], "startTime": "1629298755825", "launchUuid": "xx-xx-xx-xx-xx", "type": "STEP", "parameters": [{"key": "endpoint_machine", "value": "BB_Linux_Ubuntu_20_10-1", "system": false}, {"key": "topo", "value": "None", "system": false}], "hasStats": true, "codeRef": "C:\\\\Users\\\\nir.klieman\\\\PycharmProjects\\\\code\\\\probe-tests\\\\test_linux_and_mac_collection\\\\tests\\\\collection\\\\linux\\\\test_user_info.py:test_pass[BB_Linux_Ubuntu_20_10-1-None]", "testCaseId": null}'
reply: 'HTTP/1.1 201 Created\r\n
test_user_info.py::test_pass[BB_Linux_Ubuntu_20_10-1-None] send: b'PUT /api/v2/probe_tests/item/xx-xx-xx-xx-xx HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 137\r\nContent-Type: application/json\r\n\r\n'
send: b'{"endTime": "1629298756012", "status": "PASSED", "issue": null, "launchUuid": "xx-xx-xx-xx-xx", "attributes": null}'
reply: 'HTTP/1.1 200 OK\r\n'
send: b'POST /api/v2/probe_tests/item HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 840\r\nContent-Type: application/json\r\n\r\n'
send: b'{"name": ":\\\\test_linux_and_mac_collection\\\\tests\\\\collection\\\\linux\\\\test_user_info.py::test_inner_skip[BB_Linux_Ubuntu_20_10-1-None]", "description": null, "attributes": [{"key": "os.bits", "value": "64"}, {"key": "os.type", "value": "Linux"}, {"key": "template.name", "value": "BB_Linux_Ubuntu_20_10"}, {"key": "module.name", "value": "test_inner_skip"}], "startTime": "1629298756191", "launchUuid": "xx-xx-xx-xx-xx", "type": "STEP", "parameters": [{"key": "endpoint_machine", "value": "BB_Linux_Ubuntu_20_10-1", "system": false}, {"key": "topo", "value": "None", "system": false}], "hasStats": true, "codeRef": "C:\\\\Users\\\\nir.klieman\\\\PycharmProjects\\\\code\\\\probe-tests\\\\test_linux_and_mac_collection\\\\tests\\\\collection\\\\linux\\\\test_user_info.py:test_inner_skip[BB_Linux_Ubuntu_20_10-1-None]", "testCaseId": null}'
reply: 'HTTP/1.1 201 Created\r\n'

Test 2 (inner skip)

test_user_info.py::test_inner_skip[BB_Linux_Ubuntu_20_10-1-None] send: b'PUT /api/v2/probe_tests/item/aa6ad5cd-995e-4e36-9338-ec7e8910ec38 HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 161\r\nContent-Type: application/json\r\n\r\n'
send: b'{"endTime": "1629298756376", "status": "SKIPPED", "issue": {"issue_type": "NOT_ISSUE"}, "launchUuid": "xx-xx-xx-xx-xx", "attributes": null}'
reply: 'HTTP/1.1 200 OK\r\n'
send: b'POST /api/v2/probe_tests/log HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 528\r\nContent-Type: multipart/form-data; boundary=2c84ce5e48ac8812c820032f437d951f\r\n\r\n'
send: b'--2c84ce5e48ac8812c820032f437d951f\r\nContent-Disposition: form-data; name="json_request_part"\r\nContent-Type: application/json\r\n\r\n[{"launchUuid": "xx-xx-xx-xx-xx", "time": "1629298756375", "message": "(\'C:\\\\\\\\Users\\\\\\\\nir.klieman\\\\\\\\PycharmProjects\\\\\\\\code\\\\\\\\probe-tests\\\\\\\\test_linux_and_mac_collection\\\\\\\\tests\\\\\\\\collection\\\\\\\\linux\\\\\\\\test_user_info.py\', 136, \'Skipped: inner skip message\')", "level": "ERROR", "itemUuid": "aa6ad5cd-995e-4e36-9338-ec7e8910ec38"}]\r\n--2c84ce5e48ac8812c820032f437d951f--\r\n'
reply: 'HTTP/1.1 201 Created\r\n'
send: b'PUT /api/v2/probe_tests/launch/xx-xx-xx-xx-xx/finish HTTP/1.1\r\nHost: 1.2.3.4:8080\r\nUser-Agent: python-requests/2.26.0\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nConnection: keep-alive\r\nAuthorization: Bearer xx-xx-xx-xx-xx\r\nContent-Length: 64\r\nContent-Type: application/json\r\n\r\n'
send: b'{"endTime": "1629298756557", "status": null, "attributes": null}'
reply: 'HTTP/1.1 200 OK\r\n'

iivanou commented 3 years ago

You can see that your test items were reported correctly, so it does not seem to be a problem with the agent.

HardNorth commented 3 years ago

@cr-NirKlieman I was able to reproduce the behavior when a nested step fails (a step reported with the field "hasStats": false). Such a step carries no statistics, so no failed tests show up in the column, but the launch is still marked as Failed. Do you use nested steps?

It's not possible to report nested steps in pytest out of the box, but perhaps you have customized something...
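For illustration, a nested step can be reported through the client's `step` helper (a sketch; the helper is documented for 5.x versions of reportportal-client, and support may depend on your agent version):

```python
from reportportal_client import step  # nested-step helper from the 5.x client

def test_with_nested_step():
    # Nested steps are reported with "hasStats": false, so they are excluded
    # from the launch statistics columns even when they fail.
    with step('My nested step'):
        assert True
```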

cr-NirKlieman commented 2 years ago

@HardNorth We're not using nested steps. But when I tried to, the same behavior you detected happened in our environment as well.

HardNorth commented 2 years ago

Closing as "cannot reproduce"