allankp / pytest-testrail

pytest plugin for integration with TestRail, for creating testruns and updating results
MIT License

Run with Test Plan Id and no run id doesn't capture results #91

Open conversica-aaronpa opened 5 years ago

conversica-aaronpa commented 5 years ago

Describe the bug

When I try to get results into an existing Test Plan, the parameters are accepted, but no results are found in the UI.

To Reproduce

Create a TestRail configuration file with the following valid values:

[API]
url = https://conversica.testrail.io
email = aaronpa@conversica.com
[TESTRUN]
assignedto_id = 2
project_id = 7

Execute a pytest run with command-line arguments like the following, where 65 is an empty Test Plan with a descriptive name for the environment:

--testrail --tr-config=testrail.cfg --tr-password=MyRealPassword --tr-plan-id=65

Output

python3 -m pytest --junitxml logs/out_report.xml --html /logs/out_report.html --variables /config/config.json --testrail --tr-config=testrail.cfg --tr-password=password --tr-plan-id=65
============================= test session starts ==============================
platform linux -- Python 3.6.8, pytest-4.3.0, py-1.8.0, pluggy-0.9.0
pytest-testrail: existing testplan #65 selected
rootdir: /, inifile:
plugins: variables-1.7.1, testrail-2.3.3, metadata-1.8.0, html-1.20.0, cov-2.6.1
collected 60 items                                                             

tests/menu_walk/test_leadmanager_listview.py ....                        [  6%]
tests/menu_walk/test_leadmanager_listview_morefilters.py ....FF..        [ 20%]
tests/menu_walk/test_leadmanager_responseview.py ....                    [ 26%]
tests/menu_walk/test_leadmanager_resview_morefilters.py .......          [ 38%]
tests/menu_walk/test_overview.py ..                                      [ 41%]
tests/menu_walk/test_reporting_assistantactivity.py .....                [ 50%]
tests/menu_walk/test_reporting_conversation.py ....                      [ 56%]
tests/menu_walk/test_reporting_customreports.py .......                  [ 68%]
tests/menu_walk/test_reporting_leadprocess.py .....                      [ 76%]
tests/menu_walk/test_reporting_leadsources.py .....                      [ 85%]
tests/menu_walk/test_reporting_newleads.py .....                         [ 93%]
tests/menu_walk/test_reporting_repperformance.py ....                    [100%]

[testrail] Start publishing
[testrail] Testcases to publish: 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721
[testrail] Testruns to update: 
[testrail] End publishing

Expected behavior

A Test Run with an auto-generated name should be added to the Test Plan, much like how a Test Run is created and populated when no Test Run or Test Plan ID is included.

Comment

I'm just trying to find a simple flow that lets me group newly auto-generated Test Runs into a grouping object (a Test Plan seems logical) per target environment; I want to group Test Runs for the QA and Stage environments separately. The closest I've found is to reuse a static Test Run ID with a descriptive name, but then the aggregated summary shows only the most recent results. I think I want a separate run for each actual build-server run, grouped into a plan rather than relying on a naming strategy.

conversica-aaronpa commented 5 years ago

Accidentally clicked close; re-opened.

allankp commented 5 years ago

@conversica-aaronpa I'll take a look at this when I get a chance.

You left your password in the sample output; I have removed it. Please change your password on TestRail ASAP.

conversica-aaronpa commented 5 years ago

Whoops, forgot about that echo. Done, thanks.

conversica-aaronpa commented 5 years ago

Looking at pytest-testrail and the TestRail API some more, I'm surprised that add_run doesn't accept an optional plan ID; to create a run inside a plan, it looks like you have to call add_plan_entry in place of add_run. That's more complicated than I'd expected, so perhaps this is actually a feature request. If there is an existing feature (suites? we are currently single-suite by default in cloud-hosted TestRail) that would let me organize test runs created on the fly by automated runs into Plans or some other per-environment/per-reason grouping, that would work.
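
For illustration, a minimal sketch of the two raw API calls being contrasted above, using the requests library; the URL, credentials, and field values are placeholders, and the payload fields follow TestRail's public API v2 reference:

import requests

BASE = "https://example.testrail.io/index.php?/api/v2"
AUTH = ("user@example.com", "api_key")  # placeholder credentials
HEADERS = {"Content-Type": "application/json"}
PAYLOAD = {"suite_id": 1, "name": "Automated run",
           "assignedto_id": 2, "include_all": True}

# Standalone run: add_run is keyed by project_id and takes no plan_id,
# so the resulting run lands outside any plan.
requests.post(f"{BASE}/add_run/7", json=PAYLOAD, auth=AUTH, headers=HEADERS)

# Run inside an existing plan: a separate endpoint, keyed by plan_id.
requests.post(f"{BASE}/add_plan_entry/65", json=PAYLOAD, auth=AUTH, headers=HEADERS)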

apallier commented 5 years ago

Hello @conversica-aaronpa, the --tr-run-id and --tr-plan-id options don't automatically create a test run; your test plan must already contain one or more test runs. Currently, these options work only against an existing test plan/run. If you want to create a new test run, you must not use these options. If you want to create a new test run inside an existing test plan, you're right, that would be a new behavior/feature.
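
For reference, the modes described above break down roughly like this (run ID 123 is illustrative):

python3 -m pytest --testrail --tr-config=testrail.cfg                  # creates a new standalone test run
python3 -m pytest --testrail --tr-config=testrail.cfg --tr-run-id=123  # publishes into existing run 123
python3 -m pytest --testrail --tr-config=testrail.cfg --tr-plan-id=65  # publishes into the runs already inside plan 65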

conversica-aaronpa commented 5 years ago

Yes, the latter is what I'm after: a new run created inside a passed-in plan. I'm not sure what else should happen when such parameters are accepted; what happens now, with no results logged at all after going through all the motions, doesn't seem right. It shouldn't be difficult to add another path in the logic that calls add_plan_entry in place of add_run for this scenario. As it stands, passing in only a test plan ID loses the results. If I can figure it out I'll make a pull request, but it might take me a while.
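
A minimal sketch of what that extra path might look like; the api wrapper and its method names mirror the TestRail endpoints, but the helper itself is hypothetical, not the plugin's actual code:

def resolve_run_id(api, project_id, run_id=None, plan_id=None, **run_fields):
    # An existing run ID wins: publish results straight into it.
    if run_id:
        return run_id
    # New path: a plan ID alone creates a fresh run inside that plan.
    if plan_id:
        entry = api.add_plan_entry(plan_id, **run_fields)  # POST add_plan_entry/:plan_id
        return entry["runs"][0]["id"]
    # Default path: standalone run, as the plugin already does today.
    run = api.add_run(project_id, **run_fields)            # POST add_run/:project_id
    return run["id"]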

apallier commented 5 years ago

@conversica-aaronpa OK, feel free to open a pull request.

conversica-aaronpa commented 5 years ago

PR made as https://github.com/allankp/pytest-testrail/pull/92, will add example output there.

clifter1 commented 3 years ago

I've encountered this issue through sheer confusion. There are two parameters that create test runs and two that update existing test runs; unfortunately, plan_id has use cases for both:

create a test run:

overrides creation and only updates existing test run(s):

clifter1 commented 3 years ago

There is a separate API call for adding a test run to a test plan (add_plan_entry):

https://www.gurock.com/testrail/docs/api/reference/plans#addruntoplanentry

Maybe add --tr-testrun-planentry-id (and planentry_id in the config file) as values for creating the test run under a test plan?
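
If that route were taken, wiring the flag into pytest might look roughly like this; the option name is the one suggested above and is purely hypothetical:

def pytest_addoption(parser):
    group = parser.getgroup("testrail")
    group.addoption(
        "--tr-testrun-planentry-id",
        action="store",
        default=None,
        help="Create the new test run inside this existing test plan "
             "(calls add_plan_entry instead of add_run)",
    )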