conversica-aaronpa opened this issue 5 years ago
Accidentally clicked closed, re-opened
@conversica-aaronpa I'll take a look at this when I get a chance.
You left your password in the sample output; I have removed it. Please change your TestRail password ASAP.
Whoops, forgot about that echo. Done, thanks.
Looking at pytest-testrail and the TestRail API some more, I'm surprised that `add_run` doesn't accept an optional plan ID; to create a run inside a plan it looks like you have to call `add_plan_entry` in place of `add_run`. That's more complicated than I expected, so perhaps this is actually a feature request. If there is an existing feature (suites? we are currently single-suite by default on cloud hosting) that would let me organize similar test runs, created on the fly by automated runs, into Plans or some other grouping per environment/run reason, that would work.
Hello @conversica-aaronpa, the options `--tr-run-id` and `--tr-plan-id` don't automatically create a test run; they only work on an existing test plan/run, and your test plan must already contain one or more test runs.
If you want to create a new test run, you must not use these options.
If you want to create a new test run inside an existing test plan, you're right, that's a new behavior/feature.
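To illustrate (IDs are placeholders, and I'm assuming the usual `--testrail --tr-config=...` invocation):

```
# creates a brand-new test run in the configured project
pytest --testrail --tr-config=testrail.cfg

# report into an existing run, or into the runs of an existing plan;
# nothing new is created
pytest --testrail --tr-config=testrail.cfg --tr-run-id=123
pytest --testrail --tr-config=testrail.cfg --tr-plan-id=65
```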
Yes, the latter is what I'm after: a new run created inside a plan that is passed in. I'm not sure what else should happen when these parameters are accepted; what happens now, with no results being logged at all after going through all the motions, doesn't seem right.
It shouldn't be difficult to add another path in the logic that calls `add_plan_entry` in place of `add_run` for this scenario (rough sketch below). As it stands, passing in only a test plan ID seems to lose the results.
If I can figure it out I'll make a pull request, but it might take me a while.
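Roughly the kind of branch I have in mind, as a sketch only; the function and client names here are hypothetical, not the plugin's actual internals:

```python
def create_test_run(api_client, project_id, suite_id, name, plan_id=None):
    """Create the run inside a plan when plan_id is given, otherwise keep
    the current standalone-run behavior (hypothetical helper)."""
    payload = {"suite_id": suite_id, "name": name, "include_all": True}
    if plan_id:
        # add_plan_entry returns a plan entry whose 'runs' list holds the new run
        entry = api_client.send_post(f"add_plan_entry/{plan_id}", payload)
        return entry["runs"][0]["id"]
    run = api_client.send_post(f"add_run/{project_id}", payload)
    return run["id"]
```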
@conversica-aaronpa OK, feel free to open a pull request.
PR made as https://github.com/allankp/pytest-testrail/pull/92, will add example output there.
I've encountered this issue due to sheer confusion. There are two parameters that create test runs and two that update test runs, and unfortunately `plan_id` has use cases for both:

create a test run:
- `project_id`
- `milestone_id`

overrides creation and only updates existing test run(s):
- `run_id`
- `plan_id`
From what I understand, there was an original use case where plan_id would update all test runs that exist under a test plan (that doesn't make any sense to me, but that's how it works).
I think there are two options:
1) Fix `plan_id` to follow suit with `milestone_id` and `project_id`. This makes logical sense, since updating a test run only happens when you explicitly supply the `run_id`.
2) Add a new parameter for test run creation under a plan: `--tr-testrun-plan-id`, with a config value of, say, `plan_newrun_id`. This doesn't make logical sense from a naming standpoint, but it does allow for backward compatibility.
There is a separate API call for adding a test run to a test plan (add_plan_entry):
https://www.gurock.com/testrail/docs/api/reference/plans#addruntoplanentry
Maybe `--tr-testrun-planentry-id` on the command line and `planentry_id` in the config as the values for creating the test run under a test plan?
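For example, something like this in the config, purely as a proposal (this option doesn't exist today, and I'm assuming the `[TESTRUN]` section of the usual cfg layout):

```ini
[TESTRUN]
; hypothetical option: create a fresh run inside plan 65 instead of updating it
planentry_id = 65
```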
**Describe the bug**
When I try to get results into an existing Test Plan, the parameters are accepted, but no results are found in the UI.
**To Reproduce**
Create a TestRail configuration file with the following valid values:
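A sketch with placeholder values (assuming the standard pytest-testrail cfg layout; real credentials redacted):

```ini
[API]
url = https://example.testrail.io
email = user@example.com
password = <testrail_api_key>

[TESTRUN]
assignedto_id = 1
project_id = 1
suite_id = 1
```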
Execute a pytest run with command line arguments like the following, where 65 is an empty Test Plan with a descriptive name for the environment:
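Along these lines (assuming the usual `--testrail --tr-config` form of the invocation):

```
pytest --testrail --tr-config=testrail.cfg --tr-plan-id=65
```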
**Output**
**Expected behavior**
A Test Run with an auto-generated name is added to the Test Plan, much like how a Test Run is created and populated when no Test Run or Test Plan ID is supplied.
**Comment**
I'm just trying to find a simple flow that lets me group newly auto-generated Test Runs into a grouping object (a Test Plan seems logical) per target environment; I want to keep runs against QA and Stage separate. The closest I've found is to reuse a static Test Run ID with a descriptive name, but then the aggregated summary only shows the most recent results. I think I want a separate run for each actual build-server run, grouped into a plan rather than relying on a naming strategy.