gurock / trcli

TR CLI (trcli) is a command line tool for interacting with TestRail.
Mozilla Public License 2.0

Support plan_id when submitting results from pytest #138

Open matthcap opened 1 year ago

matthcap commented 1 year ago

What would you like the TestRail CLI to be able to do?

posting on behalf of @lscott in the TestRail Community: https://discuss.testrail.com/t/trcli-not-accepting-test-plan-run-id/23701

Accept the ID of a test run that belongs to a test plan (i.e. a plan containing multiple runs) in the --run-id field of trcli.
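The exact command isn't quoted in the community post; reconstructed from the parameters in the log below, the failing invocation would look roughly like this (credentials omitted, flag spellings per the trcli docs, report path is an assumption):

trcli -y -h https://OUR-INSTANCE.testrail.io --project "Firmware" parse_junit --title "Temporary Automated Test Plan 2" --run-id 3078 -f ./reports/junit-report.xml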

When attempting to use a run ID that belongs to a test plan, I just get this error:

TestRail CLI v1.4.3
Copyright 2021 Gurock Software GmbH - www.gurock.com
Parse JUnit Execution Parameters
Report file: ./reports/junit-report.xml
Config file: None
TestRail instance: https://OUR-INSTANCE.testrail.io
Project: Firmware
Run title: Temporary Automated Test Plan 2
Update run: 3078
Add to milestone: No
Auto-create entities: True
Parsing JUnit report.
Processed 65 test cases in 1 sections.
Checking project. Done.
Nonexistent case IDs found in the report file: [30864, 30862, 30863, 30865, 30878, 30877, 30867, 30868, 30869, 30871, 30870, 30873, 30872, 30875, 30874, 30866, 28059, 28060, 30879, 30881, 30880, 30884, 30882, 30883, 30885, 30889, 30887, 30886, 30890, 30891, 30892, 30888, 30894, 30893, 30895, 30906, 30903, 30904, 30900, 30901, 30899, 30904, 30902, 30909, 30907, 30908, 30915, 30916, 30918, 30919, 30920, 30917, 30921, 30925, 30924, 30923, 30922, 30927, 30928, 30929, 30926, 30930, 30933, 30931, 30932]
Error occurred while checking for 'missing test cases': 'Case IDs not in TestRail project or suite were detected in the report file.'
Adding missing sections to the suite.
Updating run: https://OUR-INSTANCE.testrail.io/index.php?/runs/view/3078
Adding results: 0/65
Error during add_results. Trying to cancel scheduled tasks.
Aborting: add_results. Trying to cancel scheduled tasks.
Adding results: 0/65
No attachments found to upload.
Field :run_id is not a valid test run.
Deleted created section

At the moment I'm bodging this with the script shared under "More details" below. Am I doing something wrong, or are test plans simply not supported at this time?

Why is this feature necessary on the TestRail CLI?

Using a singular --run-id doesn't scale for the multiple products that we make (each with different tests to be run) when we use test plans with multiple test runs to capture the results.

More details

In the spirit of open source, here is the code we currently use to get round this:

#!/usr/bin/python3

# Author: lscott
# Usage: Generate string for pytest and submit results to TestRail using trcli

"""
TODO:
    - Copy this file to ~/git/
    - '/' is added to stop finding cases like test_AccessAPI.py, it is removed in the report creation stage
"""

import logging
import subprocess
import sys
from time import sleep
import os
import xml.etree.ElementTree as ET
import requests

# If I've done my job right, you should only have to update the information in this box
# But obviously feel free to peruse at your leisure :)

unit = "unit"
title = "Temporary Automated Test Plan 2"

# Update me with appropriate run IDs
# If any new suites are added, they must also be added here with an appropriate ID
test_types = {
        "About/": "3079",
        "Access/": "3080",
        "Analytics/": "3081",
        "API/": "3082",
        "CustomInstall/": "3084",
        "Alarms/": "3085",
        "Airplay/": "3102",
        "Analogue/": "3103",
        "Bluetooth/": "3104",
        "CD/": "3105",
        }

# Configure the logger
logging.basicConfig(filename='test.log', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logging.getLogger().addHandler(console_handler)

home = os.path.expanduser('~')

# Get IP from ip.xml
tree = ET.parse(f"{home}/code/ip.xml")

# Get the root element
root = tree.getroot()

# Find the ip element and extract its value
ip_element = root.find('ip')
ip_address = ip_element.text
logging.info(f"Using IP: {ip_address}")

# Get the UUT's build information
response = requests.get(f'http://{ip_address}:15081/system')
if response.status_code != 200:
    logging.critical(f"Device NOT reachable at {ip_address}")
    sys.exit(1)

data = response.json()
build = data['build']
logging.info(f'Build: {build}')

folder = ''

# Main loop
for test_type, run_id in test_types.items():
    logging.info(f"Processing test type {test_type}")
    with open(f"{home}/git/SWTestScripts/JenkinsHelpers/config/{unit}", 'r') as tests:
        matching_tests = []
        # Ignore commented tests
        # This is added to NOT break current automated testing in Jenkins
        for test in tests:
            if test.startswith('#'):
                continue
            if test.startswith('!'):
                # Get first word after '!'
                folder = test.strip().split()[1]
            if test_type in test:
                matching_tests.append(f"{home}/git/SWTestScripts/CI/{folder}/{test.strip()}")
        logging.info(f"Found {len(matching_tests)} matching tests")
        if not matching_tests:
            continue
        test_type = test_type[:-1]
        command = f"python3 -m pytest --junitxml 'reports/{test_type}-report.xml' {' '.join(matching_tests)}"
        logging.info(f"Running command: {command}")
        # Run the command and wait for it to complete, capturing the exit code
        process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdout, stderr = process.communicate()
        exit_code = process.wait()
        logging.info(f"Command output:\n{stdout.decode('utf-8')}")
        if exit_code == 0:
            logging.info(f"Command completed successfully with exit code {exit_code}")
        else:
            logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
        sleep(3)

        # Submit to TestRail with results file
        # Suite ID = Generic Automation Suite
        command = f'trcli -y -c config.yaml parse_junit --title "{title}" --case-matcher "name" --run-id {run_id} --suite-id 1671 --result-fields version:{build} --allow-ms -f reports/{test_type}-report.xml'
        logging.info(f"Running command: {command}")
        process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdout, stderr = process.communicate()
        exit_code = process.wait()
        logging.info(f"Command output:\n{stdout.decode('utf-8')}")
        if exit_code == 0:
            logging.info(f"Command completed successfully with exit code {exit_code}")
        else:
            logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
        sleep(3)

The configs are structured like so:

File name: unit-name

! Features
# Inputs/test_inputs.py
IgnoredTests/
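To make the config semantics concrete, here is a minimal, self-contained sketch of how the loop in the script interprets those three kinds of line ('#' comment, '!' folder marker, plain test entry); the sample entries are illustrative, not taken from a real config:

# Minimal sketch of the config-line handling used by the script above.
sample_config = [
    "! Features",               # '!' line: the second word becomes the current folder
    "# Inputs/test_inputs.py",  # '#' line: commented-out test, skipped
    "About/test_about.py",      # plain line: matched against the test_type prefix
]

folder = ""
matching_tests = []
for line in sample_config:
    if line.startswith("#"):
        continue
    if line.startswith("!"):
        folder = line.strip().split()[1]
    if "About/" in line:
        matching_tests.append(f"CI/{folder}/{line.strip()}")

print(matching_tests)  # ['CI/Features/About/test_about.py']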

As you can see, this is not a fix, but a bodge.

Hope that helps some of you deal with this annoyance :slight_smile:

Interested in implementing it yourself?

Maybe, let's talk!

lscottnaim commented 1 year ago

Happy to test this before something is merged

lscottnaim commented 1 year ago

Formatted code:

#!/usr/bin/python3

# Author: lscott
# Usage: Generate string for pytest and submit results to TestRail using trcli

""" 
TODO :
    - Copy this file to ~/git/
    - '/' is added to stop finding cases like test_AccessAPI.py, it is removed in the report creation stage
"""

import logging
import subprocess
import sys
from time import sleep
import os
import xml.etree.ElementTree as ET
import requests

# If I've done my job right, you should only have to update the information in this box
# But obviously feel free to peruse at your leisure :)

unit = "unit"
title = "Temporary Automated Test Plan 2"

# Update me with appropriate run IDs
# If any new suites are added, they must also be added here with an appropriate ID
test_types = {
        "About/": "3079",
        "Access/": "3080",
        "Analytics/": "3081",
        "API/": "3082",
        "CustomInstall/": "3084",
        "Alarms/": "3085",
        "Airplay/": "3102",
        "Analogue/": "3103",
        "Bluetooth/": "3104",
        "CD/": "3105",
        }

# Configure the logger
logging.basicConfig(filename='test.log', level=logging.DEBUG, format='%(asctime)s %(levelname)s %(message)s')
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logging.getLogger().addHandler(console_handler)

home = os.path.expanduser('~') 

# Get IP from ip.xml 
tree = ET.parse(f"{home}/code/ip.xml")

# Get the root element
root = tree.getroot()

# Find the ip element and extract its value
ip_element = root.find('ip')
ip_address = ip_element.text
logging.info(f"Using IP: {ip_address}")

# Get the UUT's build information
response = requests.get(f'http://{ip_address}:15081/system')
if response.status_code != 200:
   logging.critical(f"Device NOT reachable at {ip_address}")
   sys.exit(1)

data = response.json()
build = data['build']
logging.info(f'Build: {build}')

folder = ''

# Main loop
for test_type, run_id in test_types.items():
    logging.info(f"Processing test type {test_type}")
    with open(f"{home}/git/SWTestScripts/JenkinsHelpers/config/{unit}", 'r') as tests:
        matching_tests = []
        # Ignore commented tests
        # This is added to NOT break current automated testing in Jenkins
        for test in tests:
            if test.startswith('#'):
                continue
            if test.startswith('!'):
                # Get first word after '!'
                folder = test.strip().split()[1]
            if test_type in test:
                matching_tests.append(f"{home}/git/SWTestScripts/CI/{folder}/{test.strip()}")
        logging.info(f"Found {len(matching_tests)} matching tests")
        if not matching_tests:
            continue
        test_type = test_type[:-1]
        command = f"python3 -m pytest --junitxml 'reports/{test_type}-report.xml' {' '.join(matching_tests)}"
        logging.info(f"Running command: {command}")
        # Run the command and wait for it to complete, capturing the exit code
        process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdout, stderr = process.communicate()
        exit_code = process.wait()
        logging.info(f"Command output:\n{stdout.decode('utf-8')}")
        if exit_code == 0:
            logging.info(f"Command completed successfully with exit code {exit_code}")
        else:
            logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
        sleep(3)

        # Submit to TestRail with results file
        # Suite ID = Generic Automation Suite
        command = f'trcli -y -c config.yaml parse_junit --title "{title}" --case-matcher "name" --run-id {run_id} --suite-id 1671 --result-fields version:{build} --allow-ms -f reports/{test_type}-report.xml'
        logging.info(f"Running command: {command}")
        process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
        stdout, stderr = process.communicate()
        exit_code = process.wait()
        logging.info(f"Command output:\n{stdout.decode('utf-8')}")
        if exit_code == 0:
            logging.info(f"Command completed successfully with exit code {exit_code}")
        else:
            logging.error(f"Command failed with exit code {exit_code}:\n{stderr.decode('utf-8')}")
        sleep(3)

bitcoder commented 9 months ago

This was implemented as part of the v1.6.0 release. If no further feedback is provided here in the coming days, this issue will be closed.
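If the v1.6.0 support behaves as announced, the per-run upload in the script above would presumably swap --run-id for --plan-id, along these lines (plan ID taken from the comment below; report name illustrative):

trcli -y -c config.yaml parse_junit --title "Temporary Automated Test Plan 2" --case-matcher "name" --plan-id 3246 --suite-id 1671 --allow-ms -f reports/all-report.xml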

lscottnaim commented 9 months ago

Hi @bitcoder, see https://github.com/gurock/trcli/pull/155, where I had a conversation with @d-rede. This is not implemented.

@d-rede I'm confused by the conclusion of https://github.com/gurock/trcli/issues/138. I thought I would be able to simply give a plan ID and, if a test case was found in a run inside a plan, it would drill down to check and update it, i.e.:

trcli -n -c /home/liam/git/config.yaml parse_junit --title "NAME-OF-TITLE" --case-matcher "name" --plan-id 3246 --suite-id 1671 --result-fields version:3.8.4.5453.0 --allow-ms -f /home/liam/git/reports/test-report.xml

bitcoder commented 9 months ago

@lscottnaim,

  1. So right now the run is associated with the plan, correct?
  2. Sorry, I didn't understand exactly what you would like to have and what is not happening right now. Can you please elaborate a bit, maybe giving a concrete example?

thanks in advance!

lscottnaim commented 9 months ago

Sure thing.

I have a plan called Release-3.8.5 and inside that plan I have test runs for different features I want to test. These runs have different test cases in them.

So:

-- PLAN
--- FEATURE 1
---- TEST CASES
--- FEATURE 2
---- TEST CASES

My automated tests, which execute with Python, are split into folders following the same layout as the feature names.

I want to be able to run all the tests in one go and generate one JUnit file with the results from all the different features (e.g. Feature 1 and 2). As this is not currently possible, I have to run my tests in sections based on feature and specify the exact run ID for each fixture:

test_types = {
        "About/": "3079",
        "Access/": "3080",
        "Analytics/": "3081",
        "API/": "3082",
        "CustomInstall/": "3084",
        "Alarms/": "3085",
        "Airplay/": "3102",
        "Analogue/": "3103",
        "Bluetooth/": "3104",
        "CD/": "3105",
        }

After each feature is complete, it will then upload the results to that specific test run and start on the next test feature. We have around 50 features with many test cases in them, so having to upload every time is time-consuming.

So in an ideal world, I would like to give only a plan ID; since the runs and their cases are already associated in TestRail, trcli would recognise which case IDs belong to which run in the plan and update the results accordingly.
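In other words, the hoped-for workflow is a single pytest run followed by a single upload against the plan, roughly like this (a sketch built from the commands already quoted in this thread; whether --plan-id can resolve case IDs across the plan's runs is exactly what this issue asks for):

python3 -m pytest --junitxml reports/all-report.xml tests/
trcli -y -c config.yaml parse_junit --title "Release-3.8.5" --case-matcher "name" --plan-id 3246 --suite-id 1671 --allow-ms -f reports/all-report.xml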