algattik / azure-pipelines-jmeter-extension

Azure Pipelines extension for JMeter and Taurus test tools
MIT License

Taurus Runner keeps running when the test is over #25

Open · jams4code opened 2 years ago

jams4code commented 2 years ago

We have a load-testing pipeline that uses the Taurus Runner extension, but sometimes the task doesn't complete when the JMeter test is done. As a consequence, the pipeline keeps running until it hits the job timeout and gets canceled: in this case the tests were supposed to run for 30 minutes, and they did run for only 30 minutes, yet the pipeline was canceled after 6 hours (matching the job's timeoutInMinutes: 360).

configuration:

(screenshots of the TaurusRunner task configuration)

Here is the YAML of the pipeline:

trigger: none

variables:
  - group: testUserCredentials
  - name: JMETER_DIRECTORY_OUTPUT
    value: "testResult"

name: "$(ReportName)"

pool:
  vmImage: "windows-latest"

#
# The pipeline is defined with the azure extension from jmeter
# See https://marketplace.visualstudio.com/items?itemName=AlexandreGattiker.jmeter-tasks
# Taurus settings can be found here https://gettaurus.org/docs/JMeter/#JMeter-Properties-and-Variables
#

jobs:

  - job: JmeterTest
    timeoutInMinutes: 360

    steps:
      - task: JMeterInstaller@0
        inputs:
          jmeterVersion: "5.5"

      - task: UsePythonVersion@0
        inputs:
          versionSpec: "3.8"
      - script: python -m pip install --upgrade pip
      - script: pip install --upgrade wheel setuptools Cython

      - task: TaurusInstaller@0
        inputs:
          taurusVersion: "1.16.11"
          pythonCommand: "python"

      - task: TaurusRunner@0
        inputs:
          taurusConfig: |
            execution:
            - scenario:
                script: LoadTesting/jmeter/PerformanceManagementTestPlan.jmx

            settings:
              check-interval: 60s
              default-executor: jmeter

            modules:
              jmeter:
                properties:
                  TestDuration: '$(TestDuration)'

            reporting:
            - module: passfail
              criteria:
              - fail of GetLive>3%
              - avg-rt of GetLive>1000ms

            - module: junit-xml
              filename: $(JMETER_DIRECTORY_OUTPUT)/jmeter-performance.xml
              data-source: pass-fail

            - module: final-stats
              summary: true
              percentiles: true
              summary-labels: true
              failed-labels: true
              test-duration: true

            - module: blazemeter
              report-name: '$(ReportName)'

          jmeterHome: "$(JMeterInstaller.JMeterHome)"
          jmeterPath: "$(JMeterInstaller.JMeterPath)"
          jmeterVersion: "$(JMeterInstaller.JMeterVersion)"
          outputDir: $(JMETER_DIRECTORY_OUTPUT)

      - task: PublishTestResults@2
        condition: succeededOrFailed()
        inputs:
          testResultsFormat: "JUnit"
          testResultsFiles: "**/jmeter-performance.xml"
          failTaskOnFailedTests: true
        displayName: 'RESULTS: Publish Results'

      - publish: $(JMETER_DIRECTORY_OUTPUT)
        artifact: JMeterResults
        condition: succeededOrFailed()
        displayName: 'RESULTS: Publish Artifacts'

Does anyone know how I can avoid this in the future? We only run the load tests once a week, as they need to run against a prod-like environment, which is expensive.
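
For now, the mitigation I'm considering is a step-level timeout on the TaurusRunner task itself, so a hung run is canceled shortly after the expected test duration instead of holding the agent for the full 6-hour job timeout. A sketch, assuming a 30-minute test (timeoutInMinutes can be set on individual steps as well as on the job):

      - task: TaurusRunner@0
        timeoutInMinutes: 45
        inputs:
          # ... same inputs as above ...

The job-level timeoutInMinutes: 360 could then stay as a last-resort backstop.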

jams4code commented 2 years ago

Any news?