Closed: rauldpm closed this issue 1 week ago
I've modified the original workflow by removing the `ssh-key` parameter:
I've launched the workflow with threads=13 and found no failures in the first run.
I've launched the workflow with threads=13 and found no failures in the second run.
I've launched the workflow with threads=13 and found no failures in the third run.
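The three runs above can be scripted as a simple flakiness re-run loop. This is only a sketch: the issue does not show the exact launch command, so the `echo` below is a placeholder standing in for the real invocation with threads=13.

```shell
#!/bin/sh
# Flakiness check: repeat the workflow run N times and report each result.
# The echo is a placeholder for the real launch command (assumption: the
# issue only states the workflow was run three times with threads=13).
runs=3
i=1
while [ "$i" -le "$runs" ]; do
  if echo "placeholder launch (threads=13)" >/dev/null; then
    echo "run $i: no failures"
  else
    echo "run $i: failed"
  fi
  i=$((i + 1))
done
```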
As the issue cannot be reproduced, we will close it as not planned.
Description
A flaky test has been discovered in DTT1 when using the following YAML input; I have only been able to reproduce it once in three executions.
```yaml
version: 0.1
description: This workflow is used to test agents' deployment for DDT1 PoC
variables:
  manager-os:
    - linux-ubuntu-22.04-amd64
    - linux-ubuntu-18.04-amd64
    - linux-ubuntu-20.04-amd64
    - linux-amazon-2-amd64
    - linux-redhat-7-amd64
    - linux-redhat-8-amd64
    - linux-redhat-9-amd64
    - linux-centos-7-amd64
    - linux-centos-8-amd64
    - linux-oracle-9-amd64
    - linux-debian-10-amd64
    - linux-debian-11-amd64
    - linux-debian-12-amd64
  infra-provider: aws
  working-dir: /tmp/dtt1-poc

tasks:
  # Unique manager allocate task
  - task: "allocate-manager-linux-ubuntu-22.04-amd64"
    description: "Allocate resources for the manager."
    do:
      this: process
      with:
        path: python3
        args:
          - modules/allocation/main.py
          - action: create
          - provider: "{infra-provider}"
          - size: large
          - composite-name: "linux-ubuntu-22.04-amd64"
          - inventory-output: "{working-dir}/manager-linux-ubuntu-22.04-amd64/inventory.yaml"
          - track-output: "{working-dir}/manager-linux-ubuntu-22.04-amd64/track.yaml"
          - ssh-key: "
```

As the log file is stored in `/tmp`, I have not been able to save it due to a power outage.

Based on the image, we need to check the uninstall test cases, as the `/var/ossec` directory is probably still present due to a failed uninstall process.
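A minimal check for that hypothesis could look like the sketch below. The `check_uninstall` helper name and the parameterized directory are assumptions for illustration, not part of the DTT1 test suite.

```shell
#!/bin/sh
# Hypothetical leftover check (not from the DTT1 code): an uninstall test
# case should fail when the installation directory is still present.
check_uninstall() {
  dir="${1:-/var/ossec}"
  if [ -d "$dir" ]; then
    echo "uninstall incomplete: $dir still present"
    return 1
  fi
  echo "uninstall clean: $dir removed"
  return 0
}

# Report the state of the default directory without aborting the script.
check_uninstall /var/ossec || echo "uninstall test case should be marked failed"
```

A check like this, run at the end of each uninstall test case, would turn the leftover `/var/ossec` directory into an explicit failure instead of a silent source of flakiness.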