Closed: fcaffieri closed this issue 7 months ago
This issue can be started using Vagrant; AWS testing will be blocked until DevOps defines the mandatory EC2 tags and how we have to use them.
Working on defining test cases for Vagrant.
:warning:
- Amazon Linux 2023 image does not exist
- SUSE 15 image does not exist
- openSUSE 15 Assistant does not work (for Manager installation): "Could not find the system"
Evidence:
Evidence:
Note ⚠️ Amazon Linux 2023 Vagrant image does not exist
Note ⚠️ openSUSE 15 Assistant does not work (for Manager installation): "Could not find the system"
Evidence:
Note ⚠️ SUSE 15 image does not exist
:warning:
- openSUSE 15 AMI fails
- SUSE 15 Assistant does not work (for Manager installation): "Could not find the system"
Evidence:
Evidence:
Note ⚠️ openSUSE 15 AMI failure
Evidence:
Note ⚠️ SUSE 15 Assistant does not work (for Manager installation): "Could not find the system"
Evidence:
:warning:
- Amazon Linux 2023 image does not exist
- SUSE 15 image does not exist
- openSUSE 15 Assistant does not work: "Could not find the system"
Evidence:
:warning:
- Oracle Linux 9: no tar
- Amazon Linux 2023: no curl
- SUSE 15: Assistant does not work ("Could not find the system")
- openSUSE 15: Assistant does not work ("Could not find the system")
Evidence (executions considering the :warning: warning notes presented below):
A small instability was found: the EC2 instances deployed by the allocator take a long time before they accept an SSH connection.
Evidence
If the first host on which actions are executed appears in the middle or at the end of the list of hosts to be allocated, failure is more likely.
A check should be added at the end of each module to guarantee the completion of its actions before moving on to the next module.
A waiting utility was added:
CASE 1. Wrong credentials:
[2024-04-03 11:29:26,669] [INFO] [Testing]: Checking connection to centos-8
[2024-04-03 11:29:26,948] [ERROR] [Testing]: Authentication error on attempt 1 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:32,144] [ERROR] [Testing]: Authentication error on attempt 2 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:37,345] [ERROR] [Testing]: Authentication error on attempt 3 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:42,782] [ERROR] [Testing]: Authentication error on attempt 4 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:47,979] [ERROR] [Testing]: Authentication error on attempt 5 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:53,176] [ERROR] [Testing]: Authentication error on attempt 6 of 10. Check SSH credentials in centos-8
[2024-04-03 11:29:58,372] [ERROR] [Testing]: Authentication error on attempt 7 of 10. Check SSH credentials in centos-8
[2024-04-03 11:30:03,576] [ERROR] [Testing]: Authentication error on attempt 8 of 10. Check SSH credentials in centos-8
[2024-04-03 11:30:08,885] [ERROR] [Testing]: Authentication error on attempt 9 of 10. Check SSH credentials in centos-8
[2024-04-03 11:30:14,076] [ERROR] [Testing]: Authentication error on attempt 10 of 10. Check SSH credentials in centos-8
[2024-04-03 11:30:19,080] [ERROR] [Testing]: Connection attempts failed after 10 tries. Connection timeout in centos-8
[2024-04-03 11:30:19,213] [INFO] [Testing]: No Firewall to disable on centos-8
CASE 2. The host is stopped but restarts after a certain amount of time:
[2024-04-03 11:30:48,223] [INFO] [Testing]: Checking connection to centos-8
[2024-04-03 11:31:02,764] [ERROR] [Testing]: Error on attempt 1 of 10: [Errno None] Unable to connect to port 22 on 192.168.57.10
[2024-04-03 11:31:10,984] [ERROR] [Testing]: Error on attempt 2 of 10: [Errno None] Unable to connect to port 22 on 192.168.57.10
[2024-04-03 11:31:19,176] [ERROR] [Testing]: Error on attempt 3 of 10: [Errno None] Unable to connect to port 22 on 192.168.57.10
[2024-04-03 11:31:24,391] [INFO] [Testing]: Connection established successfully in centos-8
Test in AWS :green_circle:
[2024-04-03 11:44:21,656] [INFO] [Testing]: Checking connection to centos-7
[2024-04-03 11:44:21,974] [ERROR] [Testing]: Error on attempt 1 of 10: [Errno None] Unable to connect to port 2200 on 44.201.151.52
[2024-04-03 11:44:52,290] [ERROR] [Testing]: Error on attempt 2 of 10: [Errno None] Unable to connect to port 2200 on 44.201.151.52
[2024-04-03 11:45:22,596] [ERROR] [Testing]: Error on attempt 3 of 10: [Errno None] Unable to connect to port 2200 on 44.201.151.52
[2024-04-03 11:45:52,906] [ERROR] [Testing]: Error on attempt 4 of 10: [Errno None] Unable to connect to port 2200 on 44.201.151.52
[2024-04-03 11:46:23,113] [ERROR] [Testing]: Error on attempt 5 of 10: [Errno None] Unable to connect to port 2200 on 44.201.151.52
[2024-04-03 11:46:54,858] [INFO] [Testing]: Connection established successfully in centos-7
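The retry behaviour in the logs above can be sketched roughly as follows. This is a hypothetical Python sketch of such a waiting utility, not the repository's actual code: the function name, the injectable `connect` callable, and the defaults (10 attempts, 5-second delay) are assumptions taken from the log output.

```python
import logging
import socket
import time

logger = logging.getLogger("Testing")

def wait_for_ssh(host, address, port=22, max_attempts=10, delay=5, connect=None):
    """Poll the SSH port until it accepts a TCP connection or attempts run out.

    `connect` is injectable so the retry logic can be exercised without a
    network; by default it opens (and immediately closes) a real TCP socket.
    """
    if connect is None:
        def connect(addr, p):
            with socket.create_connection((addr, p), timeout=10):
                pass
    logger.info("Checking connection to %s", host)
    for attempt in range(1, max_attempts + 1):
        try:
            connect(address, port)
            logger.info("Connection established successfully in %s", host)
            return True
        except OSError as exc:
            logger.error("Error on attempt %d of %d: %s", attempt, max_attempts, exc)
            time.sleep(delay)
    logger.error("Connection attempts failed after %d tries. Connection timeout in %s",
                 max_attempts, host)
    return False
```

Injecting `connect` keeps the utility testable: a stub that fails a fixed number of times reproduces CASE 2 (host comes back after a while) without real hosts.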
Test manager with depends-on and foreach tags:
Test agents with depends-on and Provision install:
Vagrant:
Failed due to timeout; more analysis was required. Fixed!
AWS:
A bug was found in the provisioning when installing the agents; it caused the tests not to be executed or, in other cases, unexpected failures in the workflow and tests. The bug is fixed and the following tests were re-executed:
ETA changed due to the bugs found: fixes and retests were required.
Working on the fix of https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2041682591
Test uninstall for all agents, provisioning manager and agents with the Provision module :green_circle:
The problem was that, after uninstallation, the client.keys file disappeared while the system kept looking for it in successive validations. A change was made so that the agent's name is read only once, at the beginning of the test, and not in each test within the test set.
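A minimal sketch of that change, assuming the agent name comes from parsing client.keys on each host. The helper names and cache shape here are hypothetical illustrations, not the repository's actual code:

```python
# Hypothetical sketch: read each agent's name once, at the start of the test
# session, instead of re-reading client.keys before every test. After the
# uninstall test removes client.keys, later validations still use the cache.
_agent_name_cache = {}

def get_agent_name(host, read_client_keys):
    """Return the agent name for `host`, calling `read_client_keys` only once."""
    if host not in _agent_name_cache:
        _agent_name_cache[host] = read_client_keys(host)
    return _agent_name_cache[host]
```

The same effect could be achieved with a session-scoped pytest fixture; the point is that the lookup happens before any test can delete the file.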
Testing in Vagrant:
Install with provision and stop, restart, uninstall on all agents with clean-up :red_circle:
The test works OK; however, the agent with the same OS as the manager (ubuntu-22.04) was skipped in the testing.
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2031950188
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2034458509
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2042784252
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2042637943
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2038548385
https://github.com/wazuh/wazuh-qa/issues/5125#issuecomment-2041682591
The test worked OK
It already has curl, but it is not used for the Wazuh installation. Even if the Provision module installs curl, it will not work: the system ships curl-minimal, not curl.
LGTM
LGTM
Description
The objective of this issue is to run a battery of tests for the Test module, to guarantee the correct functioning of the tests on the provided systems.
Tasks