Closed QU3B1M closed 1 month ago
This issue will be paused until we complete the fix for the IPs returned by the Vagrant provider, which is required to correctly execute the tests and obtain the IPs of every central component.
Also, test_install requires a discussion about the check-files; the same applies to the issue DTT1 - Iteration 3 - Test module - Improve Wazuh agent tests.
Some tests were researched. The issue will remain on hold until the Vagrant IP can be configured.
After speaking with @QU3B1M, I have understood that the testing of central components
must be carried out with a Wazuh manager connected to components other than an agent. I will investigate how to generate such a structure in order to develop the tests.
It seems that the structure needed to generate an infrastructure provisioned with the central components is not developed. It will remain on hold until we can confirm this.
The tests that I could find in the repository (enhancement/4844-dtt1-iteration-3-test-central-components)
are based on the old structure, where the test module only checked the system.
Some changes should be made to the installation process.
Actions to be created:
After conversing with @fcaffieri, we realized that the installation of the central components should be approached following the steps below:
Step | Action | Detail |
---|---|---|
1 | Generate Certificate - Download setup file | Download from the corresponding URL for the version |
2 | Generate Certificate - Modify data | Change data according to the desired setup |
3 | Execute cert-tools | |
4 | Turn off firewall | |
5 | SCP - Set sshd_config | Modify parameters to enable SCP between computers |
6 | SCP - Usage | Perform SCP between VMs |
7 | Install Manager on both computers (Manager only) | |
8 | Cluster Configuration | Hexadecimal code should be set in both ossec.conf files |
9 | Restart both computers | |
10 | Check connected clusters | /var/ossec/bin/cluster_control -l |
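The hexadecimal cluster key from step 8 can be generated locally. A minimal sketch, assuming the key is the 32-character hexadecimal string that both `<cluster>` blocks in `ossec.conf` expect:

```python
import secrets

# Generate a 32-character hexadecimal key to be set in both
# ossec.conf <cluster> blocks (step 8 above).
cluster_key = secrets.token_hex(16)
print(cluster_key)
```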
On the other hand, since these tests are not going to run between a manager and an agent, changes should be made in the fixture (YAML file), where two central components should be specified to allocate and provision.
Using the following fixture:
```yaml
version: 0.1
description: This workflow is used to test agents deployment for DTT1 PoC
variables:
  # agents-os:
  manager-os:
    - linux-ubuntu-22.04-amd64
    - linux-centos-8-amd64
  infra-provider: vagrant
  working-dir: /tmp/dtt1-poc

tasks:
  # Unique manager allocate task
  - task: "allocate-{manager}"
    description: "Allocate resources for the managers."
    do:
      this: process
      with:
        path: python3
        args:
          - modules/allocation/main.py
          - action: create
          - provider: "{infra-provider}"
          - size: large
          - composite-name: "{manager-os}"
          - inventory-output: "{working-dir}/manager-{manager-os}/inventory.yaml"
          - track-output: "{working-dir}/manager-{manager-os}/track.yaml"
    foreach:
      - variable: manager-os
        as: manager
```
This error was present:
(deplo_test) akim@akim-PC:~/Desktop/test/wazuh-qa/deployability$ python3 -m workflow_engine /home/akim/Desktop/test/wazuh-qa/deployability/modules/workflow_engine/examples/dtt1-agents-poc.yaml
[2024-02-26 18:59:39] [INFO] [1704038] [MainThread] [workflow_engine]: Executing DAG tasks.
[2024-02-26 18:59:39] [INFO] [1704038] [MainThread] [workflow_engine]: Executing tasks in parallel.
[2024-02-26 18:59:39] [INFO] [1704038] [ThreadPoolExecutor-0_0] [workflow_engine]: [allocate-linux-ubuntu-22.04-amd64] Starting task.
[2024-02-26 18:59:41] [ERROR] [1704038] [ThreadPoolExecutor-0_0] [workflow_engine]: [allocate-linux-ubuntu-22.04-amd64] Task failed with error: Error executing process task Traceback (most recent call last):
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/main.py", line 30, in <module>
main()
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/main.py", line 26, in main
Allocator.run(InputPayload(**vars(parse_arguments())))
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/allocation.py", line 31, in run
return cls.__create(payload)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/allocation.py", line 50, in __create
instance = provider.create_instance(
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/generic/provider.py", line 65, in create_instance
return cls._create_instance(base_dir, params, config)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/vagrant/provider.py", line 48, in _create_instance
config = cls.__parse_config(params, credentials, instance_id)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/vagrant/provider.py", line 136, in __parse_config
os_specs = cls._get_os_specs()[params.composite_name]
KeyError: "['linux-ubuntu-22.04-amd64', 'linux-centos-8-amd64']"
.
[2024-02-26 18:59:41] [INFO] [1704038] [ThreadPoolExecutor-0_0] [workflow_engine]: [allocate-linux-centos-8-amd64] Starting task.
[2024-02-26 18:59:41] [ERROR] [1704038] [ThreadPoolExecutor-0_0] [workflow_engine]: [allocate-linux-centos-8-amd64] Task failed with error: Error executing process task Traceback (most recent call last):
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/main.py", line 30, in <module>
main()
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/main.py", line 26, in main
Allocator.run(InputPayload(**vars(parse_arguments())))
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/allocation.py", line 31, in run
return cls.__create(payload)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/allocation.py", line 50, in __create
instance = provider.create_instance(
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/generic/provider.py", line 65, in create_instance
return cls._create_instance(base_dir, params, config)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/vagrant/provider.py", line 48, in _create_instance
config = cls.__parse_config(params, credentials, instance_id)
File "/home/akim/Desktop/test/wazuh-qa/deployability/modules/allocation/vagrant/provider.py", line 136, in __parse_config
os_specs = cls._get_os_specs()[params.composite_name]
KeyError: "['linux-ubuntu-22.04-amd64', 'linux-centos-8-amd64']"
.
[2024-02-26 18:59:41] [INFO] [1704038] [MainThread] [workflow_engine]: Executing Reverse DAG tasks.
[2024-02-26 18:59:41] [INFO] [1704038] [MainThread] [workflow_engine]: Executing tasks in parallel.
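The KeyError above suggests that the whole `manager-os` list is being interpolated into `composite-name`, so the provider looks up the stringified list instead of a single OS name. A minimal reproduction of the failing lookup (the `os_specs` content here is hypothetical):

```python
# Hypothetical specs table keyed by a single composite name.
os_specs = {"linux-ubuntu-22.04-amd64": {"box": "ubuntu/jammy64"}}

# "{manager-os}" expands to the whole list, not one element of it:
composite_name = str(["linux-ubuntu-22.04-amd64", "linux-centos-8-amd64"])

try:
    os_specs[composite_name]
except KeyError as e:
    print(e)  # the stringified list, matching the traceback above
```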
Using the following fixture, it worked:
```yaml
version: 0.1
description: This workflow is used to test agents deployment for DTT1 PoC
variables:
  # agents-os:
  #   - linux-ubuntu-22.04-amd64
  manager-os:
    - linux-ubuntu-22.04-amd64
    - linux-ubuntu-20.04-amd64
  infra-provider: vagrant
  working-dir: /tmp/dtt1-poc

tasks:
  # Unique manager allocate task
  - task: "allocate-manager-{manager}"
    description: "Allocate resources for the manager."
    do:
      this: process
      with:
        path: python3
        args:
          - modules/allocation/main.py
          - action: create
          - provider: "{infra-provider}"
          - size: large
          - composite-name: "{manager}"
          - inventory-output: "{working-dir}/manager-{manager-os}/inventory.yaml"
          - track-output: "{working-dir}/manager-{manager-os}/track.yaml"
    foreach:
      - variable: manager-os
        as: manager
```
However, some issues around the naming of the allocation directories appear:
akim@akim-PC:/tmp$ ls dtt1-poc/
'manager-['\''linux-ubuntu-22.04-amd64'\'', '\''linux-ubuntu-20.04-amd64'\'']'
akim@akim-PC:/tmp$ ls dtt1-poc/manager-\[\'linux-ubuntu-22.04-amd64\'\,\ \'linux-ubuntu-20.04-amd64\'\]/
inventory.yaml track.yaml
akim@akim-PC:/tmp$ cat dtt1-poc/manager-\[\'linux-ubuntu-22.04-amd64\'\,\ \'linux-ubuntu-20.04-amd64\'\]/inventory.yaml
ansible_host: 192.168.57.3
ansible_port: 22
ansible_ssh_private_key_file: /tmp/wazuh-qa/VAGRANT-BD3641B4-CA7F-4C94-AF82-B2F303EF8D9B/instance_key
ansible_user: vagrant
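The odd directory name comes from formatting the whole `manager-os` list into the path instead of the foreach-bound element. A quick illustration:

```python
manager_os = ["linux-ubuntu-22.04-amd64", "linux-ubuntu-20.04-amd64"]

# "{manager-os}" stringifies the entire list into the directory name:
bad = f"/tmp/dtt1-poc/manager-{manager_os}"
print(bad)  # /tmp/dtt1-poc/manager-['linux-ubuntu-22.04-amd64', 'linux-ubuntu-20.04-amd64']

# "{manager}" (one element per foreach iteration) yields one directory per OS:
good = [f"/tmp/dtt1-poc/manager-{m}" for m in manager_os]
```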
Changing `manager-os` to `manager` in the inventory and track paths made the allocator work correctly:
```yaml
manager-os:
  - linux-ubuntu-22.04-amd64
  - linux-ubuntu-20.04-amd64
infra-provider: vagrant
working-dir: /tmp/dtt1-poc

tasks:
  # Unique manager allocate task
  - task: "allocate-manager-{manager}"
    description: "Allocate resources for the manager."
    do:
      this: process
      with:
        path: python3
        args:
          - modules/allocation/main.py
          - action: create
          - provider: "{infra-provider}"
          - size: large
          - composite-name: "{manager}"
          - inventory-output: "{working-dir}/manager-{manager}/inventory.yaml"
          - track-output: "{working-dir}/manager-{manager}/track.yaml"
    foreach:
      - variable: manager-os
        as: manager
```
This installs the manager on the host of the inventory IP:
python3 modules/testing/main.py --inventory "/tmp/dtt1-poc/manager-linux-ubuntu-22.04-amd64/inventory.yaml" --component "manager" --wazuh-version "4.7.2" --wazuh-revision "1" --tests install --dependencies "{'manager': '/tmp/dtt1-poc/manager-linux-ubuntu-22.04-amd64/inventory.yaml'}" --dependencies "{'manager':'/tmp/dtt1-poc/manager-linux-ubuntu-20.04-amd64/inventory.yaml'}" --live False --one_line True
This installs the manager on the host of the inventory IP (host1):
python3 modules/testing/main.py --inventory "/tmp/dtt1-poc/manager-linux-ubuntu-22.04-amd64/inventory.yaml" --component "manager" --wazuh-version "4.7.2" --wazuh-revision "1" --tests install --dependencies "{'manager':'/tmp/dtt1-poc/manager-linux-ubuntu-22.04-amd64/inventory.yaml'}" --live False --one_line True
This should install the manager on the host of the inventory IP (host2); however, some issues around the WazuhAPI methods are raised:
python3 modules/testing/main.py --inventory "/tmp/dtt1-poc/manager-linux-ubuntu-20.04-amd64/inventory.yaml" --component "manager" --wazuh-version "4.7.2" --wazuh-revision "1" --tests install --dependencies "{'manager':'/tmp/dtt1-poc/manager-linux-ubuntu-20.04-amd64/inventory.yaml'}" --live False --one_line True
TASK [Test install for manager] ************************************************
fatal: [192.168.57.3]: FAILED! => changed=true
cmd:
- pytest
- test_manager/test_install.py
- -v
- --wazuh_version=4.7.2
- --wazuh_revision=1
- --component=manager
- '--dependencies={manager: 192.168.57.3}'
- --live=False
- --one_line=True
- -s
delta: '0:00:00.408456'
end: '2024-02-27 15:52:52.636765'
msg: non-zero return code
rc: 4
start: '2024-02-27 15:52:52.228309'
stderr: |-
/usr/local/lib/python3.8/dist-packages/_pytest/config/__init__.py:331: PluggyTeardownRaisedWarning: A plugin raised an exception during an old-style hookwrapper teardown.
Plugin: helpconfig, Hook: pytest_cmdline_parse
ConftestImportFailure: TypeError: 'type' object is not subscriptable (from /tmp/tests/conftest.py)
For more information see https://pluggy.readthedocs.io/en/stable/api_reference.html#pluggy.PluggyTeardownRaisedWarning
config = pluginmanager.hook.pytest_cmdline_parse(
ImportError while loading conftest '/tmp/tests/conftest.py'.
conftest.py:5: in <module>
from .helpers.wazuh_api.api import WazuhAPI
helpers/wazuh_api/api.py:9: in <module>
class WazuhAPI:
helpers/wazuh_api/api.py:62: in WazuhAPI
def get_agents(self, **kwargs: dict) -> list[dict]:
E TypeError: 'type' object is not subscriptable
stderr_lines: <omitted>
stdout: ''
stdout_lines: <omitted>
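The ConftestImportFailure points at a Python-version mismatch: built-in generics such as `list[dict]` are only subscriptable from Python 3.9 onward, while the target host runs Python 3.8. A sketch of a 3.8-compatible annotation (the class body here is a stub, not the real WazuhAPI):

```python
from typing import Dict, List


class WazuhAPI:
    # On Python 3.8, use typing.List/typing.Dict (or add
    # `from __future__ import annotations` at the top of the module)
    # instead of the built-in generic list[dict].
    def get_agents(self, **kwargs: dict) -> List[Dict]:
        return []
```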
Trying to trace how the flow reaches the WazuhAPI get_agents method.
On the other hand, while testing scp from one manager host to another, I found that scp is not working, even after changing the sshd_config.
Reviewing the tests (agent and central)
manager-agent => manager provisioned by the provisioner
- .install => install in agent
- .register => register in agent
- .basic_info => agent info
- .restart => restart in agent
- .stop => stop in agent
- .uninstall => uninstall in agent

manager-manager => the manager can't be provisioned by the provisioner => it should generate certs (the provisioner can't generate them because the module doesn't know the type of test, which is needed to get the manager2 IP)
- .install => generate certs, share certs, install managers (both), register clusters
- .restart => restart manager2
- .stop => stop manager2
- .uninstall => uninstall manager2
Points to be modified
The problem is that the test module is executed inside one host; if I want to generate certs, share them, and trigger actions on both hosts, that is not possible from inside a single host. These actions should be done by the test module using Ansible.
Considering this new approach:
- Certs generation: done
- Trying to send the wazuh-certificates.tar from one host to the destination host, and from localhost to the destination host.
I could find the following error:
TASK [Copy file from one remote host to another] *******************************
fatal: [192.168.57.3 -> 192.168.57.2]: UNREACHABLE! => changed=false
msg: |-
Failed to connect to the host via ssh: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:+hifkGLpCfIhjli3EljIuJI60jchvxCCKp/6RtZN6Ps.
Please contact your system administrator.
Add correct host key in /home/akim/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /home/akim/.ssh/known_hosts:812
remove with:
ssh-keygen -f "/home/akim/.ssh/known_hosts" -R "192.168.57.2"
UpdateHostkeys is disabled because the host key is not trusted.
akim@192.168.57.2: Permission denied (publickey).
unreachable: true
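The "REMOTE HOST IDENTIFICATION HAS CHANGED" warning is expected when Vagrant recreates VMs on the same private IPs: the controller's known_hosts still holds the previous VM's key. The log itself suggests the fix (`ssh-keygen -R`); a small helper that could be run before connecting (the function name and wiring are hypothetical):

```python
import os
import subprocess


def forget_host_key(ip: str, known_hosts: str = "~/.ssh/known_hosts") -> None:
    """Drop any stale key for a reallocated VM IP before scp/ssh."""
    subprocess.run(
        ["ssh-keygen", "-f", os.path.expanduser(known_hosts), "-R", ip],
        check=False,  # a missing entry or file is not an error here
        capture_output=True,
    )
```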
Meanwhile, the host is reachable:
akim@akim-PC:~/Desktop$ ping 192.168.57.2
PING 192.168.57.2 (192.168.57.2) 56(84) bytes of data.
64 bytes from 192.168.57.2: icmp_seq=1 ttl=64 time=2.63 ms
64 bytes from 192.168.57.2: icmp_seq=2 ttl=64 time=0.755 ms
64 bytes from 192.168.57.2: icmp_seq=3 ttl=64 time=0.465 ms
64 bytes from 192.168.57.2: icmp_seq=4 ttl=64 time=15.8 ms
64 bytes from 192.168.57.2: icmp_seq=5 ttl=64 time=2.12 ms
^C
--- 192.168.57.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4054ms
rtt min/avg/max/mdev = 0.465/4.362/15.836/5.794 ms
akim@akim-PC:~/Desktop$ telnet 192.168.57.2 22
Trying 192.168.57.2...
Connected to 192.168.57.2.
Escape character is '^]'.
SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.6
Connection closed by foreign host.
Editing /etc/ssh/sshd_config in both managers (Ubuntu 22.04 and Ubuntu 20.04), adding:
- PasswordAuthentication yes
- PermitRootLogin yes

Restarting with systemctl restart sshd and checking that firewalld is off.
Ubuntu 20.04, 18.04, and RHEL 7 allowed me to connect and transfer files using scp.
Using sshpass + scp, it was possible to transfer a file from one host to another.
After creating a couple of playbooks to satisfy the setup and running them manually, I could finally get the following result:
TASK [Command stout] **********************************************
ok: [192.168.57.3] =>
cluster_output.stdout_lines:
- 'NAME TYPE VERSION ADDRESS '
- 'wazuh-1 master 4.7.2 192.168.57.2 '
- 'node01 worker 4.7.2 192.168.57.3 '
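The `cluster_control -l` output above is whitespace-separated columns, so a test assertion could parse it into records; a minimal sketch (the helper name is illustrative):

```python
def parse_cluster_nodes(stdout_lines):
    """Turn /var/ossec/bin/cluster_control -l output into node dicts."""
    rows = [line.split() for line in stdout_lines if line.strip()]
    header, *nodes = rows
    return [dict(zip(header, node)) for node in nodes]


nodes = parse_cluster_nodes([
    "NAME     TYPE    VERSION  ADDRESS      ",
    "wazuh-1  master  4.7.2    192.168.57.2 ",
    "node01   worker  4.7.2    192.168.57.3 ",
])
```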
The point now is to create the best arrangement of the playbooks to satisfy the test requirements.
On the other hand, sshpass was not working 100% reliably: it worked after a manual validation, but running it from code did not.
After some discussion about the test module with the DTT team, we concluded that the test module should:
On the other hand, we will run a poll to check the possible requirements for the test module before we refactor it.
Poll Models
https://docs.google.com/forms/d/1XCRTax17I949P39puYrNRXoOV_cLL1j8htg8iVEOIco/edit
https://docs.google.com/forms/d/1uRpbjlDMb2ojOn_o3PPAWPCw9wE-HsQ6_gDlNq3yC2w/edit
The current module has many points to enhance, and the scripts written so far will be reused.
Using paramiko, I could create a script that executes commands on the VMs. The problem is that when a process takes a long time to finish, the script does not work.
This means that the installation processes should be handled by Ansible.
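The hangs seen with long installs can also be sidestepped by blocking explicitly until the remote command exits, with a generous timeout. A stdlib sketch using the `ssh` client binary (the host, user, and key would come from the allocator's inventory.yaml; the wiring here is hypothetical):

```python
import subprocess


def run_remote(host, user, key_path, command, timeout=1800):
    """Run a remote command and block until it exits or times out."""
    result = subprocess.run(
        ["ssh", "-i", key_path, "-o", "StrictHostKeyChecking=no",
         f"{user}@{host}", command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.returncode, result.stdout, result.stderr
```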
Considering this point:
Step | Action | Detail | Handler | Tested |
---|---|---|---|---|
1 | Generate Certificate - Download setup file | Download from the corresponding URL for the version | python-script-layer | :green_circle: |
2 | Generate Certificate - Modify data | Change data according to the desired setup | python-script-layer | :green_circle: |
3 | Execute cert-tools | | python-script-layer | :green_circle: |
4 | Turn off firewall | | python-script-layer | :green_circle: |
5 | SCP - Set sshd_config | Modify parameters to enable SCP between computers | python-script-layer | :green_circle: |
6 | SCP - Usage | Perform SCP between VMs | python-script-layer | :green_circle: |
7 | Install Manager on both computers (Manager only) | | ansible | :green_circle: |
8 | Cluster Configuration | Hexadecimal code should be set in both ossec.conf files | python-script-layer | :green_circle: |
9 | Restart both computers | | python-script-layer | :green_circle: |
10 | Check connected clusters | /var/ossec/bin/cluster_control -l | python-script-layer | :green_circle: |
All the steps were tested in each handler. Now it is necessary to define how to create the layer and how all these methods will be handled, including in upcoming cases.
The second survey has been delivered to the QA team. Meanwhile, a layer has been developed that enables remote command execution from the executing node.
This layer is being tested and is functional for installations, leading to the decision to proceed solely with Python, without Ansible, for generating actions in the tests.
This decision was evaluated collectively with @fcaffieri and @QU3B1M.
The new layer has been created. The Agent, Manager, and generic host methods are done.
I will start wiring all the objects into the existing test method.
Working on the checkfile setting. Some fixes are needed depending on the OS.
Install
After obtaining all the files generated in /var and comparing them with the pre-install files, we could see that (except for those that contained ossec or wazuh in their path) the remaining files were not related to the Wazuh installation itself.
Additional filters have been added depending on the OS (each OS generates files in /var while Wazuh is being installed that are unrelated to the Wazuh installation process itself):
```python
centos = ['yum', 'rpm']
rhel = ['yum', 'rpm']
amazonlinux = ['yum']

ubuntu = ['ubuntu', 'lxcfs', 'dpkg']
debian = ['dpkg', 'lists']

oracle = ['dnf', 'selinux']
fedoraX = ['dnf', 'selinux', 'rpm']
rocky_linux = ['dnf', 'selinux']
```
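The per-OS keyword lists above can be applied as a simple path filter over the /var diff; a sketch of how that could look (the function name and table key spellings are illustrative):

```python
# Per-OS keywords for files the OS/package manager writes during an
# install, which are not produced by Wazuh itself.
OS_NOISE = {
    "centos": ["yum", "rpm"],
    "rhel": ["yum", "rpm"],
    "amazonlinux": ["yum"],
    "ubuntu": ["ubuntu", "lxcfs", "dpkg"],
    "debian": ["dpkg", "lists"],
    "oracle": ["dnf", "selinux"],
    "fedora": ["dnf", "selinux", "rpm"],
    "rocky_linux": ["dnf", "selinux"],
}


def filter_noise(paths, os_name):
    """Keep only the paths that contain none of the OS noise keywords."""
    noise = OS_NOISE.get(os_name, [])
    return [p for p in paths if not any(word in p for word in noise)]
```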
The installation is almost done (including certs creation and cluster configuration). After changing to the assisted installation, Filebeat is included and some changes in the checkfile should be reviewed.
Some fixes were made after changing to the assisted installation.
On the other hand, all the OSs listed in the requirements were tested, and many issues were found:
OS | Status | Details |
---|---|---|
redhat7 | ok | |
redhat8 | image fails | |
redhat9 | image fails | |
centos7 | ok | |
centos8 | image fails |
For the entire Debian family, additional filters were required, and downloads were done via wget (as curl is not available).
OS | Status | Details |
---|---|---|
debian10 | image fails | |
debian11 | login issue with Vagrant, does not receive files with scp, installation fails | 11/03/2024 09:51:07 ERROR: Wazuh installation failed. 11/03/2024 09:51:07 INFO: --- Removing existing Wazuh installation --- |
debian12 | login issue with Vagrant, does not receive files with scp, manager installed | |
ubuntu-18.04 | ok | |
ubuntu-20.04 | as a master, it fails when sshd is disabled | |
ubuntu-22.04 | problems with port opening (does not receive scp or commands) | |
oracle9 | ok | |
amazon-2 | prompts for password, but manual entry works | |
amazon-2003 | no image available | |
suse | fails to create the image | |
opensuse | image not available |
After a meeting with @fcaffieri and @QU3B1M, some conclusions were reached:
- SCP will be done from localhost using the inventory certificates; this avoids the problems with scp between hosts and makes it possible to drop the sshd configuration changes.
- Checkfiles will be changed to: bin, sbin, root, boot (only changes will be checked).
- 'Inventory' will be changed to 'targets'. All targets will be tested, and dependencies will remain only as a data source for inventory information.
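The first conclusion (SCP from localhost with the inventory certificates) could look like the following sketch: pull the file from the source VM to the controller, then push it to the destination VM, using each inventory's private key (function and parameter names are hypothetical):

```python
import subprocess


def relay_file(src, dst, remote_path, local_tmp="/tmp/relay.bin"):
    """Copy a file between two VMs through the controller.

    src/dst carry ansible_host, ansible_user and
    ansible_ssh_private_key_file, as in the allocator's inventory.yaml.
    """
    subprocess.run(
        ["scp", "-i", src["ansible_ssh_private_key_file"],
         f'{src["ansible_user"]}@{src["ansible_host"]}:{remote_path}',
         local_tmp],
        check=True)
    subprocess.run(
        ["scp", "-i", dst["ansible_ssh_private_key_file"], local_tmp,
         f'{dst["ansible_user"]}@{dst["ansible_host"]}:{remote_path}'],
        check=True)
```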
Result of some tests (comparison between pre- and post-install snapshots):
os | boot | install | uninstall | bin | install | uninstall | root | install | uninstall | sbin | install | uninstall |
---|---|---|---|---|---|---|---|---|---|---|---|---|
redhat7 | boot | {'added': [], 'removed': [], 'modified': ['/boot/grub2/grubenv']} | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | | | sbin | - | - |
redhat8 | boot | - | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | {'added': ['/root/.gnupg/trustdb.gpg'], 'removed': [], 'modified': []} | - | sbin | - | - |
redhat9 | boot | {'added': [], 'removed': [], 'modified': ['/boot/grub2/grubenv']} | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | - | - | sbin | - | - |
centos7 | boot | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | bin | - | - | root | - | - | sbin | - | - |
centos8 | boot | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | - | - | sbin | - | - |
debian10 | boot | {'added': [], 'removed': [], 'modified': ['/boot/grub2/grubenv']} | - | bin | {'added': ['/usr/bin/gapplication', '/usr/bin/pkcheck', '/usr/bin/pkexec', '/usr/bin/gdbus', '/usr/bin/pkcon', '/usr/bin/gresource', '/usr/bin/gsettings', '/usr/bin/unattended-upgrade', '/usr/bin/pkttyagent', '/usr/bin/pkaction', '/usr/bin/add-apt-repository', '/usr/bin/gio', '/usr/bin/pkmon', '/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | - | - | sbin | - | - |
debian11 | boot | - | - | bin | {'added': ['/usr/bin/gapplication', '/usr/bin/add-apt-repository', '/usr/bin/gpg-wks-server', '/usr/bin/pkexec', '/usr/bin/gpgsplit', '/usr/bin/watchgnupg', '/usr/bin/pinentry-curses', '/usr/bin/gpg-zip', '/usr/bin/gsettings', '/usr/bin/gpg-agent', '/usr/bin/gresource', '/usr/bin/gdbus', '/usr/bin/gpg-connect-agent', '/usr/bin/gpgconf', '/usr/bin/gpgparsemail', '/usr/bin/lspgpot', '/usr/bin/pkaction', '/usr/bin/pkttyagent', '/usr/bin/pkmon', '/usr/bin/dirmngr', '/usr/bin/kbxutil', '/usr/bin/migrate-pubring-from-classic-gpg', '/usr/bin/gpgcompose', '/usr/bin/pkcheck', '/usr/bin/gpgsm', '/usr/bin/gio', '/usr/bin/pkcon', '/usr/bin/gpgtar', '/usr/bin/dirmngr-client', '/usr/bin/gpg', '/usr/bin/filebeat', '/usr/bin/gawk', '/usr/bin/curl'], 'removed': [], 'modified': []} | - | root | - | sbin | - | - | |
debian12 | boot | - | - | bin | {'added': ['/usr/bin/gsettings', '/usr/bin/gpgconf', '/usr/bin/gpg-wks-server', '/usr/bin/gpgtar', '/usr/bin/gpgsm', '/usr/bin/add-apt-repository', '/usr/bin/pinentry-curses', '/usr/bin/kbxutil', '/usr/bin/gpg', '/usr/bin/pkaction', '/usr/bin/filebeat', '/usr/bin/watchgnupg', '/usr/bin/gapplication', '/usr/bin/update-mime-database', '/usr/bin/migrate-pubring-from-classic-gpg', '/usr/bin/gpgsplit', '/usr/bin/dirmngr', '/usr/bin/pkmon', '/usr/bin/gpgparsemail', '/usr/bin/dh_installxmlcatalogs', '/usr/bin/gpgcompose', '/usr/bin/gio', '/usr/bin/dirmngr-client', '/usr/bin/appstreamcli', '/usr/bin/pkcon', '/usr/bin/lspgpot', '/usr/bin/gpg-zip', '/usr/bin/pkcheck', '/usr/bin/pkttyagent', '/usr/bin/gdbus', '/usr/bin/gresource', '/usr/bin/gpg-connect-agent', '/usr/bin/gpg-agent'], 'removed': [], 'modified': []} | - | root | {'added': ['/root/.gnupg/trustdb.gpg'], 'removed': [], 'modified': []} | - | sbin | {'added': ['/usr/sbin/update-catalog', '/usr/sbin/applygnupgdefaults', '/usr/sbin/addgnupghome', '/usr/sbin/install-sgmlcatalog', '/usr/sbin/update-xmlcatalog'], 'removed': [], 'modified': []} | - |
ubuntu20.04 | boot | - | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | - | - | sbin | - | - |
ubuntu22.04 | boot | {'added': [], 'removed': [], 'modified': ['/boot/grub2/grubenv']} | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | {'added': ['/root/.gnupg/trustdb.gpg'], 'removed': [], 'modified': []} | - | sbin | - | - |
oracle9 | boot | {'added': [], 'removed': [], 'modified': ['/boot/grub2/grubenv']} | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | {'added': ['/root/.gnupg/trustdb.gpg'], 'removed': [], 'modified': []} | - | sbin | - | - |
amazon2 | boot | - | - | bin | {'added': ['/usr/bin/filebeat'], 'removed': [], 'modified': []} | - | root | - | - | sbin | - | - |
amazon2023 | no image | |||||||||||
opensuse15 | Manager installation failure 12/03/2024 10:51:22 ERROR: Couldn't find type of system | |||||||||||
suse15 | (only in aws) |
For redhat7, redhat8, redhat9, ubuntu20, ubuntu22, oracle9, amazon2, centos7, and centos8:

```python
filter = {'/boot': {'added': [], 'removed': [], 'modified': ['grubenv']},
          '/usr/bin': {'added': ['filebeat'], 'removed': [], 'modified': []},
          '/root': {'added': ['trustdb.gpg'], 'removed': [], 'modified': []},
          '/usr/sbin': {'added': [], 'removed': [], 'modified': []}}
```

For debian10, debian11, and debian12:

```python
filter = {'/boot': {'added': [], 'removed': [], 'modified': ['grubenv']},
          '/usr/bin': {'added': ['unattended-upgrade', 'gapplication', 'add-apt-repository', 'gpg-wks-server',
                                 'pkexec', 'gpgsplit', 'watchgnupg', 'pinentry-curses', 'gpg-zip', 'gsettings',
                                 'gpg-agent', 'gresource', 'gdbus', 'gpg-connect-agent', 'gpgconf', 'gpgparsemail',
                                 'lspgpot', 'pkaction', 'pkttyagent', 'pkmon', 'dirmngr', 'kbxutil',
                                 'migrate-pubring-from-classic-gpg', 'gpgcompose', 'pkcheck', 'gpgsm', 'gio',
                                 'pkcon', 'gpgtar', 'dirmngr-client', 'gpg', 'filebeat', 'gawk', 'curl',
                                 'update-mime-database', 'dh_installxmlcatalogs', 'appstreamcli'],
                       'removed': [], 'modified': []},
          '/root': {'added': ['trustdb.gpg'], 'removed': [], 'modified': []},
          '/usr/sbin': {'added': ['update-catalog', 'applygnupgdefaults', 'addgnupghome', 'install-sgmlcatalog',
                                  'update-xmlcatalog'],
                        'removed': [], 'modified': []}}
```
The following OSs have failures:
- amazon2023 (only in AWS)
- opensuse15 (Couldn't find type of system)
- suse15 (only in AWS)
Differences between pre and post-install snapshots in debian12:
```python
{'/boot': {'added': [], 'removed': [], 'modified': []}, '/usr/bin': {'added': ['/usr/bin/appstreamcli', '/usr/bin/gio', '/usr/bin/migrate-pubring-from-classic-gpg', '/usr/bin/gpgconf', '/usr/bin/add-apt-repository', '/usr/bin/pinentry-curses', '/usr/bin/pkttyagent', '/usr/bin/gpg', '/usr/bin/dirmngr', '/usr/bin/gpg-agent', '/usr/bin/pkcon', '/usr/bin/gpgtar', '/usr/bin/gpg-zip', '/usr/bin/gpgsplit', '/usr/bin/dh_installxmlcatalogs', '/usr/bin/kbxutil', '/usr/bin/gdbus', '/usr/bin/pkaction', '/usr/bin/pkmon', '/usr/bin/update-mime-database', '/usr/bin/gpgcompose', '/usr/bin/watchgnupg', '/usr/bin/gapplication', '/usr/bin/dirmngr-client', '/usr/bin/gpgparsemail', '/usr/bin/gpgsm', '/usr/bin/gpg-connect-agent', '/usr/bin/pkcheck', '/usr/bin/gresource', '/usr/bin/filebeat', '/usr/bin/gsettings', '/usr/bin/lspgpot', '/usr/bin/gpg-wks-server'], 'removed': [], 'modified': []}, '/root': {'added': ['/root/.gnupg/trustdb.gpg'], 'removed': [], 'modified': []}, '/usr/sbin': {'added': ['/usr/sbin/update-xmlcatalog', '/usr/sbin/addgnupghome', '/usr/sbin/applygnupgdefaults', '/usr/sbin/update-catalog', '/usr/sbin/install-sgmlcatalog'], 'removed': [], 'modified': []}}
```

(inventory: /tmp/dtt1-poc/manager-linux-debian-12-amd64/inventory.yaml)
After filters:
```python
{'/boot': {'added': [], 'removed': [], 'modified': []}, '/usr/bin': {'added': [], 'removed': [], 'modified': []}, '/root': {'added': [], 'removed': [], 'modified': []}, '/usr/sbin': {'added': [], 'removed': [], 'modified': []}}
```
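Against diff structures like the one above, the filters can be applied per directory and per change type; a minimal sketch (the helper name is illustrative):

```python
def apply_filters(diff, filters):
    """Remove expected (filtered) entries from a snapshot diff."""
    clean = {}
    for directory, changes in diff.items():
        expected = filters.get(directory, {})
        clean[directory] = {
            kind: [p for p in paths
                   if not any(name in p for name in expected.get(kind, []))]
            for kind, paths in changes.items()
        }
    return clean


# Tiny worked example with one directory and two expected additions.
diff = {"/usr/bin": {"added": ["/usr/bin/filebeat", "/usr/bin/gpg"],
                     "removed": [], "modified": []}}
filters = {"/usr/bin": {"added": ["filebeat", "gpg"]}}
result = apply_filters(diff, filters)
```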
After a team discussion with @fcaffieri and @QU3B1M, more tasks were added:
grep
tar
coreutils
sed
procps
gawk
lsof
curl
openssl
libcap
apt-transport-https
libcap2-bin
software-properties-common
gnupg
gpg
Python reads the yaml file and does 2 things:
Entering the data correctly in the YAML and executing the certificate creation, everything is generated correctly; but when running the installer with the values entered in the config, it fails because the installation assistant does not accept the existing indentation.
For future reference, this was the function created:
Testing in AWS
Some adjustments were required around public and private IPs in the certs creation, cluster configuration, SSH port, and scp.
PART I
PART II
Testing in AWS
Results
OS | Test result | Additional data |
---|---|---|
redhat9 | :green_circle: | |
redhat7 | :red_circle: | AMI failure |
redhat8 | :green_circle: | |
centos7 | :green_circle: | |
centos8 | :green_circle: | |
debian10 | :green_circle: | symcryptrun was added in bin filters |
debian11 | :green_circle: | |
debian12 | :green_circle: | |
ubuntu20.04 | :green_circle: | |
ubuntu22.04 | :green_circle: | |
oracle9 | :red_circle: | No tar (it does not uncompress the tar file to get the API password) |
amazon2 | :green_circle: | |
amazon2023 | :red_circle: | No curl |
opensuse | :red_circle: | AMI failure |
suse15 | :red_circle: | Assistant does not work (Could not find the system) |
Retesting in Vagrant after changes
OS | Test result | Additional data |
---|---|---|
redhat7 | :green_circle: | |
redhat9 | :green_circle: | |
redhat8 | :green_circle: | Added lesshst to the root filter |
centos7 | :green_circle: | |
centos8 | :green_circle: | |
debian10 | :green_circle: | |
debian11 | :green_circle: | |
debian12 | :green_circle: | |
ubuntu20.04 | :green_circle: | |
ubuntu22.04 | :green_circle: | |
oracle9 | :green_circle: | Added lesshst to the root filter |
amazon2 | :green_circle: | |
amazon2023 | :red_circle: | No Vagrant Image |
opensuse | :red_circle: | Assistant does not work (Could not find the system) |
suse15 | :red_circle: | No Vagrant Image |
Testing in Vagrant
Changes done.
One question about the execute_commands() method.
Changes done
LGTM
LGTM!
Description
This issue aims to implement tests for the Wazuh server to meet the DTT1 requirements.
Tasks