First, it is necessary to list all the available or possible operating systems to which the OVA could be adapted:
After talking with the team, Ubuntu and RHEL systems do not seem to be a good fit for the OVA, as they tend to have problems or overly complex procedures for deploying it.
With this, it seems that currently the best option is to build the OVA on Amazon Linux 2. It should not cause many problems, since it is RPM-based, like CentOS 7.
As a first approach, the workaround is to change the OS specified in the Vagrantfile from `centos/7` to `bento/amazonlinux-2` and check the results of the OVA generation.
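As an illustration, the change can be scripted; a minimal sketch, assuming the box is defined in the repository's `ova/Vagrantfile` (the exact path may differ):

```
# Swap the Vagrant base box in place (path and box name are assumptions).
sed -i 's|centos/7|bento/amazonlinux-2|g' ova/Vagrantfile
```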
The `generate_ova.sh` script executes the following tasks:
systemConfig - steps.sh :green_circle:
preInstall - steps.sh :green_circle:
Install Wazuh (AIO) :green_circle:
```
16/03/2023 09:53:25 INFO: Starting Wazuh installation assistant. Wazuh version: 4.5.0
16/03/2023 09:53:25 INFO: Verbose logging redirected to /var/log/wazuh-install.log
16/03/2023 09:53:27 DEBUG: Adding the Wazuh repository.
[wazuh]
gpgcheck=1
gpgkey=https://packages-dev.wazuh.com/key/GPG-KEY-WAZUH
enabled=1
name=EL-${releasever} - Wazuh
baseurl=https://packages-dev.wazuh.com/staging/yum/
protect=1
16/03/2023 09:53:28 INFO: Wazuh development repository added.
16/03/2023 09:53:28 INFO: --- Configuration files ---
16/03/2023 09:53:28 INFO: Generating configuration files.
16/03/2023 09:53:28 DEBUG: Creating the root certificate.
Generating a 2048 bit RSA private key
........................+++
...................+++
writing new private key to '/tmp/wazuh-certificates//root-ca.key'
-----
Generating RSA private key, 2048 bit long modulus
..............................+++
.......................+++
e is 65537 (0x10001)
Signature ok
subject=/C=US/L=California/O=Wazuh/OU=Wazuh/CN=admin
Getting CA Private Key
16/03/2023 09:53:28 DEBUG: Creating the Wazuh indexer certificates.
Generating a 2048 bit RSA private key
......................................................................+++
...................................+++
writing new private key to '/tmp/wazuh-certificates//wazuh-indexer-key.pem'
-----
Signature ok
subject=/C=US/L=California/O=Wazuh/OU=Wazuh/CN=wazuh-indexer
Getting CA Private Key
16/03/2023 09:53:28 DEBUG: Creating the Wazuh server certificates.
Generating a 2048 bit RSA private key
.+++
...................................+++
writing new private key to '/tmp/wazuh-certificates//wazuh-server-key.pem'
-----
Signature ok
subject=/C=US/L=California/O=Wazuh/OU=Wazuh/CN=wazuh-server
Getting CA Private Key
16/03/2023 09:53:28 DEBUG: Creating the Wazuh dashboard certificates.
Generating a 2048 bit RSA private key
....................+++
...................+++
writing new private key to '/tmp/wazuh-certificates//wazuh-dashboard-key.pem'
-----
Signature ok
subject=/C=US/L=California/O=Wazuh/OU=Wazuh/CN=wazuh-dashboard
Getting CA Private Key
16/03/2023 09:53:28 DEBUG: Generating random passwords.
16/03/2023 09:53:28 INFO: Created wazuh-install-files.tar. It contains the Wazuh cluster key, certificates, and passwords necessary for installation.
16/03/2023 09:53:28 INFO: --- Wazuh indexer ---
16/03/2023 09:53:28 INFO: Starting Wazuh indexer installation.
Complementos cargados:dkms-build-requires, langpacks, priorities, update-motd
Resolviendo dependencias
--> Ejecutando prueba de transacción
---> Paquete wazuh-indexer.x86_64 0:4.5.0-40500 debe ser instalado
--> Resolución de dependencias finalizada
Dependencias resueltas
================================================================================
 Package             Arquitectura      Versión            Repositorio     Tamaño
================================================================================
Instalando:
 wazuh-indexer       x86_64            4.5.0-40500        wazuh           497 M

Resumen de la transacción
================================================================================
Instalar  1 Paquete

Tamaño total de la descarga: 497 M
Tamaño instalado: 747 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Instalando  : wazuh-indexer-4.5.0-40500.x86_64                           1/1
Created opensearch keystore in /etc/wazuh-indexer/opensearch.keystore
  Comprobando : wazuh-indexer-4.5.0-40500.x86_64                           1/1

Instalado:
  wazuh-indexer.x86_64 0:4.5.0-40500

¡Listo!
16/03/2023 09:56:44 INFO: Wazuh indexer installation finished.
16/03/2023 09:56:44 DEBUG: Configuring Wazuh indexer.
16/03/2023 09:56:44 INFO: Wazuh indexer post-install configuration finished.
16/03/2023 09:56:44 INFO: Starting service wazuh-indexer.
Created symlink from /etc/systemd/system/multi-user.target.wants/wazuh-indexer.service to /usr/lib/systemd/system/wazuh-indexer.service.
16/03/2023 09:56:50 INFO: wazuh-indexer service started.
16/03/2023 09:56:50 INFO: Initializing Wazuh indexer cluster security settings.
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to 127.0.0.1:9200 ... done
Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
OpenSearch Version: 2.4.1
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: wazuh-cluster
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index does not exists, attempt to create it ... done (0-all replicas)
Populate config from /etc/wazuh-indexer/opensearch-security/
Will update '/config' with /etc/wazuh-indexer/opensearch-security/config.yml
   SUCC: Configuration for 'config' created or updated
Will update '/roles' with /etc/wazuh-indexer/opensearch-security/roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update '/rolesmapping' with /etc/wazuh-indexer/opensearch-security/roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update '/internalusers' with /etc/wazuh-indexer/opensearch-security/internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update '/actiongroups' with /etc/wazuh-indexer/opensearch-security/action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Will update '/tenants' with /etc/wazuh-indexer/opensearch-security/tenants.yml
   SUCC: Configuration for 'tenants' created or updated
Will update '/nodesdn' with /etc/wazuh-indexer/opensearch-security/nodes_dn.yml
   SUCC: Configuration for 'nodesdn' created or updated
Will update '/whitelist' with /etc/wazuh-indexer/opensearch-security/whitelist.yml
   SUCC: Configuration for 'whitelist' created or updated
Will update '/audit' with /etc/wazuh-indexer/opensearch-security/audit.yml
   SUCC: Configuration for 'audit' created or updated
Will update '/allowlist' with /etc/wazuh-indexer/opensearch-security/allowlist.yml
   SUCC: Configuration for 'allowlist' created or updated
SUCC: Expected 10 config types for node {"updated_config_types":["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"],"updated_config_size":10,"message":null} is 10 (["allowlist","tenants","rolesmapping","nodesdn","audit","roles","whitelist","internalusers","actiongroups","config"]) due to: null
Done with success
16/03/2023 09:57:00 INFO: Wazuh indexer cluster initialized.
16/03/2023 09:57:00 INFO: --- Wazuh server ---
16/03/2023 09:57:00 INFO: Starting the Wazuh manager installation.
Complementos cargados:dkms-build-requires, langpacks, priorities, update-motd
Resolviendo dependencias
--> Ejecutando prueba de transacción
---> Paquete wazuh-manager.x86_64 0:4.5.0-40500 debe ser instalado
--> Resolución de dependencias finalizada
Dependencias resueltas
================================================================================
 Package             Arquitectura      Versión            Repositorio     Tamaño
================================================================================
Instalando:
 wazuh-manager       x86_64            4.5.0-40500        wazuh           117 M

Resumen de la transacción
================================================================================
Instalar  1 Paquete

Tamaño total de la descarga: 117 M
Tamaño instalado: 444 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Instalando  : wazuh-manager-4.5.0-40500.x86_64                           1/1
  Comprobando : wazuh-manager-4.5.0-40500.x86_64                           1/1

Instalado:
  wazuh-manager.x86_64 0:4.5.0-40500

¡Listo!
16/03/2023 09:57:50 INFO: Wazuh manager installation finished.
16/03/2023 09:57:50 INFO: Starting service wazuh-manager.
Created symlink from /etc/systemd/system/multi-user.target.wants/wazuh-manager.service to /usr/lib/systemd/system/wazuh-manager.service.
16/03/2023 09:57:59 INFO: wazuh-manager service started.
16/03/2023 09:57:59 INFO: Starting Filebeat installation.
16/03/2023 09:58:10 INFO: Filebeat installation finished.
wazuh/
wazuh/module.yml
wazuh/archives/
wazuh/archives/config/
wazuh/archives/config/archives.yml
wazuh/archives/ingest/
wazuh/archives/ingest/pipeline.json
wazuh/archives/manifest.yml
wazuh/alerts/
wazuh/alerts/config/
wazuh/alerts/config/alerts.yml
wazuh/alerts/ingest/
wazuh/alerts/ingest/pipeline.json
wazuh/alerts/manifest.yml
wazuh/_meta/
wazuh/_meta/config.yml
wazuh/_meta/fields.yml
wazuh/_meta/docs.asciidoc
Created filebeat keystore
Successfully updated the keystore
Successfully updated the keystore
16/03/2023 09:58:12 INFO: Filebeat post-install configuration finished.
16/03/2023 09:58:12 INFO: Starting service filebeat.
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
16/03/2023 09:58:12 INFO: filebeat service started.
16/03/2023 09:58:12 INFO: --- Wazuh dashboard ---
16/03/2023 09:58:12 INFO: Starting Wazuh dashboard installation.
Complementos cargados:dkms-build-requires, langpacks, priorities, update-motd
Bloqueo existente en /var/run/yum.pid: otra copia se encuentra en ejecución como pid 14915.
Another app is currently holding the yum lock; waiting for it to exit...
La otra aplicación es: yum
    Memoria : 142 M RSS (357 MB VSZ)
    Iniciado: Thu Mar 16 09:58:11 2023 - 00:01 atrás
    Estado  : Ejecutando, pid: 14915
Resolviendo dependencias
--> Ejecutando prueba de transacción
---> Paquete wazuh-dashboard.x86_64 0:4.5.0-40500 debe ser instalado
--> Resolución de dependencias finalizada
Dependencias resueltas
================================================================================
 Package             Arquitectura      Versión            Repositorio     Tamaño
================================================================================
Instalando:
 wazuh-dashboard     x86_64            4.5.0-40500        wazuh           327 M

Resumen de la transacción
================================================================================
Instalar  1 Paquete

Tamaño total de la descarga: 327 M
Tamaño instalado: 1.1 G
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Instalando  : wazuh-dashboard-4.5.0-40500.x86_64                         1/1
  Comprobando : wazuh-dashboard-4.5.0-40500.x86_64                         1/1

Instalado:
  wazuh-dashboard.x86_64 0:4.5.0-40500

¡Listo!
16/03/2023 10:00:41 INFO: Wazuh dashboard installation finished.
16/03/2023 10:00:41 DEBUG: Wazuh dashboard certificate setup finished.
16/03/2023 10:00:41 INFO: Wazuh dashboard post-install configuration finished.
16/03/2023 10:00:41 INFO: Starting service wazuh-dashboard.
Created symlink from /etc/systemd/system/multi-user.target.wants/wazuh-dashboard.service to /etc/systemd/system/wazuh-dashboard.service.
16/03/2023 10:00:41 INFO: wazuh-dashboard service started.
16/03/2023 10:00:41 DEBUG: Setting Wazuh indexer cluster passwords.
16/03/2023 10:00:42 DEBUG: Creating password backup.
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to 127.0.0.1:9200 ... done
Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
OpenSearch Version: 2.4.1
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: wazuh-cluster
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Will retrieve '/config' into /etc/wazuh-indexer/backup/config.yml
   SUCC: Configuration for 'config' stored in /etc/wazuh-indexer/backup/config.yml
Will retrieve '/roles' into /etc/wazuh-indexer/backup/roles.yml
   SUCC: Configuration for 'roles' stored in /etc/wazuh-indexer/backup/roles.yml
Will retrieve '/rolesmapping' into /etc/wazuh-indexer/backup/roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' stored in /etc/wazuh-indexer/backup/roles_mapping.yml
Will retrieve '/internalusers' into /etc/wazuh-indexer/backup/internal_users.yml
   SUCC: Configuration for 'internalusers' stored in /etc/wazuh-indexer/backup/internal_users.yml
Will retrieve '/actiongroups' into /etc/wazuh-indexer/backup/action_groups.yml
   SUCC: Configuration for 'actiongroups' stored in /etc/wazuh-indexer/backup/action_groups.yml
Will retrieve '/tenants' into /etc/wazuh-indexer/backup/tenants.yml
   SUCC: Configuration for 'tenants' stored in /etc/wazuh-indexer/backup/tenants.yml
Will retrieve '/nodesdn' into /etc/wazuh-indexer/backup/nodes_dn.yml
   SUCC: Configuration for 'nodesdn' stored in /etc/wazuh-indexer/backup/nodes_dn.yml
Will retrieve '/whitelist' into /etc/wazuh-indexer/backup/whitelist.yml
   SUCC: Configuration for 'whitelist' stored in /etc/wazuh-indexer/backup/whitelist.yml
Will retrieve '/allowlist' into /etc/wazuh-indexer/backup/allowlist.yml
   SUCC: Configuration for 'allowlist' stored in /etc/wazuh-indexer/backup/allowlist.yml
Will retrieve '/audit' into /etc/wazuh-indexer/backup/audit.yml
   SUCC: Configuration for 'audit' stored in /etc/wazuh-indexer/backup/audit.yml
16/03/2023 10:00:44 DEBUG: Password backup created in /etc/wazuh-indexer/backup.
16/03/2023 10:00:44 DEBUG: Generating password hashes.
16/03/2023 10:00:46 DEBUG: Password hashes generated.
16/03/2023 10:00:46 DEBUG: Creating password backup.
mkdir: no se puede crear el directorio «/etc/wazuh-indexer/backup»: File exists
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to 127.0.0.1:9200 ... done
Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
OpenSearch Version: 2.4.1
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: wazuh-cluster
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Will retrieve '/config' into /etc/wazuh-indexer/backup/config.yml
   SUCC: Configuration for 'config' stored in /etc/wazuh-indexer/backup/config.yml
Will retrieve '/roles' into /etc/wazuh-indexer/backup/roles.yml
   SUCC: Configuration for 'roles' stored in /etc/wazuh-indexer/backup/roles.yml
Will retrieve '/rolesmapping' into /etc/wazuh-indexer/backup/roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' stored in /etc/wazuh-indexer/backup/roles_mapping.yml
Will retrieve '/internalusers' into /etc/wazuh-indexer/backup/internal_users.yml
   SUCC: Configuration for 'internalusers' stored in /etc/wazuh-indexer/backup/internal_users.yml
Will retrieve '/actiongroups' into /etc/wazuh-indexer/backup/action_groups.yml
   SUCC: Configuration for 'actiongroups' stored in /etc/wazuh-indexer/backup/action_groups.yml
Will retrieve '/tenants' into /etc/wazuh-indexer/backup/tenants.yml
   SUCC: Configuration for 'tenants' stored in /etc/wazuh-indexer/backup/tenants.yml
Will retrieve '/nodesdn' into /etc/wazuh-indexer/backup/nodes_dn.yml
   SUCC: Configuration for 'nodesdn' stored in /etc/wazuh-indexer/backup/nodes_dn.yml
Will retrieve '/whitelist' into /etc/wazuh-indexer/backup/whitelist.yml
   SUCC: Configuration for 'whitelist' stored in /etc/wazuh-indexer/backup/whitelist.yml
Will retrieve '/allowlist' into /etc/wazuh-indexer/backup/allowlist.yml
   SUCC: Configuration for 'allowlist' stored in /etc/wazuh-indexer/backup/allowlist.yml
Will retrieve '/audit' into /etc/wazuh-indexer/backup/audit.yml
   SUCC: Configuration for 'audit' stored in /etc/wazuh-indexer/backup/audit.yml
16/03/2023 10:00:47 DEBUG: Password backup created in /etc/wazuh-indexer/backup.
Successfully updated the keystore
16/03/2023 10:00:48 DEBUG: filebeat started.
16/03/2023 10:00:48 DEBUG: wazuh-dashboard started.
16/03/2023 10:00:48 DEBUG: Loading new passwords changes.
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
Security Admin v7
Will connect to 127.0.0.1:9200 ... done
Connected as "CN=admin,OU=Wazuh,O=Wazuh,L=California,C=US"
OpenSearch Version: 2.4.1
Contacting opensearch cluster 'opensearch' and wait for YELLOW clusterstate ...
Clustername: wazuh-cluster
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /home/vagrant
Force type: internalusers
Will update '/internalusers' with /etc/wazuh-indexer/backup/internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
SUCC: Expected 1 config types for node {"updated_config_types":["internalusers"],"updated_config_size":1,"message":null} is 1 (["internalusers"]) due to: null
Done with success
16/03/2023 10:00:49 DEBUG: Passwords changed.
16/03/2023 10:00:49 INFO: Initializing Wazuh dashboard web application.
16/03/2023 10:01:00 INFO: Wazuh dashboard web application initialized.
16/03/2023 10:01:00 INFO: Installation finished.
```
Clean :green_circle:
```
+ systemctl stop wazuh-dashboard filebeat wazuh-indexer wazuh-manager
+ systemctl enable wazuh-manager
+ clean
+ rm -f /securityadmin_demo.sh
+ yum clean all
Complementos cargados:dkms-build-requires, langpacks, priorities, update-motd
Limpiando repositorios: amzn2-core amzn2extra-docker wazuh
Cleaning up everything
Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
```
After importing the OVA, the system crashes when logging into it. This is unexpected behavior that must be investigated and solved. This problem does not occur if the OVA is generated using CentOS 7 as the base system.
A summary of the changes made:
- The base box was changed from `centos/7` to `gbailey/amzn2`. This box is less customized than `bento/amazonlinux-2`.
- The MOTD was changed through the `/etc/update-motd.d` folder, adding a new script or editing an existing one. The scripts stored in that folder are executed automatically at system startup, in alphabetical order. In this case, I edited the `30-banner` script, removing the Amazon Linux 2 message and adding the Wazuh logo. Related: http://mytechmembank.blogspot.com/2018/06/motd-on-aws-linux-instances.html.
  Notice that the content of these files is not plain text (as in CentOS 7), but scripts that print text.
- A new repository option was added to the `generate_ova.sh` script. Previously, the `-r` or `--repository` option allowed two values: `prod`, which uses the production packages, and `dev`, which uses the pre-release packages of the development repository. A new option, `staging`, has been added. It makes the script use the `staging` packages of the development bucket, which is useful when the development packages are not in the `pre-release` folder but in the `staging` folder.

The system crash was not related to the OVA itself: it seems that my machine was having problems importing the OVA in VirtualBox, as another member of the team was able to generate and import the OVA successfully in VirtualBox.
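For reference, a minimal sketch (not the actual script) of how the `-r`/`--repository` values could map to package sources; the variable names are assumptions, and the `staging` baseurl follows the installation log above:

```
# Sketch: map the repository flag to a package base URL (names assumed).
case "${repository}" in
  prod)    packages_base_url="https://packages.wazuh.com" ;;
  dev)     packages_base_url="https://packages-dev.wazuh.com/pre-release" ;;
  staging) packages_base_url="https://packages-dev.wazuh.com/staging" ;;
  *)       echo "Invalid repository: ${repository}" >&2; exit 1 ;;
esac
```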
:heavy_check_mark: The generation of the OVA finished successfully. The complete log is:
The Wazuh logo is displayed correctly after logging into the VM.
VMware: the OVA is imported successfully and all the Wazuh components work correctly.
VirtualBox: the OVA is imported successfully (in my case, after changing the Graphics Controller to VMSVGA in the VirtualBox configuration) and all the Wazuh components work correctly.
After talking with the team and discussing the current progress, we concluded that it is not a good idea to use a third-party Vagrant box for this task; in general, it is not recommended to rely on unofficial software that we cannot maintain. For these reasons, two alternatives are available using the official Amazon Linux 2 image:
After talking with the team about the alternatives, we concluded that the best option is to create the Vagrant box from the VM. Instead of uploading it to the Vagrant Cloud, we can store it in S3. Here is an example: https://github.com/wazuh/wazuh-jenkins/blob/079d26833b5340451ce83f886e87f7fd409c6696/quality/deployments/vagrant/macos/Vagrantfile#L111
The steps to follow in this process are:
The process is described in this documentation: https://docs.aws.amazon.com/en_us/AWSEC2/latest/UserGuide/amazon-linux-2-virtual-machine.html
Besides, there is a GitHub repository that explains exactly what we want to achieve: https://github.com/poflynn/AMZN2Vagrant/tree/master
Amazon officially provides some [virtual disks](https://cdn.amazonlinux.com/os-images/2.0.20230307.0/) of Amazon Linux 2.
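For example, the VirtualBox disk can be downloaded directly; a sketch only, since the exact file name under the `virtualbox/` directory may differ from the one assumed here:

```
# Download the Amazon Linux 2 VirtualBox disk (check the listing for the real name).
curl -LO https://cdn.amazonlinux.com/os-images/2.0.20230307.0/virtualbox/amzn2-virtualbox-2.0.20230307.0-x86_64.xfs.gpt.vdi
```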
The steps are:
- Create the VM using the `.vdi` disk with the following criteria.
- Create a `seedconfig` folder and create two files inside this folder, `user-data` and `meta-data`.

The `meta-data` file content is:
```
local-hostname: localhost.localdomain
```
The `user-data` file contains some configuration to create the Vagrant box. Its content is:
```
#cloud-config
users:
  - default
  - name: vagrant
    groups: wheel
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    plain_text_passwd: vagrant
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
    lock_passwd: false
chpasswd:
  list: |
    root:vagrant
  expire: False
# Required so we can install VirtualBox Guest Additions later
packages:
  - kernel-devel
  - kernel-headers
  - gcc
  - make
  - perl
  - bzip2
  - mod_ssl
  - git
runcmd:
  # Stop cloud-init from randomizing root password on startup
  - sed -i 's/.*root:RANDOM/#&/g' /etc/cloud/cloud.cfg.d/99_onprem.cfg
  # Make it look like RedHat
  - ln -s /etc/system-release /etc/redhat-release
```
With both files in the same folder, execute:

```
genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data
```

This command will generate the `seed.iso` file.
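Optionally, the generated ISO can be checked with the `isoinfo` tool (shipped with `genisoimage`); cloud-init only picks up the seed if the volume ID is `cidata`:

```
# Verify the volume id of the seed image.
isoinfo -d -i seed.iso | grep "Volume id"
```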
Attach the `seed.iso` file to the VM in Storage -> CD -> Select/Create optical virtual disk.
Start the VM. On the first run, the VM will install some packages defined in the `user-data` file.
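These two GUI steps can also be performed from the command line; a sketch, assuming the VM is named AMZN (as used later with `vagrant package`) and its storage controller is named IDE:

```
# Attach the seed ISO and boot the VM (VM and controller names are assumptions).
VBoxManage storageattach "AMZN" --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium seed.iso
VBoxManage startvm "AMZN"
```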
The GuestAdditions are mandatory in the Vagrant configuration. They enable features such as shared folders.
The steps to perform this task are:
- Remove the `seed.iso` file from the machine.
- Log in with the `root` user. The password is `vagrant`.
- In some cases, the `seed.iso` did not extract correctly.
- Update the system and install the kernel packages:

```
sudo yum -y update
sudo yum -y install kernel-headers kernel-devel
```

- Mount the GuestAdditions CD and run the installer:

```
mount -r -t iso9660 /dev/cdrom /media
cd /media
./VBoxLinuxAdditions.run
systemctl enable vboxadd.service
```
Some warnings may be displayed during these steps.
## Clean the VM
When we use a Vagrant box, it should be as clean as possible: no history, SSH keys, logs, or unnecessary packages.
The clean-up commands are:

```
yum remove -y amazon-ssm-agent
yum clean all
rm -rf /var/cache/yum
sed -i 's/PermitRootLogin yes/#PermitRootLogin no/g' /etc/ssh/sshd_config
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config && sudo service sshd restart
find /var/log -type f | while read f; do echo -ne '' > $f; done
dd if=/dev/zero of=/ZERO bs=1M
rm -f /ZERO
```

Then, remove the `ec2-user` user and clean the history:

```
userdel -r ec2-user
unset HISTFILE
rm /root/.bash_history
cat /dev/null > ~/.bash_history && history -c
shutdown -h now
```
## Create the Vagrant box
In the host machine, execute the following commands:
```
vagrant init
vagrant package --base AMZN --output amazonlinux2.box
```
This will generate the Amazon Linux 2 Vagrant box in the current path.
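A quick smoke test of the resulting box can be done before uploading it; a sketch, assuming the box is registered under the name amazonlinux2:

```
# Add the box, boot a throwaway VM, and check the release inside it.
vagrant box add amazonlinux2 amazonlinux2.box
mkdir box-test && cd box-test
vagrant init amazonlinux2
vagrant up --provider virtualbox
vagrant ssh -c 'cat /etc/system-release'
vagrant destroy -f
```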
## Create the OVA
With the box generated, the OVA can be created easily by changing the Vagrant base box that the builder script uses:
```
config.vm.box_url = "https://packages-dev.wazuh.com/vms/ova/amazonlinux2.box"
config.vm.box = "amazonlinux2"
```
With this change, the OVA is generated successfully and works as expected.
<details><summary>Display screenshot</summary>
![image](https://user-images.githubusercontent.com/72193239/228767742-fa80ee4f-c879-41e4-8d05-272b64a2997f.png)
</details>
## Upload to S3
The Vagrant box and the OVA have been uploaded to S3. The files are provisionally stored in https://packages-dev.wazuh.com/, in the `vms/ova` folder. These files were uploaded manually. If it is necessary to modify them, please follow the steps given previously.
After talking with the team about the current progress, we decided to perform some changes in the process.
It would be ideal to create the Vagrant box with the `wazuh-user` user, disabling the connection via the insecure SSH key, removing the `vagrant` user, and disabling root login via SSH. These are steps that are performed in the `post-provision.sh` script of the OVA generation.
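A minimal sketch of that hardening, run inside the VM (the actual `post-provision.sh` may differ; the commands here are assumptions based on the description above):

```
# Remove the vagrant user and its insecure key, and disable root SSH login.
userdel -r vagrant
rm -f /home/wazuh-user/.ssh/authorized_keys
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart sshd
```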
The next steps are to re-create the Vagrant box with the mentioned configuration and create the AMI from that box.
To perform them, I will follow the process explained in the documentation above.
Starting from the beginning, the `vagrant` user can be removed easily by removing it from the `user-data` file.
With this, the `user-data` file changes to:
```
#cloud-config
users:
  - default
  - name: wazuh-user
    groups: wheel
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    plain_text_passwd: wazuh
    ssh-authorized-keys:
    lock_passwd: false
chpasswd:
  list: |
    root:wazuh
  expire: False
```
And the rest of the file would be the same.
This change specifies that the default user of the machine is `wazuh-user`, with `wazuh` as the password. This user can use superuser privileges without typing the password.
As the `vagrant` user is removed, it is necessary to specify in the Vagrantfile which user we are going to use to log in, and that the login will be via password:
```
config.ssh.username = "wazuh-user"
config.ssh.password = "wazuh"
config.ssh.insert_key = false
```
With this, we have created a Vagrant box:
- Without the `vagrant` user.
- Without the `ec2-user` user.
- With the `wazuh-user` user, with password `wazuh`.
:x: With the new Vagrant box, the generation of the OVA finished successfully without executing `postProvision.sh`, but for an unknown reason the Wazuh dashboard was not installed correctly, although the OVA installation log does not show any error. The rest of the components worked correctly.
To investigate these errors, some tests have been done.
It seems that some of the steps performed in the postProvision stage are necessary for the Wazuh installation to work correctly. These steps are:
```
CURRENT_PATH="$( cd $(dirname $0) ; pwd -P )"
ASSETS_PATH="${CURRENT_PATH}/assets"
CUSTOM_PATH="${ASSETS_PATH}/custom"
SYSTEM_USER="wazuh-user"
systemctl stop wazuh-manager wazuh-indexer filebeat wazuh-dashboard
# Remove everything related to vagrant
mv ${CUSTOM_PATH}/removeVagrant.service /etc/systemd/system/
sed -i "s/USER/${SYSTEM_USER}/g" /etc/systemd/system/removeVagrant.service
mv ${CUSTOM_PATH}/removeVagrant.sh /home/${SYSTEM_USER}/
sed -i "s/USER/${SYSTEM_USER}/g" /home/${SYSTEM_USER}/removeVagrant.sh
chmod 755 /home/${SYSTEM_USER}/removeVagrant.sh
systemctl daemon-reload
systemctl enable removeVagrant.service
# Clear synced files
rm -rf ${CURRENT_PATH}/* ${CURRENT_PATH}/.gitignore
# Remove logs
find /var/log/ -type f -exec bash -c 'cat /dev/null > {}' \;
find /var/ossec/logs/ -type f -exec bash -c 'cat /dev/null > {}' \;
history -c
shutdown -r now > /dev/null 2>&1
```
The part that removes everything related to Vagrant is no longer necessary, as the created Vagrant box does not contain anything related to Vagrant. Hence, the resulting steps of the postProvision stage are:
```
systemctl daemon-reload
# Clear synced files
rm -rf ${CURRENT_PATH}/* ${CURRENT_PATH}/.gitignore
# Remove logs
find /var/log/ -type f -exec bash -c 'cat /dev/null > {}' \;
find /var/ossec/logs/ -type f -exec bash -c 'cat /dev/null > {}' \;
history -c
shutdown -r now > /dev/null 2>&1
```
:heavy_check_mark: With these steps added to the `clean` function of the `steps.sh` file, the OVA works correctly. Hence, the postProvision stage can be deleted, and its necessary commands can be moved to the provision stage.
Due to the pre-configuration of the Vagrant box (it cannot be accessed through Vagrant's default SSH configuration, as it does not have the `vagrant` user), a problem has been found with the following commands of the Vagrantfile:
```
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder ".", "/tmp", type: "rsync", :rsync__exclude => ['output']
```
In the Vagrantfile, the connection with the VM is configured via password. When Vagrant executes the sync commands, the following output is generated:
```
==> default: SSH address: 127.0.0.1:2222
==> default: SSH username: wazuh-user
==> default: SSH auth method: password
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: The machine you're rsyncing folders to is configured to use
==> default: password-based authentication. Vagrant can't script rsync to automatically
==> default: enter this password, so you'll likely be prompted for a password
==> default: shortly.
==> default:
==> default: If you don't want to have to do this, please enable automatic
==> default: key insertion using `config.ssh.insert_key`.
==> default: Rsyncing folder: /home/davidcr01/Wazuh/1575-change-ova-4.5/ova/ => /tmp
==> default: - Exclude: [".vagrant/", "output"]
wazuh-user@127.0.0.1's password:
```
:x: With this, the script stops and waits for the password, which may cause problems in the automated scripts that generate the OVA.
Some tests have been done to insert the password automatically, with no success:
- Adding `rsync__password: "wazuh"` to the commands.
- Adding `password: "wazuh"` to the commands.
- Adding `rsync_password: "wazuh"` to the commands.
- Using the `echo` command with the password.
- Using the `sshpass` tool.

:heavy_check_mark: The only alternative that works and avoids creating the `vagrant` user is to change the Vagrantfile, adding the following command:
```
config.ssh.insert_key = true
```
With this, access to the Vagrant machine is done via password, but the insecure Vagrant key is inserted into it. This change makes Vagrant not ask for the password to sync the folders. The key can then be removed in the `clean` function explained above by adding the following command:
```
rm ~/.ssh/authorized_keys
```
This file only contains the insecure Vagrant key, so it can be removed safely.
Once the OVA and the Vagrant box are created and uploaded to S3 (packages-dev.wazuh.com), we can create the related AMI.
To perform this, it is necessary to have the AWS account configured in the system and the AWS CLI installed.
To generate the AMI, I executed the following command:
```
aws ec2 import-image --description "AL2_OVA_base" --disk-containers "file://containers.json" --profile wazuh-qa --region us-west-1
```
Where `wazuh-qa` is the configured AWS profile and `containers.json` has the following content:
```
[
  {
    "Description": "Amazon Linux 2 OVA",
    "Format": "ova",
    "UserBucket": {
      "S3Bucket": "packages-dev.wazuh.com",
      "S3Key": "vms/ova/amazonlinux-2.ova"
    }
  }
]
```
To check the status of the AMI generation, I used the following command:
```
aws ec2 describe-import-image-tasks --import-task-ids import-ami-XXXXXXXXXXXX --profile wazuh-qa --region us-west-1
```
And the previous command returns the following content:
```
{
    "ImportImageTasks": [
        {
            "Description": "AL2_OVA_base",
            "ImportTaskId": "import-ami-093a05b9ea18ad79d",
            "SnapshotDetails": [
                {
                    "DiskImageSize": 0.0,
                    "Status": "completed"
                }
            ],
            "Status": "pending"
        }
    ]
}
```
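Since the task may stay in a pending state for a while, a small polling loop can wait for it to finish; a sketch, keeping the placeholder task ID from above:

```
# Poll the import task until it completes.
while true; do
  status=$(aws ec2 describe-import-image-tasks \
    --import-task-ids import-ami-XXXXXXXXXXXX \
    --profile wazuh-qa --region us-west-1 \
    --query 'ImportImageTasks[0].Status' --output text)
  echo "Import status: ${status}"
  [ "${status}" = "completed" ] && break
  sleep 60
done
```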
Once the AMI is generated, its information can be consulted in the AWS console, and an instance can be launched using the generated AMI.
The AMI is created by default in `us-west-1`, but it is necessary to copy it to `us-east-1`.
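The copy can be done with the AWS CLI; a sketch using the AMI ID mentioned below (the target name is illustrative):

```
# Copy the imported AMI from us-west-1 to us-east-1.
aws ec2 copy-image \
  --source-region us-west-1 \
  --source-image-id ami-01801051d5737dbfe \
  --region us-east-1 \
  --name "AL2_OVA_base_wp1575" \
  --profile wazuh-qa
```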
The instance has been created with:
- `AL2_OVA_base_wp1575` as its name.
- `t2.xlarge` as its size.
- The security groups `sg-0bd10845ada7de977` and `sg-005cff996b335d497-default`.
- The key (`wp1575`) to access the instance.

These features are specified in https://github.com/wazuh/wazuh-jenkins/blob/master/src/org/wazuh/TFInstance.groovy and https://github.com/wazuh/wazuh-jenkins/blob/master/jenkins-files/packages/Packages_builder_OVA.groovy
Once the AMI is created, it is necessary to perform some steps to clean up the configuration that Amazon adds to it:
- Remove the `amazon-ssm-agent` package.
- Clean `/var/log/` and `/tmp/`.
- Run `yum autoremove`.
- Clean the SSH keys and history of the `root` and `wazuh-user` users.

After this, a new AMI will be generated, and this AMI will be used to build the OVA through the automatic process.
With this, the commands that have to be executed in the AMI are:
```
sudo yum remove -y amazon-ssm-agent
sudo rm -rf /var/log/*
sudo rm -rf /tmp/*
sudo yum autoremove
sudo rm ~/.ssh/*
sudo su
rm -rf /root/.ssh/*
cat /dev/null > /root/.bash_history && history -c && exit
cat /dev/null > ~/.bash_history && history -c && sudo shutdown -h now
```
After this, in the AWS console, I clicked on Actions -> Images and Templates -> Create image and gave it a name (Amazon-Linux2-for-OVA-wp1575) and a description (AMI created from AL2_OVA_base_wp1575 after clean up).
This AMI will provisionally be used to generate the OVA in the Packages_builder_OVA Jenkins pipeline.
The ID of the AMI is `ami-01801051d5737dbfe`.
I had to rebuild the Vagrant box and the OVA because they did not have the `git` tool installed. This tool is necessary for `wazuh_ova_generation.yml`:
Besides, it is necessary to rebuild the AMIs; summarizing, repeat the process.
- The AMIs were copied to `us-east-1`.
- New AMI IDs: `ami-059d636d3a622a7631`, `ami-0f463cf5ed41502eb`.
- They are referenced in `wazuh-jenkins`.
After all the steps mentioned above, a strange behavior has been found in the Packages_builder_OVA pipeline. It seems that the `provision.sh` script tries to execute a second time, after the script has been removed.
This behavior is seen in the following: https://ci.wazuh.info/job/Packages_Builder_OVA/224
The error is:
```
16:42:17 fatal: [Packages_Builder_OVA_B224_20230421143406]: FAILED! => {
16:42:17 "changed": true,
16:42:17 "cmd": [
16:42:17 "sh",
16:42:17 "provision.sh",
16:42:17 "staging",
16:42:17 "yes"
16:42:17 ],
16:42:17 "delta": "0:00:00.006995",
16:42:17 "end": "2023-04-21 14:42:17.678561",
16:42:17 "invocation": {
16:42:17 "module_args": {
16:42:17 "_raw_params": "sh provision.sh staging yes",
16:42:17 "_uses_shell": false,
16:42:17 "argv": null,
16:42:17 "chdir": "/var/provision/wazuh-packages/ova",
16:42:17 "creates": null,
16:42:17 "executable": null,
16:42:17 "removes": null,
16:42:17 "stdin": null,
16:42:17 "stdin_add_newline": true,
16:42:17 "strip_empty_ends": true,
16:42:17 "warn": true
16:42:17 }
16:42:17 },
16:42:17 "rc": 127,
16:42:17 "start": "2023-04-21 14:42:17.671566"
16:42:17 }
16:42:17
16:42:17 STDERR:
16:42:17
16:42:17 sh: provision.sh: No such file or directory
```
However, it has been proven that this script is executed: if an error occurs in the `provision.sh` script, it is reported. This can be seen in: https://ci.wazuh.info/job/Packages_Builder_OVA/222/console
It is necessary to investigate this behavior and finish the OVA generation development.
I was working on the tests and was able to validate that the OVA is built correctly locally; I am now working on the Jenkins build process.
I adapted the branches pointing to master, since the destination of this development was changed to 4.6.0; for this reason, I had to generate new packages in staging to be able to build the OVA.
I am debugging an error that occurs when trying to use the `provision.sh` script: the script exists in the path where it is searched for, but I cannot find why it is failing. I keep validating options.
```
16:24:54 TASK [Clean history] ***********************************************************
16:24:54 task path: /home/ec2-user/workspace/Packages_Builder_OVA/ansible-playbooks/wazuh_ova_generation.yml:34
16:24:54 changed: [Packages_Builder_OVA_B235_20230703192249] => {
16:24:54 "changed": true,
16:24:54 "cmd": "ls -la \"/var/provision/wazuh-packages/ova\"",
16:24:54 "delta": "0:00:00.003868",
16:24:54 "end": "2023-07-03 19:24:53.993063",
16:24:54 "invocation": {
16:24:54 "module_args": {
16:24:54 "_raw_params": "ls -la \"/var/provision/wazuh-packages/ova\"",
16:24:54 "_uses_shell": true,
16:24:54 "argv": null,
16:24:54 "chdir": null,
16:24:54 "creates": null,
16:24:54 "executable": null,
16:24:54 "removes": null,
16:24:54 "stdin": null,
16:24:54 "stdin_add_newline": true,
16:24:54 "strip_empty_ends": true,
16:24:54 "warn": true
16:24:54 }
16:24:54 },
16:24:54 "rc": 0,
16:24:54 "start": "2023-07-03 19:24:53.989195"
16:24:54 }
16:24:54
16:24:54 STDOUT:
16:24:54
16:24:54 total 44
16:24:54 drwxr-xr-x 3 root root 185 Jul 3 19:24 .
16:24:54 drwxr-xr-x 22 root root 4096 Jul 3 19:24 ..
16:24:54 drwxr-xr-x 3 root root 36 Jul 3 19:24 assets
16:24:54 -rwxr-xr-x 1 root root 6630 Jul 3 19:24 generate_ova.sh
16:24:54 -rw-r--r-- 1 root root 27 Jul 3 19:24 .gitignore
16:24:54 -rwxr-xr-x 1 root root 2020 Jul 3 19:24 Ova2Ovf.py
16:24:54 -rwxr-xr-x 1 root root 1109 Jul 3 19:24 provision.sh
16:24:54 -rw-r--r-- 1 root root 1205 Jul 3 19:24 README.md
16:24:54 -rwxr-xr-x 1 root root 1480 Jul 3 19:24 setOVADefault.sh
16:24:54 -rwxr-xr-x 1 root root 756 Jul 3 19:24 Vagrantfile
16:24:54 -rw-r--r-- 1 root root 5543 Jul 3 19:24 wazuh_ovf_template
16:33:20 TASK [Run provision script] ****************************************************
16:33:20 task path: /home/ec2-user/workspace/Packages_Builder_OVA/ansible-playbooks/wazuh_ova_generation.yml:37
16:33:20 fatal: [Packages_Builder_OVA_B235_20230703192249]: FAILED! => {
16:33:20 "changed": true,
16:33:20 "cmd": [
16:33:20 "sh",
16:33:20 "provision.sh",
16:33:20 "staging",
16:33:20 "yes"
16:33:20 ],
16:33:20 "delta": "0:00:00.002836",
16:33:20 "end": "2023-07-03 19:33:20.346639",
16:33:20 "invocation": {
16:33:20 "module_args": {
16:33:20 "_raw_params": "sh provision.sh staging yes",
16:33:20 "_uses_shell": false,
16:33:20 "argv": null,
16:33:20 "chdir": "/var/provision/wazuh-packages/ova",
16:33:20 "creates": null,
16:33:20 "executable": null,
16:33:20 "removes": null,
16:33:20 "stdin": null,
16:33:20 "stdin_add_newline": true,
16:33:20 "strip_empty_ends": true,
16:33:20 "warn": true
16:33:20 }
16:33:20 },
16:33:20 "rc": 127,
16:33:20 "start": "2023-07-03 19:33:20.343803"
16:33:20 }
16:33:20
16:33:20 STDERR:
16:33:20
16:33:20 sh: provision.sh: No such file or directory
```
I found that the error is possibly caused by the shutdown of the instance that occurs in the `clean` stage of the `steps.sh` script, which is part of `provision.sh`.
Removing this step, the OVA build succeeds, although I have encountered some performance problems; I am investigating whether they are caused by this change.
https://ci.wazuh.info/view/Packages/job/Packages_Builder_OVA/246/console
The error is that, after starting the OVA, the virtual machine stops responding and freezes. I'm investigating the possible reason.
On Hold by release protocol
The branches pointing to master in both wazuh-packages and wazuh-jenkins were adapted, and the creation of the OVAs was tested; both locally and through the pipeline, the OVAs are built correctly.
I found an error when running it on VirtualBox: after a moment, the VirtualBox terminal freezes. The VM itself continues to work (the Wazuh dashboard as well as the SSH connection). I don't know if this is due to the version of VirtualBox I have, so I'm going to request that someone else test it locally. This does not happen when running the OVA in VMware Player, where the OVA works correctly.
The modifications made for the change of the OVA operating system on branch 4.4.5 were applied; a build test was carried out and finished correctly:
https://ci.wazuh.info/job/Packages_Builder_OVA/264/console
It remains to run a test on the generated OVA to verify that all the Wazuh functionalities have been installed correctly.
The changes made on the `4.4.5` branch were applied, the execution of the OVA was tested, and the same problem was found as in the version created for `4.7.0`.
https://ci.wazuh.info/job/Packages_Builder_OVA/266/console
All possible causes were analyzed (memory, VirtualBox version, CPU, network, etc.), and it was found that the error occurs when the OVA VM is started in VirtualBox using the VBoxVGA video driver, which is loaded by default when we import the OVA:
The video driver was changed to VMSVGA, and we no longer had the freeze problem in the started VM window:
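The same fix can be applied from the command line before starting the VM; a sketch, where the VM name is whatever name VirtualBox assigned to the imported OVA:

```
# Switch the graphics controller of the imported VM to VMSVGA.
VBoxManage modifyvm "<imported-vm-name>" --graphicscontroller vmsvga
```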
After solving this problem, we proceeded to verify that the Wazuh stack has been deployed correctly and that FIPS is enabled on the server:
4.4.5 OVA testing done in https://github.com/wazuh/wazuh/issues/18115
Testing has finished. The PR https://github.com/wazuh/wazuh-documentation/pull/6287 will be merged as part of https://github.com/wazuh/wazuh/issues/18190.
It is necessary to research and choose a new operating system to use in the OVA package once CentOS 7 reaches its EOL, so that we can make the pertinent changes and carry out the necessary testing.
Currently CentOS 7 EOL is set for June 30, 2024.
Regards, Raúl.
Resolution
Research
The research of this issue is in https://github.com/wazuh/wazuh-packages/issues/1575#issuecomment-1471720669.
Extra configuration
FIPS mode should be enabled for the OVA following this documentation: https://aws.amazon.com/blogs/publicsector/enabling-fips-mode-amazon-linux-2/
This was manually tested on an EC2 instance with Amazon Linux 2: FIPS mode was configured and Wazuh was installed with the assistant. Everything works fine.
Testing
The testing of this issue is in https://github.com/wazuh/wazuh-packages/issues/1575#issuecomment-1480942295.
Generation of OVA
The generation of the OVA is described in https://github.com/wazuh/wazuh-packages/issues/1575#issuecomment-1486405552. It includes the generation of the base Vagrant box for the OVA.