fcaffieri closed this issue 10 months ago.
```
fatal: [Agent1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Load key \"/wazuh-qa/poc-tests/utils/keys\": error in libcrypto\r\nvagrant@192.168.56.13: Permission denied (publickey,password).", "unreachable": true}
```
Add a new `test_install.py` test file with some basic install validations. The script was tested on the agent, with all the scenarios passing:
```
============================= test session starts ==============================
platform linux -- Python 3.8.10, pytest-7.4.3, pluggy-1.3.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /tmp/tests
collecting ... collected 4 items

test_install.py::test_wazuh_user PASSED                                  [ 25%]
test_install.py::test_wazuh_group PASSED                                 [ 50%]
test_install.py::test_wazuh_configuration PASSED                         [ 75%]
test_install.py::test_wazuh_control PASSED                               [100%]

============================== 4 passed in 0.02s ===============================
```
Pushed changes on branch `enhacenment/4666-dtt1-poc-test-module`.
- Replace `test_manager` & `test_agent` with `test_install.py`
- Add more constants and move them into an independent file inside a new `helpers` directory
- Create a `utils.py` file to store the utility functions used in the tests
- Add a `test_wazuh_daemons` case to `test_install.py` & update the file to use the helpers
- Create `test_registration` python and yaml test files
Output of the tests executed locally:
```
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.3, pluggy-1.3.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /tmp/tests
collecting ... collected 4 items

test_registration.py::test_agent_is_registered_in_server PASSED          [ 25%]
test_registration.py::test_register_logs_were_generated PASSED           [ 50%]
test_registration.py::test_client_keys_file_exists PASSED                [ 75%]
test_registration.py::test_agent_key_is_in_client_keys PASSED            [100%]

============================== 4 passed in 0.06s ===============================
```
- Add a `file_monitor` function to utils; it helps us find a given string in a file within a timeout _(a condensed version of the integration framework's `FileMonitor`)_
- Replace `find_string_in_file` with this new function
- Add `test_connection` python test file and test playbook
```
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.3, pluggy-1.3.0 -- /usr/bin/python3
cachedir: .pytest_cache
rootdir: /tmp/tests
collecting ... collected 2 items

test_connection.py::test_agent_connects_to_manager PASSED
test_connection.py::test_agent_connection_status PASSED

============================== 2 passed in 0.12s ===============================
```
Change `when: target.stdout == 'server'` to `when: "'Manager' in inventory_hostname"`.
- Add `test_basic_info` python test file and playbook (a very simple test; it could be extended)
- Add `restart` provision playbook with `test_restart` playbook and python test file
- Add a util to check the service status
- Complete the `test_restart` python test function
- Add `test_stop` playbook and python test file
- Add `test_uninstall` python test file & ansible playbooks
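The service-status util could take a shape like the sketch below, which parses the `wazuh-control status` output lines (`<daemon> is running...` / `<daemon> not running...`). The helper names and parsing are assumptions for illustration, not the committed implementation.

```python
# Hypothetical sketch of a service-status helper: turn wazuh-control status
# output into a {daemon_name: running?} mapping.


def parse_control_status(output: str) -> dict:
    """Map each daemon in `wazuh-control status` output to True/False."""
    status = {}
    for line in output.splitlines():
        line = line.strip()
        if line.endswith("is running..."):
            status[line.split()[0]] = True
        elif line.endswith("not running..."):
            status[line.split()[0]] = False
    return status


def all_daemons_running(output: str) -> bool:
    """True only when at least one daemon was reported and none is stopped."""
    statuses = parse_control_status(output)
    return bool(statuses) and all(statuses.values())
```

In a test, the output string would come from running `/var/ossec/bin/wazuh-control status` (path assumed) via `subprocess`, and the restart/stop tests would assert on `all_daemons_running` before and after the service action.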
Improve the tests' ansible playbooks by applying the following template:
```yaml
- hosts: all
  become: true
  vars:
    path: "/tmp"
    test_directory: "tests"
    test_file: "TEST_FILE.py"
    curl_path: /usr/bin/curl
  tasks:
    - name: Test TEST_NAME
      block:
        - name: Execute tests
          command: "pytest {{ test_file }} --target={{ target }} -vl"
          args:
            chdir: "{{ path }}/{{ test_directory }}"
          vars:
            target: "{% if 'Agent' in inventory_hostname %}agent{% elif 'Manager' in inventory_hostname %}server{% endif %}"
          register: pytest_log
      always:
        - name: Save pytest output to a file
          copy:
            remote_src: yes
            content: "{{ pytest_log.stdout }}"
            dest: "{{ path }}/{{ test_directory }}/{{ test_file }}.log"
```
The results are saved to `/tmp/tests/TEST_NAME.log` as raw pytest output, using the `-l` argument to get some extra info in the error tracebacks. This method is viable, but there are other options, such as:
- `pytest-tinybird`: sends the metrics of the test execution to a TinyBird API, and can be used with the `grafana-tinybird` plugin to integrate it with Grafana -> source.
- `pytest-influxdb`: exports the test results to an InfluxDB instance so they can be picked up by Grafana later (this Reddit thread can be useful -> https://www.reddit.com/r/grafana/comments/dfhl3s/send_python_output_to_grafana/)
In my opinion, it may be better to just get the results from the raw pytest output; even so, the InfluxDB solution seems interesting.
Tried the `influxdb` implementation but faced some problems in the setup; it is not easy to make it work correctly, since the `pytest-influxdb` plugin's documentation is not up to date. Concluded in leaving this implementation aside.

Implemented `pytest-tinybird` instead; this solution seems more viable, and the implementation is easy and straightforward:
```yaml
environment:
  TINYBIRD_URL: https://api.us-east.tinybird.co # Depends on the region.
  TINYBIRD_DATASOURCE: <DATASOURCE_NAME> # Will be created if not existent.
  TINYBIRD_TOKEN: <TOKEN_API> # Token with write permissions.
```
Then add `--report-to-tinybird` to the test execution.

Finished the modifications on the test module & created a pull request to merge this module with the rest of them.
Reviews answered at https://github.com/wazuh/wazuh-qa/issues/4524#issuecomment-1871507629
Description
This issue aims to design and create a PoC of the Test module.
This module is responsible for launching the tests against the allocated and provisioned infrastructure. It will work as a black box with a clear input and output.
For this PoC, the module will test the following use cases: