Closed M4rt1nCh closed 6 months ago
Thank you for your contribution! :sparkles:
This PR has been merged and the docs are now incorporated into main: https://ansible-collections.github.io/community.hashi_vault/branch/main
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 99.20%. Comparing base (e8c05f2) to head (e5ebb9b).
:umbrella: View full report in Codecov by Sentry.
@M4rt1nCh thanks for your work on this! I'm excited to add this content to the collections.
It's not necessary to squash your commits locally, nor to use conventional commits; they will all be squashed on merge. If you want to do that, it's OK, but once it's ready for review, it will make reviewing a little easier if each commit after that point is separate, so I can more easily see changes as we go along.
@briantist thanks - I'll consider adding commits then :-) From my perspective, the work is quite close to done for the moment. Of course, there is / will be more potential for improvement as the stuff gets used, for sure.
@briantist I'm going to need some support from your side on the rotate root credential integration test: At the end of the test case (so after the root credential rotation), I want / have to reset the database user password to a default value as a cleanup. I'm trying to run a very simple cleanup playbook:
```yaml
- name: Add container to inventory
  ansible.builtin.add_host:
    name: postgres
    ansible_connection: community.docker.docker
  changed_when: false

- name: Drop user in PostgreSQL
  become: true
  become_user: postgres
  ansible.builtin.shell: 'psql -c "DROP USER {{ db_user }};"'
  delegate_to: postgres
  connection: community.docker.docker
  changed_when: true
```
but I am not able to connect to the sample postgres container that is provisioned. Any suggestions on how to do that from the test control node (I've already added docker as a python dependency)? I'm getting:
```
TASK [module_vault_db_rotate_root_creds : Drop user in PostgreSQL] *************
task path: /root/ansible_collections/community/hashi_vault/tests/output/.tmp/integration/module_vault_db_rotate_root_creds-d96g6qzj-ΓΓΕΓΞ²ΕΓ/tests/integration/targets/module_vault_db_rotate_root_creds/tasks/module_vault_db_rotate_root_creds_cleanup.yml:18
Read vars_file 'integration_config.yml'
The full traceback is:
Traceback (most recent call last):
  File "/root/ansible/lib/ansible/executor/task_executor.py", line 165, in run
    res = self._execute()
          ^^^^^^^^^^^^^^^
  File "/root/ansible/lib/ansible/executor/task_executor.py", line 574, in _execute
    self._connection = self._get_connection(cvars, templar, current_connection)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/ansible/lib/ansible/executor/task_executor.py", line 953, in _get_connection
    connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/ansible/lib/ansible/plugins/loader.py", line 899, in get_with_context
    self._module_cache[path] = self._load_module_source(resolved_type_name, path)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/ansible/lib/ansible/plugins/loader.py", line 837, in _load_module_source
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 990, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1138, in get_code
  File "<frozen importlib._bootstrap_external>", line 769, in _code_to_timestamp_pyc
ValueError: unmarshallable object
fatal: [testhost -> postgres]: FAILED! => {
    "msg": "Unexpected failure during module execution: unmarshallable object",
    "stdout": ""
}
```
Sure, happy to see where I could help. Do you have that non-working cleanup pushed up? I didn't see it in there.
One thing that's not obvious about the "localenv" concept (which I kind of just made up) is that the idea is to support multiple ways of supplying the test requirements, and docker is one of those options (we actually used to have one that installed and ran Vault directly on the test machine, for example), but I know that abstraction is harder to see since it's the only defined localenv right now.
That means that the tests themselves really should not count on their components being run in docker and shouldn't be aware of it.
I'm wondering if maybe instead we can use the community.postgresql collection to talk directly to the DB and do that reset?
https://docs.ansible.com/ansible/latest/collections/community/postgresql/index.html
The way to pass information into the tests so that they know things like the database server address, names of objects, credentials, etc. is to add them to integration_config.yml, and the localenv role is responsible for templating that file (see tests/integration/targets/setup_localenv_docker/templates/docker-compose.yml.j2), so that's how you might give the relevant connection info to the postgresql collection content.
Let me know if that sounds viable or I've misunderstood anything.
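To make the direct-connection idea concrete, here's a rough sketch of what the reset task could look like. The community.postgresql.postgresql_user module and its login_* parameters are real; the vault_pg_* variable names are hypothetical placeholders that would have to be supplied via the templated integration_config.yml:

```yaml
- name: Reset the database user password over the network (sketch)
  community.postgresql.postgresql_user:
    login_host: "{{ vault_pg_host }}"                 # hypothetical var from integration_config.yml
    login_user: "{{ vault_pg_admin_user }}"           # hypothetical
    login_password: "{{ vault_pg_admin_password }}"   # hypothetical
    name: "{{ db_user }}"
    password: "{{ vault_pg_default_password }}"       # hypothetical
```

This way the task runs on the controller and doesn't need to know whether the DB lives in a container.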
I still haven't looked closely at everything in here but I do want to say this is looking amazing, and you've done an incredible job with the testing, the localenv system, getting a new service (container) into the integration tests, etc. Seriously, great job!
Also I noticed most of the coverage misses are in exceptions and that kind of thing. That's totally normal and we can't test most of that in integration testing, we'll use units to get at those edge cases, and I can help with that kind of stuff too.
Hey, I'm on holiday this week; I will get back to you next week :)
Take your time, enjoy!
@briantist I'm back and I've pushed my (non-working) branch.
What I'm trying to achieve here is the creation of a user in the postgres database; however, I'm not able to do so. It also doesn't matter whether I'm using a shell command or any module from community.postgresql.
The inventory of the "setup" playbook is not aware of any of the containers. For the other tests that are executed, you basically do not need that connection since you communicate via APIs; hence the inventories for the existing integration tests contain only the test container.
Any idea / advice is appreciated.
It also doesn't matter whether I'm using a shell command or any module from community.postgresql.

It matters only in the sense that you're using the docker connection so that the shell runs in the postgres container; you could use the shell module outside the container and connect to the DB by its address, but then you'd be missing the psql command.
What I meant to convey is that none of the integration tests are (or should be) aware of the containers. By having the IP (for example) of the PGSQL database, you could use that with the postgres collection to make a direct connection; in this way it doesn't matter where the DB is running.
The way to pass information into the tests so that they know things like the database server address, names of objects, credentials, etc. is to add them to integration_config.yml, and the localenv role is responsible for templating that file (see tests/integration/targets/setup_localenv_docker/templates/docker-compose.yml.j2), so that's how you might give the relevant connection info to the postgresql collection content.
What I mentioned above is how you can ensure that the localenv (which happens to be using docker) can pass the PGSQL information into the integration tests, so that at runtime, they know where to connect to.
I'm super short on time recently, but if I can find some, I can pull down your branch and see if I can demonstrate it, which might be better than my explanation :)
@briantist I think I made it work - the only "problem" is still that I have to add the community.postgresql collection as a dependency. I've added it to tests/integration/targets/setup_localenv_docker/files/requirements/requirements.yml, but it seems that this is the wrong place?! I ultimately need to ensure that this collection is installed on the ansible-test controller. Your guidance is appreciated.
Thanks
Great news! :tada:
I think I made it work - the only "problem" is still that I have to add the community.postgresql collection as a dependency. I've added it to tests/integration/targets/setup_localenv_docker/files/requirements/requirements.yml, but it seems that this is the wrong place?! I ultimately need to ensure that this collection is installed on the ansible-test controller. Your guidance is appreciated.
Ah that's correct, that file is not quite the right place. That file is for dependencies that are needed to run the localenv setup.
There's actually no place in the collection that describes ansible collection dependencies needed for integration tests, so that is kind of confusing. Basically, it's up to the developer to install those locally. Since those of us who work on collections often have many checked out to work on, we often want to use checked out versions and not installed versions.
Anyway, up until now, we had no dependencies for the integration tests themselves. We could add a file like tests/integration/requirements.yml for that now that we have some. It won't be used automatically, and it's not a defined official file, but it's a good location for now if you want to add it. I can't guarantee we won't have to move it out of there at some point.
It seems you've already found the appropriate way to add the dependency for CI though: by installing it in the workflow :)
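A tests/integration/requirements.yml along those lines could be as small as this (a sketch of a standard ansible-galaxy requirements file; as noted above, it would not be picked up automatically):

```yaml
# tests/integration/requirements.yml (proposed location, not auto-installed)
collections:
  - name: community.postgresql
```

It could then be installed locally or in CI with `ansible-galaxy collection install -r tests/integration/requirements.yml`.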
@M4rt1nCh I see you're addressing the failures on python<3.8.
These are not actually Python issues; rather, the feature you're using was added in hvac==2.0.0, which doesn't support those Python versions. As a result, hvac==1.2.1 is being installed on the target. I think in the requirements for those modules you can specify that they need hvac>=2.0.0 rather than a specific Python version.
Also, in the method itself, instead of checking the Python version, catching AttributeError in the try/except and then having the error say it requires hvac>=2.0.0 is the better approach (this will also fail on python>=3.8 if an older hvac is installed, for example).
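A minimal sketch of that pattern (the names FakeOldHvacDatabase and rotate_root_credentials are made up for illustration; the real module would attempt the corresponding hvac call):

```python
class FakeOldHvacDatabase:
    """Stands in for hvac<2.0.0, where the new database method does not exist."""


def rotate_root_credentials(database_api, connection_name):
    # Instead of checking the Python version, try to use the feature and
    # translate a missing attribute into a clear version-requirement error.
    try:
        method = database_api.rotate_root_credentials
    except AttributeError:
        raise RuntimeError("This method requires hvac>=2.0.0")
    return method(name=connection_name)


try:
    rotate_root_credentials(FakeOldHvacDatabase(), "my-postgres")
except RuntimeError as exc:
    print(exc)  # -> This method requires hvac>=2.0.0
```

The same error fires regardless of the interpreter version, which is exactly the point: the requirement is on hvac, not on Python.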
Let me just check on how to properly skip those in the integration tests.
I don't see an appropriate integration alias for skipping the integration tests. In this case, I think adding a when: condition early in the integration target that uses the Python version is fine; it should be straightforward enough.
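That when: gate could look something like this (a sketch, not the collection's actual code; ansible_python_version is a standard Ansible fact, and the 3.8 cutoff follows the hvac>=2.0.0 requirement discussed above):

```yaml
- name: Skip this target where the required hvac version cannot be installed
  ansible.builtin.meta: end_host
  when: ansible_python_version is version('3.8', '<')
```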
I'm more aggressively dropping older ansible-core versions, and older Python will be dropped by extension due to ansible-core's Python support, so core 2.14 will probably be dropped from the collection (and CI) in a few months.
Thanks @briantist for the input. I'll take care of that later. :+1:
@briantist the build finally passed successfully. From my perspective, the PR is ready to be merged and released. Feel free to change the approach of loading the community.postgresql collection to a requirements.yml file; for me, the approach of adding a task to the pipeline worked as well.
Let me know if there is anything else - looking forward to seeing this released as our team is waiting for these modules :)
Thanks so much @M4rt1nCh ! The only thing still missing from my perspective is the remaining test coverage, but I think we can fill that in with units.
I intend to do a thorough review this weekend, and I can add some unit tests then, and go over the rest of the code.
I will push my changes to your branch, so please remember to pull locally if you see I've put some changes up.
I will ask you to take a look at any changes I put up before merging, and then from there I can cut a release pretty quickly.
Cheers!
Awesome - thanks @briantist . A :+1: for the unit test contribution. I'm more than happy to do a final review then :smiley:
Hi @M4rt1nCh ,
I may have been overconfident that I could review everything over this weekend, but I hope some of the changes I've pushed up (mostly on vault_database_static_roles_list) will help you to make some of the same changes on the others. I can also do those, but it may take longer for me to complete them all.
I will try to summarize what I've changed in the commits to add context to the changes themselves, I hope it's helpful.
All changes listed here relate to vault_database_static_roles_list (just the one I chose to start on) unless I mention otherwise.

- version_added was updated to 6.2.0, which is the next expected version; this number is important not just for proper documentation but also for automatic changelog generation of new content
- I(option) changed to O(option), and ensuring that option values use V(value) instead of C(value) or "value" or whatever else
- I(option=value) changed to O(option=value)
- return values use RV(option). Both forms support sub-entries (of arbitrary levels) like O(option.sub1) or RV(option.sub1.whatever), but the sub options must actually be defined
- RV()
- the path option for this module (and possibly the others too?) has been changed to engine_mount_point
  - path is usually used as a path underneath a backend, or as a full generic path
  - it defaults to hvac's default (which is database in this case, matching the CLI functionality)
- a roles return value, which directly gives the list of roles (keys), since that's the likely data you really want out of this module
  - because keys is a method on dict objects, you can't use the Ansible dot (.) syntax like result.data.keys to get at that value; you must use result.data["keys"] syntax. I've added an entry in the notes documentation section about this
  - roles return value
- {"errors": []} so the data entry doesn't exist at all
- (db -> database). I didn't do this because I felt it might make it harder for you to review the changes I made in git, so please feel free to change this any time

Unfortunately I think I've run out of time to continue today; if you can find the time to apply similar changes to the other modules, it is much appreciated. Otherwise I will get back to these when I can.
Please let me know if you disagree with any of the changes above or have questions about them. Thanks!
Oh also, if you're going to run the units locally, here's some tips:
ansible-test units --docker default --python 3.10 tests/unit/plugins/modules/test_vault_database_static_roles_list.py
--docker-network
is not needed--python <ver>
then it runs the unit tests against all valid python versions. This is very useful before pushing up changes, but it can be quite slow (mostly because it needs to install the requirements for every python version) when you're trying to develop or fix the tests quickly, so pick one version for that case as above.@briantist I agree with the suggestions you've made. I've already started working on some of the changes / improvements as well as the tests. Keeping you posted, but I'm sure we're able to move this forward sooner than later :-)
Probably one thing I would start adding as a best practice is to use FQCN for tasks and give all tasks a name as well. ansible-lint is not very happy with the playbooks added.
Probably one thing I would start adding as a best practice is to use FQCN for tasks and give all tasks a name as well. ansible-lint is not very happy with the playbooks added.

Please do not apply ansible-lint formatting or suggestions to the existing playbooks and roles. If you want to use its suggestions on content you're adding or changing in the PR already, that's OK, but if we were going to do a big formatting change, I would want it to be in another PR so that we could add the formatting-only commit to ignore-revs :)
(also we could use that PR to discuss the specific rules I would and would not want to use, which is too distracting and spicy :hot_pepper: for this PR :grin:)
I was only thinking about the "new" stuff, so we're fully aligned on that
Hey @briantist . I've updated integration and unit testing in the branch. Are things perfect? No, but I think it is a minimum baseline to keep a certain quality. I've tried to implement as much as possible from the suggestions you've made. Feel free to improve the recent status, I'm currently not planning any additional changes. Also, please forgive the commit messages that are not that creative... In case you want to rename modules (which I haven't done so far), feel free to do so.
@M4rt1nCh I started looking at these to fill out the remaining coverage gaps. I added two commits to update vault_database_connection_configure, but as I'm looking at the data return value, I wanted to ask if you're set on keeping it:
```yaml
data:
  description: The raw result of the operation.
  returned: always
  type: dict
  sample:
    data:
      ok: true
      status: "success"
      status_code: 204
```

```python
return_data = {
    "status": "success",
    "status_code": raw.status_code,
    "ok": raw.ok,
}
```
This pattern is in a few of the other database modules too, ones that probably correspond to APIs that have no other return value?
I'm inclined to remove these because:

- the status string doesn't correspond to an API return value
- ok and status become redundant: if the module succeeds, then we know the status and it is OK, and if it doesn't, then the task fails.

I wanted to check with you first to see if you had other reasons for using this data structure and wanted to argue for keeping it.
One more thing regarding the checks themselves for status codes besides 200 and 204: did you encounter this scenario at all?

```python
return_data = {
    "status": "success",
    "status_code": raw.status_code,
    "ok": raw.ok,
}
if raw.status_code not in [200, 204]:
    return_data["status"] = "failure"
    module.fail_json(
        data=return_data,
        msg="Failed to create connection. Status code: %s" % raw.status_code,
    )
```
The reason I ask is that, by default, hvac will not return successfully for other status codes. In theory you can configure hvac that way, but in this collection we don't, so these code paths should never be hit.
If you encountered this code path at all in your testing I'd like to know more about it, otherwise I may consider removing these too (and thus avoid needing to add tests for them).
If we do want to keep them, then I'd consider including more of the error information in them because at the moment, it would return only a generic message and not much information from the original response.
Thanks!
I was reviewing this page re the HTTP status codes: https://developer.hashicorp.com/vault/api-docs
I haven't seen anything other than 204, to be honest. Since I'm not a Vault power user (so far), I thought it might be good to check the HTTP status code, but I don't insist on it :) Next, we could also simplify what the module returns in case the result is a 204 (so no status, etc.). I thought it might make it easier to check whether a task was successful when used in a playbook, but you're actually right: as the module reports changed, we can / must assume that the action was performed successfully.
Feel free to go ahead with the "simplification". I could probably also do some changes next week as well.
@M4rt1nCh I've made several updates to the modules, tests, and docs. We're at 100% patch coverage now, and I think I've found all the docs changes and other small things.
Tomorrow I will take a final look with fresh eyes and hopefully we'll get this merged!
Please take a look at the changes I've made to ensure they look good to you and hopefully catch any errors I might have made. Thanks!
Hey @briantist. Nice work. Happy to see that we finally made it. Thanks for improving the stuff I added "quick and dirty" - I've also learned quite a few new things and concepts during this contribution. From my pov, we are good to go. Looking forward to the next release :smile:
Thanks again for your work on this @M4rt1nCh !
SUMMARY
Add ansible modules to manage database connections, dynamic roles and static roles.
Fixes #425
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
Added new modules
ADDITIONAL INFORMATION
N/A