Closed: mdavis-xyz closed this issue 1 year ago.
Files identified in the description:
plugins/modules/s3_lifecycle.py
If these files are inaccurate, please update the component name section of the description or use the `!component` bot command.

cc @jillr @markuman @s-hertel @tremble
@mdavis-xyz Can you debug, e.g. with `q`, and see what the value of the `changed` variable is two lines above? https://github.com/ansible-collections/community.aws/blob/bdb7c9f26f6ff39654cd90e2dd18605a6e3b026c/plugins/modules/s3_lifecycle.py#L467

```python
import q
q(changed)
```

and `cat /tmp/q` after it is executed.
I've never heard of the `q` library before. Is this how you're supposed to debug Ansible modules? I've always struggled to debug code changes I've written, because even print statements don't work. We should add this to the contribution docs for this collection, and for Ansible in general.
Just to be clear: regardless of what `compare_and_update_configuration` does, `put_bucket_lifecycle_configuration` will always be called.

There are really two issues here. For the second one, we can fix that with `changed |= True` in an `else` after that `try`. For the first one, perhaps everything after `compare_and_update_configuration` is called should be inside an `if changed:`?
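To make the two suggested fixes concrete, here is a minimal, self-contained sketch with the AWS call stubbed out; `put_configuration` and `apply_lifecycle` are hypothetical stand-ins for `client.put_bucket_lifecycle_configuration` and the surrounding module code, and the rule comparison stands in for `compare_and_update_configuration`:

```python
# Sketch of the two proposed fixes (stubbed, hypothetical names).
put_calls = []  # records every (stubbed) write to the bucket


def put_configuration(config):
    put_calls.append(config)


def apply_lifecycle(old_rules, desired_rules):
    changed = old_rules != desired_rules
    if changed:  # fix 1: skip the write API entirely when nothing differs
        try:
            put_configuration(desired_rules)
        except ValueError:  # stand-in for the "At least one action..." error
            changed = False
        else:
            changed |= True  # fix 2: only report changed once the put succeeds
    return changed
```

With this guard in place, an idempotent run never touches the write API, so concurrent runs with an unchanged configuration cannot clash.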
I'll try the `q` thing later today. It will take me a while, because figuring out how to run a clone of a module without polluting my already-installed modules is not something I find obvious or easy.
Ok I couldn't figure out how to run a playbook using a local clone of the module, without messing with my real global installation. (Are there docs for that somewhere? As an Ansible user I never need to touch galaxy or anything like that, because I only use the standard pre-installed collections.)
So I just created a whole new VM to test in, and modified the file in the globally installed collection.
`/tmp/q` is:

```
0.3s create_lifecycle_rule: True
1.0s create_lifecycle_rule: False
1.4s create_lifecycle_rule: False
1.1s create_lifecycle_rule: False
1.2s create_lifecycle_rule: False
1.1s create_lifecycle_rule: False
1.0s create_lifecycle_rule: False
1.2s create_lifecycle_rule: False
1.1s create_lifecycle_rule: False
1.1s create_lifecycle_rule: False
0.7s create_lifecycle_rule: False
0.4s create_lifecycle_rule: False
```
So it was True the first time, as expected, and False the remainder, as expected.
I tried wrapping the put and the `try` inside an `if` statement. That worked as expected, and now the MWE passes. (I'm not sure how to handle `_retries`.)
```python
(changed, lifecycle_configuration) = compare_and_update_configuration(client, module,
                                                                      old_lifecycle_rules,
                                                                      new_rule)
if changed:
    # Write lifecycle to bucket
    try:
        client.put_bucket_lifecycle_configuration(
            aws_retry=True,
            Bucket=name,
            LifecycleConfiguration=lifecycle_configuration)
    except is_boto3_error_message('At least one action needs to be specified in a rule'):
        # Amazon interpreted this as not changing anything
        changed = False
    except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:  # pylint: disable=duplicate-except
        module.fail_json_aws(e, lifecycle_configuration=lifecycle_configuration, name=name, old_lifecycle_rules=old_lifecycle_rules)

    _changed = changed
    _retries = 10
    while wait and _changed and _retries:
        # We've seen examples where get_bucket_lifecycle_configuration returns
        # the updated rules, then the old rules, then the updated rules again,
        time.sleep(5)
        _retries -= 1
        new_rules = fetch_rules(client, module, name)
        (_changed, lifecycle_configuration) = compare_and_update_configuration(client, module,
                                                                               new_rules,
                                                                               new_rule)
else:
    _retries = 0

new_rules = fetch_rules(client, module, name)
module.exit_json(changed=changed, new_rule=new_rule, rules=new_rules,
                 old_rules=old_lifecycle_rules, _retries=_retries,
                 _config=lifecycle_configuration)
```
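The `_retries`/`_changed` loop above is a generic poll-until-stable pattern. A stripped-down sketch, with the AWS fetch stubbed and no sleeping (`wait_until_stable` and the responses are hypothetical, just to show the shape of the loop):

```python
import itertools


def wait_until_stable(fetch, desired, retries=10):
    """Poll fetch() until its result matches desired, or retries run out.

    Returns the number of retries remaining (0 means we gave up while
    the rules were still flapping)."""
    while retries and fetch() != desired:
        retries -= 1
    return retries


# Simulate eventual consistency: the API returns the old rules twice,
# then settles on the new rules.
responses = itertools.chain(["old", "old", "new"], itertools.repeat("new"))
remaining = wait_until_stable(lambda: next(responses), "new")
```

The real module adds a `time.sleep(5)` between polls and re-runs the full rule comparison each time, but the termination logic is the same.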
What's the best way to add a unit/integration test for this? My MWE uses multiple hosts. Is that easy to do with the existing test setup? Or is there a way to run a `loop` concurrently on one host?
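For the concurrency side, the multi-host race can at least be emulated without AWS at all, e.g. with threads against an in-memory stub (all names here are hypothetical, not the module's actual test setup):

```python
import threading

bucket = {"rules": ["expire-after-30-days"]}  # in-memory stand-in for S3
put_calls = 0
put_lock = threading.Lock()


def run_module(desired_rules):
    """Emulates one idempotent s3_lifecycle run: read, compare, maybe write."""
    global put_calls
    current = list(bucket["rules"])  # read-only call, never conflicts
    if current != desired_rules:     # the proposed fix: write only on a real diff
        with put_lock:
            bucket["rules"] = desired_rules
            put_calls += 1


# Eight concurrent "hosts" all applying the configuration that is already set.
threads = [threading.Thread(target=run_module,
                            args=(["expire-after-30-days"],))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the guard in place, all eight runs stay read-only, which is exactly the behaviour the bug report asks for.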
> Ok I couldn't figure out how to run a playbook using a local clone of the module, without messing with my real global installation. (Are there docs for that somewhere?)
Yeah, basically you can also place hacky/patched modules in your role's or playbook's `library` folder. The only thing you must change then is to call `s3_lifecycle:` instead of `community.aws.s3_lifecycle:`. See https://docs.ansible.com/ansible/2.8/user_guide/playbooks_best_practices.html#directory-layout

```
library/    # if any custom modules, put them here (optional)
```
> What's the best way to add a unit/integration test for this? My MWE uses multiple hosts. Is that easy to do with the existing test setup? Or is there a way to run with a loop concurrently on one host?
Maybe @goneri or @tremble got an idea about testing.
Note for testing: my MWE only covered a non-empty list of rules. The same change applies when removing rules; in my PR I wrote the same change twice. For an integration test, we may want to duplicate the last two tasks in the MWE with `present` changed to `absent`, to exercise that second path.
Summary
When `s3_lifecycle` is run and there are no changes to make, it still calls `put_bucket_lifecycle_configuration`.

My use case is that I am running a playbook multiple times concurrently, for a lifecycle configuration which is not changing, and I'm getting errors because of concurrency clashes. If I'm not changing the lifecycle, I expect only read-only calls to S3, which shouldn't clash.
This module should get the existing lifecycle config, compare it to what we want, and only if it differs, put the new lifecycle.
Issue Type
Bug Report
Component Name
s3_lifecycle
Ansible Version
Collection Versions
AWS SDK versions
Configuration
OS / Environment
Amazon Linux 2
Steps to Reproduce
playbook.yaml
hosts.yaml
Run with:
Expected Results
By the time we get to the last task, the bucket already has the lifecycle config we want, so the last task should report success with no change, without throwing any errors. boto3 should only be used for read-only calls; no put call should be made by Ansible.
Actual Results
i.e. some reported success, with no change. Others threw an error.
Code of Conduct