va1entin opened 2 months ago
Hi, @va1entin
I think when you change something in the container definition, it recreates the container from scratch, so `force_restart` doesn't make a lot of sense here. The container will be deleted and created with the new parameters. `force_restart` is for restarting existing, unchanged containers.
For example, you have a running container `nginx` and want to just restart it without having to specify all of the original data again:
```yaml
- containers.podman.podman_container:
    name: nginx
    state: started
    force_restart: true
```
When you change the container definition, I don't see why you'd use the `force_restart` key; the container will just be recreated.
Hi @sshnaidm, thanks for your swift reply and your view on the matter!
> When you change the container definition, I don't see why you'd use the `force_restart` key; the container will just be recreated.
I think there is a use case for adding `force_restart` to the container definition rather than using a separate task as in your example, because it allows for a more concise playbook in some cases. You can have the container definition and the restart in one task rather than in two separate tasks.

If the container definition doesn't change, you can rest assured that the container will be restarted without needing a separate task, and if the definition does change, the changes will be applied.

In fact, that is essentially my use case: I have two container definitions in my playbook and usually only one of them changes at a time, e.g. because of version updates. With `force_restart` I can ensure that the other, unchanged container will still be restarted without needing a separate task.

It's a bit of a special use case, but it feels valid to me in the spirit of keeping playbooks DRY.
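To illustrate what I mean, a combined task could look like this (just a sketch; the image and port are placeholders, not from my actual playbook):

```yaml
# Desired behavior: one task holds the full definition and the restart.
# If nothing in the definition changed, the container is still restarted;
# if something changed, the container is recreated with the new parameters.
- containers.podman.podman_container:
    name: app
    image: docker.io/library/nginx:1.25
    ports:
      - "8080:80"
    state: started
    force_restart: true
```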
@va1entin let me understand your case better. Do you restart the second container only if the first changed? In that case it's better to use a condition:
```yaml
- name: First container
  podman_container:
    name: first
    ...
  register: first

- name: Second container
  podman_container:
    name: second
    ...

- name: Restart second if first has changed
  podman_container:
    name: second
    force_restart: true
  when: first is changed
```
I can't think of a use case where you need to restart the container every time you run the playbook. Do you have one?
@sshnaidm Currently, I don't restart the second container only if the first changed.

Restarting the container every time is not mandatory in my setup, but I like to do it to ensure that a change in container 1 doesn't disrupt the functionality of container 2, even when the latter is restarted. While it's hopefully rare, I have seen cases where only a restart of an app exposes a problem that wouldn't immediately appear during normal operation when something is changed in a container that this app relies on. Such problems might otherwise only manifest much later, when one wouldn't necessarily connect them to a container update done potentially days or weeks earlier, depending on when the container is next restarted.

Apart from changes in the containers themselves, the restart could also expose new problems between host and containers that might only show up much later - which makes it useful regardless of whether the container definitions changed.

In essence, containers should survive restarts at any point, and services should not be disrupted, or only minimally, when they happen - otherwise that's an indication of a problem in my setup. Having every playbook run restart the container gives me an opportunity to add that "restart test case" to my playbook very leanly.
I still don't fully understand your use case, but anyway - if we start requiring all settings to be in the container definition alongside `force_restart`, we'll lose the option to restart a container just by name. I think it might be solved by adding `state: restarted` - maybe, not sure; it will require some redesign of the current logic. I think we can have it as an RFE, though not for 1.x.x because it would be a breaking change.
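Hypothetically, such a state could be used like this (a sketch only - `state: restarted` does not exist in the module today, this is just the proposed idea):

```yaml
# Proposed, not implemented: restart an existing container by name only,
# without repeating its full definition.
- containers.podman.podman_container:
    name: nginx
    state: restarted
```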
Hi @sshnaidm, I have created a PR in #820 with an idea for how to fix this without requiring a new state or losing the ability to restart a container just by name. Looking forward to your thoughts!
We are also affected by this issue.
**Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)**

/kind bug

**Description**
Hi team, I'm filing this as a bug, but it could be just a docs bug if the behavior I'm describing is actually desired. When `force_restart: true` is set on a `podman_container` task and the container already exists, `podman_container` will ignore any parameters changed in the task.

For example, you could deploy an Ubuntu `22.04` container with `force_restart` set to `true`. The container then exists on the target host. Then you change the image to Ubuntu `24.04` and run the playbook again. The container will be restarted but the new image will be ignored. This applies at least to `image` and `command`, but looking at `make_started` I'd assume it applies to any parameter. The return statement here means the function will always return before any parameter changes to containers are made if `restart`/`force_restart` are `true`.

I also couldn't find anything in the docs describing this behavior, and IIRC the docker container module doesn't do it this way either.

Personally, I don't think `force_restart` should mean "do only a restart and literally NOTHING else". If I need `force_restart` for some reason, that means I have to set it to `false` every time I want to change anything about my container, or create a second task that only contains the container `name`, `image` and `force_restart` to have a "restart only" task, plus a separate task that contains my actual container parameters (see the sketch below). This seems very counterintuitive and might have unforeseen consequences that I don't see right now.

My proposal is to remove the return statement linked above. That means after the restart is done, the rest of `make_started` will still be executed and changed parameters will be reflected on the target host. There might be downsides to this particular approach that I don't see. For example, `update_container_result()` could be called twice in one run of `make_started()` - I think that will be a problem, or at the very least the restart information might be lost.

If you don't agree with my understanding of `force_restart` and do think that the default when `true` should be to ignore any changed parameters, one could also leverage the `recreate` parameter here: check whether it's `true` and only then apply changed parameters when `force_restart: true`. Currently, even with `recreate: true`, my container will not be recreated as long as `force_restart` is `true`, as far as I could tell.

Happy to create a PR for this by the way - just wanted to discuss whether this behavior is desired and which approach would be best before I do. :smiley:
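For reference, the two-task workaround mentioned above might look roughly like this (a sketch only; the container name, image and command are placeholders):

```yaml
# Task 1: the actual container definition, without force_restart,
# so that changed parameters are applied.
- containers.podman.podman_container:
    name: myapp
    image: docker.io/library/ubuntu:24.04
    command: sleep infinity
    state: started

# Task 2: a "restart only" task that does nothing but restart the
# container by name.
- containers.podman.podman_container:
    name: myapp
    state: started
    force_restart: true
```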
**Steps to reproduce the issue:**

1. Deploy a container with `force_restart: true` (a minimal playbook sketch follows after this list)
2. Change a container parameter like `image` or `command` in the playbook and run it again
3. The container will be restarted but your changed parameter will not be reflected; `image`/`command` or whatever will not be changed to what you set in step 2 until you set `force_restart` to `false` and re-run the playbook
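For completeness, a minimal reproduction playbook could look like this (a sketch, assuming a host with podman available; run it once, then change the image tag to `24.04` and run it again):

```yaml
# repro.yml - after the first run, bump the image tag and re-run:
# the container is restarted but keeps running the old image.
- hosts: all
  tasks:
    - containers.podman.podman_container:
        name: repro
        image: docker.io/library/ubuntu:22.04
        command: sleep infinity
        state: started
        force_restart: true
```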
**Describe the results you received:**
Containers are restarted but updated parameters are not reflected when `force_restart: true`.

**Describe the results you expected:**
Containers are restarted and updated parameters are reflected when `force_restart: true`.
**Additional information you deem important (e.g. issue happens only occasionally):**

**Version of the `containers.podman` collection:**
Either git commit if installed from git: `git show --summary`
Or version from `ansible-galaxy` if installed from galaxy: `ansible-galaxy collection list | grep containers.podman`

**Output of `ansible --version`:**

**Output of `podman version`:**

**Output of `podman info --debug`:**

**Package info (e.g. output of `rpm -q podman` or `apt list podman`):**

**Playbook you run with ansible (e.g. content of `playbook.yaml`):**

**Command line and output of ansible run with high verbosity**

Please NOTE: if you submit a bug about idempotency, run the playbook with the `--diff` option, like: `ansible-playbook -i inventory --diff -vv playbook.yml`

**Additional environment details (AWS, VirtualBox, physical, etc.):**