edsantiago closed this issue 3 weeks ago.
I see the problem, fix should be simple.
Nvm, I take that back. I thought the flag was not being propagated correctly into the cleanup process, but trying this out locally I see the correct conmon args being set for both cases:
--exit-command-arg --events-backend --exit-command-arg file
If the args are really the problem then my cleanup tracer from https://github.com/containers/podman/pull/23487 should log that. So I guess I should get that PR ready.
Ok, I think I know what is triggering this. The died event is created in syncContainer(), and syncContainer() is called basically every time we lock the container. This means that any other command that acts on the container may notice the conmon exit file and create the died event based on that. So even something like podman (un-)mount, podman ps, or really any other command that acts on this container will trigger this tiny race...
The conclusion: the test is not parallel safe. (And I fear every test using --events-backend may not be parallel safe, at least all the ones that need the died event.)
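To make the mechanism concrete, here is a minimal Go sketch of that race. This is not podman's actual code: EventLogger, fileLogger, stdoutLogger and this simplified syncContainer are hypothetical stand-ins, and the shared emitted map stands in for the persisted container state. The point it illustrates is that whichever process syncs the container first after conmon writes the exit file emits the died event through its own backend.

```go
// Hypothetical sketch of the died-event race; not podman's actual code.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// EventLogger stands in for podman's pluggable events backend
// (journald, file, none); the concrete type is chosen per process.
type EventLogger interface {
	WriteDied(ctrID string, exitCode int)
}

type fileLogger struct{ path string }

func (f fileLogger) WriteDied(ctrID string, exitCode int) {
	line := fmt.Sprintf("died %s exit=%d\n", ctrID, exitCode)
	_ = os.WriteFile(f.path, []byte(line), 0o644)
}

type stdoutLogger struct{}

func (stdoutLogger) WriteDied(ctrID string, exitCode int) {
	fmt.Printf("died %s exit=%d\n", ctrID, exitCode)
}

// syncContainer mimics the sync step that runs whenever the container is
// locked: if conmon's exit file exists and the died event has not been
// emitted yet, emit it through whatever logger THIS caller was configured
// with. The first caller wins.
func syncContainer(ctrID, exitFile string, logger EventLogger, emitted map[string]bool) error {
	data, err := os.ReadFile(exitFile)
	if err != nil {
		if os.IsNotExist(err) {
			return nil // container still running, nothing to do
		}
		return err
	}
	if emitted[ctrID] {
		return nil // died event already recorded
	}
	code, _ := strconv.Atoi(strings.TrimSpace(string(data)))
	logger.WriteDied(ctrID, code)
	emitted[ctrID] = true
	return nil
}

func main() {
	// Pretend conmon already wrote the exit file for container "abc".
	exitFile := "/tmp/abc-exit"
	_ = os.WriteFile(exitFile, []byte("0"), 0o644)
	emitted := map[string]bool{}

	// A concurrent "podman ps"-like caller configured with a different
	// backend syncs first and claims the died event ...
	_ = syncContainer("abc", exitFile, stdoutLogger{}, emitted)
	// ... so the original "--events-backend file" caller has nothing
	// left to write to its file backend.
	_ = syncContainer("abc", exitFile, fileLogger{path: "/tmp/events.log"}, emitted)
}
```

The sketch runs both callers in one process for simplicity; in practice they are separate podman processes, which is exactly why the event can land in journald instead of the file backend.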
If you want to verify this yourself (assuming journald is the default logger):
terminal 1
while :; do bin/podman --events-backend file run --rm quay.io/libpod/testimage:20240123 true; done
terminal 2
bin/podman --events-backend file events
terminal 3
bin/podman events
terminal 4 (without unshare when running as root)
while :; do bin/podman unshare bin/podman mount; done
After just a few seconds I got the died event in terminal 3.
When you say "is not parallel safe", do you mean "not currently but it needs to be fixed," or "will never be parallel safe and we cannot guarantee that --events-backend=file will actually ever work"?
> will never be parallel safe and we cannot guarantee that --events-backend=file will actually ever work
This is expected behavior: you select the events backend per podman process, so that does not stop another podman process with a different event logger from creating events for a container created by the original process. This is totally normal behavior given how podman works (I know it is most certainly surprising and not POLA).
If you want to make sure all tests see the same events, you must make sure all podman commands run are using the same events backend.
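One way a test harness could enforce that (a sketch, not part of podman's test suite; podmanCmd is a hypothetical helper) is to route every podman invocation through a wrapper that always passes --events-backend:

```go
// Hypothetical test helper that pins every podman invocation to one
// events backend, so all commands write and read the same event log.
package main

import (
	"fmt"
	"os/exec"
)

// podmanCmd builds a podman command that always carries the chosen
// events backend, so no invocation falls back to the system default.
func podmanCmd(backend string, args ...string) *exec.Cmd {
	full := append([]string{"--events-backend", backend}, args...)
	return exec.Command("podman", full...)
}

func main() {
	// Both the command that creates the container and the command that
	// later reads events use the "file" backend, so the died event can
	// only land in the file log.
	run := podmanCmd("file", "run", "--rm", "quay.io/libpod/testimage:20240123", "true")
	events := podmanCmd("file", "events", "--stream=false")

	if out, err := run.CombinedOutput(); err != nil {
		fmt.Println("run failed:", err, string(out))
		return
	}
	if out, err := events.CombinedOutput(); err != nil {
		fmt.Println("events failed:", err, string(out))
	} else {
		fmt.Print(string(out))
	}
}
```

Setting events_logger in a containers.conf pointed to by CONTAINERS_CONF should achieve the same thing for everything in that environment, though note that this changes the default for every command that picks up that config.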
A friendly reminder that this issue had no activity for 30 days.
Closing: this is just something we will have to live with.
Completely bizarre:
Only fails in parallel tests. The test itself is super short. There is no possible way it could fail (obviously this is a lie).
Most curious finding: in every case I looked at (three out of the five flakes), I go look at the journal, search for t160, and the log line is in the journal. On successful test passes, the string t160 does not appear in the journal.

[sys] |090| events - labels included in container die