Closed: cbrevik closed this pull request 2 weeks ago
| Name | Link |
|---|---|
| Latest commit | 67664f3d68848d8c52ed5c66d90d239ca4183816 |
| Latest deploy log | https://app.netlify.com/sites/testcontainers-node/deploys/672a2df22f65910008d2ee24 |
| Deploy Preview | https://deploy-preview-849--testcontainers-node.netlify.app |
Nice, thanks for raising. Could you please add a test for the new functionality?
Sure thing @cristianrgreco! Added a new test for reusing stopped (but not removed) containers 👍🏻
Another thing I thought about, which I'm unsure whether it should be handled explicitly or not: the status you can filter for can be one of `created|restarting|running|removing|paused|exited|dead`. But I'm guessing if it is `dead` or `removing` while trying to start it, then `container.start` will just throw, and that is fine?
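For illustration, the decision being debated here could be sketched like this. This is a minimal sketch, not the library's actual code; the state names are Docker's documented container states, and `reuseAction` is a made-up helper:

```typescript
// Minimal sketch (not the library's implementation) of the reuse
// decision: given the state Docker reports for a previously created
// container, decide what reuse should do with it.
type ContainerState =
  | "created" | "restarting" | "running"
  | "removing" | "paused" | "exited" | "dead";

function reuseAction(state: ContainerState): "use" | "start" | "error" {
  if (state === "running") return "use";
  // Docker cannot start a dead or removing container; `container.start`
  // would throw here, which is the behaviour discussed above.
  if (state === "dead" || state === "removing") return "error";
  // created/exited (and, loosely, paused/restarting) can be started again.
  return "start";
}
```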
Also, coming from using Testcontainers in C#, I was a bit surprised to find that stopping the container by default also auto-removes it. Maybe `WithAutoRemove(boolean)` is something that could be implemented on `GenericContainer` (in another PR)? The motivation is that I could handle stopping/starting myself to free up resources that don't need to run all the time, while starts would also be faster since the container already exists with reuse. But I'd like this to be configured when creating the container, not something I'd have to remember to do when stopping it.
@cbrevik You can specify not to remove the container when stopping it with `await container.stop({ remove: false });`. Is this what you mean? I see you used it in the tests.
> Another thing I thought about, which I'm unsure whether it should be handled explicitly or not: the status you can filter for can be one of `created|restarting|running|removing|paused|exited|dead`. But I'm guessing if it is `dead` or `removing` while trying to start it, then `container.start` will just throw, and that is fine?
This is a good point. It is probably best not to try to restart the old container if it is dead/removing. If we add this filter, then testcontainers will start a new container as expected. In fact not adding this filter could result in breaking changes. Would you be able to update the PR with a test for this?
Yeah, I've disabled removal that way with `container.stop({ remove: false })`. What I'm suggesting is possibly making the default behaviour configurable when creating the container. I've made an issue #850 with a suggested implementation. Just a suggestion though, not something critical for me 😅
> This is a good point. It is probably best not to try to restart the old container if it is dead/removing. If we add this filter, then testcontainers will start a new container as expected. In fact not adding this filter could result in breaking changes. Would you be able to update the PR with a test for this?
The reason I thought it might be better to just let it throw is that one would probably have to clean the container up manually anyway (for `dead` at least). Otherwise it'll hit an error with a colliding reuse hash either way? It did so before, when it ignored stopped containers and tried to start a new one. But either way, I'll update the PR with the filter for `dead`/`removing` plus tests for that. 👍🏻
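For context on the reuse-hash collision mentioned above: the hash is conceptually derived from the container's creation options, so a leftover (e.g. dead) container created with the same options maps to the same hash. A rough sketch of that idea; the hashing details here are an assumption, not the library's exact implementation:

```typescript
import { createHash } from "crypto";

// Hypothetical sketch: derive a reuse hash from the container's
// creation options. Identical options give identical hashes, which is
// why a leftover container with the same hash collides with a new one.
function reuseHash(createOptions: object): string {
  return createHash("sha1")
    .update(JSON.stringify(createOptions))
    .digest("hex");
}
```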
@cristianrgreco I've added a filter for `dead` and `removing` when trying to reuse. But I am a bit unsure how to test for it. It seems `dead` is something that happens only in extreme cases, and `removing` would require hitting a race condition. I've not found a way to force a container into a specific status - maybe you know of one?
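The filter described here could look roughly like this. This is a sketch of the shape only, assuming the PR simply excludes the problematic states from the full list of Docker container states; the actual names in the code may differ:

```typescript
// Sketch of the status filter for reuse lookups: every Docker
// container state except "dead" and "removing" is considered reusable.
const ALL_STATES = [
  "created", "restarting", "running", "removing", "paused", "exited", "dead",
];
const EXCLUDED_STATES = ["dead", "removing"];

const statusFilter = ALL_STATES.filter((s) => !EXCLUDED_STATES.includes(s));
// statusFilter: ["created", "restarting", "running", "paused", "exited"]
```

(As it turns out later in the thread, `restarting` gets dropped from this list as well, for Podman compatibility.)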
Looks like Podman doesn't support the `restarting` container state:

```
● GenericContainer reuse › should reuse the container

(HTTP code 500) server error - unknown container state: restarting: invalid argument
```

We can either add a Podman-specific implementation, or just forget about the filters.
@cristianrgreco how about just removing `restarting` from the included filter? It makes sense to not attempt to start a container that is already being restarted? 🤔
> @cristianrgreco how about just removing `restarting` from the included filter? It makes sense to not attempt to start a container that is already being restarted? 🤔
Sure
Closes #783
I looked into this a bit and, as suggested, removing the status filter in `fetchByLabel` makes it find the exited container. I then had to check whether it is running, and start it if not.

But I hit a problem when using `@testcontainers/postgresql`: it could not find bound ports on an exited container that is being restarted. The bound ports are created from an inspection result when trying to reuse the container, and the ports in that inspection result are mapped from the `NetworkSettings.Ports` section of the container inspect output.

The underlying issue here is that `NetworkSettings` seems to reflect the current network settings of a running container, which is empty when doing `docker inspect` on an exited container, compared to a running container.

So basically I had to inspect the container again after re-starting it to get the correct bound ports and their IPs.
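To illustrate the `NetworkSettings.Ports` issue: a running container reports host bindings per exposed port, while an exited one reports null/empty bindings, so mapping ports from a single pre-start inspection comes up empty. A small sketch of that mapping; the type shape follows `docker inspect` output, while the helper name is made up:

```typescript
// Shape of NetworkSettings.Ports in `docker inspect` output.
type PortBindings = Record<
  string,
  Array<{ HostIp: string; HostPort: string }> | null
>;

// Hypothetical helper: map container ports to host ports. For an
// exited container the bindings are null/empty, so the map is empty,
// which is why inspection must be repeated after restarting.
function boundPorts(ports: PortBindings): Map<number, number> {
  const result = new Map<number, number>();
  for (const [exposed, bindings] of Object.entries(ports)) {
    if (!bindings || bindings.length === 0) continue;
    const containerPort = parseInt(exposed.split("/")[0], 10);
    result.set(containerPort, parseInt(bindings[0].HostPort, 10));
  }
  return result;
}
```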