Viachaslau-Zinkevich closed this issue 9 months ago.
Just to confirm, are you running kubedock > 0.14.0?
If I do a localstack start and a localstack status, they both work. Do you have a test I can do to reproduce your scenario?
With kubedock running in the background, I can reproduce it like this:
git clone --depth 1 --branch 1.19.5 https://github.com/testcontainers/testcontainers-java.git
cd testcontainers-java
# plain docker: OK
export DOCKER_HOST=; ./gradlew :localstack:cleanTest :localstack:test \
--no-build-cache \
--tests 'org.testcontainers.containers.localstack.LocalstackContainerTest'
# kubedock: FAILURE
export DOCKER_HOST=tcp://127.0.0.1:2475; ./gradlew :localstack:cleanTest :localstack:test \
--no-build-cache \
--tests 'org.testcontainers.containers.localstack.LocalstackContainerTest'
Thanks @davidecavestro! I am not sure if it's the same problem @Viachaslau-Zinkevich is having. The unit test in testcontainers-java was failing because the ryuk container was getting its exposed ports from an (undocumented) config section of the docker api. Added that, and that made the above tests succeed. (Disabling ryuk fixed it as well.)
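For reference, a minimal docker-py sketch of where the two port-related sections live in an inspect response; the container name "some-container" is just a placeholder, and reading Config.ExposedPorts is only my interpretation of the undocumented config section mentioned above:

# Inspect a container and print both places ports can show up; use a name of a
# container created through kubedock (DOCKER_HOST=tcp://127.0.0.1:2475).
import docker

client = docker.from_env()  # honours DOCKER_HOST
info = client.api.inspect_container("some-container")  # placeholder name
print(info["Config"].get("ExposedPorts"))    # ports declared at image/create time, e.g. {"8080/tcp": {}}
print(info["NetworkSettings"].get("Ports"))  # host-side port bindings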
Ouch... I may be wrong. OTOH, on the failed kubedock container I saw a stack trace from the docker Python SDK complaining about the ID attribute missing from the container details. That said, tomorrow I'll try to gather some more clues.
Still not sure, but this is the evidence I got: the main container of the kubedock pod complains about the missing resource ID. I can reproduce it with kubedock 0.15.3 in both reverse-proxy and port-forward mode, i.e. with the former:
NAMESPACE=devops; kubedock server --reverse-proxy -v 10
For a single test launched from my laptop with RYUK disabled:
git clone --depth 1 --branch 1.19.5 https://github.com/testcontainers/testcontainers-java.git
cd testcontainers-java
export TESTCONTAINERS_RYUK_DISABLED=true; \
export TESTCONTAINERS_CHECKS_DISABLE=true; \
export DOCKER_HOST=tcp://127.0.0.1:2475; \
./gradlew :localstack:cleanTest :localstack:test \
--no-build-cache \
--tests 'org.testcontainers.containers.localstack.LocalstackContainerTest$S3SkipSignatureValidation'
This is an excerpt from the kubedock output:
I0222 09:14:30.844245  306055 util.go:100] Response Body:
LocalStack version: 2.3.2
Unexpected exception while starting infrastructure: Resource ID was not provided
Traceback (most recent call last):
  File "/opt/code/localstack/localstack/services/infra.py", line 380, in start_infra
    print_runtime_information(is_in_docker)
  File "/opt/code/localstack/localstack/services/infra.py", line 346, in print_runtime_information
    id = get_main_container_id()
         ^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/code/localstack/localstack/utils/container_networking.py", line 124, in get_main_container_id
    return DOCKER_CLIENT.get_container_id(container_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/code/localstack/localstack/utils/container_utils/container_client.py", line 754, in get_container_id
    return self.inspect_container(container_name)["Id"]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/code/localstack/localstack/utils/container_utils/docker_sdk_client.py", line 366, in inspect_container
    return self.client().containers.get(container_name_or_id).attrs
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/code/localstack/.venv/lib/python3.11/site-packages/docker/models/containers.py", line 925, in get
    resp = self.client.api.inspect_container(container_id)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/code/localstack/.venv/lib/python3.11/site-packages/docker/utils/decorators.py", line 16, in wrapped
    raise errors.NullResource(
docker.errors.NullResource: Resource ID was not provided
The same also happens when launching kubedock and gradle from a pod (I can provide the steps to do the same from scratch in a gradle pod).
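For what it's worth, the NullResource at the bottom of that trace is raised client-side by docker-py's check_resource decorator (docker/utils/decorators.py, as shown above) whenever the id/name it receives is empty, so a minimal sketch against the kubedock endpoint from the steps above reproduces the same message:

# Passing an empty/None id reproduces the error from the log; docker-py rejects
# it before any request reaches kubedock.
import docker

client = docker.DockerClient(base_url="tcp://127.0.0.1:2475")  # kubedock endpoint
try:
    client.api.inspect_container(None)  # what localstack ends up doing when no name is attached
except docker.errors.NullResource as exc:
    print(exc)  # Resource ID was not provided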
Thanks @davidecavestro, this was exactly what I needed. I was able to reproduce it, and the root cause was localstack not providing a name to the container. Fixed by returning the id as the name if no name is attached to the container.
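To illustrate the idea of the fix (kubedock itself is written in Go, so this is only a behavioural sketch, not the actual patch): when no name is attached to a container, the inspect response reports the id as the name, so clients resolving Name still get a usable identifier.

# Behavioural sketch only: build the Name field of an inspect-style response,
# falling back to the id when the container was created without a name.
def inspect_name(container_id: str, name: str | None) -> str:
    effective = name if name else container_id
    return "/" + effective  # the docker api prefixes container names with "/"

print(inspect_name("abc123", None))               # /abc123
print(inspect_name("abc123", "localstack-main"))  # /localstack-main (example name)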
Released in 0.15.4.
Hi, first of all, thanks a lot for the great job you are doing!
Unfortunately we face an issue trying to use Testcontainers + Localstack with Kubedock and GitLab. Apparently localstack internally uses the docker api and tries to get the container ID of the starting container. In the case of Kubedock it seems to be null. This effectively causes the Localstack container to fail. The place where it fails: https://github.com/localstack/localstack/blob/master/localstack/utils/container_networking.py#L124C16-L124C29
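A small diagnostic sketch with the docker Python SDK, assuming kubedock is listening on tcp://127.0.0.1:2475 and at least one container has been created through it, to show the fields localstack relies on:

# Print the id and name of each container reported by kubedock; an empty or
# missing name here is what appears to trip localstack's lookup.
import docker

client = docker.DockerClient(base_url="tcp://127.0.0.1:2475")
for container in client.containers.list():
    print(container.id, repr(container.name))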
We would really appreciate it if this could be checked and ideally supported.
Best regards.