Closed: @asbachb closed this issue 2 years ago.
@vrothberg PTAL. Shouldn't we find this in the local storage?
Test containers use the Docker API which will behave as Docker does. “foo” will hence resolve to “docker.io/library/foo:latest”. The behavior can be changed via the compat_api… option in containers.conf.
That means that Podman resolves images differently than Docker. Podman’s Docker API behaves as Docker by default.
I’m on a train but suggest closing as it’s expected behavior.
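To make the resolution behavior described above concrete, here is a simplified sketch of Docker-style short-name normalization (my own illustration, not Podman's actual implementation; real resolution also handles digests, ports in registry hosts, and configurable unqualified-search registries):

```python
def normalize_image_name(name: str) -> str:
    """Sketch of Docker-style short-name resolution."""
    # Split off the tag if present (digest handling is ignored here).
    if ":" in name.split("/")[-1]:
        repo, tag = name.rsplit(":", 1)
    else:
        repo, tag = name, "latest"
    parts = repo.split("/")
    if len(parts) == 1:
        # Bare name: treated as an official image on Docker Hub.
        repo = "docker.io/library/" + repo
    elif "." not in parts[0] and ":" not in parts[0] and parts[0] != "localhost":
        # First component is not a registry host: default to Docker Hub.
        repo = "docker.io/" + repo
    return f"{repo}:{tag}"

print(normalize_image_name("foo"))                            # docker.io/library/foo:latest
print(normalize_image_name("localhost/aaa/hello-world:1.0"))  # localhost/aaa/hello-world:1.0
```

This is why "foo" ends up as "docker.io/library/foo:latest" on the compat API, while a name explicitly prefixed with "localhost/" is left alone.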
Is it somehow possible to target the local image storage instead of a registry with the Docker API?
Yes, you can use the image ID, the image digest, the full image name or short names that resolve to docker.io.
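Since the image reference is passed in the URL path of the compat endpoint, the slashes have to be percent-encoded, as seen in the curl commands later in this thread. A small sketch of building such a URL (the dummy host `d` is my own choice; curl ignores the host when `--unix-socket` is used, and `v1.32` matches the API version used in this thread):

```python
from urllib.parse import quote

def inspect_url(image: str, version: str = "v1.32") -> str:
    # Percent-encode the reference so "/" does not split the URL path;
    # the ":" before the tag may stay literal.
    return f"http://d/{version}/images/{quote(image, safe=':')}/json"

print(inspect_url("localhost/aaa/hello-world:1.0"))
# http://d/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json
```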
@vrothberg From my understanding I already used the full image name:
I tried with localhost/aaa/hello-world:1.0
and aaa/hello-world:1.0
neither of which resolves the local image.
The full image name should work. I just tried on my machine. There must be something else going on.
I made a test repo showing the issue: https://github.com/asbachb/podman-testcontainers
@vrothberg Could you give some more details how you tested the full image name?
Apologies for the late reply, I was traveling last week.
Actually, I am not sure why it's not working on your end.
curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json
and
curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json
work for me.
@vrothberg No need for apologies ;)
I did some further investigation and noticed that /run/user/1000/podman/podman.sock behaves differently from /var/run/docker.sock:
[nix-shell:~]$ curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json
{"Id":"sha256:4e404aca34bd8257c9b08a2d6dfeb4f59102fa90f1ac2bfeec857657783db45f","RepoTags":["localhost/aaa/hello-world:1.0"],"RepoDigests":["localhost/aaa/hello-world@sha256:6a6afb8cc611df0c1cf7818ee5b6ae177b95d7cba376bb902fc67672ecf587fa"],"Parent":"feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412","Comment":"","Created":"2022-08-11T16:07:33.012781066Z","Container":"","ContainerConfig":{"Hostname":"4e404aca34b","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"","Author":"","Config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["/hello"],"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":{"io.buildah.version":"1.26.1"}},"Architecture":"amd64","Os":"linux","Size":18801,"VirtualSize":18801,"GraphDriver":{"Data":{"LowerDir":"/home/asbachb/.local/share/containers/storage/overlay/e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359/diff","UpperDir":"/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/diff","WorkDir":"/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/work"},"Name":"overlay"},"RootFS":{"Type":"layers","Layers":["sha256:e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359","sha256:9624a5a5fdb04406c0759643002249345dc804ecc36d010e4f05c7fc5d3b7a43"]},"Metadata":{"LastTagTime":"0001-01-01T00:00:00Z"}}
[nix-shell:~]$ curl -XGET --unix-socket /var/run/docker.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json
{"cause":"failed to find image aaa/hello-world:1.0: docker.io/aaa/hello-world:1.0: No such image","message":"failed to find image aaa/hello-world:1.0: docker.io/aaa/hello-world:1.0: No such image","response":404}
[nix-shell:~]$ ls -l /var/run/docker.sock
lrwxrwxrwx 1 root root 23 Aug 22 11:28 /var/run/docker.sock -> /run/podman/podman.sock
[nix-shell:~]$ ls -l /run/podman/podman.sock
srw-rw---- 1 root podman 0 Aug 22 11:28 /run/podman/podman.sock
[nix-shell:~]$ ls -l /run/user/1000/podman/podman.sock
srw-rw---- 1 asbachb users 0 Aug 22 11:31 /run/user/1000/podman/podman.sock
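The two listings above show Podman's socket convention: the rootful service at a fixed system path, and the rootless service under the user's runtime directory. As a sketch (my own illustration, not Podman code), a client could derive the conventional paths like this:

```python
import os

def podman_socket(rootless: bool = True) -> str:
    """Return the conventional Podman API socket path."""
    if rootless:
        # Rootless socket lives under the user's XDG runtime directory,
        # e.g. /run/user/1000/podman/podman.sock for UID 1000.
        runtime_dir = os.environ.get("XDG_RUNTIME_DIR", f"/run/user/{os.getuid()}")
        return f"{runtime_dir}/podman/podman.sock"
    # Rootful socket has a fixed, system-wide path.
    return "/run/podman/podman.sock"

print(podman_socket(rootless=False))  # /run/podman/podman.sock
```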
@asbachb, can you also share the output of podman images and docker images? Just to be sure that both have the same image.
@vrothberg just tried to clean my system a little bit. These are the images which could not be removed somehow:
asbachb@nixos-t14s ~ podman images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> b7da1c8450f3 2 weeks ago 271 MB
<none> <none> 2098b6b136cf 2 weeks ago 271 MB
<none> <none> 37ca1790740f 2 weeks ago 271 MB
<none> <none> c31593d9c1a8 2 weeks ago 271 MB
<none> <none> 53c7f3b38c03 3 weeks ago 271 MB
<none> <none> 226d43b46339 3 weeks ago 271 MB
<none> <none> 2579260e4b1f 3 weeks ago 271 MB
<none> <none> 593cef60250a 3 weeks ago 271 MB
docker.io/library/eclipse-temurin 17-jre dfbdb43d129b 3 weeks ago 271 MB
asbachb@nixos-t14s ~ docker images --all
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> b7da1c8450f3 2 weeks ago 271 MB
<none> <none> 2098b6b136cf 2 weeks ago 271 MB
<none> <none> 37ca1790740f 2 weeks ago 271 MB
<none> <none> c31593d9c1a8 2 weeks ago 271 MB
<none> <none> 53c7f3b38c03 3 weeks ago 271 MB
<none> <none> 226d43b46339 3 weeks ago 271 MB
<none> <none> 2579260e4b1f 3 weeks ago 271 MB
<none> <none> 593cef60250a 3 weeks ago 271 MB
docker.io/library/eclipse-temurin 17-jre dfbdb43d129b 3 weeks ago 271 MB
Thanks, @asbachb.
What I wanted to figure out was which images were present when running http:/v1.32/images/aaa%2Fhello-world:1.0/json against the endpoints.
@vrothberg
asbachb@nixos-t14s ~ docker images --all | grep aaa
localhost/aaa/hello-world 1.0 9ae4f6030c86 29 minutes ago 18.8 kB
asbachb@nixos-t14s ~ podman images --all | grep aaa
localhost/aaa/hello-world 1.0 9ae4f6030c86 29 minutes ago 18.8 kB
[nix-shell:~]$ curl -XGET --unix-socket /run/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 233 100 233 0 0 4609 0 --:--:-- --:--:-- --:--:-- 4660
{
"cause": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
"message": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
"response": 404
}
[nix-shell:~]$ curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1839 100 1839 0 0 47779 0 --:--:-- --:--:-- --:--:-- 48394
{
"Id": "sha256:9ae4f6030c86cd7500159c002e15c33761597c4304f5492391928594eb405ff7",
"RepoTags": [
"localhost/aaa/hello-world:1.0"
],
"RepoDigests": [
"localhost/aaa/hello-world@sha256:0eae6bda33576b1a139425f697150a5559b9a40c65296605e5f4fc1029770337"
],
"Parent": "feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412",
"Comment": "",
"Created": "2022-08-22T12:21:02.818201952Z",
"Container": "",
"ContainerConfig": {
"Hostname": "9ae4f6030c8",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"DockerVersion": "",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/hello"
],
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"io.buildah.version": "1.27.0"
}
},
"Architecture": "amd64",
"Os": "linux",
"Size": 18801,
"VirtualSize": 18801,
"GraphDriver": {
"Data": {
"LowerDir": "/home/asbachb/.local/share/containers/storage/overlay/e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359/diff",
"UpperDir": "/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/diff",
"WorkDir": "/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/work"
},
"Name": "overlay"
},
"RootFS": {
"Type": "layers",
"Layers": [
"sha256:e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359",
"sha256:9624a5a5fdb04406c0759643002249345dc804ecc36d010e4f05c7fc5d3b7a43"
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
curl -XGET --unix-socket /run/podman/podman.sock
Can you do that against the docker socket? Thanks a lot for collaborating. I am sure we'll find the source of the issue.
[nix-shell:~]$ curl -XGET --unix-socket /run/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 233 100 233 0 0 4609 0 --:--:-- --:--:-- --:--:-- 4660
{
"cause": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
"message": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
"response": 404
}
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="/nix/store/f5saxg50gkwkqawhga7qfh8h059kzl9a-podman-4.2.0/bin/podman filtering at log level info"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Setting parallel job count to 49"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Using systemd socket activation to determine API endpoint"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\""
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Request Failed(Not Found): failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image"
Aug 22 17:03:50 nixos-t14s podman[14225]: @ - - [22/Aug/2022:17:03:50 +0400] "GET /images/localhost%2Faaa%2Fhello-world:1.0/json HTTP/1.1" 404 233 "" "curl/7.84.0"
That command is using the rootful Podman socket (/run/podman/podman.sock). Can you try with /var/run/docker.sock and also list the images there?
My distribution maps /var/run/docker.sock to /run/podman/podman.sock. Is that the way the socket should be symlinked?
[nix-shell:~]$ ls -l /var/run/docker.sock
lrwxrwxrwx 1 root root 23 Aug 22 11:28 /var/run/docker.sock -> /run/podman/podman.sock
For mapping root Docker to root Podman, that does seem appropriate. The difference could be that we do not support the Docker group, so non-root users that are part of that group cannot access the root Podman socket (which was a terrible idea anyways, with the socket effectively being passwordless root access to the system).
@asbachb can you run sudo podman images? Rootless and rootful Podman do not share images and containers.
@vrothberg I guess that's the problem: I created that image with rootless Podman. Testcontainers uses the Docker compat socket, which links to the rootful Podman socket, which does not know about that image.
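For context, the separation comes from rootful and rootless Podman using different storage locations. The conventional defaults are sketched below (the rootless path matches the GraphDriver paths in the JSON output earlier in this thread; actual locations depend on storage.conf):

```python
import os

def graph_root(rootless: bool) -> str:
    """Conventional default image-store location (overridable via storage.conf)."""
    if rootless:
        # Per-user store, e.g. /home/<user>/.local/share/containers/storage
        return os.path.expanduser("~/.local/share/containers/storage")
    # System-wide store used by rootful Podman.
    return "/var/lib/containers/storage"

print(graph_root(False))  # /var/lib/containers/storage
```

An image built into one store is simply invisible to the other instance, which is why the rootful socket returned 404 for an image that rootless `podman images` listed.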
Since this does not seem to be a bug in Podman, I am closing; the conversation can continue here.
I wonder whether users expect rootless and rootful Podman to share the same image store?
Just for future reference: on NixOS, docker.sock is mapped to the rootful podman.sock, so for an image to be found it needs to be in the rootful image store.
When you want to run Testcontainers with rootless Podman, it makes more sense to configure the socket manually:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <configuration>
        <environmentVariables>
            <DOCKER_HOST>unix:///var/run/user/1000/podman/podman.sock</DOCKER_HOST>
        </environmentVariables>
    </configuration>
</plugin>
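A note on that snippet: the UID (1000) is hard-coded. If you prefer to derive the value at runtime, a sketch like this builds the equivalent DOCKER_HOST string for the current user (Testcontainers reads DOCKER_HOST from the environment):

```python
import os

# Build a DOCKER_HOST value pointing at the current user's rootless
# Podman socket, instead of hard-coding the UID as in the Maven snippet.
docker_host = f"unix:///run/user/{os.getuid()}/podman/podman.sock"
print(docker_host)
```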
Thanks! I am out of office next weeks but maybe others can have a look. Maybe there’s something else going on? Did you try pointing Docker to the Podman socket and lookup the image?
@vrothberg could you please tell me more about this compat_api option? The only documentation[1] about containers.conf that I was able to find doesn't mention such an option.
I have the same problem as OP, I made a small reproducer[2] for it and followed this manual[3] to install and configure podman.
[1] https://man.archlinux.org/man/containers.conf.5.en [2] https://github.com/fedinskiy/reproducer/tree/reproducer/podman-testcontainers [3] https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/
@fedinskiy, it looks like the option isn't documented in containers.conf. I will open a PR to address that.
I'd appreciate a reproducer that uses Podman/Docker directly. Setting up external tools such as Testcontainers or Quarkus is very time-consuming, as I'd need to dig into that code (and I don't speak Java anymore).
The option is documented in containers.conf (see https://github.com/containers/common/blob/main/pkg/config/containers.conf#L369-L372).
It's also turned on by default.
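For readers following the link: the linked lines concern the compat API's Docker Hub enforcement toggle. A containers.conf fragment would look roughly like this (option name taken from the linked file; please verify against your installed containers.conf(5)):

```toml
[engine]
# Enforce Docker-compatible short-name resolution on the compat API,
# i.e. resolve "foo" to "docker.io/library/foo". Enabled by default.
compat_api_enforce_docker_hub = true
```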
/kind bug
Description

Podman fails to resolve a local image when Testcontainers requests it via the socket.

Steps to reproduce the issue:
Java test class example
Testcontainers output
podman socket log
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info:

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.): physical