Closed — wrobell closed this issue 3 years ago
@vrothberg PTAL - I suspect this is the `localhost/` prefix?
@wrobell What Podman version, what distribution? A full `podman info` would be very helpful.
@mheon
```
# podman info
host:
  arch: arm
  buildahVersion: 1.21.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.0.29-1
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 7e6de6678f6ed8a18661e1d5721b81ccee293b9b'
  cpus: 4
  distribution:
    distribution: archarm
    version: unknown
  eventLogger: journald
  hostname: rpi-mm4
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.10.42-1-ARCH
  linkmode: dynamic
  memFree: 1755041792
  memTotal: 4031725568
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 0.20-1
    path: /usr/bin/crun
    version: |-
      crun version 0.20
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 45h 38m 2.75s (Approximately 1.88 days)
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 6
    paused: 0
    running: 2
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 6
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.0
  Built: 1622932849
  BuiltTime: Sat Jun 5 23:40:49 2021
  GitCommit: 0281ef262dd0ffae28b5fa5e4bdf545f93c08dc7
  GoVersion: go1.16.4
  OsArch: linux/arm
  Version: 3.2.0
```
How did you create your test image?
```
$ cat /tmp/Containerfile | /bin/podman build -f - /tmp/test
STEP 1: FROM alpine
STEP 2: run echo hello
hello
STEP 3: COMMIT
--> 967f1b76390
967f1b7639067e2d7f039fbeb4042758f14aeab052619bbdf8dd3d645d51258f
$ podman tag 967 dan
$ podman run dan
$ podman run dan echo hi
hi
```
The image is built using buildah, then
Please note that I can run the image using its image ID (see the last command in the description of the bug).
I suspect that the image's architecture does not match that of your local machine.
Can you share the output of `podman image inspect --format "Arch: {{.Architecture}}, OS: {{.Os}}" test-image`?
```
# podman image inspect --format "Arch: {{.Architecture}}, OS: {{.Os}}" test-image
Arch: armv7l, OS: linux
# uname -a
Linux rpi-mm4 5.10.42-1-ARCH #1 SMP Tue Jun 8 14:18:39 UTC 2021 armv7l GNU/Linux
```
Does `podman run --arch=armv7l --os=linux test-image ls` work?
Aaaah, I am slowly getting a feeling for the issue. Bottom line: the image is not OCI compliant.
The image specification states that the architecture recorded in an image must adhere to the GOARCH values. `armv7l` is not a valid value, so the match won't work.
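The GOARCH matching described above can be sketched in Go. This is an illustrative mapping, not Podman's actual code (the real normalization lives in the containers/common library): it translates common `uname -m` machine strings like `armv7l` into the GOARCH/variant pairs an OCI platform matcher would expect.

```go
package main

import "fmt"

// normalizeArch maps common `uname -m` machine strings to GOARCH and an
// optional variant. Hypothetical sketch for illustration only; Podman's
// actual normalization logic differs.
func normalizeArch(machine string) (arch, variant string) {
	switch machine {
	case "armv6l":
		return "arm", "v6"
	case "armv7l", "armv7hl":
		return "arm", "v7"
	case "aarch64", "arm64":
		return "arm64", ""
	case "x86_64":
		return "amd64", ""
	default:
		// Assume the machine string is already a valid GOARCH value.
		return machine, ""
	}
}

func main() {
	arch, variant := normalizeArch("armv7l")
	fmt.Println(arch, variant) // arm v7
}
```

With a mapping like this, an image stamped `armv7l` would be treated as `arm` (variant `v7`) and match the local platform instead of being skipped.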
In a previous version, Podman would just pick the local image even if the architecture does not match. This is something we've fixed.
How was the OCI archive created?
```
# podman run --arch=armv7l --os=linux test-image ls
Error: error getting default registries to try: short-name "test-image" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
```
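(Aside: the error above is a separate issue from the architecture mismatch — with no unqualified-search registries configured, a short name like `test-image` cannot fall back to a remote lookup. Assuming Docker Hub is an acceptable search default, a minimal `/etc/containers/registries.conf` entry that enables short-name resolution looks like:)

```toml
# /etc/containers/registries.conf
# Registries searched when an image name has no registry prefix.
unqualified-search-registries = ["docker.io"]
```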
The OCI archive was created using buildah.
```
# buildah inspect 76 | grep version
...
"io.buildah.version": "1.20.0"
...
```
So it seems I should rebuild my images? Or is there some chance for a transition period with a warning?
> So it seems I should rebuild my images?

Yes, that would be good.

> Or is there some chance for a transition period with a warning?

Hard to say. We did not anticipate images with mistakenly wrong architectures.
I am currently investigating whether there is a programmatic way of detecting wrong os/arch combinations and warning about them during image lookup.
NOTE: one workaround is to use the image ID. That will instruct Podman to use exactly this image. A lookup by name will always perform the os/arch matching.
@rhatdan what do you think?
BTW. For the armv6 and armv7l architectures, the only matching value for the "architecture" field is "arm", isn't it? Therefore it will not be possible to distinguish between these two?
BTW. @vrothberg The relevant part of the image specification uses "should", not "must".
> BTW. For the armv6 and armv7l architectures, the only matching value for the "architecture" field is "arm", isn't it?

Either "arm" or "arm64". I am always lost in the vast forest of ARM platforms.

> Therefore it will not be possible to distinguish between these two?

An image index, or in Docker slang a "manifest list", has a "variant" field for discriminating platforms.
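For illustration, an abbreviated (hypothetical digest and size) image index entry that uses the "variant" field to pin down a 32-bit ARMv7 platform could look like:

```json
{
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "digest": "sha256:…",
  "size": 1234,
  "platform": {
    "architecture": "arm",
    "os": "linux",
    "variant": "v7"
  }
}
```

An armv6 build of the same image would carry `"variant": "v6"`, which is how the two are told apart despite sharing `"architecture": "arm"`.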
> BTW. @vrothberg The relevant part of the image specification uses "should", not "must".

Fair point. That means the image is compliant but off the recommended path. I will investigate why `run --arch=armv7l` doesn't select it. I think that's a good middle ground.
I opened https://github.com/containers/common/pull/622. It will require some plumbing in Podman as well but is a good first step. Once done, even a `podman run --arch=foobar` would work if a matching image of arch foobar is present in the local storage.
I'll be on vacation for a couple of days, so this will be something to land in Podman v3.3.
I have just built an image on my laptop using buildah 1.21.0:

```
# podman inspect test-image | grep arch
    "io.balena.architecture": "armv7hf",
    "io.balena.architecture": "armv7hf",
```

Should I wait for a newer version of buildah?
This sounds like a potential Buildah bug, then - @TomSweeneyRedHat @nalind Can you guys take a look?
Yeah, the defaults we set when we don't have values to inherit from a base image probably need a going-over for arm and variants.
@wrobell, could you open an additional issue for Buildah?
Once https://github.com/containers/common/pull/634 is merged, I will do some plumbing in Podman and then it should work again. There are further use cases that suggest Podman should continue accepting those images.
/kind bug
The image cannot be found after tagging it with podman 3.2.0 (the below works with podman 3.1.1), but