containers / podman

Podman: A tool for managing OCI containers and pods.
https://podman.io
Apache License 2.0

Podman build does not keep ENV from Dockerfile on COPY or ADD #4878

Closed giflw closed 3 years ago

giflw commented 4 years ago

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When using ENV-defined variables to select files to ADD or COPY, the build fails.

Steps to reproduce the issue:

  1. Create Dockerfile

    FROM scratch
    ENV  VERSION=0.0.1
    COPY file-${VERSION}.txt /
  2. Create a file file-0.0.1.txt in the same directory

  3. Use podman build . to build the image in that directory (the full reproduction is consolidated as a script below)
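
For reference, a minimal sketch of the whole reproduction as a shell session (the /tmp/podman-test path is taken from the error message below; any empty directory works):

# scratch build context matching the path in the error message
mkdir -p /tmp/podman-test && cd /tmp/podman-test

# Dockerfile that selects the file to COPY via an ENV variable
cat > Dockerfile <<'EOF'
FROM scratch
ENV VERSION=0.0.1
COPY file-${VERSION}.txt /
EOF

# the file the COPY instruction should resolve to
touch file-0.0.1.txt

# build the image from that directory; this is the step that fails
podman build .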

Describe the results you received:

If file-0.0.1.txt exists, the build fails with this message:

Error: error dry-running "COPY file-${VERSION}.txt /": no files found matching "/tmp/podman-test/file-.txt": no such file or directory

If file-0.0.1.txt does not exist: Error: error building at STEP "COPY file-${VERSION}.txt /": no files found matching "/tmp/podman-test/file-0.0.1.txt": no such file or directory

Describe the results you expected:

The build succeeds, with file-0.0.1.txt added to the image.

Additional information you deem important (e.g. issue happens only occasionally):

I have Buildah installed on my machine, and the command buildah bud . works fine. The Buildah version is:

Version:         1.10.1
Go Version:      go1.10.4
Image Spec:      1.0.1
Runtime Spec:    1.0.1-dev
CNI Spec:        0.4.0
libcni Version:  
Git Commit:      
Built:           Thu Aug  8 17:29:48 2019
OS/Arch:         linux/amd64

Output of podman version:

Version:            1.6.2
RemoteAPI Version:  1
Go Version:         go1.10.4
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.6.2
host:
  BuildahVersion: 1.11.3
  CgroupVersion: v1
  Conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.3, commit: unknown'
  Distribution:
    distribution: neon
    version: "18.04"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  MemFree: 1488613376
  MemTotal: 8269008896
  OCIRuntime:
    name: runc
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
      spec: 1.0.1-dev
  SwapFree: 2147479552
  SwapTotal: 2147479552
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: s3gui
  kernel: 5.0.0-37-generic
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: 'slirp4netns: /usr/bin/slirp4netns'
    Version: |-
      slirp4netns version 0.4.2
      commit: unknown
  uptime: 39m 49.49s
registries:
  blocked: null
  insecure: null
  search: null
store:
  ConfigFile: /home/guilherme/.config/containers/storage.conf
  ContainerStore:
    number: 1
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/guilherme/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 19
  RunRoot: /run/user/1000
  VolumePath: /home/guilherme/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman/bionic,now 1.6.2-1~ubuntu18.04~ppa1 amd64 [installed]

Additional environment details (AWS, VirtualBox, physical, etc.):

O.S. on my notebook: KDE Neon based on Ubuntu Bionic

No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.17
Release:        18.04
Codename:       bionic
rhatdan commented 4 years ago

Could you check whether the same problem happens with buildah bud? If so, please open an issue with Buildah. Podman just vendors in Buildah for podman build.

giflw commented 4 years ago

buildah bud . in the same directory works as expected.

rhatdan commented 4 years ago

What versions of Buildah are you using?

$ podman info | grep -i Buildah
  BuildahVersion: 1.12.0
$ buildah --version
buildah version 1.12.0 (image-spec 1.0.1-dev, runtime-spec 1.0.1-dev)
giflw commented 4 years ago

All version information is in the description:

Standalone Buildah: Version 1.10.1; Buildah vendored in Podman: BuildahVersion 1.11.3

rhatdan commented 4 years ago

Strange that the older version of Buildah works, but the newer Buildah vendored in Podman fails. Do you have access to Buildah's master branch or something newer, to see if this is fixed in the latest releases?

rhatdan commented 4 years ago

@TomSweeneyRedHat Any ideas?

giflw commented 4 years ago

@rhatdan It's interesting to note that interpolation works when the file does not exist, but not once the file is found (when it is actually being copied).

TomSweeneyRedHat commented 4 years ago

Something's gone south; it's not working in Buildah upstream either, which looks like a regression. I've been moving to a new PC most of the day today and need to run shortly; I will look deeper in the morning.

TomSweeneyRedHat commented 4 years ago

I don't have a root cause yet, but I have a suspicion or two and am still digging. As you noted, this should work; however, I did find a workaround that might be useful. I adjusted your sample Dockerfile like this:

FROM scratch
ENV VERSION=0.0.1
ARG NEWVERSION=${VERSION}
COPY file-${NEWVERSION}.txt /

And that works. It looks like the code is now resolving variables against ARG values only, rather than against both ARG and ENV values as it should.

Still working on a permanent fix, but thought I'd share in case this is helpful in the short term.
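
As a usage sketch (not from the thread): with the ARG-based workaround above, the build command is unchanged, and since NEWVERSION is a build argument it should also be overridable on the command line with --build-arg (the 0.0.2 value is purely illustrative and would need a matching file-0.0.2.txt in the build context):

# build with the default value, which is taken from ENV VERSION
podman build .

# or override the ARG at build time (requires file-0.0.2.txt to exist)
podman build --build-arg NEWVERSION=0.0.2 .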

TomSweeneyRedHat commented 4 years ago

@giflw Proposed fix at https://github.com/containers/buildah/pull/2095. Once merged, we'll then have to vendor the updated Buildah into Podman. It should be in the next release of Podman (v1.7.1+).

mheon commented 4 years ago

@TomSweeneyRedHat Have we pulled in a Buildah release with this fix yet?

TomSweeneyRedHat commented 4 years ago

@mheon, not yet; this will be in Buildah v1.13.2 (or maybe 1.14.0). It just got merged there earlier this week.

baude commented 4 years ago

@mheon want to pull in the commit instead of waiting for a release? If not, @TomSweeneyRedHat, how do you feel about getting a release done?

mheon commented 4 years ago

We're waiting on a c/image release right now (to pull some architecture checks that were added in error) before we can cut a fresh Podman release, so that gives us a bit of time.

I'd prefer not to pull in a bare commit if the Buildah folks don't want one; that's liable to make their lives a lot harder.

TomSweeneyRedHat commented 4 years ago

I'm happy to do a release, but won't get to it until tomorrow (Tues 1/28) if that works.

mheon commented 4 years ago

We're stuck waiting for containers.conf right now. Reassigning to Dan.

dpepper commented 4 years ago

Is there any update on this? This blocks me from using Podman at all. I'm already working around multiple issues at different layers of the Podman stack, so moving down to a different layer doesn't work for me.

mheon commented 4 years ago

I think we've pulled a Buildah with a fix into master.

dpepper commented 4 years ago

So how can we use that? Is there a way to make podman use a buildah that is installed on the machine instead of one packaged with it?

TomSweeneyRedHat commented 4 years ago

Yes, this was in Buildah 1.13.2, which was pulled in by a recent vendoring of Buildah. It might also have been included in an earlier RC.

@dpepper You couldn't use the updated Buildah from Podman directly unless you pulled down from upstream.

You could, however, install Buildah and then change your podman build ... command to buildah bud --layers ... (see the sketch below).
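
For example (a sketch; myimage is just a placeholder tag, not one from this thread):

# instead of the Podman build:
podman build -t myimage .

# run the equivalent Buildah command, keeping layer caching enabled:
buildah bud --layers -t myimage .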

(and sometimes I hate my track pad and how it will close a github reply too early)

TomSweeneyRedHat commented 4 years ago

@dpepper, please see my edited comment.

dpepper commented 4 years ago

@TomSweeneyRedHat thanks, I actually just tried that. Unfortunately, I'm also hitting a bug in buildah bud --layers where Buildah does not use its cache in an automated build environment: https://github.com/containers/buildah/issues/2215

github-actions[bot] commented 4 years ago

A friendly reminder that this issue had no activity for 30 days.

rhatdan commented 4 years ago

I believe this is fixed in the latest Podman/Buildah; please reopen if I am mistaken.

hoshsadiq commented 4 years ago

I have upgraded to Podman 2.0.1 and am still getting this issue when using multi-stage Dockerfiles:

$ cat Dockerfile
FROM php:7.4-fpm-alpine as builder

ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902

RUN curl -fsSL -o /usr/local/bin/install-php-extensions \
        https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions
RUN chmod +x /usr/local/bin/install-php-extensions
RUN install-php-extensions intl

RUN ls -al $EXT_INSTALL_DIR
RUN ls -al $PHP_INI_DIR/conf.d

FROM php:7.4-fpm-alpine

ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902

RUN env | grep EXT_INSTALL_DIR && env | grep PHP_INI_DIR

COPY --from=builder $EXT_INSTALL_DIR/*.so $EXT_INSTALL_DIR/
COPY --from=builder $PHP_INI_DIR/conf.d/*.ini "/tmp/"

$ podman build -t testing .
STEP 1: FROM php:7.4-fpm-alpine AS builder
STEP 2: ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902
--> 948d897a45e
STEP 3: RUN curl -fsSL -o /usr/local/bin/install-php-extensions         https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions
--> 798469be35a
STEP 4: RUN chmod +x /usr/local/bin/install-php-extensions
--> 558a99b6e64
STEP 5: RUN install-php-extensions intl
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
<clipped for brevity>
OK: 46 MiB in 34 packages
--> f63658a1a19
STEP 6: RUN ls -al $EXT_INSTALL_DIR
total 2628
drwxr-xr-x    2 root     root          4096 Jun 28 22:21 .
drwxr-xr-x    3 root     root          4096 Jun 28 22:21 ..
-rwxr-xr-x    1 root     root       1987616 Jun 28 22:21 intl.so
-rwxr-xr-x    1 root     root        579688 Jun 11 19:09 opcache.so
-rwxr-xr-x    1 root     root        108040 Jun 11 19:09 sodium.so
--> d7d7b837106
STEP 7: RUN ls -al $PHP_INI_DIR/conf.d
total 16
drwxr-xr-x    2 root     root          4096 Jun 28 22:21 .
drwxr-xr-x    3 root     root          4096 Jun 28 22:21 ..
-rw-r--r--    1 root     root            18 Jun 28 22:21 docker-php-ext-intl.ini
-rw-r--r--    1 root     root            20 Jun 11 19:09 docker-php-ext-sodium.ini
--> a131afd58cd
STEP 8: FROM php:7.4-fpm-alpine
STEP 9: ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902
--> Using cache 948d897a45eee067d2e44acd6d3dd7260618c595229f3744cb322c44a136e8ad
STEP 10: RUN env | grep EXT_INSTALL_DIR && env | grep PHP_INI_DIR
EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902
PHP_INI_DIR=/usr/local/etc/php
--> 1e594f5e5a4
STEP 11: COPY --from=builder $EXT_INSTALL_DIR/*.so $EXT_INSTALL_DIR/
--> 21b4ab8dc6e
STEP 12: COPY --from=builder $PHP_INI_DIR/conf.d/*.ini "/tmp/"
Error: error dry-running "COPY --from=builder $PHP_INI_DIR/conf.d/*.ini \"/tmp/\"": no files found matching "/home/hosh/.local/share/containers/storage/overlay/da9c57d485c7b8917a81dd9d294cef921f5c1bbec2a090d914864203c45b3624/merged/conf.d/*.ini": no such file or directory

Editing the COPY statement so it does not use variables makes the build succeed:

$ sed -i '/^COPY/s#$PHP_INI_DIR#/usr/local/etc/php#' Dockerfile

$ podman build -t testing .
STEP 1: FROM php:7.4-fpm-alpine AS builder
STEP 2: ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902
--> Using cache 948d897a45eee067d2e44acd6d3dd7260618c595229f3744cb322c44a136e8ad
STEP 3: RUN curl -fsSL -o /usr/local/bin/install-php-extensions         https://raw.githubusercontent.com/mlocati/docker-php-extension-installer/master/install-php-extensions
--> Using cache 798469be35ad737cc9fd645451106f916d1089897c339df750df35377cc0e7bc
STEP 4: RUN chmod +x /usr/local/bin/install-php-extensions
--> Using cache 558a99b6e6450788300e065ca44d6d1f998a4878319eccaca1eb922d8eda75a1
STEP 5: RUN install-php-extensions intl
--> Using cache f63658a1a1953e34f95fbbfc29d977482548a00c09aba7349123aec6641523c8
STEP 6: RUN ls -al $EXT_INSTALL_DIR
--> Using cache d7d7b8371061305aecba7b0dcae8c61829de97f17f0779fe4603c7a2a1a7aeb7
STEP 7: RUN ls -al $PHP_INI_DIR/conf.d
--> Using cache a131afd58cd35ff6ea5958a16d83ac2928ff25434926197c536e789773e99fb8
STEP 8: FROM php:7.4-fpm-alpine
STEP 9: ENV EXT_INSTALL_DIR=/usr/local/lib/php/extensions/no-debug-non-zts-20190902
--> Using cache 948d897a45eee067d2e44acd6d3dd7260618c595229f3744cb322c44a136e8ad
STEP 10: RUN env | grep EXT_INSTALL_DIR && env | grep PHP_INI_DIR
--> Using cache 1e594f5e5a4d7d46e4e72191cb77028167108d18d430e3c0e415000e1aeebc36
STEP 11: COPY --from=builder $EXT_INSTALL_DIR/*.so $EXT_INSTALL_DIR/
--> Using cache 21b4ab8dc6e010fc3ac9b77ee515e72a6762e401ed98e4f9262b4670508d2f98
STEP 12: COPY --from=builder /usr/local/etc/php/conf.d/*.ini "/tmp/"
STEP 13: COMMIT testing
--> 2b9c67a47fa
2b9c67a47fa9083e41efec8deb920034ea83c14a4f86dc40d0f0d4e3d81608ba
rhatdan commented 4 years ago

It is best to report this to Buildah, as Podman is just using Buildah under the covers. @TomSweeneyRedHat Any ideas?

bentito commented 3 years ago

This is still a problem. Tried on podman version 1.6.4 (output of podman -v); ENV values from the Dockerfile are quietly not acted on.

TomSweeneyRedHat commented 3 years ago

@bentito can you try with a newer version of Podman? I'm pretty sure this was fixed in v1.9, but it might have been cured in one of the v2.* streams.

bentito commented 3 years ago

@TomSweeneyRedHat I took the latest version that dnf install podman gave me on CentOS 8. Do you happen to have a command to try the later versions you mention?

TomSweeneyRedHat commented 3 years ago

Not that I know of readily, unless you can spin up a Fedora VM easily or clone and build from GitHub. @lsm5, other thoughts?

TomSweeneyRedHat commented 3 years ago

@bentito, wrong pointer in my last reply; that was for Buildah, here's Podman's.

bentito commented 3 years ago

Thanks @TomSweeneyRedHat, that did it. I'm now using podman version 2.1.1 (output of podman --version), which I installed following the development install instructions for CentOS 8 from the linked page.

TomSweeneyRedHat commented 3 years ago

Closing as this is cured upstream.