nathwill closed this issue 6 years ago.
We should be adding the user volumes first, and then adding the image volumes second, ignoring any mountpoints that already exist. It looks like that logic isn't working, so we're overwriting the user mountpoints with the image mounts.
The problem seems to appear only when a file is being mounted; mounting a directory works.
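For illustration, the intended precedence might look roughly like the sketch below. This is not libpod's actual code (the `Mount` type and `mergeVolumes` helper are made-up names), but it shows why a dedup keyed on exact destination matches would handle a directory mount yet miss a file mounted inside an image volume:

```go
package main

import "fmt"

// Mount pairs a host source with a container destination.
// Illustrative only; not libpod's actual type.
type Mount struct {
	Source string
	Dest   string
}

// mergeVolumes sketches the ordering described above: user volumes go
// in first, then image volumes, skipping any destination the user
// already claimed. Note the skip only fires on an exact destination
// match, which would explain why mounting the directory /config works
// but mounting the file /config/config.json does not:
// "/config/config.json" never equals "/config", so the image volume
// is still mounted over it.
func mergeVolumes(userVols, imageVols []Mount) []Mount {
	taken := make(map[string]bool)
	merged := []Mount{}
	for _, v := range userVols {
		taken[v.Dest] = true
		merged = append(merged, v)
	}
	for _, v := range imageVols {
		if taken[v.Dest] {
			continue // user-specified mount wins
		}
		merged = append(merged, v)
	}
	return merged
}

func main() {
	user := []Mount{{Source: "/etc/sensu/uchiwa.json", Dest: "/config/config.json"}}
	image := []Mount{{Dest: "/config"}}
	// The image volume /config survives the merge and, mounted last,
	// shadows the file bind-mounted beneath it.
	fmt.Println(mergeVolumes(user, image))
}
```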
I noticed another hiccup: there's a clear difference between ``-v `pwd`:/config`` and ``-v `pwd`:/config/``, as the latter doesn't work (I suspect because of the trailing `/`). Docker accepts both.
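If the trailing slash is indeed the culprit, a plausible mechanism is that destinations are compared as raw strings instead of being normalized first. A minimal Go sketch of the difference (the use of `filepath.Clean` here is illustrative, not a claim about how podman normalizes paths):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Naive string comparison treats the two destinations as distinct,
	// so "/config/" never matches a volume declared as "/config".
	fmt.Println("/config" == "/config/") // false

	// Normalizing with filepath.Clean strips the trailing slash,
	// making the two forms compare equal.
	fmt.Println(filepath.Clean("/config/") == "/config") // true
}
```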
Closing as the issue has been fixed with https://github.com/containers/libpod/pull/1243.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
Description
Steps to reproduce the issue:
1. Identify an image with an image-defined volume (e.g. uchiwa/uchiwa has a `VOLUME /config` instruction)
2. Try to mount a volume in `podman run` (e.g. `podman run --volume=/etc/sensu/uchiwa.json:/config/config.json uchiwa/uchiwa:latest /bin/sh`)
3. Note in `ls /config` that the volume is not mounted

Describe the results you received:
No `config.json` is accessible in the container after mounting.
Describe the results you expected:
`config.json` should be mounted inside the image-defined `/config` volume.
Additional information you deem important (e.g. issue happens only occasionally):
It's possible to work around the issue by also passing `--image-volume=ignore`. It seems like the image volume is being mounted on top of the `--volume`-specified mount?
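For anyone landing here before picking up the fix, combining the workaround flag with the repro command from above looks like this:

```
podman run --image-volume=ignore \
  --volume=/etc/sensu/uchiwa.json:/config/config.json \
  uchiwa/uchiwa:latest /bin/sh
```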
Additional environment details (AWS, VirtualBox, physical, etc.):
VirtualBox