Open aaron-prindle opened 2 years ago
@sharifelgamal this is the issue I referenced in my chat w/ you
Does the Dockerfile have some mode restriction like 640? Can you list the files with `ls -l`?
Perhaps the tar format preserves the owner and group from the host?
Yes, it has 640 (`-rw-r-----`) permissions:

```
$ stat Dockerfile
...
Access: (0640/-rw-r-----)
...
$ ls -l Dockerfile
-rw-r----- 1 aprindle primarygroup 211 Jul 30 00:36 Dockerfile
```
Changing the file perms to 644 resolved this issue, thanks!
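For anyone else hitting this, the workaround above is just a one-line `chmod`. A minimal sketch that recreates the problem state in a throwaway directory (the scratch path and the `FROM alpine` Dockerfile are illustrative, not from the original report; assumes GNU `stat`):

```shell
# Recreate the problem state in a scratch dir, then apply the fix.
tmp=$(mktemp -d)
printf 'FROM alpine\n' > "$tmp/Dockerfile"

chmod 640 "$tmp/Dockerfile"          # group-readable only, as reported
stat -c '%a' "$tmp/Dockerfile"       # prints: 640

chmod 644 "$tmp/Dockerfile"          # the workaround: world-readable
stat -c '%a' "$tmp/Dockerfile"       # prints: 644
```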
Does `docker build` change the permissions?
I'm not sure in what context/where you are referring to the file permissions. `docker build` works for this Dockerfile with the original permissions (which is why I was initially confused). The file permissions are unchanged (they remain 640) after using `docker build`.
No worries, I can look into it myself. The `docker build` transport is somewhat different, even if it is the same "tar" code being used:

```go
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/fileutils"
```
Indeed, they replace the user and assume that it is "root" that is reading the archive.
```go
buildCtx, err = archive.TarWithOptions(contextDir, &archive.TarOptions{
	ExcludePatterns: excludes,
	ChownOpts:       &idtools.Identity{UID: 0, GID: 0},
})
```
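The effect of that `ChownOpts` rewrite can be approximated with plain GNU tar. This sketch (scratch paths are illustrative) shows that the 0640 mode is preserved either way, but the `docker build` context tar forces ownership to root, so the root process reading the archive can always open the file, while a plain tar keeps the host uid/gid:

```shell
tmp=$(mktemp -d); cd "$tmp"
printf 'FROM alpine\n' > Dockerfile
chmod 640 Dockerfile

# Plain tar: mode AND host owner/group are preserved in the archive.
tar -cf plain.tar Dockerfile
tar -tvf plain.tar | head -n1
# -rw-r----- <host user>/<host group> ...

# Roughly what ChownOpts{UID: 0, GID: 0} does on the docker build path:
tar --owner=0 --group=0 -cf root.tar Dockerfile
tar --numeric-owner -tvf root.tar | head -n1
# -rw-r----- 0/0 ... root can always read this entry after extraction
```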
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
There seems to be an easy workaround for this thankfully, but the UX is less than ideal, so I'll leave this open.
In attempting to use the `minikube image build` command on a local Dockerfile I am seeing the following. This is the Dockerfile I am trying to build (from skaffold/examples/microservices/base/Dockerfile):

os: linux (debian-rodete)
minikube_version: v1.23.2

Full logs running `minikube image build --alsologtostderr .` with the `kvm2` driver: https://gist.github.com/aaron-prindle/3b15b9082ad8b5994e56e722f78a9cd6

Full logs running `minikube image build --alsologtostderr .` with the `docker` driver: https://gist.github.com/aaron-prindle/d62d5c3d8a64f2e04216d427eeffab89

This occurs for me when using both docker and KVM virtualization (`--vm-driver=docker` & `--vm-driver=kvm2`).

NOTE: My linux user id is in both the `kvm` and `libvirt` groups and the kvm2 driver is working properly.

Any idea why I might be seeing this permission denied issue? Am I perhaps using `minikube image build .` incorrectly with my current env/vm-driver setup? I can see the directory referenced in the permission denied error - `/var/lib/minikube/build/build.87560460/Dockerfile` - in the minikube vm (below output from `--vm-driver=kvm2`) but not the `Dockerfile`:
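The denied read itself comes down to the "other" permission bits: with mode 0640, any process that is neither the file's owner nor in its group gets `EACCES`. A minimal sketch of that check (the scratch path and octal arithmetic are illustrative only; assumes GNU `stat`):

```shell
tmp=$(mktemp -d)
printf 'FROM alpine\n' > "$tmp/Dockerfile"

chmod 640 "$tmp/Dockerfile"
# Last octal digit = permissions for "other" users (anyone who is
# neither the owner nor in the file's group).
echo $(( $(stat -c '%a' "$tmp/Dockerfile") % 10 ))   # 0 -> permission denied
chmod 644 "$tmp/Dockerfile"
echo $(( $(stat -c '%a' "$tmp/Dockerfile") % 10 ))   # 4 -> read allowed
```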