Closed: sclevine closed this issue 5 years ago.
@dsyer I am trying to reproduce this with the latest version of pack and cannot reproduce this bug. Do you still see this problem with the latest CLI? https://storage.cloud.google.com/pack-cli/pack-360-linux
dsyer@pivotal.io does not have storage.objects.get access to pack-cli/pack-360-linux.
@dsyer our artifacts are on the move between GCP accounts! You can use this for now: https://storage.cloud.google.com/pack-cli-transfer/pack-365-darwin. By tomorrow the URLs should be the same as before.
$ pack-365-linux create-builder dsyer/builder -b builder.toml --no-pull
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0x958008, 0x16)
/home/travis/.gimme/versions/go1.11.1.linux.amd64/src/runtime/panic.go:608 +0x72
...
Still seeing that same problem though, once I make enough memory available:
$ docker images | head
REPOSITORY      TAG      IMAGE ID       CREATED      SIZE
...
dsyer/builder   latest   42aa32829f4a   7 days ago   1.75GB
<none>          <none>   0b2bbf7d2b94   7 days ago   1.75GB
...
That `0b2bbf7d2b94` is actually the previous "latest" of `dsyer/builder`. Maybe that's a clue?
@dsyer So I believe I found the memory hog. Reading the base image (using go-containerregistry) locks away about 2 GB when I use the `cfbuildpacks/cflinuxfs3-cnb-experimental:build` image. I am now investigating what best to do about it.
I'm still investigating the untagged images; I believe your observation that it's the previous image will be helpful.
@dsyer I believe that leaving the previous builder image is expected behavior. We create a new builder image but don't assume that the previous image should be deleted.
However, I would expect the previous image to have repository `dsyer/builder` (and tag `<none>`) rather than repository `<none>`. That seems like something we should investigate and fix.
This raises another question: would it be helpful to introduce a flag that cleans up the previous image, to support rapid iteration? I don't think we would want that to be the default behavior, but there look to be cases where it would be helpful.
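Until such a flag exists, the leftover images can be spotted in `docker images` output. A minimal sketch, assuming POSIX awk; the sample output below is copied from earlier in this thread, and in practice you would pipe `docker images` in directly:

```shell
# Sample `docker images` output (from this thread); columns are
# REPOSITORY TAG IMAGE-ID CREATED SIZE.
docker_images_output='REPOSITORY TAG IMAGE ID CREATED SIZE
dsyer/builder latest 42aa32829f4a 7 days ago 1.75GB
<none> <none> 0b2bbf7d2b94 7 days ago 1.75GB'

# Print the IDs of dangling images, i.e. rows whose repository and tag
# are both <none>.
printf '%s\n' "$docker_images_output" |
  awk '$1 == "<none>" && $2 == "<none>" { print $3 }'
# prints 0b2bbf7d2b94
```

Docker also has built-in cleanup for this case: `docker image prune` removes all dangling (`<none>:<none>`) images.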
It makes sense that we don't make assumptions. But think about what happens if I use `docker build -t dsyer/app .` - it just replaces the old image with the new one. I think the least surprising default is to behave the same way (with an option to rename it if you like - but then you need to ask for a name).
The current behavior actually matches the behavior of `docker build -t` exactly. `<none> <none>` is expected:
liberty:test pivotal$ cat Dockerfile
FROM alpine
RUN echo hi
liberty:test pivotal$ docker build -t tagtest .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM alpine
---> 196d12cf6ab1
Step 2/2 : RUN echo hi
---> Using cache
---> d44621ee8b36
Successfully built d44621ee8b36
Successfully tagged tagtest:latest
liberty:test pivotal$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
tagtest      latest   d44621ee8b36   17 seconds ago   4.41MB
liberty:test pivotal$ cat Dockerfile
FROM alpine
RUN echo hello
liberty:test pivotal$ docker build -t tagtest .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM alpine
---> 196d12cf6ab1
Step 2/2 : RUN echo hello
---> Running in 21839d4d41cd
hello
Removing intermediate container 21839d4d41cd
---> f3dca48a4cb7
Successfully built f3dca48a4cb7
Successfully tagged tagtest:latest
liberty:test pivotal$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
tagtest      latest   f3dca48a4cb7   3 seconds ago    4.41MB
<none>       <none>   d44621ee8b36   51 seconds ago   4.41MB
OK. You win. I didn't know that. Using `pack` just churns images faster than I normally do with `docker`, I guess, so I notice it more.
From dsyer in Slack: