Open TaborKelly opened 5 years ago
This is working as expected. We cannot remove volumes which are referenced.
The force flag is there in case the volume driver fails for whatever reason: the user can tell Docker to go ahead and delete its record of that volume anyway.
Perhaps that is true, but neither the command line help nor the man page say so. The command line help says that --force will "Force the removal of one or more volumes".
I replaced a top-level volume but kept the same name. Docker told me that I should remove the existing volume of that name first. I ran the command with '-f', understanding what I was doing, but still couldn't delete it.
What worked for me was just pruning my environment:
$ docker container prune
$ docker volume prune
$ docker network prune
Does anyone know how to de-reference volumes? My approach is destructive but I didn't care. Others may want something less nuclear.
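For a one-command version of the pruning approach above, newer Docker releases also offer docker system prune; note that it skips volumes unless explicitly told otherwise. A sketch (equally destructive, same caveat as above):

```shell
# Destructive: removes all stopped containers, unused networks,
# dangling images, and (only with --volumes) all unused volumes.
docker system prune --volumes
```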
The error message you receive in these situations looks like:
Error response from daemon: unable to remove volume: remove mydata: volume is in use - [1cbcfa3d47a32db7b0075e113216f7146a436a4da22a97dc2f7b60c68de95c3d]
What is that ID? How can it be used to de-reference the volume?
I think many have been bitten by this type of thing: https://serverfault.com/q/892656/409848
Volumes can only be referenced by containers. The ID is a container ID.
The only way to de-reference the volume is to remove the container.
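To find which container that ID belongs to, you can ask Docker directly (the volume name mydata and the container ID are taken from the error message above):

```shell
# List all containers (including stopped ones) that reference the volume.
docker ps -a --filter volume=mydata

# Or, given the container ID from the error message, look up its name:
docker inspect --format '{{.Name}}' 1cbcfa3d47a32db7b0075e113216f7146a436a4da22a97dc2f7b60c68de95c3d
```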
The --force option sets cfg.PurgeOnError, which is used here: https://github.com/moby/moby/blob/2df693e533e904f432c59279c07b2b8cbeece4f0/volume/service/service.go#L148-L162
v, err := s.vs.Get(ctx, name)
if err != nil {
    if IsNotExist(err) && cfg.PurgeOnError {
        return nil
    }
    return err
}

err = s.vs.Remove(ctx, v, rmOpts...)
if IsNotExist(err) {
    err = nil
} else if IsInUse(err) {
    err = errdefs.Conflict(err)
} else if IsNotExist(err) && cfg.PurgeOnError {
    err = nil
}
Perhaps the flag description (and API description https://docs.docker.com/engine/api/v1.40/#operation/VolumeDelete) should be updated to describe that it's used to suppress the error if the volume doesn't (or "no longer") exist.
I guess the option was added for situations where a race condition causes problems: a volume in the process of being removed by the volume driver, and no longer present at the moment the actual "delete" is attempted.
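Given the code above, -f only changes the outcome when the volume is already gone; an in-use volume still fails. A sketch of the resulting behavior (volume names are placeholders):

```shell
# Removing a volume that does not exist:
docker volume rm no-such-volume       # fails with "no such volume"
docker volume rm -f no-such-volume    # succeeds: PurgeOnError swallows the error

# A volume that is still referenced by a container fails either way:
docker volume rm -f mydata            # fails with "volume is in use"
```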
So a volume can be in use even by a stopped container. Right? In which case you would need to do:
$ docker container stop <container-id>
$ docker container rm <container-id>
Followed by:
$ docker volume rm <volume-id>
For some reason in my rush to do what I needed to do, I did not make the connection that a stopped container still references a volume. I think the above combination would have also worked for me. It seems glaringly obvious in hindsight.
So a volume can be in use even by a stopped container. Right?
Correct: the reason for marking those volumes as "in use" is that:
- you can docker create a container (using a volume), and docker start it separately
- docker run is a combination of docker create followed by docker start: marking the volume as "in use" prevents race conditions where the volume could be removed between those steps

In short: containers should generally not contain "state", and should be considered ephemeral (you can docker pull the image and start a new container), but volumes contain data that should be preserved, so we try to prevent accidental removal of volumes in the situations listed above.
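A quick way to see this in practice: even a container that was only created, never started, pins its volume (names below are placeholders):

```shell
# Create (but don't start) a container that mounts a named volume.
docker volume create myvol
docker create --name pending -v myvol:/data alpine true

# The volume is now "in use", even though nothing is running:
docker volume rm myvol        # fails: volume is in use

# Removing the created container releases the volume:
docker rm pending
docker volume rm myvol        # succeeds
```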
In which case you would need to do:
Yes; if you want to destroy / remove a container, including "anonymous" volumes that may be attached, you can use docker rm -fv <container-id>: -f forces killing the container if it's still running (without waiting for it to shut down cleanly), and -v removes anonymous volumes that are attached.
After that, docker volume rm <volume-name> allows you to remove named volumes.
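Put together, the cleanup looks like this (container and volume names are placeholders):

```shell
# Force-remove the container (killing it if still running) together
# with any anonymous volumes attached to it:
docker rm -fv mycontainer

# Named volumes survive -v and must be removed explicitly:
docker volume rm myvolume
```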
I have the same problem: I have deleted all the containers, but I still cannot remove the volume, and I no longer have the containers to try docker rm -fv <container-id> on.
w@w:~$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0875ca29dc35 ark74/nc_fts "/tini -- /usr/local…" 2 weeks ago Up 5 minutes 127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp fts_esror
w@w:~$ sudo docker volume ls
DRIVER VOLUME NAME
local esdata
local snipe-vol
local snipesql-vol
w@w:~$ sudo docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0875ca29dc35 ark74/nc_fts "/tini -- /usr/local…" 2 weeks ago Up 6 minutes 127.0.0.1:9200->9200/tcp, 127.0.0.1:9300->9300/tcp fts_esror
w@w:~$ sudo docker network ls
NETWORK ID NAME DRIVER SCOPE
20ceef544a8b bridge bridge local
799b799a3ffa host host local
58047c88bc54 none null local
w@w:~$ sudo docker volume rm -f snipe-vol snipesql-vol
Error response from daemon: remove snipe-vol: volume is in use - [d5ef36f089738154e6e611b802e570b9416a9bc939ce7c0f4801e6a57ddf05f1]
Error response from daemon: remove snipesql-vol: volume is in use - [c51481ca6ee8f0240281b5f139689a7e99fc9d44d406fd3f30a76a84eacfa623]
Solved it using docker ps -a -q to list all containers (including stopped ones), comparing that list with the running containers shown by sudo docker container ls, and removing the ones I had already "deleted" using docker rm -fv <container-id>.
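Rather than comparing the two lists by hand, you can let Docker find the offending containers itself; a sketch, using the volume names from the session above:

```shell
for vol in snipe-vol snipesql-vol; do
    # Find every container (running or stopped) that references the volume
    # and force-remove it along with its anonymous volumes.
    docker ps -a -q --filter volume="$vol" | xargs -r docker rm -fv
    # The volume is now unreferenced and can be removed.
    docker volume rm "$vol"
done
```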
This is very unintuitive but I can confirm that it works.
It worked for me. Thank you! Appreciate it.
Expected behavior
docker volume rm --force lets you remove local volumes that are in use by stopped containers.

Actual behavior
docker volume rm --force does not let you remove local volumes that are in use by stopped containers.

Steps to reproduce the behavior
Run docker volume rm --force against a volume that is in use by a stopped container, passing --force on the command line.

Output of docker version:
Output of docker info:
I'm running on Ubuntu 18.04.