Open marceldarvas opened 3 years ago
Perhaps look at this alternative backup.
It definitely won't downgrade anything and the backup doesn't stop the stack or take anything down.
The biggest "hole" I'm aware of is that the only database my solution backs up correctly is InfluxDB. If you're running MariaDB or PostgreSQL, the only safe way to capture those is to take the container down before starting the backup. Adding live backup and restore for those two is on my to-do list.
I followed Graham Garner's lead in explicitly omitting Nextcloud. Aside from the "running database" issue, I assumed Graham's reasoning was that it was likely to grow to be "too big" and that it was up to the user to figure it out.
I've noticed that there were a couple of different image versions unlinked from containers; I assume that's what caused the issues.
So I cleared them via the menu.
I'm going to look more into managing Docker as a whole; with this many apps, management and backups are never simple 😅 But you're making me want to utilize InfluxDB more!
@Paraphraser Thank you for your activities as well, for supporting this project!
At the risk of telling you things you already know, there are three basic types of image/container to think about:

1. Type 1: mentioned in `docker-compose.yml` with an `image:` statement. Portainer, InfluxDB and Grafana are examples of this.
2. Type 2: mentioned in `docker-compose.yml` with a `build:` statement. Node-RED is an example of this and I'm hoping Mosquitto soon will be too (at the moment, Mosquitto is one of the above).
3. Type 3: not mentioned in `docker-compose.yml` at all.

On a first-time install of IOTstack, Type 1 images are pulled down from DockerHub and instantiated to become your running containers. Using WireGuard as the example, if you do a:
$ docker images
you'll see a pattern like:
REPOSITORY TAG IMAGE ID CREATED SIZE
ghcr.io/linuxserver/wireguard latest 229d2ef4682c 6 days ago 291MB
The TAG reflects its status. Tags are mostly "latest" but you might see "stable" or an explicit version number too.
You can update Type 1 containers just by doing a:
$ docker-compose pull
Absent a "pin" to freeze an image at a particular version, any later image will come down from DockerHub. That new image will gain the tag while the older image will lose its tag and be marked "<none>":
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ghcr.io/linuxserver/wireguard latest 637459cc342e 23 minutes ago 291MB
ghcr.io/linuxserver/wireguard <none> 229d2ef4682c 6 days ago 291MB
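For completeness, a "pin" is just an explicit tag in the service definition instead of the implied "latest". A minimal sketch of what that might look like in `docker-compose.yml` (the version tag shown is a made-up example, not a recommendation):

```yaml
wireguard:
  # unpinned - "latest" is implied, so any later pull can move you forward:
  image: ghcr.io/linuxserver/wireguard
  # pinned - frozen at this version until you edit the tag yourself
  # (hypothetical tag, for illustration only):
  # image: ghcr.io/linuxserver/wireguard:v1.0.0
```

With a pin in place, `docker-compose pull` keeps re-fetching the same tag rather than tracking "latest".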
Nothing about your running stack has changed yet. You can confirm that for yourself:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bb3fca0e153d 229d2ef4682c "/init" 5 days ago Up 5 days 0.0.0.0:51720->51820/udp, :::51720->51820/udp wireguard
Notice how the IMAGE column, which normally shows a name, has been replaced with the IMAGE ID of the untagged image. When you do:
$ docker-compose up -d
the new image will be instantiated to become the running container, with the old container sent to the bit-bucket.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
91c0e4be0dd6 ghcr.io/linuxserver/wireguard "/init" 7 seconds ago Up 5 seconds 0.0.0.0:51720->51820/udp, :::51720->51820/udp wireguard
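If you ever want to check for containers that are still running from a superseded image, the tell-tale is an IMAGE column containing a bare 12-character image ID instead of a repository name. A minimal sketch of that filter (the helper name and the sample data are my own, for illustration):

```shell
# Spot containers still running from a now-untagged image: their IMAGE
# column is a bare 12-character hex image ID rather than a repository name.
# Pipe "docker ps" output in; the matching container NAMES come out.
stale_containers() {
    awk 'NR > 1 && length($2) == 12 && $2 ~ /^[0-9a-f]+$/ { print $NF }'
}

# Typical use on a live system:
#   docker ps | stale_containers
```

An empty result means every running container is on a currently tagged image.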
The old (untagged) image can't be cleaned up until the old container has gone away (which makes sense; it'd be like pulling the rug out from under it). But, once the old container has gone, the old (untagged) image can be cleaned up with a:
$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all dangling build cache
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: ghcr.io/linuxserver/wireguard@sha256:afe79369eb96a822a51f32bbdb16d9c5601e8f38110f7f4d8ba41d28ee7a6d36
deleted: sha256:229d2ef4682cf2004440b823094f5e7a285f7920d2c707175ab8ede285624274
deleted: sha256:e34a32b4f3e28c88d9f75c7012988ad168493d94e0a15ec432a043e3010d37a5
deleted: sha256:e0e5b01d9786dde052ea63555ab6ac30f7aa308f80a266e2b0de17a8132c3e21
deleted: sha256:cd154a75548b772b0f42d9ee9b74efe7076727cfbdfea845b17a873a52563c36
deleted: sha256:ad850d41c6e335c573e8a570fd329ce13532b6fd9dcf16d8d98e32873c6a750a
deleted: sha256:8590551f3555fbdd1c4bce05ef1275b28cfc8f30cf2dcce3660b9e04fcb1f9ba
deleted: sha256:cd32c2bb2f0bf2295175c52681ba39b94544e8dc6f81d17b2eaab66f3410d145
deleted: sha256:71d35c621b3179ca1f9802f332a7b3e1d583ccaab443c6b53d62b397116c7c68
deleted: sha256:85e0f34ae0fa19adaff0d2a629f6079c619951812aa278e60f0ae5669bea28df
deleted: sha256:d593584affc108afc972dc563a79d8a1a9eaefb31d81c3e8094e3ab08ac849de
Total reclaimed space: 291MB
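If you'd like to see what a `prune` will claim before running it, the candidates are the images whose TAG is `<none>`. A small sketch that filters a `docker images` listing (the helper name is my own; `docker images --filter dangling=true` reports the same thing directly):

```shell
# List the IMAGE IDs of dangling (untagged) images in "docker images"
# output - the images "docker system prune" would remove.
dangling_ids() {
    awk '$2 == "<none>" { print $3 }'
}

# Typical use on a live system:
#   docker images | dangling_ids
```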
Type 2 containers involve a two-step process. On a first install, the "latest" (or "stable" or pinned) version of the base image comes down from DockerHub. Then the Dockerfile is run to produce a local image. The local image is instantiated to become the running container.
Using Node-RED as the example, you see two images:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
iotstack_nodered latest 750cfe0e6516 3 days ago 423MB
nodered/node-red latest-12 1c451e6f8470 3 days ago 386MB
`nodered/node-red` is the base image from DockerHub while `iotstack_nodered` is the local image. The `ps` output makes it clear that `iotstack_nodered` is what is running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f11d711a381 iotstack_nodered "npm --no-update-not…" 3 days ago Up 3 days (healthy) 0.0.0.0:1880->1880/tcp, :::1880->1880/tcp nodered
A `prune` won't get rid of `nodered/node-red` because Docker is aware that it's the base for the local image. Strictly speaking, though, it isn't being used, and Portainer reports it as "unused". You can force it to go away, but there's little point because of the first update situation, which I'm about to get to.
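Since the locally built images follow the `iotstack_<service>` naming pattern shown above, it's easy to separate them from pulled base images when eyeballing a listing. A sketch (the helper name is my own; the pattern match is an assumption based on that naming convention):

```shell
# List locally built IOTstack images (repositories named iotstack_<service>),
# as distinct from base images pulled from a registry.
local_builds() {
    awk 'NR > 1 && $1 ~ /^iotstack_/ { print $1 }'
}

# Typical use:
#   docker images | local_builds
```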
There are two "update" situations with Dockerfile-based containers.
The first is a local change (eg you decide to add a new add-on node to your Dockerfile - instead of using Manage Palette in the GUI). This is also what happens if you revisit the menu to change the list of Node-RED add-on nodes. If you want to "apply" such a change then you need:
$ docker-compose up --build -d nodered
That starts with the base image (which is why it is a good idea to keep it around), re-runs the Dockerfile to produce a new local image and, because it's an "up" command, instantiates that new local image to become the running container, with the old container discarded.
The same "tag swap" happens with the two local images: the old image gets "<none>" while the new image acquires "latest", and a `prune` can clean up the old image.
The second "update situation" is when there's a new base image on DockerHub. For that you need:
$ docker-compose build --no-cache --pull nodered
That pulls down the new base image (a tag-swap occurs), then the new local image is built by running the Dockerfile (another tag-swap) but, because this isn't an "up" command, nothing changes until you run:
$ docker-compose up -d
The new local image is instantiated to become the running container and the old container goes puff. Once the old container is gone, a first `prune` can clobber the old local image, but it can't zap the old base image until the old local image has gone. Once the old local image has gone, a second `prune` can remove the old base image.
One of the key things to note about the commands for Types 1 and 2 is that they are a mixture of `docker` (meaning "whole of Docker") and `docker-compose` (meaning "you need to be in `~/IOTstack` to run the command, and the command only affects something mentioned in `docker-compose.yml`").
Type 3 containers only involve `docker` commands. You'll get images of this kind if you're following instructions on the web which include something like this:
$ docker pull httpd
$ docker run -it -p 8080:80 httpd
It will show up in an images list:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd latest b3cc94b68658 11 days ago 107MB
but a `prune` won't get rid of it. You have to kill it by hand, typically by using its IMAGE ID:
$ docker rmi b3cc94b68658
Even then you frequently run afoul of messages like:
Error response from daemon: conflict: unable to delete b3cc94b68658 (must be forced) - image is being used by stopped container c08168d428d2
This means that the container that was instantiated from that image was stopped but not removed.
With Type 1 and 2 containers, you're generally stopping your stack with a `down`, and that implies both a `stop` and an `rm` (remove) of each container.
A `prune` will get rid of stopped containers, but you can also just use the container ID at the end of the error message:
$ docker rm -f c08168d428d2
and then re-do the `rmi`.
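If you wanted to script that recovery step, the blocking container ID can be pulled straight out of the error message. A sketch (the helper name is my own; the message text is the example above):

```shell
# Extract the blocking container ID from docker's "must be forced" error,
# so it can be handed to "docker rm -f" before retrying the "docker rmi".
blocking_container() {
    sed -n 's/.*stopped container \([0-9a-f]*\).*/\1/p'
}

# Typical use (captures the ID from the failed rmi's error output):
#   docker rmi b3cc94b68658 2>&1 | blocking_container
```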
Rolling all this together: you do have to run the various `pull`, `build` and `up --build` commands, keep a watch on `docker images`, run `prune` when needed, and occasionally get more heavy-handed with `rmi` and `rm -f`. The menu doesn't really deal with all of these subtleties.
I was wondering why some of my apps, like Homebridge and Home Assistant, were acting odd.
Yesterday I used the menu to run a backup and it actually downgraded my Homebridge version and shut down my Home Assistant (installed as Supervisor).
I really like this project, but this instability makes me anxious.
@Slyke I've been watching your work, excited to see the new release!