Closed: Deathproof76 closed this issue 1 year ago
Please recheck:
root@lsioplex:/# chmod +x dbr
root@lsioplex:/# ./dbr
Plex Media Server is currently running, cannot continue.
Please stop Plex Media Server and restart this utility.
root@lsioplex:/# ls
app boot config dbr dev etc init lib32 libx32 mnt package root sbin sys transcode var
bin command data defaults docker-mods home lib lib64 media opt proc run srv tmp usr
root@lsioplex:/# s6-svc -d /var/run/service/svc-plex
root@lsioplex:/# ./dbr
Plex Media Server Database Repair Utility (Docker)
Select
1. Check database
2. Vacuum database
3. Reindex database
4. Attempt database repair
5. Replace current database with newest usable backup copy
6. Undo last successful action (Vacuum, Reindex, Repair, or Replace)
7. Import Viewstate / Watch history from another PMS database
8. Show logfile
9. Exit
Enter choice:
I downloaded and copied the script again. I'm not familiar with cat /proc/1/cgroup | grep docker, but if a Plex process should be expected there... maybe something is wrong on my end?
$ docker exec -it plex /bin/bash
root@Server-Zero:/# cd "/config/Library"
root@Server-Zero:/config/Library# chmod +x DBRepair.sh
root@Server-Zero:/config/Library# ls
'Application Support' DBRepair.sh License.md README.md
root@Server-Zero:/config/Library# ./DBRepair.sh
Error: Unknown host. Currently supported hosts are: QNAP, Synology, Netgear, Mac, ASUSTOR, WD (OS5) and Linux Workstation/Server
Error: Unknown host. Currently supported hosts are: QNAP, Synology, Netgear, Mac, ASUSTOR, WD (OS5) and Linux Workstation/Server
root@Server-Zero:/config/Library# cat /proc/1/cgroup | grep docker
root@Server-Zero:/config/Library# cat /proc/1/cgroup
0::/
root@Server-Zero:/config/Library# cd ..
root@Server-Zero:/config# cat /proc/1/cgroup
0::/
root@Server-Zero:/config# cd ..
root@Server-Zero:/# cat /proc/1/cgroup
0::/
root@Server-Zero:/# ls
app boot config dev etc init lib32 libx32 media-files opt proc run srv tmp var
bin command defaults docker-mods home lib lib64 media mnt package root sbin sys usr
root@Server-Zero:/# cat /proc/1/cgroup | grep docker
root@Server-Zero:/#
My transcode folder is on a ramdisk, by the way. I don't know if it matters; I just noticed it when comparing.
There is something wrong with your Docker engine or container. Please observe:
[chuck@lizum docker.2027]$ docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[chuck@lizum docker.2028]$ lsioplex
Error response from daemon: No such container: lsioplex
Error: No such container: lsioplex
Unable to find image 'lscr.io/linuxserver/plex:latest' locally
latest: Pulling from linuxserver/plex
274402f9efdb: Pull complete
cbba887b2540: Pull complete
a15ce2a609e0: Pull complete
2f5c2978749a: Pull complete
96464c4b8240: Pull complete
a58548592b7a: Pull complete
635112d11240: Pull complete
Digest: sha256:fa27fb0841e1bd75310317dc1812599fa29d3662464eed51fd766e9ed4060362
Status: Downloaded newer image for lscr.io/linuxserver/plex:latest
d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
[chuck@lizum docker.2029]$ docker exec -it lsioplex bash
root@lsioplex:/# cat /proc/1/cgroup | grep docker
13:perf_event:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
12:net_cls,net_prio:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
11:cpu,cpuacct:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
9:memory:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
8:hugetlb:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
7:cpuset:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
6:rdma:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
5:devices:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
4:pids:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
3:freezer:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
2:blkio:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
1:name=systemd:/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
0::/docker/d001a825d7cd48434007b4a651f1a45e2a74f2cf65f3abd8d4863981120bbd99
root@lsioplex:/#
I'm still reading up, as I still have a lot to learn, but this seems to be a possibility: https://stackoverflow.com/questions/67155739/how-to-check-if-process-runs-within-a-docker-container-cgroup-v2-linux-host
I tested other containers apart from Plex, and they all produce the same output:
cat /proc/1/cgroup
0::/
So maybe it's because my Docker engine is using cgroup v2? But then why would an earlier version of your script work? It's not that new (https://www.infoq.com/news/2021/01/docker-engine-cgroups-logging/), so I should have hit this problem before now. That said, I did a full apt upgrade yesterday (Ubuntu 22.04), and I have early updates turned on because of my Intel Arc A380, so that might still be connected. I'm going to test when I have time and will update once I understand more.
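One direct way to answer the "which cgroup version?" question is the filesystem type mounted at /sys/fs/cgroup. A minimal sketch (not taken from DBRepair.sh; the function name is mine):

```shell
#!/bin/sh
# Sketch: classify the cgroup hierarchy from the filesystem type mounted
# at /sys/fs/cgroup. "cgroup2fs" is the unified (v2) hierarchy; "tmpfs"
# indicates a v1 or hybrid layout. The check behaves the same on the host
# and inside a container.

cgroup_version() {
    # $1: filesystem type as reported by `stat -fc %T /sys/fs/cgroup`
    case "$1" in
        cgroup2fs) echo "v2" ;;
        tmpfs)     echo "v1" ;;
        *)         echo "unknown" ;;
    esac
}

cgroup_version "$(stat -fc %T /sys/fs/cgroup 2>/dev/null)"
```

Note this only tells you the cgroup version, not whether you are in a container, which is the part the script actually needs.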
docker version
Client: Docker Engine - Community
 Version:           20.10.21
 API version:       1.41
 Go version:        go1.18.7
 Git commit:        baeda1f
 Built:             Tue Oct 25 18:01:58 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.21
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.7
  Git commit:       3056208
  Built:            Tue Oct 25 17:59:49 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.12
  GitCommit:        a05d175400b1145e5e6a735a6710579d181e7fb0
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
Update: my Linux seems to be cgroup v2 capable, and my Docker is using cgroup v2: https://stackoverflow.com/questions/69002675/on-debian-11-bullseye-proc-self-cgroup-inside-a-docker-container-does-not-sho
From the host:
$ grep cgroup /proc/filesystems
nodev cgroup
nodev cgroup2
$ docker info | grep 'Cgroup Version'
Cgroup Version: 2
From inside the container:
root@Server-Zero:/# grep cgroup /proc/filesystems
nodev cgroup
nodev cgroup2
cgroup v2 appears to be a breaking change, so this seems to be the reason I'm seeing the following from inside the container:
cat /proc/1/cgroup
0::/
I'm currently not sure if something like this helps. For example, this is the ID of my container:
ID | fa932151776a79ed2f086a0883b3ba8b6361f5c51026acc301bd55f3b1b35e77
and then from within the container:
root@Server-Zero:/# cat /proc/self/mountinfo | grep docker
12730 6352 0:86 / / rw,relatime master:1902 - overlay overlay rw,lowerdir=/var/lib/docker/overlay2/l/3C5624PYGS2MBIBWBH2TAPCS7E:/var/lib/docker/overlay2/l/V4R5YCZPKRNB5RDOWQHBSB24UP:/var/lib/docker/overlay2/l/EYCVMZTRKCNFBNSUWCQJUAXA72:/var/lib/docker/overlay2/l/3VAWTU3OQQRQGXJUQU5YEQPWQG:/var/lib/docker/overlay2/l/W4TNFWOMASWF7M24VH6OBC7CQY:/var/lib/docker/overlay2/l/R2PM6T5VGXQEWEQS7BLXV2W656:/var/lib/docker/overlay2/l/JF7TZF6RNVTVYVKK5KH6CBNMPZ:/var/lib/docker/overlay2/l/VSMP27EE5JF5W7SUXQGPZTE5CX,upperdir=/var/lib/docker/overlay2/6bd10580d35518bbac9fce122f586e1a4c3346fe7912bcce04d6c94fb6115433/diff,workdir=/var/lib/docker/overlay2/6bd10580d35518bbac9fce122f586e1a4c3346fe7912bcce04d6c94fb6115433/work
12827 12730 259:4 /var/lib/docker/containers/fa932151776a79ed2f086a0883b3ba8b6361f5c51026acc301bd55f3b1b35e77/resolv.conf /etc/resolv.conf rw,noatime - ext4 /dev/nvme2n1p2 rw,errors=remount-ro
12828 12730 259:4 /var/lib/docker/containers/fa932151776a79ed2f086a0883b3ba8b6361f5c51026acc301bd55f3b1b35e77/hostname /etc/hostname rw,noatime - ext4 /dev/nvme2n1p2 rw,errors=remount-ro
12829 12730 259:4 /var/lib/docker/containers/fa932151776a79ed2f086a0883b3ba8b6361f5c51026acc301bd55f3b1b35e77/hosts /etc/hosts rw,noatime - ext4 /dev/nvme2n1p2 rw,errors=remount-ro
So I'll update when I find something closer to a cat /proc/1/cgroup equivalent:
https://stackoverflow.com/questions/68816329/how-to-get-docker-container-id-from-within-the-container-with-cgroup-v2
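Building on the mountinfo output above, here is a hedged sketch of extracting the container ID that way. The 64-hex-digit path component under /var/lib/docker/containers/ is an assumption about Docker's on-disk layout, and the function name is mine:

```shell
#!/bin/sh
# Sketch: even when /proc/1/cgroup is just "0::/", the container ID can
# leak through /proc/self/mountinfo via the bind mounts of /etc/hostname,
# /etc/hosts and /etc/resolv.conf, whose source paths live under
# /var/lib/docker/containers/<64-hex-id>/ on the host.

container_id_from_mountinfo() {
    # $1: path to a mountinfo file (normally /proc/self/mountinfo)
    sed -n 's#.*/var/lib/docker/containers/\([0-9a-f]\{64\}\)/.*#\1#p' "$1" | head -n 1
}

container_id_from_mountinfo /proc/self/mountinfo
```

Outside a container (or if the daemon uses a non-default data root) this prints nothing, so an empty result only means "no Docker signal found", not "definitely not in Docker".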
There also seems to be some kind of "compatibility mode", but it has to be forced during container creation. I'm honestly a little out of my depth here.
Duh... could it be that simple? When running on a cgroup v2 host:
$ cat /proc/1/cgroup
0::/init.scope
And when running in a cgroup v2 container:
cat /proc/1/cgroup
0::/
If the check is simply whether the script is running in a container, that would explain it.
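That observation can be sketched as a heuristic (all names are mine, not from DBRepair.sh). It assumes a cgroup v2 host booted with systemd, where PID 1 sits in 0::/init.scope, while PID 1 inside a container sits at the cgroup root:

```shell
#!/bin/sh
# Sketch: on a systemd host with cgroup v2, /proc/1/cgroup reads
# "0::/init.scope"; inside a container it is exactly "0::/". This is a
# heuristic, not a guaranteed test: it only applies under cgroup v2 and
# assumes the host runs systemd.

looks_like_container() {
    # $1: full contents of /proc/1/cgroup
    [ "$1" = "0::/" ]
}

if looks_like_container "$(cat /proc/1/cgroup)"; then
    echo "probably inside a container"
else
    echo "probably on the host"
fi
```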
I also asked ChatGPT:
Me: How can a script check if it's running in a Linux-based Docker container when the Docker container is using cgroup v2?
ChatGPT: You can check if your script is running inside a Docker container by looking for the presence of the /proc/self/cgroup file, which contains information about the control groups (cgroups) that the script's process belongs to.
To check if the Docker container is using Cgroup v2, you can check the contents of the /proc/self/cgroup file. If the file contains a line that starts with 0::/, it means that the container is using Cgroup v2. Here's some sample code that demonstrates this:
# Check if the /proc/self/cgroup file exists
if [ -f /proc/self/cgroup ]; then
    # Read the first line of the file
    cgroup=$(head -n 1 /proc/self/cgroup)
    # Check if the first line starts with "0::/"
    if [[ $cgroup == 0::/* ]]; then
        echo "Running inside a Docker container with Cgroup v2"
    else
        echo "Running inside a Docker container or using Cgroup v1"
    fi
else
    echo "Running outside a Docker container"
fi
This script checks if the /proc/self/cgroup file exists, and if it does, it reads the first line of the file and checks if it starts with 0::/. If it does, it means that the container is using Cgroup v2. If the file doesn't exist or the first line doesn't start with 0::/, it means that the script is either running outside a Docker container or inside a container that is using Cgroup v1.
Does this help maybe?
Edit: the script needs to be changed; I'll update when I find a solution. Right now the script can't tell whether it's running in a Docker container on cgroup v2 or directly on a cgroup v2 host.
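For what it's worth, here is a sketch of a multi-signal check that doesn't rely on /proc/1/cgroup alone. None of this is taken from DBRepair.sh; the function name and the optional path prefix (added so the logic can be exercised against a fake tree) are mine:

```shell
#!/bin/sh
# Sketch: combine several signals, since /proc/1/cgroup alone is ambiguous
# under cgroup v2. /.dockerenv is a marker file Docker creates in every
# container; the mountinfo grep catches Docker-managed mounts; the cgroup
# grep only ever matches on cgroup v1 hosts.

in_docker() {
    # $1 (optional): path prefix, so the check can be tested on a fake tree
    root="${1:-}"
    [ -f "$root/.dockerenv" ] && return 0
    grep -q '/var/lib/docker' "$root/proc/self/mountinfo" 2>/dev/null && return 0
    grep -q ':/docker/' "$root/proc/1/cgroup" 2>/dev/null && return 0
    return 1
}

if in_docker; then
    echo "Docker container detected"
else
    echo "no Docker signals found"
fi
```

Each signal can be defeated (custom data roots, other runtimes), which is why combining them is safer than trusting any single one.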
Thank you for that. I've researched deeper, and indeed Ubuntu is the issue.
The answer is as simple as:
The previous version of the script worked because it didn't use cgroup detection. Now, to support other Plex variants, it does, because cgroup detection is more reliable than finding the signature of each variant.
Will update with this shortly. Expect v0.6.1 today.
commit: 579004cfe07282c02d6a72f7f713083b55c08f7a
The last release of the script worked without a hitch; the problem only appears on the newest version. Thank you for the script, by the way!