google/cadvisor: Analyzes resource usage and performance characteristics of running containers.

Issues with cadvisor on Synology NAS #1846

edasque closed this issue 6 years ago

edasque commented 6 years ago

I am having issues running cadvisor on my Synology NAS.

If run in Docker, I get this:

I1227 04:16:53.543073       1 storagedriver.go:50] Caching stats in memory for 2m0s
I1227 04:16:53.543643       1 manager.go:151] cAdvisor running in container: "/sys/fs/cgroup/cpu"
I1227 04:16:53.813431       1 fs.go:139] Filesystem UUIDs: map[]
I1227 04:16:53.813532       1 fs.go:140] Filesystem partitions: map[cgmfs:{mountpoint:/var/run/cgmanager/fs major:0 minor:18 fsType:tmpfs blockSize:0} /dev/vg1000/lv:{mountpoint:/etc/resolv.conf major:253 minor:0 fsType:ext4 blockSize:0} shm:{mountpoint:/dev/shm major:0 minor:185 fsType:tmpfs blockSize:0} /dev/md0:{mountpoint:/var/lib/docker major:9 minor:0 fsType:ext4 blockSize:0} tmpfs:{mountpoint:/dev major:0 minor:188 fsType:tmpfs blockSize:0} none:{mountpoint:/ major:0 minor:184 fsType:aufs blockSize:0} /run:{mountpoint:/var/run major:0 minor:15 fsType:tmpfs blockSize:0}]
W1227 04:16:53.827315       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1227 04:16:53.827474       1 manager.go:225] Machine: {NumCores:4 CpuFrequency:2400000 MemoryCapacity:16820633600 HugePages:[] MachineID: SystemUUID:78563412-3412-7856-90AB-CDDEEFAABBCC BootID:d3ee6742-368e-43c0-bb86-1fb864d15278 Filesystems:[{Device:none DeviceMajor:0 DeviceMinor:184 Capacity:27536640897024 Type:vfs Inodes:1707507712 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:15 Capacity:8410316800 Type:vfs Inodes:2053300 HasInodes:true} {Device:cgmfs DeviceMajor:0 DeviceMinor:18 Capacity:102400 Type:vfs Inodes:2053300 HasInodes:true} {Device:/dev/vg1000/lv DeviceMajor:253 DeviceMinor:0 Capacity:27536640897024 Type:vfs Inodes:1707507712 HasInodes:true} {Device:shm DeviceMajor:0 DeviceMinor:185 Capacity:67108864 Type:vfs Inodes:2053300 HasInodes:true} {Device:/dev/md0 DeviceMajor:9 DeviceMinor:0 Capacity:2442780672 Type:vfs Inodes:155648 HasInodes:true} {Device:tmpfs DeviceMajor:0 DeviceMinor:188 Capacity:8410316800 Type:vfs Inodes:2053300 HasInodes:true}] DiskMap:map[9:1:{Name:md1 Major:9 Minor:1 Size:2147418112 Scheduler:none} 8:96:{Name:sdg Major:8 Minor:96 Size:6001175126016 Scheduler:cfq} 253:0:{Name:dm-0 Major:253 Minor:0 Size:27975772798976 Scheduler:none} 8:48:{Name:sdd Major:8 Minor:48 Size:6001175126016 Scheduler:cfq} 252:2:{Name:zram2 Major:252 Minor:2 Size:2522873856 Scheduler:none} 135:240:{Name:synoboot Major:135 Minor:240 Size:125829120 Scheduler:cfq} 9:0:{Name:md0 Major:9 Minor:0 Size:2549940224 Scheduler:none} 9:2:{Name:md2 Major:9 Minor:2 Size:11972709974016 Scheduler:none} 9:3:{Name:md3 Major:9 Minor:3 Size:16003067543552 Scheduler:none} 8:16:{Name:sdb Major:8 Minor:16 Size:8001563222016 Scheduler:cfq} 8:32:{Name:sdc Major:8 Minor:32 Size:6001175126016 Scheduler:cfq} 8:64:{Name:sde Major:8 Minor:64 Size:6301233340416 Scheduler:cfq} 8:112:{Name:sdh Major:8 Minor:112 Size:2000398934016 Scheduler:cfq} 252:0:{Name:zram0 Major:252 Minor:0 Size:2522873856 Scheduler:none} 252:1:{Name:zram1 Major:252 Minor:1 Size:2522873856 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:6201213935616 Scheduler:cfq} 8:80:{Name:sdf Major:8 Minor:80 Size:2000398934016 Scheduler:cfq} 252:3:{Name:zram3 Major:252 Minor:3 Size:2522873856 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:00:11:32:72:84:6f Speed:1000 Mtu:1500} {Name:eth1 MacAddress:00:11:32:72:84:70 Speed:4294967295 Mtu:1500} {Name:eth2 MacAddress:00:11:32:72:84:71 Speed:4294967295 Mtu:1500} {Name:eth3 MacAddress:00:11:32:72:84:72 Speed:4294967295 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480} {Name:tun0 MacAddress: Speed:10 Mtu:1500} {Name:tun1000 MacAddress: Speed:10 Mtu:1400}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:2 Threads:[2] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:3 Threads:[3] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1227 04:16:53.831323       1 manager.go:231] Version: {KernelVersion:3.10.102 ContainerOsVersion:Alpine Linux v3.4 DockerVersion:17.05.0-ce DockerAPIVersion:1.29 CadvisorVersion:v0.28.3 CadvisorRevision:1e567c2}
I1227 04:16:53.992531       1 factory.go:356] Registering Docker factory
I1227 04:16:55.993148       1 factory.go:54] Registering systemd factory
I1227 04:16:55.993944       1 factory.go:86] Registering Raw factory
I1227 04:16:55.994759       1 manager.go:1178] Started watching for new ooms in manager
W1227 04:16:55.994918       1 manager.go:313] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I1227 04:16:55.999676       1 manager.go:329] Starting recovery of all containers
E1227 04:16:56.147983       1 manager.go:1103] Failed to create existing container: /docker/58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129: failed to identify the read-write layer ID for container "58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129". - open /volume1/@docker/image/aufs/layerdb/mounts/58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129/mount-id: no such file or directory
E1227 04:16:56.155083       1 manager.go:1103] Failed to create existing container: /docker/aaef03e38bb4005f272053bb947b4762e381f4d49740fa2b1b5626fa5ca6bd1b: failed to identify the read-write layer ID for container "aaef03e38bb4005f272053bb947b4762e381f4d49740fa2b1b5626fa5ca6bd1b". - open /volume1/@docker/image/aufs/layerdb/mounts/aaef03e38bb4005f272053bb947b4762e381f4d49740fa2b1b5626fa5ca6bd1b/mount-id: no such file or directory
E1227 04:16:56.158827       1 manager.go:1103] Failed to create existing container: /docker/75c40dd43227004145e3c0f81950851249676f817821a75cc14271fdbb461c73: failed to identify the read-write layer ID for container "75c40dd43227004145e3c0f81950851249676f817821a75cc14271fdbb461c73". - open /volume1/@docker/image/aufs/layerdb/mounts/75c40dd43227004145e3c0f81950851249676f817821a75cc14271fdbb461c73/mount-id: no such file or directory
E1227 04:16:56.160581       1 manager.go:1103] Failed to create existing container: /docker/e39691259271787556ee95ce44a86034b47d81c157c8155954687a5454273330: failed to identify the read-write layer ID for container "e39691259271787556ee95ce44a86034b47d81c157c8155954687a5454273330". - open /volume1/@docker/image/aufs/layerdb/mounts/e39691259271787556ee95ce44a86034b47d81c157c8155954687a5454273330/mount-id: no such file or directory
E1227 04:16:56.169157       1 manager.go:1103] Failed to create existing container: /docker/2f6a815907013b12840529d2278bb12cb220b2cb4ae1f43e014eccc5942ee552: failed to identify the read-write layer ID for container "2f6a815907013b12840529d2278bb12cb220b2cb4ae1f43e014eccc5942ee552". - open /volume1/@docker/image/aufs/layerdb/mounts/2f6a815907013b12840529d2278bb12cb220b2cb4ae1f43e014eccc5942ee552/mount-id: no such file or directory
E1227 04:16:56.199645       1 manager.go:1103] Failed to create existing container: /docker/957fe7a6a8900d7acb8ced89447cb32d24270fa3d68fff1d8adbbe33d051fe58: failed to identify the read-write layer ID for container "957fe7a6a8900d7acb8ced89447cb32d24270fa3d68fff1d8adbbe33d051fe58". - open /volume1/@docker/image/aufs/layerdb/mounts/957fe7a6a8900d7acb8ced89447cb32d24270fa3d68fff1d8adbbe33d051fe58/mount-id: no such file or directory
E1227 04:16:56.210538       1 manager.go:1103] Failed to create existing container: /docker/34e7c96b428ba437c70ffb4dad7d30169c72aed0fc0a6abc1b8ff1a697a4238b: failed to identify the read-write layer ID for container "34e7c96b428ba437c70ffb4dad7d30169c72aed0fc0a6abc1b8ff1a697a4238b". - open /volume1/@docker/image/aufs/layerdb/mounts/34e7c96b428ba437c70ffb4dad7d30169c72aed0fc0a6abc1b8ff1a697a4238b/mount-id: no such file or directory
E1227 04:16:56.217118       1 manager.go:1103] Failed to create existing container: /docker/bb440822a1ddb00fff1e706c7e040204355b2769acf7604abaa89c5119c2352e: failed to identify the read-write layer ID for container "bb440822a1ddb00fff1e706c7e040204355b2769acf7604abaa89c5119c2352e". - open /volume1/@docker/image/aufs/layerdb/mounts/bb440822a1ddb00fff1e706c7e040204355b2769acf7604abaa89c5119c2352e/mount-id: no such file or directory
E1227 04:16:56.220922       1 manager.go:1103] Failed to create existing container: /docker/1b2c301a5b74631a509309b78cfd76482ccc26f25300f77b472e2fe7a9537f4b: failed to identify the read-write layer ID for container "1b2c301a5b74631a509309b78cfd76482ccc26f25300f77b472e2fe7a9537f4b". - open /volume1/@docker/image/aufs/layerdb/mounts/1b2c301a5b74631a509309b78cfd76482ccc26f25300f77b472e2fe7a9537f4b/mount-id: no such file or directory
E1227 04:16:56.231807       1 manager.go:1103] Failed to create existing container: /docker/ca318e216e80e216c7ddc1633d9b64515c43b68e5657662b2ade499c62473b56: failed to identify the read-write layer ID for container "ca318e216e80e216c7ddc1633d9b64515c43b68e5657662b2ade499c62473b56". - open /volume1/@docker/image/aufs/layerdb/mounts/ca318e216e80e216c7ddc1633d9b64515c43b68e5657662b2ade499c62473b56/mount-id: no such file or directory
E1227 04:16:56.233728       1 manager.go:1103] Failed to create existing container: /docker/e704b47f9b5b6418f997a518f858241e3a6ad51bde71407e99ffa001be42dc32: failed to identify the read-write layer ID for container "e704b47f9b5b6418f997a518f858241e3a6ad51bde71407e99ffa001be42dc32". - open /volume1/@docker/image/aufs/layerdb/mounts/e704b47f9b5b6418f997a518f858241e3a6ad51bde71407e99ffa001be42dc32/mount-id: no such file or directory
E1227 04:16:56.245712       1 manager.go:1103] Failed to create existing container: /docker/b56825dd4fe36201b8906629c36ba770eb7f0b0b34fe165d688e831543edfb6d: failed to identify the read-write layer ID for container "b56825dd4fe36201b8906629c36ba770eb7f0b0b34fe165d688e831543edfb6d". - open /volume1/@docker/image/aufs/layerdb/mounts/b56825dd4fe36201b8906629c36ba770eb7f0b0b34fe165d688e831543edfb6d/mount-id: no such file or directory
E1227 04:16:56.341028       1 manager.go:1103] Failed to create existing container: /docker/125683cfdeeeb00384dd9939de53231ad894e9c0d3b6625530aa348cea8fdc13: failed to identify the read-write layer ID for container "125683cfdeeeb00384dd9939de53231ad894e9c0d3b6625530aa348cea8fdc13". - open /volume1/@docker/image/aufs/layerdb/mounts/125683cfdeeeb00384dd9939de53231ad894e9c0d3b6625530aa348cea8fdc13/mount-id: no such file or directory
E1227 04:16:56.355810       1 manager.go:1103] Failed to create existing container: /docker/a93481bd94bec429a9534004fe97a304c008a3c91a05f76cd7e67cf0e0f3ef47: failed to identify the read-write layer ID for container "a93481bd94bec429a9534004fe97a304c008a3c91a05f76cd7e67cf0e0f3ef47". - open /volume1/@docker/image/aufs/layerdb/mounts/a93481bd94bec429a9534004fe97a304c008a3c91a05f76cd7e67cf0e0f3ef47/mount-id: no such file or directory
E1227 04:16:56.358660       1 manager.go:1103] Failed to create existing container: /docker/8924848f17a3ad1ec749873ee9daaa0ccc3362b50f88c66a96f5c394e1742304: failed to identify the read-write layer ID for container "8924848f17a3ad1ec749873ee9daaa0ccc3362b50f88c66a96f5c394e1742304". - open /volume1/@docker/image/aufs/layerdb/mounts/8924848f17a3ad1ec749873ee9daaa0ccc3362b50f88c66a96f5c394e1742304/mount-id: no such file or directory
E1227 04:16:56.360989       1 manager.go:1103] Failed to create existing container: /docker/40c1907ae6e93353ac8d917ca1602a2f39c07be0a96502281bc369e19d33b447: failed to identify the read-write layer ID for container "40c1907ae6e93353ac8d917ca1602a2f39c07be0a96502281bc369e19d33b447". - open /volume1/@docker/image/aufs/layerdb/mounts/40c1907ae6e93353ac8d917ca1602a2f39c07be0a96502281bc369e19d33b447/mount-id: no such file or directory
E1227 04:16:56.380318       1 manager.go:1103] Failed to create existing container: /docker/b1f28a08e7a293012eb7971b546a9932c97aa53d24ae273b1c6c5d1134553b71: failed to identify the read-write layer ID for container "b1f28a08e7a293012eb7971b546a9932c97aa53d24ae273b1c6c5d1134553b71". - open /volume1/@docker/image/aufs/layerdb/mounts/b1f28a08e7a293012eb7971b546a9932c97aa53d24ae273b1c6c5d1134553b71/mount-id: no such file or directory
E1227 04:16:56.397831       1 manager.go:1103] Failed to create existing container: /docker/c6d15a3b187ae14de627bfd70e362022257d0ddc3ea5620ba488ea0a743f4c44: failed to identify the read-write layer ID for container "c6d15a3b187ae14de627bfd70e362022257d0ddc3ea5620ba488ea0a743f4c44". - open /volume1/@docker/image/aufs/layerdb/mounts/c6d15a3b187ae14de627bfd70e362022257d0ddc3ea5620ba488ea0a743f4c44/mount-id: no such file or directory
I1227 04:16:56.415905       1 manager.go:334] Recovery completed
F1227 04:16:56.417122       1 cadvisor.go:156] Failed to start container manager: inotify_add_watch /sys/fs/cgroup/blkio: no space left on device

I have tried with and without /:/rootfs:ro.

Similarly, after cross-compiling with a golang image, running cAdvisor outside of Docker gives me this:

F1226 23:48:40.518381   16911 cadvisor.go:156] Failed to start container manager: inotify_add_watch /sys/fs/cgroup/blkio: no space left on device
goroutine 1 [running]:
github.com/google/cadvisor/vendor/github.com/golang/glog.stacks(0xc420232600, 0xc4201f8140, 0x92, 0x12e)
        /go/src/github.com/google/cadvisor/vendor/github.com/golang/glog/glog.go:769 +0xcf
github.com/google/cadvisor/vendor/github.com/golang/glog.(*loggingT).output(0x15b65a0, 0xc400000003, 0xc420434790, 0x12d329c, 0xb, 0x9c, 0x0)
        /go/src/github.com/google/cadvisor/vendor/github.com/golang/glog/glog.go:720 +0x345
github.com/google/cadvisor/vendor/github.com/golang/glog.(*loggingT).printf(0x15b65a0, 0x3, 0xe12039, 0x25, 0xc42086fe70, 0x1, 0x1)
        /go/src/github.com/google/cadvisor/vendor/github.com/golang/glog/glog.go:655 +0x14c
github.com/google/cadvisor/vendor/github.com/golang/glog.Fatalf(0xe12039, 0x25, 0xc42086fe70, 0x1, 0x1)
        /go/src/github.com/google/cadvisor/vendor/github.com/golang/glog/glog.go:1148 +0x67
main.main()
        /go/src/github.com/google/cadvisor/cadvisor.go:156 +0x496

Any advice on what to try out to fix the issue?

edasque commented 6 years ago

Expected Behaviour

On a Synology, I should be able to get metrics for my containers; /metrics should expose plenty of per-container metrics.

Current Behaviour

After a restart due to updating the minor version of Docker (to ), I get this in the error log:

I1227 04:16:53.543073       1 storagedriver.go:50] Caching stats in memory for 2m0s
I1227 04:16:53.543643       1 manager.go:151] cAdvisor running in container: "/sys/fs/cgroup/cpu"
I1227 04:16:53.813431       1 fs.go:139] Filesystem UUIDs: map[]
I1227 04:16:53.813532       1 fs.go:140] Filesystem partitions: map[cgmfs:{mountpoint:/var/run/cgmanager/fs major:0 minor:18 fsType:tmpfs blockSize:0} /dev/vg1000/lv:{mountpoint:/etc/resolv.conf major:253 minor:0 fsType:ext4 blockSize:0} shm:{mountpoint:/dev/shm major:0 minor:185 fsType:tmpfs blockSize:0} /dev/md0:{mountpoint:/var/lib/docker major:9 minor:0 fsType:ext4 blockSize:0} tmpfs:{mountpoint:/dev major:0 minor:188 fsType:tmpfs blockSize:0} none:{mountpoint:/ major:0 minor:184 fsType:aufs blockSize:0} /run:{mountpoint:/var/run major:0 minor:15 fsType:tmpfs blockSize:0}]
W1227 04:16:53.827315       1 info.go:52] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1227 04:16:53.827474       1 manager.go:225] Machine: {NumCores:4 CpuFrequency:2400000 MemoryCapacity:16820633600 HugePages:[] MachineID: SystemUUID:78563412-3412-7856-90AB-CDDEEFAABBCC BootID:d3ee6742-368e-43c0-bb86-1fb864d15278 Filesystems:[{Device:none DeviceMajor:0 DeviceMinor:184 Capacity:27536640897024 Type:vfs Inodes:1707507712 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:15 Capacity:8410316800 Type:vfs Inodes:2053300 HasInodes:true} {Device:cgmfs DeviceMajor:0 DeviceMinor:18 Capacity:102400 Type:vfs Inodes:2053300 HasInodes:true} {Device:/dev/vg1000/lv DeviceMajor:253 DeviceMinor:0 Capacity:27536640897024 Type:vfs Inodes:1707507712 HasInodes:true} {Device:shm DeviceMajor:0 DeviceMinor:185 Capacity:67108864 Type:vfs Inodes:2053300 HasInodes:true} {Device:/dev/md0 DeviceMajor:9 DeviceMinor:0 Capacity:2442780672 Type:vfs Inodes:155648 HasInodes:true} {Device:tmpfs DeviceMajor:0 DeviceMinor:188 Capacity:8410316800 Type:vfs Inodes:2053300 HasInodes:true}] DiskMap:map[9:1:{Name:md1 Major:9 Minor:1 Size:2147418112 Scheduler:none} 8:96:{Name:sdg Major:8 Minor:96 Size:6001175126016 Scheduler:cfq} 253:0:{Name:dm-0 Major:253 Minor:0 Size:27975772798976 Scheduler:none} 8:48:{Name:sdd Major:8 Minor:48 Size:6001175126016 Scheduler:cfq} 252:2:{Name:zram2 Major:252 Minor:2 Size:2522873856 Scheduler:none} 135:240:{Name:synoboot Major:135 Minor:240 Size:125829120 Scheduler:cfq} 9:0:{Name:md0 Major:9 Minor:0 Size:2549940224 Scheduler:none} 9:2:{Name:md2 Major:9 Minor:2 Size:11972709974016 Scheduler:none} 9:3:{Name:md3 Major:9 Minor:3 Size:16003067543552 Scheduler:none} 8:16:{Name:sdb Major:8 Minor:16 Size:8001563222016 Scheduler:cfq} 8:32:{Name:sdc Major:8 Minor:32 Size:6001175126016 Scheduler:cfq} 8:64:{Name:sde Major:8 Minor:64 Size:6301233340416 Scheduler:cfq} 8:112:{Name:sdh Major:8 Minor:112 Size:2000398934016 Scheduler:cfq} 252:0:{Name:zram0 Major:252 Minor:0 Size:2522873856 Scheduler:none} 252:1:{Name:zram1 Major:252 Minor:1 Size:2522873856 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:6201213935616 Scheduler:cfq} 8:80:{Name:sdf Major:8 Minor:80 Size:2000398934016 Scheduler:cfq} 252:3:{Name:zram3 Major:252 Minor:3 Size:2522873856 Scheduler:none}] NetworkDevices:[{Name:eth0 MacAddress:00:11:32:72:84:6f Speed:1000 Mtu:1500} {Name:eth1 MacAddress:00:11:32:72:84:70 Speed:4294967295 Mtu:1500} {Name:eth2 MacAddress:00:11:32:72:84:71 Speed:4294967295 Mtu:1500} {Name:eth3 MacAddress:00:11:32:72:84:72 Speed:4294967295 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480} {Name:tun0 MacAddress: Speed:10 Mtu:1500} {Name:tun1000 MacAddress: Speed:10 Mtu:1400}] Topology:[{Id:0 Memory:0 Cores:[{Id:0 Threads:[0] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:1 Threads:[1] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:2 Threads:[2] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]} {Id:3 Threads:[3] Caches:[{Size:24576 Type:Data Level:1} {Size:32768 Type:Instruction Level:1}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I1227 04:16:53.831323       1 manager.go:231] Version: {KernelVersion:3.10.102 ContainerOsVersion:Alpine Linux v3.4 DockerVersion:17.05.0-ce DockerAPIVersion:1.29 CadvisorVersion:v0.28.3 CadvisorRevision:1e567c2}
I1227 04:16:53.992531       1 factory.go:356] Registering Docker factory
I1227 04:16:55.993148       1 factory.go:54] Registering systemd factory
I1227 04:16:55.993944       1 factory.go:86] Registering Raw factory
I1227 04:16:55.994759       1 manager.go:1178] Started watching for new ooms in manager
W1227 04:16:55.994918       1 manager.go:313] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I1227 04:16:55.999676       1 manager.go:329] Starting recovery of all containers
E1227 04:16:56.147983       1 manager.go:1103] Failed to create existing container: /docker/58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129: failed to identify the read-write layer ID for container "58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129". - open /volume1/@docker/image/aufs/layerdb/mounts/58f3f4c3a29cbe56341dce542b6d836afb867b9c680cca40f3186a1f7e2a0129/mount-id: no such file or directory

The impact: I see lots of stats for subcontainers, but nothing for the Docker containers themselves.
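To check whether per-container metrics are actually being exported, you can grep the Prometheus endpoint directly. A minimal check, assuming the 8005->8080 port mapping from the run command below (container_cpu_usage_seconds_total is a standard cAdvisor metric name):

curl -s http://localhost:8005/metrics | grep 'container_cpu_usage_seconds_total{' | head

If only the root cgroup and system services show up in the labels, the Docker containers are not being tracked.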

Docker version: 17.05.0-ce
OS: Linux 3.10.102 #15217 SMP Wed Dec 20 18:18:56 CST 2017 x86_64 GNU/Linux synology_avoton_1815+

docker run --name=cadvisor --env="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" --env="GLIBC_VERSION=2.23-r3" --volume="/var/lib/docker:/var/lib/docker:ro" --volume="/var/run:/var/run:rw" --volume="/sys:/sys:ro" -p 0.0.0.0:8005:8080/tcp --detach=true google/cadvisor
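For reference, the invocation documented in the cAdvisor README of that era also mounts the host root and /dev/disk (reproduced here as a sketch, not verbatim; the absent /:/rootfs:ro mount is what the later comments converge on):

sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor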

/validate gives me:

cAdvisor version: v0.28.3

OS version: Alpine Linux v3.4

Kernel version: [Supported and recommended]
    Kernel version is 3.10.102. Versions >= 2.6 are supported. 3.0+ are recommended.

Cgroup setup: [Supported and recommended]
    Available cgroups: map[devices:1 freezer:1 blkio:1 cpuset:1 cpu:1 cpuacct:1 memory:1]
    Following cgroups are required: [cpu cpuacct]
    Following other cgroups are recommended: [memory blkio cpuset devices freezer]
    Hierarchical memory accounting enabled. Reported memory usage includes memory used by child containers.

Cgroup mount setup: [Supported and recommended]
    Cgroups are mounted at /sys/fs/cgroup.
    Cgroup mount directories: blkio cgmanager cpu cpuacct cpuset devices freezer memory 
    Any cgroup mount point that is detectible and accessible is supported. /sys/fs/cgroup is recommended as a standard location.
    Cgroup mounts:
    cgroup /sys/fs/cgroup/cpuset cgroup ro,nosuid,nodev,noexec,relatime,cpuset,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuset,clone_children 0 0
    cgroup /sys/fs/cgroup/cpu cgroup ro,nosuid,nodev,noexec,relatime,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu 0 0
    cgroup /sys/fs/cgroup/cpuacct cgroup ro,nosuid,nodev,noexec,relatime,cpuacct,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuacct 0 0
    cgroup /sys/fs/cgroup/memory cgroup ro,nosuid,nodev,noexec,relatime,memory,release_agent=/run/cgmanager/agents/cgm-release-agent.memory 0 0
    cgroup /sys/fs/cgroup/devices cgroup ro,nosuid,nodev,noexec,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices 0 0
    cgroup /sys/fs/cgroup/freezer cgroup ro,nosuid,nodev,noexec,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer 0 0
    cgroup /sys/fs/cgroup/blkio cgroup ro,nosuid,nodev,noexec,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio 0 0
    cgroup /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuset,clone_children 0 0
    cgroup /sys/fs/cgroup/cpu cgroup rw,relatime,cpu,release_agent=/run/cgmanager/agents/cgm-release-agent.cpu 0 0
    cgroup /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct,release_agent=/run/cgmanager/agents/cgm-release-agent.cpuacct 0 0
    cgroup /sys/fs/cgroup/memory cgroup rw,relatime,memory,release_agent=/run/cgmanager/agents/cgm-release-agent.memory 0 0
    cgroup /sys/fs/cgroup/devices cgroup rw,relatime,devices,release_agent=/run/cgmanager/agents/cgm-release-agent.devices 0 0
    cgroup /sys/fs/cgroup/freezer cgroup rw,relatime,freezer,release_agent=/run/cgmanager/agents/cgm-release-agent.freezer 0 0
    cgroup /sys/fs/cgroup/blkio cgroup rw,relatime,blkio,release_agent=/run/cgmanager/agents/cgm-release-agent.blkio 0 0

Docker version: [Supported and recommended]
    Docker version is 17.05.0-ce. Versions >= 1.0 are supported. 1.2+ are recommended.

Docker driver setup: [Supported and recommended]
    Storage driver is aufs.

Block device setup: [Supported and recommended]
    At least one device supports 'cfq' I/O scheduler. Some disk stats can be reported.
     Disk "zram3" Scheduler type "none".
     Disk "md1" Scheduler type "none".
     Disk "sda" Scheduler type "cfq".
     Disk "sdg" Scheduler type "cfq".
     Disk "zram2" Scheduler type "none".
     Disk "sdh" Scheduler type "cfq".
     Disk "synoboot" Scheduler type "cfq".
     Disk "zram1" Scheduler type "none".
     Disk "sde" Scheduler type "cfq".
     Disk "md2" Scheduler type "none".
     Disk "md3" Scheduler type "none".
     Disk "sdb" Scheduler type "cfq".
     Disk "sdd" Scheduler type "cfq".
     Disk "sdf" Scheduler type "cfq".
     Disk "zram0" Scheduler type "none".
     Disk "dm-0" Scheduler type "none".
     Disk "md0" Scheduler type "none".
     Disk "sdc" Scheduler type "cfq".

Inotify watches: 

Managed containers: 
    /pgsql
    /synoagentregisterd
    /crond
    /pkgctl-Plex Media Server
    /pkgctl-Java7
    /pkgctl-Perl
    /pkgctl-Git
    /pkgctl-SynoFinder
    /pkgctl-CloudSync
    /iscsi_pluginengined
    /pkgctl-git
    /pkgctl-nessentials
    /synoscgi
    /pkgctl-DownloadStation
    /pkgctl-NoteStation
    /pkgctl-monit
    /tty
    /pkgctl-nzbdrone
    /pkgctl-VPNCenter
    /synosnmpcd
    /sshd
    /
    /pkgctl-TextEditor
    /pkgctl-Init_3rdparty.conf
    /pkgctl-VideoStation
    /pkgctl-Docker
    /netatalk
    /pkgctl-darkstat
    /nginx
    /syslog-ng
    /pkgctl-PhotoStation
    /dhcp-client
    /synologd
    /synoindexd
    /pkgctl-Java8
    /pkgctl-HyperBackupVault
    /dbus-session
    /snmpd
    /synocontentextractd
    /pkgctl-PHP7.0
    /pkgctl-Init_3rdparty
    /pkgctl-zsh
    /pkgctl-notifier
    /pkgctl-oracle-java
    /pkgctl-AudioStation
    /pkgctl-FileStation
    /synologrotated
    /pkgctl-mono
    /pkgctl-SurveillanceStation
    /pkgctl-LogCenter
    /s2s_daemon
    /udevd
    /synocrond
    /pkgctl-tmux
    /synonetd
    /pkgctl-HyperBackup
    /synocgid
    /cupsd
    /synomkflvd
    /pkgctl-chromaprint
    /pkgctl-ffmpeg
    /pkgctl-PerlCGI
    /docker
    /pkgctl-python
    /ntpd
    /synorelayd
    /pkgctl-radarr
    /findhostd
    /smbd
    /pkgctl-net_notifier
    /pkgctl-mediainfo
    /avahi
    /nmbd
    /iscsi_pluginserverd
    /scemd
    /synobackupd
    /pkgctl-CloudStation
    /pkgctl-StorageAnalyzer
    /synoconfd
    /pkgctl-PHP5.6
    /pkgctl-filebot
    /synomkthumbd
    /dbus-system
    /synotifyd
    /synostoraged
    /hotplugd
    /minissdpd
    /pkgctl-WebTools
    /inetd

This lists a lot of what seem to be "system" containers, and none of the ones I am actually running, including cAdvisor itself:

CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                                         NAMES
2f6a81590701        google/cadvisor             "/usr/bin/cadvisor..."   3 days ago          Up 2 hours          0.0.0.0:8005->8080/tcp                                                                                                                                                        cadvisor
b1f28a08e7a2        prom/prometheus             "/bin/prometheus -..."   3 days ago          Up 2 hours          0.0.0.0:9090->9090/tcp                                                                                                                                                        prometheus_ocho
58f3f4c3a29c        pipeline-to-graphite        "/bin/sh -c 'bash ..."   9 days ago          Up 2 hours                                                                                                                                                                                        pipeline-to-graphite-ocho
edasque commented 6 years ago

Note that I didn't map --volume=/dev/disk/:/dev/disk:ro because /dev/disk doesn't exist on the Synology. Doing some research to figure out the equivalent.
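The /dev/disk/by-* hierarchy is normally populated by udev rules, which DSM apparently does not ship; the raw device nodes are still there and can be listed directly, e.g.:

ls -l /dev/sd* /dev/md* /dev/mapper/ 2>/dev/null

cAdvisor uses that mount only for extra disk metadata, so omitting it mainly costs some disk-related stats (a later comment in this thread observes the same).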

edasque commented 6 years ago

Hmmm, maybe all I was missing was --volume=/:/rootfs:ro

Closing for now. This might have been resolved simply by the newer Docker minor version on Synology, combined with my having removed --volume=/:/rootfs:ro while troubleshooting.

edasque commented 6 years ago

or running:

echo 104857 > /proc/sys/fs/inotify/max_user_watches

since

docker run --name=cadvisor --volume="/var/lib/docker:/var/lib/docker:ro" --volume=/:/rootfs:ro --volume="/var/run:/var/run:rw" --volume="/sys:/sys:ro" -p 8005:8080 --detach=true google/cadvisor

now seems to work.
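For context, the fatal "inotify_add_watch ... no space left on device" error means the per-user inotify watch limit was exhausted, not that a disk is full. A minimal sketch for inspecting and raising the limit (524288 is an arbitrary example value; whether the sysctl.conf step persists across a DSM reboot is an assumption worth verifying):

# Current limit
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running kernel (lost on reboot)
echo 524288 > /proc/sys/fs/inotify/max_user_watches

# Persist on systems that honor /etc/sysctl.conf
echo "fs.inotify.max_user_watches=524288" >> /etc/sysctl.conf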

tallesemmanuel commented 4 years ago

I have the same problem with cgroups and systemd.

With Docker 17.05 it works perfectly; when I update to 18.09, it breaks on this cgroup part.
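When debugging this kind of breakage, comparing the cgroup mounts before and after the upgrade can show what changed (a sketch; newer Docker/systemd combinations typically alter the cgroup driver or layout):

grep cgroup /proc/mounts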

SturmB commented 2 years ago

I am sorry to say that this no longer works on Synology.

I have a DS918+ running DSM 7.0.1 and the Docker version appears to be 20.10.3.

There is no /var/lib/docker nor a /dev/disk, both of which are needed according to the docs.

I did find some info that stated I could use /volume1/@docker in place of /var/lib/docker, and that appears to be okay, but since there is no Synology equivalent of /dev/disk, does that mean us Synology users just cannot use cAdvisor?

xinmans commented 2 years ago

I have the same problem with cgroups and systemd.

DS918+

Alex-Goaga commented 2 years ago

Did anyone manage to find the right command to install cAdvisor in Synology Docker on DSM 6?

madewithpat commented 2 years ago

@SturmB if you haven't already, give it a try without the /dev/disk volume mount. I've got cAdvisor running on a DS920+ with DSM 7.0.1, and it looks like I have metrics coming through to Prometheus (though I'm probably lacking some disk-related metrics without that /dev/disk mount).
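Putting the thread's findings together, a sketch of a DSM 7 invocation without the /dev/disk mount (the /volume1/@docker path comes from an earlier comment and may differ per setup; the port and container name are arbitrary):

docker run --detach=true --name=cadvisor \
  --publish=8080:8080 \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/volume1/@docker:/var/lib/docker:ro \
  google/cadvisor

Note that newer cAdvisor releases are published as gcr.io/cadvisor/cadvisor rather than google/cadvisor.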

SturmB commented 2 years ago

Looks like removing that volume mount now allows the container to start. Thank you, @madewithpat. It does still give me an error, though:

W0513 21:03:15.142872       1 fs.go:216] stat failed on /dev/mapper/cachedev_0 with error: no such file or directory

I hope that isn't affecting the program adversely.
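That warning suggests cAdvisor simply cannot see the DSM cache device node from inside the container. A quick way to compare what is visible on the host versus in the container (container name as used in this thread):

# On the host
ls -l /dev/mapper/cachedev_0

# Inside the container
docker exec cadvisor ls -l /dev/mapper/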

p6002 commented 1 year ago

I just created a new folder and installed cAdvisor there; it works.

gregorskii commented 7 months ago

Old issue, but I was working with this today. On Synology DSM with Container Manager, the Docker root is located at /var/packages/ContainerManager/var/docker:

volumes:
  - /:/rootfs:ro
  - /var/run:/var/run:ro
  - /sys:/sys:ro
  - /var/packages/ContainerManager/var/docker/:/var/lib/docker:ro

I dropped the /dev/disk mount as well.
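For anyone not using Compose, the same mounts expressed as a plain docker run (a sketch assuming the Container Manager path above; substitute whatever image tag you use, e.g. gcr.io/cadvisor/cadvisor for current releases):

docker run --detach=true --name=cadvisor \
  --publish=8080:8080 \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/packages/ContainerManager/var/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor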