ovizii opened this issue 8 months ago
I did some digging and can add some more info about this incident:
As I mentioned, I moved disks between PCs, so when I was last able to access the Scrutiny interface I saw some disks listed multiple times under different names (/dev/sda, /dev/sdb, etc.). I then tried cleaning up each duplicate except the one with the most recent report date; after I clicked "delete device" and eventually restarted the Scrutiny container, the problem occurred.
BTW, I restored my Scrutiny DB from a backup and the GUI is back up and running again.
Please let me know how to safely delete duplicate disks.
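For anyone else cleaning up duplicates by hand, this is roughly what I would try outside the GUI. It is only a sketch based on my understanding of the layout: I am assuming the device metadata sits in the SQLite file scrutiny.db inside the mounted config directory, and that there is a devices table with wwn and device_name columns. Check the actual schema first, adjust the host path to your own config volume, and keep the backup copy around.

# stop the container so nothing writes to the database while it is edited
docker stop scrutiny

# back up the SQLite file first (path is a placeholder for wherever your config volume lives)
cp /path/to/scrutiny/config/scrutiny.db /path/to/scrutiny/config/scrutiny.db.bak

# inspect the schema and the device rows (table/column names are an assumption)
sqlite3 /path/to/scrutiny/config/scrutiny.db '.schema devices'
sqlite3 /path/to/scrutiny/config/scrutiny.db 'SELECT wwn, device_name FROM devices;'

# remove the stale duplicate by its wwn (placeholder value, substitute the real one)
sqlite3 /path/to/scrutiny/config/scrutiny.db "DELETE FROM devices WHERE wwn = '<stale-wwn>';"

docker start scrutiny

If anything looks wrong afterwards, restoring the .bak copy puts things back exactly as they were, which is effectively what I did with my own backup.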
I'm having a similar issue after upgrading to v0.8.1.
I had just removed a disk, but prior to the upgrade it had continued to show up with null/0 values for everything. Given that the OP went through something similar, and given the stack trace below, I get the sense the database is corrupted in some way.
ERROR Error: Uncaught (in promise): Ut: {"headers":{"normalizedNames":{},"lazyUpdate":null},"status":500,"statusText":"Internal Server Error","url":"http://192.168.1.203:9990/api/summary","ok":false,"name":"HttpErrorResponse","message":"Http failure response for http://192.168.1.203:9990/api/summary: 500 Internal Server Error","error":null}
[Recovery] 2024/04/09 - 15:35:33 panic recovered:
runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/runtime/panic.go:260 (0x44cffc)
/usr/local/go/src/runtime/signal_unix.go:841 (0x44cfcc)
/go/src/github.com/analogj/scrutiny/webapp/backend/pkg/database/scrutiny_repository.go:436 (0xc615d4)
/go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/handler/get_devices_summary.go:14 (0xe0be68)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/context.go:161 (0xdcd07a)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/recovery.go:83 (0xdcd066)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/context.go:161 (0xe0fbfe)
/go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/middleware/config.go:11 (0xe0fbe5)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/context.go:161 (0xe1115e)
/go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/middleware/repository.go:29 (0xe11145)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/context.go:161 (0xe10153)
/go/src/github.com/analogj/scrutiny/webapp/backend/pkg/web/middleware/logger.go:56 (0xe1012e)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/context.go:161 (0xdcbfc9)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/gin.go:409 (0xdcbc17)
/go/src/github.com/analogj/scrutiny/vendor/github.com/gin-gonic/gin/gin.go:367 (0xdcb733)
/usr/local/go/src/net/http/server.go:2936 (0x7961d5)
/usr/local/go/src/net/http/server.go:1995 (0x7916f1)
/usr/local/go/src/runtime/asm_amd64.s:1598 (0x469260)
clientIP=192.168.96.1 hostname=0abfb89a7def latency=30 level=error method=GET msg=192.168.96.1 - 0abfb89a7def [09/Apr/2024:15:35:33 +0000] "GET /api/summary" 500 0 "" "" (30ms) path=/api/summary referer=resp Length=0 statusCode=500 time=2024-04-09T15:35:33Z type=webuser Agent=
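For what it's worth, the failure can be confirmed outside the browser, which separates the frontend error from the backend panic. A quick sketch, using the same host/port as in the error above and the container name from the compose file below:

# the summary endpoint should return 200 and JSON on a healthy instance; here it returns 500
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.203:9990/api/summary

# pull the matching panic and stack trace out of the container logs
docker logs scrutiny 2>&1 | grep -A 20 'panic recovered'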
https://github.com/jgwehr/homelab-docker/blob/main/services/monitor/docker-compose.yml
scrutiny:
  container_name: scrutiny
  image: ghcr.io/analogj/scrutiny:master-omnibus
  ports:
    - ${PORT_SCRUTINY}:8080 # webapp
    - ${PORT_SCRUTINY_DB}:8086 # influxDB admin
  labels:
    - diun.enable=true
    - homepage.group=System
    - homepage.name=Scrutiny
    - homepage.icon=scrutiny
    - homepage.href=http://${SERVER_URL}:${PORT_SCRUTINY}
    - homepage.description=Harddrive Health Monitoring
    - homepage.widget.type=scrutiny
    - homepage.widget.url=http://${SERVER_URL}:${PORT_SCRUTINY}
  devices:
    - /dev/sda
    - /dev/sdb
    - /dev/sdc
  volumes:
    - /run/udev:/run/udev:ro
    - ${CONFIGDIR}/scrutiny:/opt/scrutiny/config
    - ${DBDIR}/scrutiny:/opt/scrutiny/influxdb
  cap_add:
    - SYS_RAWIO # necessary to allow smartctl permission to query your device SMART data
    - SYS_ADMIN # necessary for NVMe drives
  deploy:
    resources:
      limits:
        cpus: '1'
        memory: 256M
  restart: unless-stopped
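Since a removed disk kept showing up, it may also be worth comparing what the collector inside the container can actually see against the devices: list above. Assuming the omnibus image ships smartctl (which the collector relies on), something along these lines:

# list the devices smartctl can enumerate from inside the container
docker exec scrutiny smartctl --scan

# compare against what the host itself reports, including WWNs
lsblk -o NAME,MODEL,SERIAL,WWN

Entries that still appear in the web UI but in neither listing are presumably the stale leftovers.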
Client: Docker Engine - Community
 Version:    26.0.0
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.13.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.25.0
    Path:     /usr/libexec/docker/cli-plugins/docker-compose
  scan: Docker Scan (Docker Inc.)
    Version:  v0.23.0
    Path:     /usr/libexec/docker/cli-plugins/docker-scan

Server:
 Containers: 38
  Running: 37
  Paused: 0
  Stopped: 1
 Images: 48
 Server Version: 26.0.0
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
 runc version: v1.1.12-0-g51d5e94
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.5.0-26-generic
 Operating System: Ubuntu 22.04.4 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.15GiB
 Name: redacted
 ID: redacted
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Describe the bug
Trying to access the omnibus web GUI results in a perpetual loading state.
Additional Info
This instance of Scrutiny has been working for several years already. Prior to this error, I migrated the existing physical machine to new hardware by transplanting the OS disks, as well as all other disks and peripherals, to a new case, mainboard and CPU without reinstalling. 99% of the system kept working, so I am unsure what I could have changed to trigger this error with Scrutiny.
By the way, there was a similar issue which has since disappeared: https://github.com/AnalogJ/scrutiny/issues/523
Expected behaviour
I was expecting to be able to access the GUI, as usual.
Log Files
Please also provide the output of docker info.
Here is my docker-compose.yml: