MauriceNino / dashdot

A simple, modern server dashboard, primarily used by smaller private servers
https://getdashdot.com/
MIT License
2.53k stars 95 forks

[Bug] Storage reported as almost 100% #1049

Open melsophos opened 5 months ago

melsophos commented 5 months ago

### Description of the bug

I have three disks connected to my server (NUC8i3BEH): one for the system (/dev/sda, M.2 ATA SSD) and two for the data (a 2.5" Samsung SSD and an external 3.5" Western Digital HDD). The system disk appears as full, whereas `df` shows only 4% usage (even when executed inside the Docker container). However, the information for both other disks is correct.

This sounds similar to #1032.

(Screenshots: dashboard storage view showing the system disk at almost 100%.)

### How to reproduce

No response

### Relevant log output

Running `curl http://localhost:3001/info | jq`

```json
{
  "os": {
    "arch": "x64",
    "distro": "Ubuntu",
    "kernel": "6.2.0-39-generic",
    "platform": "linux",
    "release": "23.04",
    "uptime": 3619225.61,
    "dash_version": "5.8.3",
    "dash_buildhash": "f7ac2728b89a6c75502c9c736c46a94ff386889b"
  },
  "cpu": {
    "brand": "Intel",
    "model": "Core™ i3-8109U",
    "cores": 2,
    "ecores": 0,
    "pcores": 2,
    "threads": 4,
    "frequency": 3.6
  },
  "ram": {
    "size": 8178470912,
    "layout": [
      {
        "brand": "Crucial",
        "type": "DDR4",
        "frequency": 2667
      },
      {
        "brand": "Crucial",
        "type": "DDR4",
        "frequency": 2667
      }
    ]
  },
  "storage": [
    {
      "size": 512110190592,
      "disks": [
        {
          "device": "sda",
          "brand": "ATA",
          "type": "SSD"
        }
      ]
    },
    {
      "size": 4000787030016,
      "disks": [
        {
          "device": "sdb",
          "brand": "Samsung",
          "type": "SSD"
        }
      ]
    },
    {
      "size": 3000592982016,
      "disks": [
        {
          "device": "sdc",
          "brand": "External",
          "type": "HD"
        }
      ]
    }
  ],
  "network": {
    "interfaceSpeed": 1000,
    "speedDown": 0,
    "speedUp": 0,
    "lastSpeedTest": 0,
    "type": "Wired",
    "publicIp": ""
  },
  "gpu": {
    "layout": []
  }
}
```

```shell
# curl http://localhost:3001/load/storage
[510765391872,2986302857216,2582610644992]
```
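The reported usage can be reconstructed by dividing each `/load/storage` value by the matching `storage` size from `/info`. A quick sketch (values copied verbatim from the two responses above):

```python
# Usage as the dashboard would derive it: used bytes from /load/storage
# divided by total bytes from the /info "storage" array.
sizes = [512110190592, 4000787030016, 3000592982016]  # /info storage sizes
used  = [510765391872, 2986302857216, 2582610644992]  # /load/storage values

for device, u, s in zip(["sda", "sdb", "sdc"], used, sizes):
    print(f"{device}: {u / s * 100:.1f}% used")
# sda comes out at ~99.7%, matching the dashboard but not df's 4%.
```

This makes the mismatch concrete: the backend is reporting nearly the whole of `sda` as used, so the problem is in what is being measured, not in the rendering.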

### Info output of dashdot cli

```shell
INFO
=========
Yarn: 3.7.0
Node: v20.11.0
Dash: 5.8.3

Cwd: /app
Hash: f7ac2728b89a6c75502c9c736c46a94ff386889b
Platform: Linux 63117f0a668e 6.2.0-39-generic #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023 x86_64 Linux
Docker image: base
In Docker: true
In Docker (env): true
In Podman: false
```

### What browsers are you seeing the problem on?

Firefox

### Where is your instance running?

Linux Server

### Additional context

_No response_
ithinkmax commented 3 months ago

I have the same issue. dashdot runs in Docker on a Synology NAS with 80% free space, yet it shows 99.5% used.

This is the `df` output from inside the Docker container, captured via Portainer:

```shell
/app # df
Filesystem             1K-blocks        Used   Available Use% Mounted on
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /
tmpfs                        65536           0       65536   0% /dev
tmpfs                     16320904           0    16320904   0% /sys/fs/cgroup
shm                          65536           0       65536   0% /dev/shm
/dev/md0                   2385528     1634316      632428  72% /mnt/host
tmpfs                     16320904           0    16320904   0% /mnt/host/sys/fs/cgroup
devtmpfs                  16283540           0    16283540   0% /mnt/host/proc/bus/usb
devtmpfs                  16283540           0    16283540   0% /mnt/host/dev
tmpfs                     16320904         244    16320660   0% /mnt/host/dev/shm
tmpfs                   1073741824           0  1073741824   0% /mnt/host/dev/virtualization
tmpfs                     16320904       44324    16276580   0% /mnt/host/run
tmpfs                     16320904        2908    16317996   0% /mnt/host/tmp
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker/btrfs
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18
tmpfs                        65536           0       65536   0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/dev
shm                          65536           0       65536   0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/dev/shm
tmpfs                     16320904           0    16320904   0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/sys/fs/cgroup
none                     524288000   114259876   410028124  22% /mnt/host/volume1/ALi-Commerciale
none                     524288000    81498300   442789700  16% /mnt/host/volume1/ALi-Amministrazione
none                    2621440000  1575776384  1045663616  60% /mnt/host/volume1/ALi-Produzione
none                     786432000      408708   786023292   0% /mnt/host/volume1/PagaRent
/volume1/@ALi-Admin@   74981076176 15275270516 59705805660  20% /mnt/host/volume1/ALi-Admin
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/resolv.conf
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/hostname
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/hosts
```
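For reference, `df`'s Use% column is just Used / (Used + Available), so the 20% figure can be checked against the raw 1K-block columns. A minimal sketch using the `/dev/mapper/cachedev_0` numbers above:

```python
# Recompute df's Use% for /dev/mapper/cachedev_0 from the raw columns.
used, available = 15275270516, 59705805660
total = used + available  # matches the 1K-blocks column: 74981076176
print(f"{used / total * 100:.0f}%")  # ~20%, nowhere near the 99.5% the dashboard shows
```

So the filesystem itself agrees with Synology's own 80%-free figure; only dashdot's reading disagrees.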
