att / netarbiter

Multi-site Network Emulation, Kubeadm-installed Kubernetes, NVMe over Fabrics

Clarify specifics for ceph health status check #5

Open blsaws opened 7 years ago

blsaws commented 7 years ago

In the guidance below, from netarbiter/sds/ceph-docker/examples/helm/README.md:

To check ceph health status [3]

kubectl -n ceph exec -it ceph-mon-0 -- ceph -s

The expected ceph health status needs to be clarified, e.g. by stating that the result of the following command should be HEALTH_OK:

kubectl -n ceph exec -it ceph-mon-0 -- ceph -s | awk "/health:/{print \$2}"
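
A minimal sketch of such a check, assuming the ceph-mon-0 pod name from the README. It parses ceph health instead of ceph -s, since ceph health prints the status word first and needs no multi-line matching:

# Fail unless the cluster reports HEALTH_OK.
status=$(kubectl -n ceph exec ceph-mon-0 -- ceph health | awk '{print $1}')
if [ "$status" != "HEALTH_OK" ]; then
    echo "ceph health is $status, expected HEALTH_OK" >&2
    exit 1
fi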

knowpd commented 7 years ago

The output should be similar to the following:

$ kubectl -n ceph exec -it ceph-mon-0 -- ceph -s
  cluster:
    id:     b007e867-c1ce-4076-b898-8a9af337a346
    health: HEALTH_WARN
            crush map has straw_calc_version=0
            Degraded data redundancy: 49/177 objects degraded (27.684%), 89 pgs unclean, 89 pgs degraded, 89 pgs undersized
            application not enabled on 1 pool(s)
            mon rc-ceph-6 is low on available space

  services:
    mon: 1 daemons, quorum rc-ceph-6
    mgr: rc-ceph-7(active)
    osd: 4 osds: 4 up, 4 in; 11 remapped pgs

  data:
    pools:   1 pools, 100 pgs
    objects: 59 objects, 136 MB
    usage:   976 MB used, 798 GB / 799 GB avail
    pgs:     49/177 objects degraded (27.684%)
             89 active+undersized+degraded
             11 active+clean+remapped
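
For automation, a hypothetical polling loop (not part of this thread) could wait for a transient degraded state like the one above to clear before asserting HEALTH_OK:

# Poll ceph health every 10 seconds, up to 30 attempts,
# until the cluster reports HEALTH_OK.
for i in $(seq 1 30); do
    s=$(kubectl -n ceph exec ceph-mon-0 -- ceph health | awk '{print $1}')
    [ "$s" = "HEALTH_OK" ] && break
    echo "attempt $i: ceph health is $s; retrying in 10s"
    sleep 10
done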

Hee Won Lee <knowpd@research.att.com>