Tendrl / ui

A repository for the front-end artifacts of Tendrl UI
GNU Lesser General Public License v2.1

Inconsistent Volume Status showing in UI and Grafana dashboards (vs. CLI) #1004

Open julienlim opened 6 years ago

julienlim commented 6 years ago

Here's the scenario I went through:

  1. Created a cluster (ju_cluster) with no volumes
  2. Added Tendrl
  3. Imported the cluster (with no volumes)
  4. Created a volume (vol1) in the cluster
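For step 4, a minimal sketch of the volume-creation commands, assuming the three nodes from the report (tendrl-node-1..3) are already peers in the trusted storage pool and the brick paths match the `gluster volume info` output below; the commands are collected into a variable and printed here rather than executed, since they need a live cluster:

```shell
# Reproduction sketch (assumed node names and brick paths, taken from
# the gluster volume info output in this report). Run on a cluster node.
REPRO_CMDS='gluster volume create vol1 replica 3 \
  tendrl-node-1:/gluster/brick1/brick1 \
  tendrl-node-2:/gluster/brick1/brick1 \
  tendrl-node-3:/gluster/brick1/brick1
gluster volume start vol1'

echo "$REPRO_CMDS"
```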

The volume is up and running just fine:

# gstatus -al

     Product: Community          Capacity: 728.00 MiB(raw bricks)
      Status: HEALTHY                       39.00 MiB(raw used)
   Glusterfs: 3.12.9                       243.00 MiB(usable from volumes)
  OverCommit: No                Snapshots:   0

   Nodes       :  3/  3       Volumes:   1 Up
   Self Heal   :  3/  3                  0 Up(Degraded)
   Bricks      :  3/  3                  0 Up(Partial)
   Connections :  3/   9                     0 Down

Volume Information
    vol1             UP - 3/3 bricks up - Replicate
                     Capacity: (5% used) 13.00 MiB/243.00 MiB (used/total)
                     Snapshots: 0
                     Self Heal:  3/ 3
                     Tasks Active: None
                     Protocols: glusterfs:on  NFS:off  SMB:on
                     Gluster Connectivty: 3 hosts, 9 tcp connections

    vol1------------ +
                     |
                Replicated (afr)
                         |
                         +-- Replica Set0 (afr)
                               |
                               +--tendrl-node-1:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 
                               |
                               +--tendrl-node-2:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 
                               |
                               +--tendrl-node-3:/gluster/brick1/brick1(UP) 13.00 MiB/243.00 MiB 

# gluster volume info vol1

Volume Name: vol1
Type: Replicate
Volume ID: 63c33318-5789-46c8-9cb7-9f96bafcba8f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: tendrl-node-1:/gluster/brick1/brick1
Brick2: tendrl-node-2:/gluster/brick1/brick1
Brick3: tendrl-node-3:/gluster/brick1/brick1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
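To make the discrepancy concrete: the `gluster volume info` output above is simple `Key: Value` text, and parsing it recovers the status the CLI reports ("Started"), which is what the UI should agree with. A minimal sketch (the sample text is copied from the output above; the parser itself is illustrative, not Tendrl's actual code):

```python
# Sketch: recover the volume status from `gluster volume info` output.
# The CLI reports "Started" here, while the Tendrl UI shows Unknown.
SAMPLE = """\
Volume Name: vol1
Type: Replicate
Volume ID: 63c33318-5789-46c8-9cb7-9f96bafcba8f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
"""

def parse_volume_info(text):
    """Parse the 'Key: Value' lines into a dict, skipping other lines."""
    info = {}
    for line in text.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            info[key.strip()] = value.strip()
    return info

info = parse_volume_info(SAMPLE)
print(info["Volume Name"], info["Status"])  # vol1 Started
```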

The Tendrl UI (Volumes page) shows the status of the volume as Unknown (?) even though it is active and running (screenshot: 2018-07-03, 12:58:09 PM).

The cluster dashboard shows that my volume is down (screenshot: 2018-07-03, 12:56:12 PM).

The volume dashboard shows N/A for the volume (screenshot: 2018-07-03, 12:56:38 PM).

Release information (# rpm -qa | grep tendrl | sort):

tendrl-ansible-1.6.3-2.el7.centos.noarch
tendrl-api-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-api-httpd-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-commons-1.6.3-20180628T114340.d094568.noarch
tendrl-grafana-plugins-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-grafana-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-monitoring-integration-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-node-agent-1.6.3-20180618T083110.ba580e6.noarch
tendrl-notifier-1.6.3-20180618T083117.fd7bddb.noarch
tendrl-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-ui-1.6.3-20180625T085228.23f862a.noarch

@Tendrl/qe @nthomas-redhat @gnehapk @cloudbehl @shirshendu

gnehapk commented 6 years ago

@julienlim Can you please share the API response for /volumes?

julienlim commented 6 years ago

@gnehapk Unfortunately, I deleted my environment, so I can't get the API response for /volumes. That said, if you follow the sequence I provided (create a cluster without a volume, install WA, create a volume), you should be able to see the same results.
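For anyone reproducing this, a sketch of capturing the requested API response; the server name and exact endpoint path are assumptions (substitute your Tendrl server and consult the tendrl-api docs for the precise route), so the block only constructs and prints the request, with the live fetch left commented:

```shell
# Hypothetical Tendrl server address -- substitute your own.
TENDRL_SERVER="tendrl-server.example.com"

# Assumed endpoint path based on the "/volumes" mention above;
# verify against the tendrl-api documentation for your release.
URL="http://${TENDRL_SERVER}/api/1.0/volumes"
echo "GET ${URL}"

# On a live setup, uncomment to fetch and pretty-print the response:
# curl -s "${URL}" | python -m json.tool
```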