ClusterLabs / pcs

Pacemaker command line interface and GUI
GNU General Public License v2.0

Unable to add the existing cluster to Web UI #213

Closed Ogekuri closed 3 months ago

Ogekuri commented 5 years ago

I have a running cluster that was previously managed with crmsh. I've now installed and configured pcs, and from the pcs command line everything seems to work correctly (see the output below the horizontal line).

Now I'm trying to add the cluster to the Web UI, but the UI shows an "undefined" error.

(screenshot: the Web UI reporting an "undefined" error)

In the log, the only errors I see are check_auth timeouts:

I, [2019-10-15T16:45:42.232 #00027]     INFO -- : No response from: 192.168.1.9 request: check_auth, error: operation_timedout
I, [2019-10-15T16:45:42.233 #00027]     INFO -- : No response from: 192.168.1.10 request: check_auth, error: operation_timedout

I've tried re-running the auth command, and it seems to complete without errors:

I, [2019-10-15T16:47:02.291 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 3955.75ms
I, [2019-10-15T16:47:12.188 #00000]     INFO -- : 200 POST /remote/check_auth (192.168.1.10) 4124.69ms
I, [2019-10-15T16:47:19.858 #00020]     INFO -- : Saved config 'known-hosts' version 3 240b5cd093762a0a466cfff55373354c53e387b7 to '/var/lib/pcsd/known-hosts'
I, [2019-10-15T16:47:19.861 #00000]     INFO -- : 200 POST /remote/set_configs (192.168.1.10) 3992.16ms

I, [2019-10-15T16:44:49.020 #00023]     INFO -- : Sending config response from envision1: {"status"=>"ok", "result"=>{"known-hosts"=>"accepted"}}
I, [2019-10-15T16:44:49.021 #00023]     INFO -- : Sending config response from envision2: {"status"=>"ok", "result"=>{"known-hosts"=>"accepted"}}

Any suggestions? Thanks!
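Since both nodes time out on check_auth while direct requests succeed, one basic sanity check from the node running the web UI is whether pcsd's TCP port 2224 answers at all for each name and address involved. A minimal sketch (the hostnames and IPs are the ones from this setup):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, connection refusals and timeouts
        return False

if __name__ == "__main__":
    # pcsd listens on TCP 2224; names/IPs taken from this cluster
    for host in ("envision1", "envision2", "192.168.1.9", "192.168.1.10"):
        print(host, "reachable" if can_connect(host, 2224) else "NOT reachable")
```

If all four report reachable, the timeouts are more likely inside pcsd's node-to-node HTTPS requests than in basic networking.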


root@envision1:~ # pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: envision2 (version 2.0.1-9e909a5bdd) - partition with quorum
 Last updated: Tue Oct 15 16:51:34 2019
 Last change: Mon Oct 14 16:50:36 2019 by root via cibadmin on envision1
 2 nodes configured
 69 resources configured (24 DISABLED)

PCSD Status:
  envision2: Online
  envision1: Online

root@envision1:~ # pcs status
Cluster name: envision
Stack: corosync
Current DC: envision2 (version 2.0.1-9e909a5bdd) - partition with quorum
Last updated: Tue Oct 15 16:52:23 2019
Last change: Mon Oct 14 16:50:36 2019 by root via cibadmin on envision1

2 nodes configured
69 resources configured (24 DISABLED)

Online: [ envision1 envision2 ]

Full list of resources:

 Clone Set: fencing [st-dummy]
     Started: [ envision1 envision2 ]
 Clone Set: clone_drbdlinks [p_drbdlinks]
     Started: [ envision1 envision2 ]
 Clone Set: clone_munin-node [p_munin-node]
     Stopped (disabled): [ envision1 envision2 ]
 p_VirtualIP-IP1 (ocf::heartbeat:IPaddr2): Started envision1
 p_VirtualIP-IP2 (ocf::heartbeat:IPaddr2): Started envision2
 p_VirtualIP-IP3 (ocf::heartbeat:IPaddr2): Started envision2
 p_pushover-vip1 (systemd:pushover-vip1): Started envision1
 p_pushover-vip2 (systemd:pushover-vip2): Started envision2
 p_pushover-vip3 (systemd:pushover-vip3): Started envision2
 Resource Group: g_NODE1
     p_links-ip1 (systemd:links-ip1): Started envision1
 Resource Group: g_ENVISION
     p_mysql (systemd:mysql): Started envision1
     p_apache2 (systemd:apache2): Started envision1
 Resource Group: g_METERN
     p_metern-con-0 (systemd:metern-con-0): Started envision1
     p_metern-con-1 (systemd:metern-con-1): Started envision1
     p_metern-get-0-1 (systemd:metern-get-0-1): Started envision1
     p_metern-get-1-3 (systemd:metern-get-1-3): Started envision1
     p_metern-process (systemd:metern-process): Started envision1
     p_metern-alert (systemd:metern-alert): Started envision1
 Resource Group: g_TTRSS
     p_ttrss-update (systemd:ttrss-update): Started envision1
 Resource Group: g_SMOKEPING
     p_smokeping (systemd:smokeping): Started envision1
 Resource Group: g_AVAHI
     p_avahi-daemon (systemd:avahi-daemon): Started envision1
     p_avahi-dnsconfd (systemd:avahi-dnsconfd): Started envision1
 Resource Group: g_NODE2
     p_links-ip2 (systemd:links-ip2): Started envision2
 Resource Group: g_SAWMILL
     p_http-https_traffic_log (systemd:http-https_traffic_log): Stopped (disabled)
     p_sawmill7 (systemd:sawmill7): Stopped (disabled)
     p_sawmill8 (systemd:sawmill8): Stopped (disabled)
 Resource Group: g_QUAKE
     p_q3ded-1v1 (systemd:q3ded-1v1): Stopped (disabled)
     p_q3ded-ffa (systemd:q3ded-ffa): Stopped (disabled)
 Resource Group: g_NODE3
     p_links-ip3 (systemd:links-ip3): Started envision2
 Resource Group: g_PYLOAD
     p_pyload-extractor (systemd:pyload-extractor): Started envision2
     p_pyload (systemd:pyload): Started envision2
 Resource Group: g_ARIA2
     p_aria2-daemon (systemd:aria2-daemon): Stopped (disabled)
     p_aria2-webui (systemd:aria2-webui): Stopped (disabled)
 Clone Set: clone_pushover-base [p_pushover-base]
     Stopped: [ envision1 envision2 ]
 Resource Group: g_CUPS
     p_cups (systemd:cups): Started envision1
     p_cups-browsed (systemd:cups-browsed): Started envision1
 p_pushover-g_ARIA2 (systemd:pushover-g_ARIA2): Stopped
 p_pushover-g_AVAHI (systemd:pushover-g_AVAHI): Started envision1
 p_pushover-g_CUPS (systemd:pushover-g_CUPS): Started envision1
 p_pushover-g_METERN (systemd:pushover-g_METERN): Started envision1
 p_pushover-g_NODE1 (systemd:pushover-g_NODE1): Started envision1
 p_pushover-g_NODE2 (systemd:pushover-g_NODE2): Started envision2
 p_pushover-g_NODE3 (systemd:pushover-g_NODE3): Started envision2
 p_pushover-g_PYLOAD (systemd:pushover-g_PYLOAD): Started envision2
 p_pushover-g_SAWMILL (systemd:pushover-g_SAWMILL): Stopped
 p_pushover-g_SMOKEPING (systemd:pushover-g_SMOKEPING): Started envision1
 p_pushover-g_TTRSS (systemd:pushover-g_TTRSS): Started envision1
 p_pushover-g_QUAKE (systemd:pushover-g_QUAKE): Stopped
 p_pushover-g_DAAPD (systemd:pushover-g_DAAPD): Started envision1
 Resource Group: g_DAAPD
     p_forked-daapd (systemd:forked-daapd): Started envision1
 p_pushover-g_SAMBA (systemd:pushover-g_SAMBA): Started envision1
 Resource Group: g_SAMBA
     p_nmbd (systemd:nmbd): Started envision1
     p_smbd (systemd:smbd): Started envision1
 p_pushover-g_DLNA (systemd:pushover-g_DLNA): Started envision1
 Resource Group: g_DLNA
     p_minidlna (systemd:minidlna): Started envision1
 p_pushover-g_PLEX (systemd:pushover-g_PLEX): Stopped
 Resource Group: g_PLEX
     p_plexmediaserver (systemd:plexmediaserver): Stopped (disabled)
 p_pushover-ALL (systemd:pushover-ALL): Stopped
 p_pushover-ALL1 (systemd:pushover-ALL1): Started envision1
 p_pushover-ALL2 (systemd:pushover-ALL2): Stopped
 p_pushover-ALL3 (systemd:pushover-ALL3): Started envision2
 p_pushover-g_NTOPNG (systemd:pushover-g_NTOPNG): Stopped
 Resource Group: g_NTOPNG
     p_redis-server (systemd:redis-server): Stopped (disabled)
     p_nprobe (systemd:nprobe): Stopped (disabled)
     p_ntopng (systemd:ntopng): Stopped (disabled)

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

tomjelinek commented 5 years ago

Can you provide more details?

Ogekuri commented 5 years ago

> Can you provide more details?

Sure!

  • What pcs version are you running?

I'm on Debian Buster:

corosync   3.0.1-2
crmsh      4.0.0~git20190108.3d56538-3
pacemaker  2.0.1-5
pcs        0.10.1-2
pcsc-tools 1.5.4-1
pcscd      1.8.24-1
sbd        1.4.0-18-g5e3283c-1
  • Are you using web UI running on one of your cluster nodes?

Yes from envision1, node 1, ip 192.168.1.9

  • What did you enter in the Add existing cluster form?

I've tried with envision1, envision2, 192.168.1.9 and 192.168.1.10. Same result.

  • 192.168.1.9 and 192.168.1.10 are your cluster nodes, right?

Yes

I've also tried creating a pcs_settings.conf manually, but the cluster is still not visible in the Web UI:

root@envision1:~ # cat /var/lib/pcsd/pcs_settings.conf
{
  "format_version": 2,
  "data_version": 0,
  "clusters": [
    {
      "name": "envision",
      "nodes": [
        "envision1",
        "envision2"
      ]
    }
    ],
  "permissions": {
    "local_cluster": [
      {
        "type": "group",
        "name": "haclient",
        "allow": [
          "grant",
          "read",
          "write"
        ]
      }
    ]
  }
}

I'm logging in with the hacluster user, which is a member of the haclient group.

root@envision1:~ # cat /etc/group | grep hacl
haclient:x:131:hacluster
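As a sanity check, the group-based permission lookup the web UI has to perform can be modelled mechanically. A minimal sketch, based on my reading of the pcs_settings.conf structure shown above (not pcsd's actual code):

```python
# "permissions" section of the pcs_settings.conf shown above
SETTINGS = {
    "permissions": {
        "local_cluster": [
            {"type": "group", "name": "haclient",
             "allow": ["grant", "read", "write"]}
        ]
    }
}

def groups_of(user: str, etc_group_lines: list[str]) -> list[str]:
    """Group names from /etc/group-style lines that list `user` as a member."""
    groups = []
    for line in etc_group_lines:
        name, _pw, _gid, members = line.strip().split(":")
        if user in members.split(","):
            groups.append(name)
    return groups

def allowed(settings: dict, user_groups: list[str], perm: str) -> bool:
    """True if any group entry grants `perm` to one of the user's groups."""
    return any(
        entry["type"] == "group"
        and entry["name"] in user_groups
        and perm in entry["allow"]
        for entry in settings["permissions"]["local_cluster"]
    )

groups = groups_of("hacluster", ["haclient:x:131:hacluster"])
print(groups, allowed(SETTINGS, groups, "read"))  # → ['haclient'] True
```

Under those assumptions the hacluster login resolves to the haclient group and is granted read, so the empty cluster list does not look like a permissions problem.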

The known-hosts file includes both node names and IPs:

root@envision1:~ # cat /var/lib/pcsd/known-hosts
{
  "format_version": 1,
  "data_version": 7,
  "known_hosts": {
    "192.168.1.10": {
      "dest_list": [
        {
          "addr": "192.168.1.10",
          "port": 2224
        }
      ],
      "token": "bae562ef-c02e-40be-af03-36f1cebafb87"
    },
    "192.168.1.9": {
      "dest_list": [
        {
          "addr": "192.168.1.9",
          "port": 2224
        }
      ],
      "token": "386d3045-b8ae-4e31-a52e-5311de5a1101"
    },
    "envision1": {
      "dest_list": [
        {
          "addr": "envision1",
          "port": 2224
        }
      ],
      "token": "5981dc72-5290-46b9-883a-7f781b628b1d"
    },
    "envision2": {
      "dest_list": [
        {
          "addr": "envision2",
          "port": 2224
        }
      ],
      "token": "fc920fdd-4cc2-4eb6-bec2-f0178f4f0ae6"
    }
  }
}
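The structural invariants of that file can also be checked mechanically. A minimal sketch (the embedded JSON is abbreviated from the file above, and the set of required fields is my assumption, not taken from the pcs source):

```python
import json

# Abbreviated copy of the known-hosts content shown above
KNOWN_HOSTS = """
{
  "format_version": 1,
  "data_version": 7,
  "known_hosts": {
    "envision1": {"dest_list": [{"addr": "envision1", "port": 2224}],
                  "token": "5981dc72-5290-46b9-883a-7f781b628b1d"},
    "envision2": {"dest_list": [{"addr": "envision2", "port": 2224}],
                  "token": "fc920fdd-4cc2-4eb6-bec2-f0178f4f0ae6"}
  }
}
"""

def check_known_hosts(raw: str) -> list[str]:
    """Return a list of problems; empty means the file parses and every host
    has a token and at least one destination on the pcsd port."""
    data = json.loads(raw)  # raises ValueError on truncated/invalid JSON
    problems = []
    for name, entry in data.get("known_hosts", {}).items():
        if not entry.get("token"):
            problems.append(f"{name}: missing token")
        dests = entry.get("dest_list", [])
        if not dests:
            problems.append(f"{name}: empty dest_list")
        for dest in dests:
            if dest.get("port") != 2224:
                problems.append(f"{name}: unexpected port {dest.get('port')}")
    return problems

print(check_known_hosts(KNOWN_HOSTS))  # → []
```

Running this against the real /var/lib/pcsd/known-hosts would also catch on-disk truncation, since json.loads raises on incomplete JSON.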

When I force pcs node auth (or cluster auth), the log suggests it works correctly and pcs_settings.conf is exchanged and updated:

root@envision1:~ # ls -l /var/lib/pcsd/
total 40
-rw-rw-rw- 1 root root    4 Oct 16 09:56 cfgsync_ctl
-rw-rw-rw- 1 root root  815 Oct 16 10:13 known-hosts
-rw-rw-rw- 1 root root 1541 Oct 14 11:55 pcsd.crt
-rw-rw-rw- 1 root root 2484 Oct 14 11:55 pcsd.key
-rw-rw-rw- 1 root root  380 Oct 16 10:13 pcs_settings.conf
-rw-r--r-- 1 root root  380 Oct 16 10:05 pcs_settings.conf.1571213141
-rw-r--r-- 1 root root  380 Oct 16 10:05 pcs_settings.conf.1571213145
-rw-r--r-- 1 root root  380 Oct 16 10:13 pcs_settings.conf.1571213629
-rw-rw-rw- 1 root root  842 Oct 16 10:04 pcs_users.conf
-rw-rw-rw- 1 root root   66 Oct 16 09:58 tokens
root@envision1:~ # cat /var/lib/pcsd/pcs_settings.conf.1571213629
{
  "format_version": 2,
  "data_version": 0,
  "clusters": [
    {
      "name": "envision",
      "nodes": [
        "envision1",
        "envision2"
      ]
    }
    ],
  "permissions": {
    "local_cluster": [
      {
        "type": "group",
        "name": "haclient",
        "allow": [
          "grant",
          "read",
          "write"
        ]
      }
    ]
  }
}

The tokens file is empty (is that correct?):

root@envision1:~ # cat /var/lib/pcsd/tokens

{
  "format_version": 2,
  "data_version": 0,
  "tokens": {
  }
}

But ... when I log in to the Web UI, I still see nothing.

Ogekuri commented 5 years ago

Here is the full log (from 2019-10-17T17:29:41.486) covering login through the error:

I, [2019-10-17T17:25:42.658 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4505.65ms
I, [2019-10-17T17:25:53.478 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4133.86ms
I, [2019-10-17T17:25:49.524 #01743]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:25:49.524 #01743]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:25:49.525 #01743]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:25:49.525 #01743]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:26:45.313 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4157.48ms
I, [2019-10-17T17:26:57.578 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4055.74ms
I, [2019-10-17T17:26:53.672 #01744]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:26:53.673 #01744]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:26:53.673 #01744]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:26:53.673 #01744]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:27:48.452 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4209.85ms
I, [2019-10-17T17:28:01.386 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4053.57ms
I, [2019-10-17T17:27:57.432 #01745]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:27:57.433 #01745]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:27:57.433 #01745]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:27:57.433 #01745]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:28:51.351 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4127.03ms
I, [2019-10-17T17:29:05.505 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4131.51ms
I, [2019-10-17T17:29:01.563 #01746]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:29:01.563 #01746]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:29:01.563 #01746]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:29:01.564 #01746]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
W, [2019-10-17T17:29:41.486 #00000]  WARNING -- : SSL Error on 9 ('217.141.50.226', 53581): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
I, [2019-10-17T17:29:41.881 #00000]     INFO -- : 302 GET / (217.141.50.226) 6.64ms
I, [2019-10-17T17:29:46.182 #00000]     INFO -- : 200 GET /login (217.141.50.226) 4236.23ms
I, [2019-10-17T17:29:46.385 #00000]     INFO -- : 200 GET /css/style.css (217.141.50.226) 8.98ms
W, [2019-10-17T17:29:46.662 #00000]  WARNING -- : SSL Error on 11 ('217.141.50.226', 53585): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
W, [2019-10-17T17:29:46.663 #00000]  WARNING -- : SSL Error on 12 ('217.141.50.226', 53587): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
W, [2019-10-17T17:29:46.665 #00000]  WARNING -- : SSL Error on 13 ('217.141.50.226', 53586): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
W, [2019-10-17T17:29:46.666 #00000]  WARNING -- : SSL Error on 10 ('217.141.50.226', 53583): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
W, [2019-10-17T17:29:46.945 #00000]  WARNING -- : SSL Error on 10 ('217.141.50.226', 53588): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
W, [2019-10-17T17:29:46.954 #00000]  WARNING -- : SSL Error on 14 ('217.141.50.226', 53584): [SSL: SSLV3_ALERT_CERTIFICATE_UNKNOWN] sslv3 alert certificate unknown (_ssl.c:1056)
I, [2019-10-17T17:29:46.960 #00000]     INFO -- : 200 GET /css/liberation.css (217.141.50.226) 3.79ms
I, [2019-10-17T17:29:47.281 #00000]     INFO -- : 200 GET /css/overpass.css (217.141.50.226) 5.85ms
I, [2019-10-17T17:29:47.323 #00000]     INFO -- : 200 GET /css/jquery-ui-1.10.1.custom.css (217.141.50.226) 9.04ms
I, [2019-10-17T17:29:47.357 #00000]     INFO -- : 200 GET /js/tools.js (217.141.50.226) 4.81ms
I, [2019-10-17T17:29:47.361 #00000]     INFO -- : 200 GET /js/jquery-1.9.1.min.js (217.141.50.226) 108.48ms
I, [2019-10-17T17:29:47.386 #00000]     INFO -- : 200 GET /js/handlebars-v1.2.1.js (217.141.50.226) 84.64ms
I, [2019-10-17T17:29:48.339 #00000]     INFO -- : 200 GET /js/api.js (217.141.50.226) 10.89ms
I, [2019-10-17T17:29:48.834 #00000]     INFO -- : 200 GET /js/cluster-setup.js (217.141.50.226) 262.71ms
I, [2019-10-17T17:29:48.842 #00000]     INFO -- : 200 GET /js/cluster-destroy.js (217.141.50.226) 4.57ms
I, [2019-10-17T17:29:49.114 #00000]     INFO -- : 200 GET /js/node-add.js (217.141.50.226) 5.57ms
I, [2019-10-17T17:29:49.115 #00000]     INFO -- : 200 GET /js/jquery-ui-1.10.1.custom.min.js (217.141.50.226) 1828.26ms
I, [2019-10-17T17:29:49.227 #00000]     INFO -- : 200 GET /js/node-remove.js (217.141.50.226) 3.74ms
I, [2019-10-17T17:29:49.284 #00000]     INFO -- : 200 GET /js/pcsd.js (217.141.50.226) 13.20ms
I, [2019-10-17T17:29:49.414 #00000]     INFO -- : 200 GET /js/nodes-ember.js (217.141.50.226) 12.90ms
I, [2019-10-17T17:29:53.975 #00000]     INFO -- : 200 GET /js/ember-1.4.0.js (217.141.50.226) 6646.33ms
I, [2019-10-17T17:29:54.386 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4217.72ms
I, [2019-10-17T17:29:54.689 #00000]     INFO -- : 200 GET /images/field_bg.png (217.141.50.226) 6.29ms
I, [2019-10-17T17:29:54.695 #00000]     INFO -- : 200 GET /images/HAM-logo.png (217.141.50.226) 3.65ms
I, [2019-10-17T17:29:54.700 #00000]     INFO -- : 200 GET /images/Shell_bg.png (217.141.50.226) 3.69ms
I, [2019-10-17T17:29:55.300 #00000]     INFO -- : 200 GET /css/LiberationSans-Regular.ttf (217.141.50.226) 574.04ms
I, [2019-10-17T17:29:57.514 #00000]     INFO -- : 200 GET /css/Overpass-Bold.ttf (217.141.50.226) 2812.77ms
W, [2019-10-17T17:29:58.113 #00000]  WARNING -- : 401 GET /favicon.ico (217.141.50.226) 5.89ms
I, [2019-10-17T17:30:09.521 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4039.08ms
I, [2019-10-17T17:30:05.575 #01747]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:30:05.576 #01747]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:30:05.577 #01747]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:30:05.577 #01747]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:30:46.670 #00000]     INFO -- : Attempting login by 'hacluster'
I, [2019-10-17T17:30:46.835 #00000]     INFO -- : Successful login by 'hacluster'
I, [2019-10-17T17:30:46.852 #00000]     INFO -- : 303 POST /login (217.141.50.226) 224.64ms
I, [2019-10-17T17:30:51.293 #00000]     INFO -- : 200 GET /manage (217.141.50.226) 4395.66ms
I, [2019-10-17T17:30:51.404 #00000]     INFO -- : 200 GET /css/jquery-ui-1.10.1.custom.css (217.141.50.226) 5.57ms
I, [2019-10-17T17:30:51.410 #00000]     INFO -- : 200 GET /css/style.css (217.141.50.226) 4.08ms
I, [2019-10-17T17:30:51.415 #00000]     INFO -- : 200 GET /css/overpass.css (217.141.50.226) 2.94ms
I, [2019-10-17T17:30:51.420 #00000]     INFO -- : 200 GET /css/liberation.css (217.141.50.226) 3.43ms
I, [2019-10-17T17:30:51.640 #00000]     INFO -- : 200 GET /js/handlebars-v1.2.1.js (217.141.50.226) 21.04ms
I, [2019-10-17T17:30:51.703 #00000]     INFO -- : 200 GET /js/tools.js (217.141.50.226) 3.06ms
I, [2019-10-17T17:30:51.719 #00000]     INFO -- : 200 GET /js/api.js (217.141.50.226) 3.63ms
I, [2019-10-17T17:30:51.917 #00000]     INFO -- : 200 GET /js/cluster-setup.js (217.141.50.226) 6.05ms
I, [2019-10-17T17:30:51.953 #00000]     INFO -- : 200 GET /js/cluster-destroy.js (217.141.50.226) 3.34ms
I, [2019-10-17T17:30:52.191 #00000]     INFO -- : 200 GET /js/node-add.js (217.141.50.226) 2.99ms
I, [2019-10-17T17:30:52.400 #00000]     INFO -- : 200 GET /js/node-remove.js (217.141.50.226) 4.46ms
I, [2019-10-17T17:30:52.639 #00000]     INFO -- : 200 GET /js/pcsd.js (217.141.50.226) 15.50ms
I, [2019-10-17T17:30:52.922 #00000]     INFO -- : 200 GET /js/nodes-ember.js (217.141.50.226) 15.21ms
I, [2019-10-17T17:30:52.978 #00000]     INFO -- : 200 GET /js/jquery-ui-1.10.1.custom.min.js (217.141.50.226) 1432.47ms
I, [2019-10-17T17:30:54.091 #00000]     INFO -- : 200 GET /js/jquery-1.9.1.min.js (217.141.50.226) 2669.39ms
I, [2019-10-17T17:30:57.361 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4202.19ms
I, [2019-10-17T17:30:58.620 #00000]     INFO -- : 200 GET /js/ember-1.4.0.js (217.141.50.226) 6991.14ms
I, [2019-10-17T17:30:59.119 #00000]     INFO -- : 200 GET /css/images/ui-bg_inset-soft_25_000000_1x100.png (217.141.50.226) 9.32ms
I, [2019-10-17T17:30:59.130 #00000]     INFO -- : 200 GET /css/images/pbar-ani.gif (217.141.50.226) 7.43ms
I, [2019-10-17T17:30:59.146 #00000]     INFO -- : 200 GET /css/images/ui-icons_cccccc_256x240.png (217.141.50.226) 7.21ms
I, [2019-10-17T17:30:59.154 #00000]     INFO -- : 200 GET /css/images/ui-bg_glass_20_555555_1x400.png (217.141.50.226) 5.73ms
I, [2019-10-17T17:30:59.163 #00000]     INFO -- : 200 GET /css/images/ui-bg_gloss-wave_25_333333_500x100.png (217.141.50.226) 6.07ms
I, [2019-10-17T17:30:59.178 #00000]     INFO -- : 200 GET /css/images/ui-bg_flat_50_5c5c5c_40x100.png (217.141.50.226) 5.90ms
I, [2019-10-17T17:30:59.215 #00000]     INFO -- : 200 GET /css/images/ui-bg_glass_40_0078a3_1x400.png (217.141.50.226) 7.62ms
I, [2019-10-17T17:30:59.228 #00000]     INFO -- : 200 GET /css/images/ui-icons_ffffff_256x240.png (217.141.50.226) 4.59ms
I, [2019-10-17T17:30:59.248 #00000]     INFO -- : 200 GET /images/HAM-logo.png (217.141.50.226) 4.59ms
I, [2019-10-17T17:30:59.253 #00000]     INFO -- : 200 GET /images/Shell_bg.png (217.141.50.226) 2.99ms
I, [2019-10-17T17:30:59.260 #00000]     INFO -- : 200 GET /images/action-icons.png (217.141.50.226) 5.96ms
I, [2019-10-17T17:30:59.288 #00000]     INFO -- : 200 GET /css/LiberationSans-Regular.ttf (217.141.50.226) 13.24ms
I, [2019-10-17T17:31:02.424 #00000]     INFO -- : 200 GET /css/Overpass-Bold.ttf (217.141.50.226) 3156.08ms
W, [2019-10-17T17:31:02.924 #00000]  WARNING -- : 401 GET /favicon.ico (217.141.50.226) 128.17ms
I, [2019-10-17T17:31:11.549 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 6420.19ms
I, [2019-10-17T17:31:15.641 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 4248.92ms
I, [2019-10-17T17:31:11.692 #01748]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:31:11.693 #01748]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:31:11.693 #01748]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:31:11.694 #01748]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:31:20.947 #00000]     INFO -- : 200 GET /images/field_bg.png (217.141.50.226) 3.78ms
W, [2019-10-17T17:31:21.020 #00000]  WARNING -- : 401 GET /favicon.ico (217.141.50.226) 44.08ms
I, [2019-10-17T17:31:04.774 #01749]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:31:04.774 #01749]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:31:04.775 #01749]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:31:04.775 #01749]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:31:07.775 #01749]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:31:07.775 #01749]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:31:07.776 #01749]     INFO -- : SRWT Node: envision2 Request: cluster_status
I, [2019-10-17T17:31:07.776 #01749]     INFO -- : Connecting to: https://envision2:2224/remote/cluster_status
I, [2019-10-17T17:31:15.776 #01749]     INFO -- : No response from: envision2 request: cluster_status, error: operation_timedout
I, [2019-10-17T17:31:15.777 #01749]     INFO -- : SRWT Node: envision1 Request: cluster_status
I, [2019-10-17T17:31:15.777 #01749]     INFO -- : Connecting to: https://envision1:2224/remote/cluster_status
I, [2019-10-17T17:31:23.780 #00000]     INFO -- : 200 GET /clusters_overview (217.141.50.226) 24599.25ms
I, [2019-10-17T17:31:24.863 #00000]     INFO -- : 200 GET /css/images/ui-bg_inset-soft_30_f58400_1x100.png (217.141.50.226) 6.66ms
I, [2019-10-17T17:31:35.454 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 5298.55ms
I, [2019-10-17T17:31:29.534 #01750]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:31:29.535 #01750]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:31:35.537 #00000]     INFO -- : 200 GET /manage/check_auth_against_nodes?node_list%5B%5D=envision1 (217.141.50.226) 10561.71ms
I, [2019-10-17T17:31:20.358 #01751]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:31:20.359 #01751]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:24.360 #01751]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:24.360 #01751]     INFO -- : SRWT Node: envision2 Request: status
I, [2019-10-17T17:31:24.361 #01751]     INFO -- : SRWT Node: envision1 Request: status
I, [2019-10-17T17:31:24.361 #01751]     INFO -- : Connecting to: https://envision2:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:31:24.362 #01751]     INFO -- : Connecting to: https://envision1:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:31:39.362 #01751]     INFO -- : No response from: envision2 request: status, error: operation_timedout
I, [2019-10-17T17:31:39.377 #01751]     INFO -- : No response from: envision1 request: status, error: operation_timedout
I, [2019-10-17T17:31:39.380 #00000]     INFO -- : 200 POST /remote/cluster_status (192.168.1.9) 23377.63ms
I, [2019-10-17T17:31:52.838 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 5287.98ms
I, [2019-10-17T17:32:01.093 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 4697.15ms
I, [2019-10-17T17:31:47.280 #01752]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:31:47.280 #01752]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:31:47.280 #01752]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:31:47.281 #01752]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:31:50.281 #01752]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:31:50.281 #01752]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:31:50.282 #01752]     INFO -- : SRWT Node: envision1 Request: cluster_status
I, [2019-10-17T17:31:50.282 #01752]     INFO -- : Connecting to: https://envision1:2224/remote/cluster_status
I, [2019-10-17T17:31:58.282 #01752]     INFO -- : No response from: envision1 request: cluster_status, error: operation_timedout
I, [2019-10-17T17:31:58.282 #01752]     INFO -- : SRWT Node: envision2 Request: cluster_status
I, [2019-10-17T17:31:58.283 #01752]     INFO -- : Connecting to: https://envision2:2224/remote/cluster_status
I, [2019-10-17T17:32:06.284 #00000]     INFO -- : 200 GET /clusters_overview (217.141.50.226) 27097.60ms
I, [2019-10-17T17:31:42.820 #01753]     INFO -- : SRWT Node: envision1 Request: status
I, [2019-10-17T17:31:42.821 #01753]     INFO -- : Connecting to: https://envision1:2224/remote/status?
I, [2019-10-17T17:32:12.821 #01753]     INFO -- : No response from: envision1 request: status, error: operation_timedout
W, [2019-10-17T17:32:12.823 #00000]  WARNING -- : 400 POST /manage/existingcluster (217.141.50.226) 37242.84ms
I, [2019-10-17T17:31:55.372 #01754]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:31:55.372 #01754]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:59.373 #01754]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:59.373 #01754]     INFO -- : SRWT Node: envision2 Request: status
I, [2019-10-17T17:31:59.374 #01754]     INFO -- : Connecting to: https://envision2:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:31:59.374 #01754]     INFO -- : SRWT Node: envision1 Request: status
I, [2019-10-17T17:31:59.375 #01754]     INFO -- : Connecting to: https://envision1:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:32:14.375 #01754]     INFO -- : No response from: envision2 request: status, error: operation_timedout
I, [2019-10-17T17:32:14.376 #01754]     INFO -- : No response from: envision1 request: status, error: operation_timedout
I, [2019-10-17T17:32:14.378 #00000]     INFO -- : 200 POST /remote/cluster_status (192.168.1.9) 23845.09ms
I, [2019-10-17T17:32:14.995 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.10) 5666.83ms
I, [2019-10-17T17:31:21.304 #01755]     INFO -- : Running: /usr/sbin/cibadmin -Q -l
I, [2019-10-17T17:31:21.305 #01755]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:21.306 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:22.307 #01755]     INFO -- : Running: /usr/sbin/crm_mon --one-shot -r --as-xml
I, [2019-10-17T17:31:22.307 #01755]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:23.307 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:24.308 #01755]     INFO -- : Running: /usr/sbin/pcs alert get_all_alerts
I, [2019-10-17T17:31:24.308 #01755]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:27.309 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:27.309 #01755]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:31:27.310 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:31.310 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:32.311 #01755]     INFO -- : Running: systemctl status pacemaker.service
I, [2019-10-17T17:31:32.311 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:32.312 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:32.312 #01755]     INFO -- : Running: systemctl is-enabled pacemaker.service
I, [2019-10-17T17:31:32.313 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:32.313 #01755]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:32.314 #01755]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:32.315 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:38.315 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:38.316 #01755]     INFO -- : Running: systemctl status pacemaker_remote.service
I, [2019-10-17T17:31:38.326 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:38.327 #01755]     INFO -- : Return Value: 4
I, [2019-10-17T17:31:38.327 #01755]     INFO -- : Running: systemctl is-enabled pacemaker_remote.service
I, [2019-10-17T17:31:38.328 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:38.328 #01755]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:38.329 #01755]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:38.329 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:42.330 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:42.330 #01755]     INFO -- : Running: systemctl status corosync.service
I, [2019-10-17T17:31:42.331 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:42.331 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:42.332 #01755]     INFO -- : Running: systemctl is-enabled corosync.service
I, [2019-10-17T17:31:42.332 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:47.333 #01755]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:47.333 #01755]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:47.334 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:51.334 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:51.335 #01755]     INFO -- : Running: systemctl status pcsd.service
I, [2019-10-17T17:31:51.335 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:56.336 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:56.347 #01755]     INFO -- : Running: systemctl is-enabled pcsd.service
I, [2019-10-17T17:31:56.347 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:56.348 #01755]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:56.348 #01755]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:56.348 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:00.349 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:00.349 #01755]     INFO -- : Running: systemctl status sbd.service
I, [2019-10-17T17:32:00.350 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:01.350 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:01.351 #01755]     INFO -- : Running: systemctl is-enabled sbd.service
I, [2019-10-17T17:32:01.351 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:05.352 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:05.352 #01755]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:05.353 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:14.353 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:14.354 #01755]     INFO -- : Running: /usr/sbin/corosync-cmapctl -g runtime.votequorum.this_node_id
I, [2019-10-17T17:32:14.354 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:14.355 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:15.355 #01755]     INFO -- : Running: /usr/sbin/pcs stonith sbd local_config_in_json
I, [2019-10-17T17:32:15.356 #01755]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:17.359 #01755]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:17.360 #01755]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:32:17.361 #01755]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:32:17.361 #01755]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:32:17.362 #01755]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:32:20.362 #01755]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:20.363 #01755]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:21.389 #00000]     INFO -- : 200 GET /remote/status?version=2&operations=1 (192.168.1.10) 64490.82ms
I, [2019-10-17T17:32:24.051 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.10) 6198.59ms
I, [2019-10-17T17:32:24.729 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.9) 6896.37ms
I, [2019-10-17T17:32:17.804 #01756]     INFO -- : SRWT Node: envision1 Request: get_configs
I, [2019-10-17T17:32:17.805 #01756]     INFO -- : Connecting to: https://envision1:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:32:17.805 #01756]     INFO -- : SRWT Node: envision2 Request: get_configs
I, [2019-10-17T17:32:17.806 #01756]     INFO -- : Connecting to: https://envision2:2224/remote/get_configs?cluster_name=envision
I, [2019-10-17T17:32:25.110 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 6617.36ms
I, [2019-10-17T17:31:29.156 #01757]     INFO -- : Running: /usr/sbin/cibadmin -Q -l
I, [2019-10-17T17:31:29.157 #01757]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:29.158 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:30.158 #01757]     INFO -- : Running: /usr/sbin/crm_mon --one-shot -r --as-xml
I, [2019-10-17T17:31:30.158 #01757]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:31.159 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:32.159 #01757]     INFO -- : Running: /usr/sbin/pcs alert get_all_alerts
I, [2019-10-17T17:31:32.159 #01757]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:31:35.160 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:35.160 #01757]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:31:35.160 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:38.161 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:39.161 #01757]     INFO -- : Running: systemctl status pacemaker.service
I, [2019-10-17T17:31:39.161 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:42.162 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:42.162 #01757]     INFO -- : Running: systemctl is-enabled pacemaker.service
I, [2019-10-17T17:31:42.162 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:42.162 #01757]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:42.163 #01757]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:42.163 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:47.163 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:47.164 #01757]     INFO -- : Running: systemctl status pacemaker_remote.service
I, [2019-10-17T17:31:47.164 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:47.164 #01757]     INFO -- : Return Value: 4
I, [2019-10-17T17:31:47.165 #01757]     INFO -- : Running: systemctl is-enabled pacemaker_remote.service
I, [2019-10-17T17:31:47.165 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:51.165 #01757]     INFO -- : Return Value: 1
I, [2019-10-17T17:31:51.166 #01757]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:31:51.166 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:56.166 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:56.167 #01757]     INFO -- : Running: systemctl status corosync.service
I, [2019-10-17T17:31:56.167 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:56.168 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:56.168 #01757]     INFO -- : Running: systemctl is-enabled corosync.service
I, [2019-10-17T17:31:56.168 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:00.168 #01757]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:00.169 #01757]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:00.169 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:05.169 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:05.170 #01757]     INFO -- : Running: systemctl status pcsd.service
I, [2019-10-17T17:32:05.170 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:05.170 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:05.171 #01757]     INFO -- : Running: systemctl is-enabled pcsd.service
I, [2019-10-17T17:32:05.171 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:14.171 #01757]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:14.172 #01757]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:14.172 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:18.172 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:18.173 #01757]     INFO -- : Running: systemctl status sbd.service
I, [2019-10-17T17:32:18.173 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:19.173 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:19.174 #01757]     INFO -- : Running: systemctl is-enabled sbd.service
I, [2019-10-17T17:32:19.174 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:27.174 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:27.174 #01757]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:27.175 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:31.175 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:31.175 #01757]     INFO -- : Running: /usr/sbin/corosync-cmapctl -g runtime.votequorum.this_node_id
I, [2019-10-17T17:32:31.176 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:31.176 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:32.176 #01757]     INFO -- : Running: /usr/sbin/pcs stonith sbd local_config_in_json
I, [2019-10-17T17:32:32.177 #01757]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:33.177 #01757]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:33.177 #01757]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:32:33.178 #01757]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:32:33.178 #01757]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:32:33.178 #01757]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:32:36.179 #01757]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:36.179 #01757]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:37.189 #00000]     INFO -- : 200 GET /remote/status?version=2&operations=1 (192.168.1.9) 72590.90ms
I, [2019-10-17T17:32:38.422 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 4170.71ms
I, [2019-10-17T17:32:49.939 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.10) 4439.55ms
I, [2019-10-17T17:32:52.826 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 4267.95ms
I, [2019-10-17T17:33:03.019 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.10) 5078.18ms
I, [2019-10-17T17:33:06.444 #00000]     INFO -- : 200 GET /remote/get_configs?cluster_name=envision (192.168.1.10) 5450.12ms
I, [2019-10-17T17:32:48.369 #01758]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:32:48.369 #01758]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:32:48.370 #01758]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:32:48.370 #01758]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:32:51.371 #01758]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:51.371 #01758]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:32:51.372 #01758]     INFO -- : SRWT Node: envision1 Request: cluster_status
I, [2019-10-17T17:32:51.372 #01758]     INFO -- : Connecting to: https://envision1:2224/remote/cluster_status
I, [2019-10-17T17:32:59.373 #01758]     INFO -- : No response from: envision1 request: cluster_status, error: operation_timedout
I, [2019-10-17T17:32:59.373 #01758]     INFO -- : SRWT Node: envision2 Request: cluster_status
I, [2019-10-17T17:32:59.374 #01758]     INFO -- : Connecting to: https://envision2:2224/remote/cluster_status
I, [2019-10-17T17:33:07.376 #00000]     INFO -- : 200 GET /clusters_overview (217.141.50.226) 48173.98ms
I, [2019-10-17T17:31:47.143 #01759]     INFO -- : Running: /usr/sbin/cibadmin -Q -l
I, [2019-10-17T17:31:47.143 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:47.144 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:48.144 #01759]     INFO -- : Running: /usr/sbin/crm_mon --one-shot -r --as-xml
I, [2019-10-17T17:31:48.144 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:49.145 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:50.145 #01759]     INFO -- : Running: /usr/sbin/pcs alert get_all_alerts
I, [2019-10-17T17:31:50.146 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:52.155 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:52.155 #01759]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:31:52.156 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:31:56.156 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:31:57.157 #01759]     INFO -- : Running: systemctl status pacemaker.service
I, [2019-10-17T17:31:57.157 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:00.158 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:00.158 #01759]     INFO -- : Running: systemctl is-enabled pacemaker.service
I, [2019-10-17T17:32:00.159 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:05.159 #01759]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:05.160 #01759]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:05.160 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:09.161 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:09.161 #01759]     INFO -- : Running: systemctl status pacemaker_remote.service
I, [2019-10-17T17:32:09.162 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:14.164 #01759]     INFO -- : Return Value: 4
I, [2019-10-17T17:32:14.167 #01759]     INFO -- : Running: systemctl is-enabled pacemaker_remote.service
I, [2019-10-17T17:32:14.167 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:18.168 #01759]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:18.169 #01759]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:18.169 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:23.169 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:23.170 #01759]     INFO -- : Running: systemctl status corosync.service
I, [2019-10-17T17:32:23.170 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:27.171 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:27.171 #01759]     INFO -- : Running: systemctl is-enabled corosync.service
I, [2019-10-17T17:32:27.172 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:31.172 #01759]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:31.173 #01759]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:31.173 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:35.174 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:35.174 #01759]     INFO -- : Running: systemctl status pcsd.service
I, [2019-10-17T17:32:35.175 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:43.175 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:43.175 #01759]     INFO -- : Running: systemctl is-enabled pcsd.service
I, [2019-10-17T17:32:43.176 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:43.176 #01759]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:43.177 #01759]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:43.177 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:48.177 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:48.178 #01759]     INFO -- : Running: systemctl status sbd.service
I, [2019-10-17T17:32:48.178 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:56.179 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:56.179 #01759]     INFO -- : Running: systemctl is-enabled sbd.service
I, [2019-10-17T17:32:56.180 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:56.181 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:56.181 #01759]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:56.181 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:01.182 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:01.182 #01759]     INFO -- : Running: /usr/sbin/corosync-cmapctl -g runtime.votequorum.this_node_id
I, [2019-10-17T17:33:01.183 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:01.183 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:02.184 #01759]     INFO -- : Running: /usr/sbin/pcs stonith sbd local_config_in_json
I, [2019-10-17T17:33:02.184 #01759]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:04.185 #01759]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:04.185 #01759]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:33:04.186 #01759]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:33:04.188 #01759]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:33:04.188 #01759]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:33:07.189 #01759]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:33:07.189 #01759]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:33:08.197 #00000]     INFO -- : 200 GET /remote/status? (192.168.1.9) 85208.50ms
I, [2019-10-17T17:33:10.781 #00000]     INFO -- : 200 GET /remote/check_auth? (192.168.1.9) 5116.41ms
I, [2019-10-17T17:32:55.130 #01760]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:32:55.131 #01760]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:58.131 #01760]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:58.132 #01760]     INFO -- : SRWT Node: envision1 Request: status
I, [2019-10-17T17:32:58.132 #01760]     INFO -- : SRWT Node: envision2 Request: status
I, [2019-10-17T17:32:58.133 #01760]     INFO -- : Connecting to: https://envision1:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:32:58.134 #01760]     INFO -- : Connecting to: https://envision2:2224/remote/status?version=2&operations=1
I, [2019-10-17T17:33:13.134 #01760]     INFO -- : No response from: envision2 request: status, error: operation_timedout
I, [2019-10-17T17:33:13.135 #01760]     INFO -- : No response from: envision1 request: status, error: operation_timedout
I, [2019-10-17T17:33:14.138 #00000]     INFO -- : 200 POST /remote/cluster_status (192.168.1.9) 22544.64ms
I, [2019-10-17T17:32:03.503 #01761]     INFO -- : Running: /usr/sbin/cibadmin -Q -l
I, [2019-10-17T17:32:03.504 #01761]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:32:03.505 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:04.505 #01761]     INFO -- : Running: /usr/sbin/crm_mon --one-shot -r --as-xml
I, [2019-10-17T17:32:04.506 #01761]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:32:05.517 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:06.518 #01761]     INFO -- : Running: /usr/sbin/pcs alert get_all_alerts
I, [2019-10-17T17:32:06.518 #01761]     INFO -- : CIB USER: hacluster, groups: haclient haclient
I, [2019-10-17T17:32:08.519 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:08.519 #01761]     INFO -- : Running: /usr/sbin/pcs status nodes both
I, [2019-10-17T17:32:08.519 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:12.520 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:13.520 #01761]     INFO -- : Running: systemctl status pacemaker.service
I, [2019-10-17T17:32:13.521 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:14.521 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:14.522 #01761]     INFO -- : Running: systemctl is-enabled pacemaker.service
I, [2019-10-17T17:32:14.522 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:18.523 #01761]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:18.523 #01761]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:18.524 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:27.536 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:27.537 #01761]     INFO -- : Running: systemctl status pacemaker_remote.service
I, [2019-10-17T17:32:27.538 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:27.538 #01761]     INFO -- : Return Value: 4
I, [2019-10-17T17:32:27.539 #01761]     INFO -- : Running: systemctl is-enabled pacemaker_remote.service
I, [2019-10-17T17:32:27.540 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:31.540 #01761]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:31.541 #01761]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:31.541 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:39.542 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:39.542 #01761]     INFO -- : Running: systemctl status corosync.service
I, [2019-10-17T17:32:39.543 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:43.543 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:43.543 #01761]     INFO -- : Running: systemctl is-enabled corosync.service
I, [2019-10-17T17:32:43.544 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:44.544 #01761]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:44.545 #01761]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:44.545 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:52.546 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:52.549 #01761]     INFO -- : Running: systemctl status pcsd.service
I, [2019-10-17T17:32:52.550 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:56.550 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:32:56.551 #01761]     INFO -- : Running: systemctl is-enabled pcsd.service
I, [2019-10-17T17:32:56.551 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:32:56.552 #01761]     INFO -- : Return Value: 1
I, [2019-10-17T17:32:56.552 #01761]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:32:56.553 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:05.553 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:05.554 #01761]     INFO -- : Running: systemctl status sbd.service
I, [2019-10-17T17:33:05.554 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:10.555 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:10.555 #01761]     INFO -- : Running: systemctl is-enabled sbd.service
I, [2019-10-17T17:33:10.556 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:10.566 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:10.567 #01761]     INFO -- : Running: systemctl list-unit-files --full
I, [2019-10-17T17:33:10.567 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:14.568 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:14.569 #01761]     INFO -- : Running: /usr/sbin/corosync-cmapctl -g runtime.votequorum.this_node_id
I, [2019-10-17T17:33:14.569 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:14.570 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:15.570 #01761]     INFO -- : Running: /usr/sbin/pcs stonith sbd local_config_in_json
I, [2019-10-17T17:33:15.571 #01761]     INFO -- : CIB USER: hacluster, groups:
I, [2019-10-17T17:33:18.571 #01761]     INFO -- : Return Value: 0
I, [2019-10-17T17:33:18.571 #01761]     INFO -- : SRWT Node: envision2 Request: check_auth
I, [2019-10-17T17:33:18.572 #01761]     INFO -- : Connecting to: https://envision2:2224/remote/check_auth?
I, [2019-10-17T17:33:18.572 #01761]     INFO -- : SRWT Node: envision1 Request: check_auth
I, [2019-10-17T17:33:18.573 #01761]     INFO -- : Connecting to: https://envision1:2224/remote/check_auth?
I, [2019-10-17T17:33:21.573 #01761]     INFO -- : No response from: envision2 request: check_auth, error: operation_timedout
I, [2019-10-17T17:33:21.574 #01761]     INFO -- : No response from: envision1 request: check_auth, error: operation_timedout
I, [2019-10-17T17:33:22.594 #00000]     INFO -- : 200 GET /remote/status?version=2&operations=1 (192.168.1.9) 83061.78ms
Ogekuri commented 5 years ago

Update: I've backported version 0.10.3 from Debian sid to Debian buster, but the issue is the same.


[ ogekuri@envision1:~/pcs-10-3 ] $ dpkg-deb -I pcs_0.10.3-1_all.deb
 new Debian package, version 2.0.
 size 810112 bytes: control archive=9648 bytes.
      73 bytes,     4 lines      conffiles
     965 bytes,    20 lines      control
   26643 bytes,   313 lines      md5sums
    2957 bytes,    94 lines   *  postinst             #!/bin/sh
    1745 bytes,    66 lines   *  postrm               #!/bin/sh
     779 bytes,    22 lines   *  prerm                #!/bin/sh
 Package: pcs
 Version: 0.10.3-1
 Architecture: all
 Maintainer: Debian HA Maintainers <debian-ha-maintainers@lists.alioth.debian.org>
 Installed-Size: 4419
 Pre-Depends: init-system-helpers (>= 1.54~)
 Depends: lsb-base (>= 3.0-6), psmisc, fonts-dejavu-core, fonts-liberation, python3:any, python3-lxml, python3-openssl, python3-pkg-resources, python3-pycurl, python3-tornado, ruby, ruby-backports, ruby-ethon, ruby-json, ruby-open4, ruby-sinatra, rubygems-integration
 Recommends: pacemaker (>= 2.0)
 Conflicts: python3-pcs
 Breaks: pacemaker (<< 2.0)
 Section: admin
 Priority: optional
 Homepage: https://github.com/ClusterLabs/pcs
 Description: Pacemaker Configuration System
  pcs is a corosync and pacemaker configuration tool. It permits
  users to easily view, modify and create pacemaker based clusters.
  .
  pcs also provides pcsd, which operates as a GUI and remote server
  for pcs. Together pcs and pcsd form the recommended configuration
  tool for use with pacemaker.

[ ogekuri@envision1:~/pcs-10-3 ] $ sudo dpkg -i pcs_0.10.3-1_all.deb
(Reading database ... 251233 files and directories currently installed.)
Preparing to unpack pcs_0.10.3-1_all.deb ...
Unpacking pcs (0.10.3-1) over (0.10.1-2) ...
Setting up pcs (0.10.3-1) ...
Installing new version of config file /etc/default/pcsd ...
insserv: warning: current start runlevel(s) (empty) of script `pcsd' overrides LSB defaults (2 3 4 5).
insserv: warning: current stop runlevel(s) (0 1 2 3 4 5 6) of script `pcsd' overrides LSB defaults (0 1 6).
pcsd.service is a disabled or a static unit not running, not starting it.
Processing triggers for systemd (241-7~deb10u1+rpi1) ...
Processing triggers for man-db (2.8.5-2) ...
tomjelinek commented 4 years ago

It looks like you are doing everything right. The problem is that most of the requests time out. For some reason the pcsd backend is responding slowly, and the JavaScript part gives up waiting for responses before they arrive. That is why you cannot do anything in the web UI. It may be caused by your nodes not being powerful enough, as we did not encounter such issues ourselves.

Unfortunately, there is no easy fix for this, nor are there options you could tweak to make it work. A workaround is to get more powerful cluster nodes. (It would be nice to know your nodes' HW configuration: at least the CPU, number of cores, and RAM.)

We can make the timeouts longer as a temporary workaround until the proper fix is delivered.

Ogekuri commented 4 years ago

Makes sense.

I can change the startup script if I know which environment variable I have to set.

I'm also able to patch the sources and build a custom deb for my installation, if you can tell me which files I have to patch.

I'm on 0.10.3 now.

Thanks!

tomjelinek commented 4 years ago

Here is a simple patch which makes some of the timeouts longer. It may not work 100% in your case, but I think it's worth a try.

Meanwhile, we are working on a proper fix. Stay tuned.

--- /usr/lib/pcsd/pcs.rb.orig   2020-01-09 09:26:36.606675049 +0000
+++ /usr/lib/pcsd/pcs.rb        2020-01-09 09:26:45.998948171 +0000
@@ -479,7 +479,7 @@
   timeout_ms = 30000
   begin
     if timeout
-      timeout_ms = (Float(timeout) * 1000).to_i
+      timeout_ms = (Float(timeout) * 1000).to_i * 2
     elsif ENV['PCSD_NETWORK_TIMEOUT']
        timeout_ms = (Float(ENV['PCSD_NETWORK_TIMEOUT']) * 1000).to_i
     end
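As the `elsif` branch in the patched snippet shows, pcsd also reads a `PCSD_NETWORK_TIMEOUT` environment variable, so the fallback timeout can be raised without rebuilding the package. Here is a standalone Ruby sketch of that timeout selection with the patch's doubling applied — the function name `timeout_ms` is mine for illustration, not pcsd's:

```ruby
# Sketch of the timeout selection in pcs.rb (names are illustrative):
# an explicit per-request timeout is doubled by the patch,
# PCSD_NETWORK_TIMEOUT (in seconds) is the fallback, default is 30 s.
def timeout_ms(timeout, env = ENV)
  if timeout
    (Float(timeout) * 1000).to_i * 2          # patched: doubled
  elsif env['PCSD_NETWORK_TIMEOUT']
    (Float(env['PCSD_NETWORK_TIMEOUT']) * 1000).to_i
  else
    30_000                                    # default: 30 seconds
  end
end

puts timeout_ms(5)                                    # 10000
puts timeout_ms(nil, 'PCSD_NETWORK_TIMEOUT' => '60')  # 60000
puts timeout_ms(nil, {})                              # 30000
```

So, assuming your init script or service environment file exports variables to pcsd (I have not verified this on the Debian package), setting `PCSD_NETWORK_TIMEOUT=60` there should lengthen the default timeout without patching.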
tomjelinek commented 4 years ago

In https://github.com/ClusterLabs/pcs/commit/2acfe05341ba15f689582445684442bc83fb592a, we have daemonized the Ruby part of the pcsd daemon, thus avoiding the extra load and delay caused by spawning sinatra_cmdline_wrapper.rb processes. I believe this should help you. For the fix to work, pcs must be updated on all of your cluster nodes.
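To see why spawning a wrapper process per request is expensive, here is a hypothetical micro-benchmark (not pcsd code) comparing the cost of starting a fresh Ruby interpreter against doing the same no-op inside an already-running one:

```ruby
require 'benchmark'

# Fixed overhead of starting a fresh Ruby interpreter that does nothing
# and exits -- roughly what the old per-request wrapper model paid.
spawn_time = Benchmark.realtime { system('ruby', '-e', '') }

# The same no-op performed in-process -- roughly what a persistent
# daemon pays instead.
in_process_time = Benchmark.realtime { nil }

puts format('fresh interpreter: %.4f s, in-process: %.8f s',
            spawn_time, in_process_time)
```

The interpreter startup typically costs tens of milliseconds or more (far worse on small ARM boards), which multiplied over the many internal requests in the logs above plausibly accounts for the timeouts.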

Let us know if this resolves your issue.