gluster / glusterfs

Gluster Filesystem : Build your distributed storage in minutes
https://www.gluster.org
GNU General Public License v2.0

Possibility of running GlusterFS as a non-root user #3170

Closed · FSangouard closed this 2 years ago

FSangouard commented 2 years ago

Description of problem: I'm trying to make GlusterFS run as a non-root user (not just Geo-replication, but also the glusterd daemon, the glusterfsd processes handling the volumes, everything). I managed to get glusterd running as non-root and to create a volume with the CLI using sudo. However, I am stuck when I try to start the volume. The error doesn't tell me much.

Is this possible at all? Or is it not supported?

The exact command to reproduce the issue:

sudo gluster --glusterd-sock=/run/gluster/glusterd.socket volume start volume1

The full output of the command that failed:

volume start: volume1: failed: Commit failed on localhost. Please check log file for details.

Expected results:

volume start: volume1: success

Mandatory info:

- The output of the gluster volume info command:

sudo gluster --glusterd-sock=/run/gluster/glusterd.socket volume info

Volume Name: volume1
Type: Replicate
Volume ID: 7f1f2426-7f69-4154-899f-df2a721f9b8a
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: <node1>:/applis/apsu/data/glusterfs/volume1/brick1/brick
Brick2: <node2>:/applis/apsu/data/glusterfs/volume1/brick1/brick
Brick3: <node3>:/applis/apsu/data/glusterfs/volume1/brick1/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on

- The output of the gluster volume status command:

sudo gluster --glusterd-sock=/run/gluster/glusterd.socket volume status
Volume volume1 is not started

- The output of the gluster volume heal command:

sudo gluster --glusterd-sock=/run/gluster/glusterd.socket volume heal volume1
Launching heal operation to perform index self heal on volume volume1 has been unsuccessful:
Volume volume1 is not started.

- Provide logs present on following locations of client and server nodes - /var/log/glusterfs

In glusterd.log :

```
[2022-01-24 17:42:14.600590 +0000] I [glusterd-utils.c:6970:glusterd_brick_start] 0-management: starting a fresh brick process for brick /applis/apsu/data/glusterfs/volume1/brick1/brick
[2022-01-24 17:42:14.600728 +0000] D [run.c:242:runner_log] (-->/usr/lib64/glusterfs/9.4/xlator/mgmt/glusterd.so(+0x5bd26) [0x7f497b187d26] -->/usr/lib64/glusterfs/9.4/xlator/mgmt/glusterd.so(+0x5912e) [0x7f497b18512e] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f49876a1d15] ) 0-: Starting GlusterFS: /usr/sbin/glusterfsd -s --volfile-id volume1..applis-apsu-data-glusterfs-volume1-brick1-brick -p /var/run/gluster/vols/volume1/-applis-apsu-data-glusterfs-volume1-brick1-brick.pid -S /var/run/gluster/f330b3250f5fd355.socket --brick-name /applis/apsu/data/glusterfs/volume1/brick1/brick -l /var/log/glusterfs/bricks/applis-apsu-data-glusterfs-volume1-brick1-brick.log --xlator-option *-posix.glusterd-uuid=67fdec2f-4689-4488-8137-df512e354476 --process-name brick --brick-port 49152 --xlator-option volume1-server.listen-port=49152
[2022-01-24 17:42:14.605656 +0000] E [MSGID: 106005] [glusterd-utils.c:6975:glusterd_brick_start] 0-management: Unable to start brick :/applis/apsu/data/glusterfs/volume1/brick1/brick
[2022-01-24 17:42:14.605791 +0000] D [MSGID: 0] [glusterd-utils.c:6985:glusterd_brick_start] 0-management: returning -255
[2022-01-24 17:42:14.605812 +0000] E [MSGID: 106122] [glusterd-mgmt.c:341:gd_mgmt_v3_commit_fn] 0-management: Volume start commit failed.
[2022-01-24 17:42:14.605822 +0000] D [MSGID: 0] [glusterd-mgmt.c:408:gd_mgmt_v3_commit_fn] 0-management: OP = 5. Returning -255
[2022-01-24 17:42:14.605832 +0000] E [MSGID: 106122] [glusterd-mgmt.c:1687:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Start on local node
[2022-01-24 17:42:14.605844 +0000] D [MSGID: 0] [glusterd-op-sm.c:5063:glusterd_op_modify_op_ctx] 0-management: op_ctx modification not required
[2022-01-24 17:42:14.605854 +0000] E [MSGID: 106122] [glusterd-mgmt.c:2721:glusterd_mgmt_v3_initiate_all_phases] 0-management: Commit Op Failed
[2022-01-24 17:42:14.605944 +0000] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-management: ping timeout is 0, returning
[2022-01-24 17:42:14.605985 +0000] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-management: ping timeout is 0, returning
[2022-01-24 17:42:14.606430 +0000] D [MSGID: 0] [glusterd-mgmt.c:2447:glusterd_mgmt_v3_release_peer_locks] 0-management: Sent unlock op req for Start to 2 peers. Returning 0
[2022-01-24 17:42:14.606467 +0000] D [MSGID: 0] [glusterd-locks.c:796:glusterd_mgmt_v3_unlock] 0-management: Trying to release lock of vol volume1 for 67fdec2f-4689-4488-8137-df512e354476 as volume1_vol
[2022-01-24 17:42:14.606488 +0000] D [MSGID: 0] [glusterd-locks.c:848:glusterd_mgmt_v3_unlock] 0-management: Lock for vol volume1 successfully released
[2022-01-24 17:42:14.606504 +0000] D [MSGID: 0] [glusterd-utils.c:1790:glusterd_volinfo_find] 0-management: Volume volume1 found
[2022-01-24 17:42:14.606514 +0000] D [MSGID: 0] [glusterd-utils.c:1797:glusterd_volinfo_find] 0-management: Returning 0
[2022-01-24 17:42:14.606525 +0000] D [MSGID: 0] [glusterd-locks.c:457:glusterd_multiple_mgmt_v3_unlock] 0-management: Returning 0
[2022-01-24 17:42:14.606584 +0000] D [MSGID: 0] [glusterd-rpc-ops.c:203:glusterd_op_send_cli_response] 0-management: Returning 0
[2022-01-24 17:42:14.607665 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL)
```

More logs can be provided if necessary.

- Is there any crash? Provide the backtrace and coredump: No crash

Additional info:

Steps taken until now to make GlusterFS run as non-root (on all 3 nodes):

`systemctl stop glusterd.service`

In `/usr/lib/systemd/system/glusterd.service`, added/edited these properties:

```
User=gluster
Group=gluster
ExecStart=/usr/sbin/glusterd -p /var/run/gluster/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
PIDFile=/var/run/gluster/glusterd.pid
AmbientCapabilities=CAP_SYS_ADMIN
```

In `/etc/glusterfs/glusterd.vol`, added this option:

`option glusterd-sockfile /run/gluster/glusterd.socket`

`chown -R gluster:gluster /var/lib/glusterd/ /var/log/glusterfs/`

`systemctl daemon-reload`

`systemctl start glusterd.service`
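
For reference, the same settings can be carried in a systemd drop-in instead of editing the packaged unit file, so they survive package updates. A minimal sketch, assuming the same `gluster` user/group and paths as above (the drop-in file name is just an example):

```
# /etc/systemd/system/glusterd.service.d/non-root.conf (example drop-in)
[Service]
User=gluster
Group=gluster
AmbientCapabilities=CAP_SYS_ADMIN
PIDFile=/var/run/gluster/glusterd.pid
# ExecStart must be cleared before it can be redefined in a drop-in
ExecStart=
ExecStart=/usr/sbin/glusterd -p /var/run/gluster/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS
```

followed by the same `systemctl daemon-reload` and `systemctl start glusterd.service`.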

- The operating system / glusterfs version:

CentOS 7.8
GlusterFS 9.4

pbhatia22 commented 2 years ago

This shows that the brick failed to start: `Unable to start brick :/applis/apsu/data/glusterfs/volume1/brick1/brick`. Please share the brick log.
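
The brick log path can be read off the glusterfsd command line in the glusterd.log excerpt above (the `-l` argument), so on this setup it should be something like:

```
less /var/log/glusterfs/bricks/applis-apsu-data-glusterfs-volume1-brick1-brick.log
```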

FSangouard commented 2 years ago

I didn't even think to look there, thanks. I was able to move forward in my deployment. For the record, here's what I've done so far:

After reviewing the brick logs, the problem was with the permissions inside the brick directory. By default, the hidden directory .glusterfs and some of its subdirectories (indices, changelogs, unlink) are created with permissions 600, which do not include the execute permission. Without it, a non-root user cannot do much inside these directories, so I had to add the execute permission on the .glusterfs directory recursively (all the other subdirectories already have it, so this changed nothing for them). I actually had to create those directories beforehand, since volume startup couldn't create them itself.
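
As a sketch of the permission fix described above (brick path taken from this issue; adjust for your layout), the execute bit can be added to the affected directories like this:

```
# Run on each brick host as the brick owner. u+X adds execute only to directories
# (and to files that are already executable), so plain files are left untouched.
chmod -R u+X /applis/apsu/data/glusterfs/volume1/brick1/brick/.glusterfs
```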

After that, I was able to start the volume and mount it on a client, but creating a file failed because the user who created it on the client was different from the user running GlusterFS on the server.

I had to:

I am currently able to CRUD files from the client as a different user than the one running GlusterFS. I think it looks good, but if anything I did seems suspicious to you, please let me know.

mykaul commented 2 years ago

Hi @FSangouard - all of this looks very interesting. If you search the code, we have many places where '0600' is hard-coded into sys_mkdir() calls and other calls. I think enumerating them and understanding what the gaps are is a great first step.
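
As a starting point for that enumeration, a rough grep over a glusterfs source checkout could look like the following (the pattern will need refining and will miss modes passed through variables):

```
# list C call sites that pass a literal 0600 to mkdir-style calls
grep -rn --include='*.c' -E 'mkdir[^)]*0600' .
```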

FSangouard commented 2 years ago

Yes, I saw the hardcoded '0600' while looking for a way to configure those default permissions, which is why I decided to create the directories manually with the correct permissions on the machines.

Unfortunately, I'm really not a C programmer, so even if I had the time, I would not be able to contribute a way to make this configurable.

I still want to do some more tests (like geo-replication); I will report here if I run into any trouble.

FSangouard commented 2 years ago

I'm hitting a roadblock with geo-replication now, and I'm not sure what the problem is. Every time I start a geo-replication session, it instantly becomes Faulty, and on the slaves I see the volume being mounted over and over in a loop until I stop the session.

Example (contents of /etc/mtab about a minute after session startup):

localhost:volume1 /var/mountbroker-root/user2006/mtpt-georep-AestaA fuse.glusterfs rw,relatime,user_id=2000,group_id=2000,allow_other,max_read=131072 0 0
localhost:volume1 /var/mountbroker-root/user2006/mtpt-georep-EAnKg0 fuse.glusterfs rw,relatime,user_id=2000,group_id=2000,allow_other,max_read=131072 0 0
localhost:volume1 /var/mountbroker-root/user2006/mtpt-georep-nn4Z5J fuse.glusterfs rw,relatime,user_id=2000,group_id=2000,allow_other,max_read=131072 0 0
localhost:volume1 /var/mountbroker-root/user2006/mtpt-georep-1JGbHN fuse.glusterfs rw,relatime,user_id=2000,group_id=2000,allow_other,max_read=131072 0 0

Before I reached this point, I first had to add CAP_DAC_OVERRIDE to the capabilities of the glusterd service, because the mountbroker explicitly checks that the mountbroker root directory is owned by uid 0 and has no write permission for group/others. The only way for glusterd to start as a non-root user and still have access to this directory is to add that capability.
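
Concretely, that means extending the capability list that was already set for the non-root setup; a sketch of the relevant unit property (not claiming this is sufficient on its own):

```
# in the glusterd unit (or a drop-in), alongside the earlier non-root settings
AmbientCapabilities=CAP_SYS_ADMIN CAP_DAC_OVERRIDE
```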

However, it appears that this is not enough to get a fully functioning geo-replication session. From what I've seen, there are 2 problems:

```
if (geteuid () == 0) {
        fuse_mnt_umount ("fuse", mountpoint, mountpoint, 1);
        return;
} else {
        GFFUSE_LOGERR ("fuse: Effective-uid: %d", geteuid());
}
```

I tried setting user_allow_other in /etc/fuse.conf on the slaves and restarting the session, but it didn't change anything.
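
For reference, that fuse.conf change is a single line; it only allows non-root users to pass allow_other to FUSE mounts, which is probably why it has no effect on the euid check shown in the snippet above:

```
# /etc/fuse.conf on each slave node (tried here, did not help)
user_allow_other
```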

I have some doubts about the feasibility of geo-replication with glusterd running as non-root, because even if we resolve the first problem, I don't see a way around the second one.

Here are the logs in DEBUG:

Master:

In /var/log/glusterfs/geo-replication/volume1_<ip_slave1>_volume1/gsyncd.log:

``` [2022-02-07 10:03:22.277241] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:29.985351] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:30.285010] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:30.354213] D [gsyncd(config-set):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:30.424670] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:38.505162] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:38.800091] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:38.862684] D [gsyncd(config-set):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:38.929032] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:45.770226] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:45.881616] D [gsyncd(status):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:51.610609] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:51.718092] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:52.91358] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:52.197235] D [gsyncd(monitor):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:52.325705] D [monitor(monitor):304:distribute] : master bricks: [{'host': '', 'uuid': 'd79016f4-d408-4ff4-b957-0039361685f0', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}, {'host': '', 'uuid': 'ccb9be91-a933-46ca-9c9b-dc6fe6882204', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}, {'host': '', 'uuid': '50603c27-24bc-4b30-a3e9-c9080b4538d0', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}] [2022-02-07 10:03:52.325989] D [monitor(monitor):314:distribute] : slave SSH gateway: georep@ [2022-02-07 10:03:52.596872] D [monitor(monitor):334:distribute] : slave bricks: [{'host': '', 'uuid': '7b712811-dbff-4ff6-85f3-5980824e4bc5', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}, {'host': '', 'uuid': 'e70b2e88-d277-49f0-b67b-a5c054e716dd', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}, {'host': '', 'uuid': 'f3e4fc1b-82a4-4b4d-97a0-7f864f0b7513', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}] [2022-02-07 10:03:52.597438] D [syncdutils(monitor):945:is_hot] Volinfo: brickpath: ':/applis/apsu/data/glusterfs/volume1/brick1/brick' [2022-02-07 
10:03:52.597956] D [monitor(monitor):348:distribute] : worker specs: [({'host': '', 'uuid': 'd79016f4-d408-4ff4-b957-0039361685f0', 'dir': '/applis/apsu/data/glusterfs/volume1/brick1/brick'}, ('georep@', '7b712811-dbff-4ff6-85f3-5980824e4bc5'), '1', False)] [2022-02-07 10:03:52.603276] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}] [2022-02-07 10:03:52.603414] I [monitor(monitor):160:monitor] Monitor: starting gsyncd worker [{brick=/applis/apsu/data/glusterfs/volume1/brick1/brick}, {slave_node=}] [2022-02-07 10:03:52.607542] D [monitor(monitor):195:monitor] Monitor: Worker would mount volume privately [2022-02-07 10:03:52.664715] D [gsyncd(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:03:52.689609] I [resource(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):1387:connect_remote] SSH: Initializing SSH connection between master and slave... [2022-02-07 10:03:52.733026] D [repce(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):195:push] RepceClient: call 4484:140535483389760:1644228232.73 __repce_version__() ... [2022-02-07 10:03:54.224843] E [syncdutils(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):325:log_raise_exception] : connection to peer is broken [2022-02-07 10:03:54.225368] E [syncdutils(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):847:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-65lueZ/1381357a9626780b90fa59f0afe8a6e8.sock georep@ /nonexistent/gsyncd slave volume1 georep@::volume1 --master-node --master-node-id d79016f4-d408-4ff4-b957-0039361685f0 --master-brick /applis/apsu/data/glusterfs/volume1/brick1/brick --local-node --local-node-id 7b712811-dbff-4ff6-85f3-5980824e4bc5 --slave-timeout 120 --slave-log-level DEBUG --slave-gluster-log-level DEBUG --slave-gluster-command-dir /usr/sbin --master-dist-count 1}, {error=1}] [2022-02-07 10:03:54.225562] E [syncdutils(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):851:logerr] Popen: ssh> failure: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntl2Pc7Y failed with status 1 [2022-02-07 10:03:54.225675] E [syncdutils(worker /applis/apsu/data/glusterfs/volume1/brick1/brick):851:logerr] Popen: ssh> failed with GsyncdError: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntl2Pc7Y failed with status 1. 
[2022-02-07 10:03:54.231666] I [monitor(monitor):220:monitor] Monitor: worker died before establishing connection [{brick=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:04:05.870737] I [monitor(monitor):220:monitor] Monitor: worker died before establishing connection [{brick=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:04:05.871990] I [gsyncdstatus(monitor):248:set_worker_status] GeorepStatus: Worker Status Change [{status=Faulty}] [2022-02-07 10:04:07.89481] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:07.150922] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:07.476850] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:07.538193] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:08.603760] D [gsyncd(monitor-status):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:08.615083] I [subcmds(monitor-status):29:subcmd_monitor_status] : Monitor Status Change [{status=Stopped}] [2022-02-07 10:04:11.582360] D [gsyncd(config-get):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] [2022-02-07 10:04:11.688359] D [gsyncd(status):303:main] : Using session config file [{path=/var/lib/glusterd/geo-replication/volume1__volume1/gsyncd.conf}] ```

Slave:

In /var/log/glusterfs/geo-replication-slaves/volume1_<ip_slave1>_volume1/gsyncd.log:

``` [2022-02-07 10:03:55.575324] I [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):1116:connect] GLUSTER: Mounting gluster volume locally... [2022-02-07 10:03:55.676748] D [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):880:inhibit] MountbrokerMounter: auxiliary glusterfs mount in place [2022-02-07 10:03:56.747661] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):847:errlog] Popen: command returned error [{cmd=/usr/sbin/gluster --remote-host=localhost system:: umount /var/mountbroker-root/mb_hive/mntl2Pc7Y lazy}, {error=1}] [2022-02-07 10:03:56.748090] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):851:logerr] Popen: /usr/sbin/gluster> umount failed [2022-02-07 10:03:56.748245] D [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):947:inhibit] MountbrokerMounter: Lazy umount done: /var/mountbroker-root/mb_hive/mntl2Pc7Y [2022-02-07 10:03:56.749159] W [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):905:inhibit] MountbrokerMounter: stale mount possibly left behind [{path=/var/mountbroker-root/mb_hive/mntl2Pc7Y}] [2022-02-07 10:03:56.749371] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):320:log_raise_exception] : cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntl2Pc7Y failed with status 1 [2022-02-07 10:03:56.749494] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):363:log_raise_exception] : FULL EXCEPTION TRACE: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317, in main func(args) File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 96, in subcmd_slave local.connect() File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1137, in connect self.mounter.inhibit(label) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 908, in inhibit (d, rv)) GsyncdError: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntl2Pc7Y failed with status 1 [2022-02-07 10:04:07.214184] I [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):1116:connect] GLUSTER: Mounting gluster volume locally... 
[2022-02-07 10:04:07.313560] D [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):880:inhibit] MountbrokerMounter: auxiliary glusterfs mount in place [2022-02-07 10:04:08.384382] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):847:errlog] Popen: command returned error [{cmd=/usr/sbin/gluster --remote-host=localhost system:: umount /var/mountbroker-root/mb_hive/mntSN4ahC lazy}, {error=1}] [2022-02-07 10:04:08.384749] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):851:logerr] Popen: /usr/sbin/gluster> umount failed [2022-02-07 10:04:08.384876] D [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):947:inhibit] MountbrokerMounter: Lazy umount done: /var/mountbroker-root/mb_hive/mntSN4ahC [2022-02-07 10:04:08.385706] W [resource(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):905:inhibit] MountbrokerMounter: stale mount possibly left behind [{path=/var/mountbroker-root/mb_hive/mntSN4ahC}] [2022-02-07 10:04:08.385917] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):320:log_raise_exception] : cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntSN4ahC failed with status 1 [2022-02-07 10:04:08.386039] E [syncdutils(slave /applis/apsu/data/glusterfs/volume1/brick1/brick):363:log_raise_exception] : FULL EXCEPTION TRACE: Traceback (most recent call last): File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317, in main func(args) File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 96, in subcmd_slave local.connect() File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1137, in connect self.mounter.inhibit(label) File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 908, in inhibit (d, rv)) GsyncdError: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mntSN4ahC failed with status 1 ```

In /var/log/glusterfs/geo-replication-slaves/volume1_<ip_slave1>_volume1/mnt-<master1>-applis-apsu-data-glusterfs-volume1-brick1-brick.log:

``` [2022-02-07 10:03:55.645951 +0000] I [MSGID: 100030] [glusterfsd.c:2683:main] 0-/usr/sbin/glusterfs: Started running version [{arg=/usr/sbin/glusterfs}, {version=9.4}, {cmdlinestr=/usr/sbin/glusterfs --user-map-root=georep --aux-gfid-mount --acl --log-level=DEBUG --log-file=/var/log/glusterfs/geo-replication-slaves/volume1__volume1/mnt--applis-apsu-data-glusterfs-volume1-brick1-brick.log --volfile-server=localhost --volfile-id=volume1 --client-pid=-1 /var/mountbroker-root/user2006/mtpt-georep-nn4Z5J}] [2022-02-07 10:03:55.646317 +0000] D [MSGID: 0] [glusterfsd.c:415:set_fuse_mount_options] 0-glusterfsd: fopen-keep-cache mode 2 [2022-02-07 10:03:55.646337 +0000] D [MSGID: 0] [glusterfsd.c:459:set_fuse_mount_options] 0-glusterfsd: fuse direct io type 2 [2022-02-07 10:03:55.646353 +0000] D [MSGID: 0] [glusterfsd.c:472:set_fuse_mount_options] 0-glusterfsd: fuse no-root-squash mode 0 [2022-02-07 10:03:55.646364 +0000] D [MSGID: 0] [glusterfsd.c:513:set_fuse_mount_options] 0-glusterfsd: kernel-writeback-cache mode 2 [2022-02-07 10:03:55.646373 +0000] D [MSGID: 0] [glusterfsd.c:531:set_fuse_mount_options] 0-glusterfsd: fuse-flush-handle-interrupt mode 2 [2022-02-07 10:03:55.646785 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xae6c) [0x7fb9d0ab8e6c] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7fb9d9f9f570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key auto-invalidation, string type asked, has unsigned integer type [Invalid argument] [2022-02-07 10:03:55.646889 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option auto-invalidation using set value 0 [2022-02-07 10:03:55.646952 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaeca) [0x7fb9d0ab8eca] -->/lib64/libglusterfs.so.0(xlator_option_init_double+0x60) [0x7fb9d9f9fce0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key negative-timeout, string type asked, has float type [Invalid argument] [2022-02-07 10:03:55.646969 +0000] D [MSGID: 0] [options.c:1233:xlator_option_init_double] 0-fuse: option negative-timeout using set value 0.000000 [2022-02-07 10:03:55.647016 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaefc) [0x7fb9d0ab8efc] -->/lib64/libglusterfs.so.0(xlator_option_init_int32+0x60) [0x7fb9d9f9e850] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key client-pid, string type asked, has integer type [Invalid argument] [2022-02-07 10:03:55.647030 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-fuse: option client-pid using set value -1 [2022-02-07 10:03:55.647076 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaf48) [0x7fb9d0ab8f48] -->/lib64/libglusterfs.so.0(xlator_option_init_uint32+0x60) [0x7fb9d9f9e5b0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key uid-map-root, string type asked, has integer type [Invalid argument] [2022-02-07 10:03:55.647091 +0000] D [MSGID: 0] [options.c:1224:xlator_option_init_uint32] 0-fuse: option uid-map-root using set value 2006 [2022-02-07 10:03:55.647138 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xafcd) [0x7fb9d0ab8fcd] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7fb9d9f9f570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key acl, string type asked, has pointer type [Invalid 
argument] [2022-02-07 10:03:55.647153 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option acl using set value on [2022-02-07 10:03:55.647212 +0000] D [dict.c:2432:dict_get_str] (-->/lib64/libglusterfs.so.0(xlator_init+0xc1) [0x7fb9d9f41261] -->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb158) [0x7fb9d0ab9158] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key sync-to-mount, string type asked, has pointer type [Invalid argument] [2022-02-07 10:03:55.647293 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb2f7) [0x7fb9d0ab92f7] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7fb9d9f9f570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key no-root-squash, string type asked, has pointer type [Invalid argument] [2022-02-07 10:03:55.647314 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option no-root-squash using set value disable [2022-02-07 10:03:55.647367 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb371) [0x7fb9d0ab9371] -->/lib64/libglusterfs.so.0(xlator_option_init_uint32+0x60) [0x7fb9d9f9e5b0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7fb9d9f3ad82] ) 0-dict: key invalidate-limit, string type asked, has integer type [Invalid argument] [2022-02-07 10:03:55.647382 +0000] D [MSGID: 0] [options.c:1224:xlator_option_init_uint32] 0-fuse: option invalidate-limit using set value 0 [2022-02-07 10:03:55.647916 +0000] I [glusterfsd.c:2418:daemonize] 0-glusterfs: Pid of current running process is 13711 [2022-02-07 10:03:55.647994 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:03:55.648575 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-glusterfs: defaulting frame-timeout to 30mins [2022-02-07 10:03:55.648592 +0000] D [rpc-clnt.c:1024:rpc_clnt_connection_init] 0-glusterfs: disable ping-timeout [2022-02-07 10:03:55.648617 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:03:55.651279 +0000] D [MSGID: 101233] [options.c:975:xl_opt_validate] 0-glusterfs: option is deprecated, continuing with correction [{key=address-family}, {preferred=transport.address-family}] [2022-02-07 10:03:55.652810 +0000] D [socket.c:4570:socket_init] 0-glusterfs: Configured transport.tcp-user-timeout=42 [2022-02-07 10:03:55.652830 +0000] D [socket.c:4590:socket_init] 0-glusterfs: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:03:55.652857 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-glusterfs: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:03:55.652965 +0000] D [dict.c:1248:data_to_uint16] (-->/usr/lib64/glusterfs/9.4/rpc-transport/socket.so(+0x5c5e) [0x7fb9cd678c5e] -->/usr/lib64/glusterfs/9.4/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x1c4) [0x7fb9cd681304] -->/lib64/libglusterfs.so.0(data_to_uint16+0x1e1) [0x7fb9d9f386a1] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument] [2022-02-07 10:03:55.655756 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip-127.0.0.1 (port-24007) for hostname: localhost and port: 24007 [2022-02-07 10:03:55.655786 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-glusterfs: disabling SSL for portmapper connection [2022-02-07 10:03:55.656200 +0000] I [MSGID: 101190] 
[event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] [2022-02-07 10:03:55.656294 +0000] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] [2022-02-07 10:03:55.656462 +0000] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning [2022-02-07 10:03:55.656790 +0000] I [glusterfsd-mgmt.c:2171:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: :24007 :24007 [2022-02-07 10:03:55.659792 +0000] I [io-stats.c:3708:ios_sample_buf_size_configure] 0-volume1: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0 [2022-02-07 10:03:55.659880 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-md-cache: option cache-posix-acl using set value true [2022-02-07 10:03:55.660045 +0000] D [MSGID: 0] [quick-read.c:1170:check_cache_size_ok] 0-volume1-quick-read: Max cache size is 8200593408 [2022-02-07 10:03:55.660147 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-utime: option noatime using set value on [2022-02-07 10:03:55.660229 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-dht: option lock-migration using set value off [2022-02-07 10:03:55.660254 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-dht: option force-migration using set value off [2022-02-07 10:03:55.660330 +0000] D [MSGID: 0] [dht-shared.c:329:dht_init_regex] 0-volume1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$ [2022-02-07 10:03:55.660480 +0000] D [MSGID: 0] [options.c:1221:xlator_option_init_str] 0-volume1-replicate-0: option data-self-heal using set value on [2022-02-07 10:03:55.660508 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option metadata-self-heal using set value on [2022-02-07 10:03:55.660546 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option entry-self-heal using set value on [2022-02-07 10:03:55.660606 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option granular-entry-heal using set value on [2022-02-07 10:03:55.660707 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option use-anonymous-inode using set value yes [2022-02-07 10:03:55.660728 +0000] D [MSGID: 0] [options.c:1221:xlator_option_init_str] 0-volume1-replicate-0: option afr-pending-xattr using set value volume1-client-0,volume1-client-1,volume1-client-2 [2022-02-07 10:03:55.661640 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-2: option ping-timeout using set value 42 [2022-02-07 10:03:55.661660 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-2: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:03:55.661677 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-2: option send-gids using set value true [2022-02-07 10:03:55.661694 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-2: option strict-locks using set value off [2022-02-07 10:03:55.661708 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-2: defaulting frame-timeout to 30mins [2022-02-07 10:03:55.661717 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-2: setting ping-timeout to 42 [2022-02-07 10:03:55.661729 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file 
/usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:03:55.661839 +0000] D [socket.c:4570:socket_init] 0-volume1-client-2: Configured transport.tcp-user-timeout=0 [2022-02-07 10:03:55.661850 +0000] D [socket.c:4590:socket_init] 0-volume1-client-2: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:03:55.661862 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-2: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:03:55.661870 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-2: client init successful [2022-02-07 10:03:55.661916 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-1: option ping-timeout using set value 42 [2022-02-07 10:03:55.661933 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-1: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:03:55.661950 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-1: option send-gids using set value true [2022-02-07 10:03:55.661966 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-1: option strict-locks using set value off [2022-02-07 10:03:55.661984 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-1: defaulting frame-timeout to 30mins [2022-02-07 10:03:55.661994 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-1: setting ping-timeout to 42 [2022-02-07 10:03:55.662004 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:03:55.662099 +0000] D [socket.c:4570:socket_init] 0-volume1-client-1: Configured transport.tcp-user-timeout=0 [2022-02-07 10:03:55.662113 +0000] D [socket.c:4590:socket_init] 0-volume1-client-1: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:03:55.662124 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-1: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:03:55.662132 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-1: client init successful [2022-02-07 10:03:55.662165 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-0: option ping-timeout using set value 42 [2022-02-07 10:03:55.662179 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-0: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:03:55.662196 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-0: option send-gids using set value true [2022-02-07 10:03:55.662212 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-0: option strict-locks using set value off [2022-02-07 10:03:55.662224 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-0: defaulting frame-timeout to 30mins [2022-02-07 10:03:55.662232 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-0: setting ping-timeout to 42 [2022-02-07 10:03:55.662242 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:03:55.662343 +0000] D [socket.c:4570:socket_init] 0-volume1-client-0: Configured transport.tcp-user-timeout=0 [2022-02-07 10:03:55.662354 +0000] D [socket.c:4590:socket_init] 0-volume1-client-0: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:03:55.662364 +0000] 
D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-0: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:03:55.662372 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-0: client init successful [2022-02-07 10:03:55.662419 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-replicate-0: option 'volume-id' is not recognized [2022-02-07 10:03:55.662467 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.address-family' is not recognized [2022-02-07 10:03:55.662481 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.ssl-enabled' is not recognized [2022-02-07 10:03:55.662494 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:03:55.662507 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:03:55.662530 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:03:55.662544 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:03:55.662565 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.address-family' is not recognized [2022-02-07 10:03:55.662579 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.ssl-enabled' is not recognized [2022-02-07 10:03:55.662591 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:03:55.662604 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:03:55.662616 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:03:55.662632 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:03:55.662654 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.address-family' is not recognized [2022-02-07 10:03:55.662667 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.ssl-enabled' is not recognized [2022-02-07 10:03:55.662679 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:03:55.662692 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:03:55.662705 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:03:55.662717 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:03:55.662740 +0000] D [fuse-bridge.c:6425:notify] 0-fuse: got event 12 on graph 0 [2022-02-07 10:03:55.662770 +0000] D 
[MSGID: 0] [afr-common.c:5843:__afr_launch_notify_timer] 0-volume1-replicate-0: Initiating child-down timer [2022-02-07 10:03:55.662788 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-0: parent translators are ready, attempting connect on transport [] [2022-02-07 10:03:55.665252 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.665273 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-0: disabling SSL for portmapper connection [2022-02-07 10:03:55.665409 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-1: parent translators are ready, attempting connect on transport [] [2022-02-07 10:03:55.665557 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.666421 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:03:55.666499 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-0: detected portmapper on server [2022-02-07 10:03:55.666655 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:03:55.666699 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-0: changing port to 49153 (from 0) [2022-02-07 10:03:55.666733 +0000] I [socket.c:849:__socket_shutdown] 0-volume1-client-0: intentional socket shutdown(12) [2022-02-07 10:03:55.666755 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL) [2022-02-07 10:03:55.666767 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_DISCONNECT [2022-02-07 10:03:55.666778 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-0: disconnected (skipped notify) [2022-02-07 10:03:55.668828 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.668849 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-1: disabling SSL for portmapper connection [2022-02-07 10:03:55.668949 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-2: parent translators are ready, attempting connect on transport [] [2022-02-07 10:03:55.671265 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.671385 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.671882 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:03:55.671945 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.672427 +0000] D 
[rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49153: ping timer event already removed [2022-02-07 10:03:55.672485 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-1: detected portmapper on server [2022-02-07 10:03:55.672583 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:03:55.672680 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:03:55.673120 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:03:55.673151 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-1: changing port to 49152 (from 0) [2022-02-07 10:03:55.673185 +0000] I [socket.c:849:__socket_shutdown] 0-volume1-client-1: intentional socket shutdown(13) [2022-02-07 10:03:55.673208 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:13) (non-SSL) [2022-02-07 10:03:55.673220 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_DISCONNECT [2022-02-07 10:03:55.673231 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-1: disconnected (skipped notify) [2022-02-07 10:03:55.673854 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.673875 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-2: disabling SSL for portmapper connection Final graph: +------------------------------------------------------------------------------+ 1: volume volume1-client-0 2: type protocol/client 3: option opversion 90000 4: option clnt-lk-version 1 5: option volfile-checksum 0 6: option volfile-key volume1 7: option client-version 9.4 8: option process-uuid CTX_ID:18d96044-951f-4f70-bb73-09f4c5b0a409-GRAPH_ID:0-PID:13711-HOST:-PC_NAME:volume1-client-0-RECON_NO:-0 9: option fops-version 1298437 10: option ping-timeout 42 11: option remote-host 12: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 13: option transport-type socket 14: option transport.address-family inet 15: option username 9971722f-d5f0-4d53-be81-f16da372717c 16: option password 7af51272-a23b-46ec-bb64-144f920fdf99 17: option transport.socket.ssl-enabled off 18: option transport.tcp-user-timeout 0 19: option transport.socket.keepalive-time 20 20: option transport.socket.keepalive-interval 2 21: option transport.socket.keepalive-count 9 22: option strict-locks off 23: option send-gids true 24: end-volume 25: 26: volume volume1-client-1 27: type protocol/client 28: option ping-timeout 42 29: option remote-host 30: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 31: option transport-type socket 32: option transport.address-family inet 33: option username 9971722f-d5f0-4d53-be81-f16da372717c 34: option password 7af51272-a23b-46ec-bb64-144f920fdf99 35: option transport.socket.ssl-enabled off 36: option transport.tcp-user-timeout 0 37: option transport.socket.keepalive-time 20 38: option transport.socket.keepalive-interval 2 39: 
option transport.socket.keepalive-count 9 40: option strict-locks off 41: option send-gids true 42: end-volume 43: 44: volume volume1-client-2 45: type protocol/client 46: option ping-timeout 42 47: option remote-host 48: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 49: option transport-type socket 50: option transport.address-family inet 51: option username 9971722f-d5f0-4d53-be81-f16da372717c 52: option password 7af51272-a23b-46ec-bb64-144f920fdf99 53: option transport.socket.ssl-enabled off 54: option transport.tcp-user-timeout 0 55: option transport.socket.keepalive-time 20 56: option transport.socket.keepalive-interval 2 57: option transport.socket.keepalive-count 9 58: option strict-locks off 59: option send-gids true 60: end-volume 61: 62: volume volume1-replicate-0 63: type cluster/replicate 64: option afr-pending-xattr volume1-client-0,volume1-client-1,volume1-client-2 65: option volume-id c5a46a9e-f15f-40f1-a0f1-3482269262c6 66: option metadata-self-heal on 67: option data-self-heal on 68: option entry-self-heal on 69: option granular-entry-heal on 70: option use-compound-fops off 71: option use-anonymous-inode yes 72: subvolumes volume1-client-0 volume1-client-1 volume1-client-2 73: end-volume 74: 75: volume volume1-dht 76: type cluster/distribute 77: option lock-migration off 78: option force-migration off 79: subvolumes volume1-replicate-0 80: end-volume 81: 82: volume volume1-utime 83: type features/utime 84: option noatime on 85: subvolumes volume1-dht 86: end-volume 87: 88: volume volume1-write-behind 89: type performance/write-behind 90: subvolumes volume1-utime 91: end-volume 92: 93: volume volume1-open-behind 94: type performance/open-behind 95: subvolumes volume1-write-behind 96: end-volume 97: 98: volume volume1-quick-read 99: type performance/quick-read 100: subvolumes volume1-open-behind 101: end-volume 102: 103: volume volume1-md-cache 104: type performance/md-cache 105: option cache-posix-acl true 106: subvolumes volume1-quick-read 107: end-volume 108: 109: volume volume1 110: type debug/io-stats 111: option log-level INFO 112: option threads 16 113: option latency-measurement off 114: option count-fop-hits off 115: option global-threading off 116: subvolumes volume1-md-cache 117: end-volume 118: 119: volume posix-acl-autoload 120: type system/posix-acl 121: subvolumes volume1 122: end-volume 123: 124: volume gfid-access-autoload 125: type features/gfid-access 126: subvolumes posix-acl-autoload 127: end-volume 128: 129: volume meta-autoload 130: type meta 131: subvolumes gfid-access-autoload 132: end-volume 133: +------------------------------------------------------------------------------+ [2022-02-07 10:03:55.674687 +0000] D [glusterfsd-mgmt.c:3027:glusterfs_mgmt_pmap_signin] 0-fsd-mgmt: portmapper signin arguments not given [2022-02-07 10:03:55.674784 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-0: Connected, attached to remote volume [{conn-name=volume1-client-0}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:03:55.674805 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-0: No fds to open - notifying all parents child up [2022-02-07 10:03:55.674824 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:03:55.674835 +0000] I [MSGID: 108005] [afr-common.c:6065:__afr_handle_child_up_event] 0-volume1-replicate-0: Subvolume 'volume1-client-0' came back up; going 
online. [2022-02-07 10:03:55.674994 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.675046 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:03:55.676010 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:03:55.676555 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.676675 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-2: detected portmapper on server [2022-02-07 10:03:55.676732 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:03:55.677098 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.677588 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:03:55.677658 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-2: changing port to 49152 (from 0) [2022-02-07 10:03:55.677688 +0000] I [socket.c:849:__socket_shutdown] 0-volume1-client-2: intentional socket shutdown(15) [2022-02-07 10:03:55.677722 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:03:55.677789 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:15) (non-SSL) [2022-02-07 10:03:55.677835 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_DISCONNECT [2022-02-07 10:03:55.677848 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-2: disconnected (skipped notify) [2022-02-07 10:03:55.677962 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:03:55.678866 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-1: Connected, attached to remote volume [{conn-name=volume1-client-1}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:03:55.678885 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-1: No fds to open - notifying all parents child up [2022-02-07 10:03:55.678898 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:03:55.678934 +0000] I [MSGID: 108002] [afr-common.c:6435:afr_notify] 0-volume1-replicate-0: Client-quorum is met [2022-02-07 10:03:55.679554 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:03:55.681085 +0000] D [MSGID: 0] 
[common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:03:55.681328 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_CONNECT [2022-02-07 10:03:55.681778 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:03:55.681847 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-2: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:03:55.681971 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:03:55.682854 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-2: Connected, attached to remote volume [{conn-name=volume1-client-2}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:03:55.682872 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-2: No fds to open - notifying all parents child up [2022-02-07 10:03:55.682884 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:03:55.683101 +0000] D [fuse-bridge.c:6425:notify] 0-fuse: got event 5 on graph 0 [2022-02-07 10:03:55.683615 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:03:55.684049 +0000] D [fuse-bridge.c:5958:fuse_get_mount_status] 0-fuse: mount status is 0 [2022-02-07 10:03:55.684252 +0000] D [fuse-bridge.c:5234:fuse_init] 0-glusterfs-fuse: Detected support for FUSE_AUTO_INVAL_DATA. Enabling fopen_keep_cache automatically. 
[2022-02-07 10:03:55.684303 +0000] I [fuse-bridge.c:5296:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23 [2022-02-07 10:03:55.684316 +0000] I [fuse-bridge.c:5924:fuse_graph_sync] 0-fuse: switched to graph 0 [2022-02-07 10:03:55.684327 +0000] D [MSGID: 0] [dht-diskusage.c:90:dht_du_info_cbk] 0-volume1-dht: subvolume 'volume1-replicate-0': avail_percent is: 98.00 and avail_space is: 21214810112 and avail_inodes is: 99.00 [2022-02-07 10:03:55.685251 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-0: resetting op_ret to 0 from 149 [2022-02-07 10:03:55.685275 +0000] I [MSGID: 108031] [afr-common.c:3203:afr_local_discovery_cbk] 0-volume1-replicate-0: selecting local read_child volume1-client-0 [2022-02-07 10:03:55.685462 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-2: resetting op_ret to 0 from 149 [2022-02-07 10:03:55.687074 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-1: resetting op_ret to 0 from 149 [2022-02-07 10:04:07.281086 +0000] I [MSGID: 100030] [glusterfsd.c:2683:main] 0-/usr/sbin/glusterfs: Started running version [{arg=/usr/sbin/glusterfs}, {version=9.4}, {cmdlinestr=/usr/sbin/glusterfs --user-map-root=georep --aux-gfid-mount --acl --log-level=DEBUG --log-file=/var/log/glusterfs/geo-replication-slaves/volume1__volume1/mnt--applis-apsu-data-glusterfs-volume1-brick1-brick.log --volfile-server=localhost --volfile-id=volume1 --client-pid=-1 /var/mountbroker-root/user2006/mtpt-georep-1JGbHN}] [2022-02-07 10:04:07.281387 +0000] D [MSGID: 0] [glusterfsd.c:415:set_fuse_mount_options] 0-glusterfsd: fopen-keep-cache mode 2 [2022-02-07 10:04:07.281413 +0000] D [MSGID: 0] [glusterfsd.c:459:set_fuse_mount_options] 0-glusterfsd: fuse direct io type 2 [2022-02-07 10:04:07.281427 +0000] D [MSGID: 0] [glusterfsd.c:472:set_fuse_mount_options] 0-glusterfsd: fuse no-root-squash mode 0 [2022-02-07 10:04:07.281438 +0000] D [MSGID: 0] [glusterfsd.c:513:set_fuse_mount_options] 0-glusterfsd: kernel-writeback-cache mode 2 [2022-02-07 10:04:07.281447 +0000] D [MSGID: 0] [glusterfsd.c:531:set_fuse_mount_options] 0-glusterfsd: fuse-flush-handle-interrupt mode 2 [2022-02-07 10:04:07.281816 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xae6c) [0x7f55ba017e6c] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7f55c34fe570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key auto-invalidation, string type asked, has unsigned integer type [Invalid argument] [2022-02-07 10:04:07.281919 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option auto-invalidation using set value 0 [2022-02-07 10:04:07.281982 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaeca) [0x7f55ba017eca] -->/lib64/libglusterfs.so.0(xlator_option_init_double+0x60) [0x7f55c34fece0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key negative-timeout, string type asked, has float type [Invalid argument] [2022-02-07 10:04:07.281999 +0000] D [MSGID: 0] [options.c:1233:xlator_option_init_double] 0-fuse: option negative-timeout using set value 0.000000 [2022-02-07 10:04:07.282046 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaefc) [0x7f55ba017efc] -->/lib64/libglusterfs.so.0(xlator_option_init_int32+0x60) [0x7f55c34fd850] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) 
[0x7f55c3499d82] ) 0-dict: key client-pid, string type asked, has integer type [Invalid argument] [2022-02-07 10:04:07.282060 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-fuse: option client-pid using set value -1 [2022-02-07 10:04:07.282107 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xaf48) [0x7f55ba017f48] -->/lib64/libglusterfs.so.0(xlator_option_init_uint32+0x60) [0x7f55c34fd5b0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key uid-map-root, string type asked, has integer type [Invalid argument] [2022-02-07 10:04:07.282121 +0000] D [MSGID: 0] [options.c:1224:xlator_option_init_uint32] 0-fuse: option uid-map-root using set value 2006 [2022-02-07 10:04:07.282170 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xafcd) [0x7f55ba017fcd] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7f55c34fe570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key acl, string type asked, has pointer type [Invalid argument] [2022-02-07 10:04:07.282184 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option acl using set value on [2022-02-07 10:04:07.282244 +0000] D [dict.c:2432:dict_get_str] (-->/lib64/libglusterfs.so.0(xlator_init+0xc1) [0x7f55c34a0261] -->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb158) [0x7f55ba018158] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key sync-to-mount, string type asked, has pointer type [Invalid argument] [2022-02-07 10:04:07.282314 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb2f7) [0x7f55ba0182f7] -->/lib64/libglusterfs.so.0(xlator_option_init_bool+0x60) [0x7f55c34fe570] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key no-root-squash, string type asked, has pointer type [Invalid argument] [2022-02-07 10:04:07.282331 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-fuse: option no-root-squash using set value disable [2022-02-07 10:04:07.282383 +0000] D [dict.c:2432:dict_get_str] (-->/usr/lib64/glusterfs/9.4/xlator/mount/fuse.so(+0xb371) [0x7f55ba018371] -->/lib64/libglusterfs.so.0(xlator_option_init_uint32+0x60) [0x7f55c34fd5b0] -->/lib64/libglusterfs.so.0(dict_get_str+0x102) [0x7f55c3499d82] ) 0-dict: key invalidate-limit, string type asked, has integer type [Invalid argument] [2022-02-07 10:04:07.282401 +0000] D [MSGID: 0] [options.c:1224:xlator_option_init_uint32] 0-fuse: option invalidate-limit using set value 0 [2022-02-07 10:04:07.282990 +0000] I [glusterfsd.c:2418:daemonize] 0-glusterfs: Pid of current running process is 13786 [2022-02-07 10:04:07.283104 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5 [2022-02-07 10:04:07.283794 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-glusterfs: defaulting frame-timeout to 30mins [2022-02-07 10:04:07.283815 +0000] D [rpc-clnt.c:1024:rpc_clnt_connection_init] 0-glusterfs: disable ping-timeout [2022-02-07 10:04:07.283851 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:04:07.287761 +0000] D [MSGID: 101233] [options.c:975:xl_opt_validate] 0-glusterfs: option is deprecated, continuing with correction [{key=address-family}, {preferred=transport.address-family}] [2022-02-07 10:04:07.289275 +0000] D [socket.c:4570:socket_init] 0-glusterfs: Configured transport.tcp-user-timeout=42 [2022-02-07 10:04:07.289295 +0000] D [socket.c:4590:socket_init] 0-glusterfs: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:04:07.289321 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-glusterfs: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:04:07.289411 +0000] D [dict.c:1248:data_to_uint16] (-->/usr/lib64/glusterfs/9.4/rpc-transport/socket.so(+0x5c5e) [0x7f55b6bd7c5e] -->/usr/lib64/glusterfs/9.4/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x1c4) [0x7f55b6be0304] -->/lib64/libglusterfs.so.0(data_to_uint16+0x1e1) [0x7f55c34976a1] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument] [2022-02-07 10:04:07.292257 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip-127.0.0.1 (port-24007) for hostname: localhost and port: 24007 [2022-02-07 10:04:07.292281 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-glusterfs: disabling SSL for portmapper connection [2022-02-07 10:04:07.292735 +0000] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=0}] [2022-02-07 10:04:07.292787 +0000] I [MSGID: 101190] [event-epoll.c:670:event_dispatch_epoll_worker] 0-epoll: Started thread with index [{index=1}] [2022-02-07 10:04:07.292939 +0000] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-glusterfs: ping timeout is 0, returning [2022-02-07 10:04:07.293220 +0000] I [glusterfsd-mgmt.c:2171:mgmt_getspec_cbk] 0-glusterfs: Received list of available volfile servers: :24007 :24007 [2022-02-07 10:04:07.296249 +0000] I [io-stats.c:3708:ios_sample_buf_size_configure] 0-volume1: Configure ios_sample_buf size is 1024 because ios_sample_interval is 0 [2022-02-07 10:04:07.296331 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-md-cache: option cache-posix-acl using set value true [2022-02-07 10:04:07.296481 +0000] D [MSGID: 0] [quick-read.c:1170:check_cache_size_ok] 0-volume1-quick-read: Max cache size is 8200593408 [2022-02-07 10:04:07.296575 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-utime: option noatime using set value on [2022-02-07 10:04:07.296643 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-dht: option lock-migration using set value off [2022-02-07 10:04:07.296662 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-dht: option force-migration using set value off [2022-02-07 10:04:07.296729 +0000] D [MSGID: 0] [dht-shared.c:329:dht_init_regex] 0-volume1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$ [2022-02-07 10:04:07.296844 +0000] D [MSGID: 0] [options.c:1221:xlator_option_init_str] 0-volume1-replicate-0: option data-self-heal using set value on [2022-02-07 10:04:07.296874 +0000] D [MSGID: 
0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option metadata-self-heal using set value on [2022-02-07 10:04:07.296895 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option entry-self-heal using set value on [2022-02-07 10:04:07.296954 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option granular-entry-heal using set value on [2022-02-07 10:04:07.297054 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-replicate-0: option use-anonymous-inode using set value yes [2022-02-07 10:04:07.297074 +0000] D [MSGID: 0] [options.c:1221:xlator_option_init_str] 0-volume1-replicate-0: option afr-pending-xattr using set value volume1-client-0,volume1-client-1,volume1-client-2 [2022-02-07 10:04:07.297842 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-2: option ping-timeout using set value 42 [2022-02-07 10:04:07.297861 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-2: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:04:07.297878 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-2: option send-gids using set value true [2022-02-07 10:04:07.297895 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-2: option strict-locks using set value off [2022-02-07 10:04:07.297909 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-2: defaulting frame-timeout to 30mins [2022-02-07 10:04:07.297918 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-2: setting ping-timeout to 42 [2022-02-07 10:04:07.297930 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:04:07.298038 +0000] D [socket.c:4570:socket_init] 0-volume1-client-2: Configured transport.tcp-user-timeout=0 [2022-02-07 10:04:07.298049 +0000] D [socket.c:4590:socket_init] 0-volume1-client-2: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:04:07.298060 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-2: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:04:07.298069 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-2: client init successful [2022-02-07 10:04:07.298106 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-1: option ping-timeout using set value 42 [2022-02-07 10:04:07.298120 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-1: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:04:07.298137 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-1: option send-gids using set value true [2022-02-07 10:04:07.298153 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-1: option strict-locks using set value off [2022-02-07 10:04:07.298171 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-1: defaulting frame-timeout to 30mins [2022-02-07 10:04:07.298181 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-1: setting ping-timeout to 42 [2022-02-07 10:04:07.298191 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:04:07.298286 +0000] D [socket.c:4570:socket_init] 0-volume1-client-1: 
Configured transport.tcp-user-timeout=0 [2022-02-07 10:04:07.298296 +0000] D [socket.c:4590:socket_init] 0-volume1-client-1: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:04:07.298305 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-1: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:04:07.298313 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-1: client init successful [2022-02-07 10:04:07.298348 +0000] D [MSGID: 0] [options.c:1225:xlator_option_init_int32] 0-volume1-client-0: option ping-timeout using set value 42 [2022-02-07 10:04:07.298363 +0000] D [MSGID: 0] [options.c:1232:xlator_option_init_path] 0-volume1-client-0: option remote-subvolume using set value /applis/apsu/data/glusterfs/volume1/brick1/brick [2022-02-07 10:04:07.298379 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-0: option send-gids using set value true [2022-02-07 10:04:07.298396 +0000] D [MSGID: 0] [options.c:1230:xlator_option_init_bool] 0-volume1-client-0: option strict-locks using set value off [2022-02-07 10:04:07.298408 +0000] D [rpc-clnt.c:1012:rpc_clnt_connection_init] 0-volume1-client-0: defaulting frame-timeout to 30mins [2022-02-07 10:04:07.298416 +0000] D [rpc-clnt.c:1020:rpc_clnt_connection_init] 0-volume1-client-0: setting ping-timeout to 42 [2022-02-07 10:04:07.298426 +0000] D [rpc-transport.c:278:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/9.4/rpc-transport/socket.so [2022-02-07 10:04:07.298538 +0000] D [socket.c:4570:socket_init] 0-volume1-client-0: Configured transport.tcp-user-timeout=0 [2022-02-07 10:04:07.298550 +0000] D [socket.c:4590:socket_init] 0-volume1-client-0: Reconfigured transport.keepalivecnt=9 [2022-02-07 10:04:07.298559 +0000] D [rpc-clnt.c:1587:rpcclnt_cbk_program_register] 0-volume1-client-0: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2022-02-07 10:04:07.298567 +0000] D [MSGID: 0] [client.c:2507:client_init_rpc] 0-volume1-client-0: client init successful [2022-02-07 10:04:07.298614 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-replicate-0: option 'volume-id' is not recognized [2022-02-07 10:04:07.298662 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.address-family' is not recognized [2022-02-07 10:04:07.298676 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.ssl-enabled' is not recognized [2022-02-07 10:04:07.298689 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:04:07.298702 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:04:07.298715 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:04:07.298728 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-2: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:04:07.298749 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.address-family' is not recognized [2022-02-07 10:04:07.298762 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.ssl-enabled' is not recognized 
[2022-02-07 10:04:07.298775 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:04:07.298787 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:04:07.298800 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:04:07.298813 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-1: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:04:07.298834 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.address-family' is not recognized [2022-02-07 10:04:07.298847 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.ssl-enabled' is not recognized [2022-02-07 10:04:07.298863 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.tcp-user-timeout' is not recognized [2022-02-07 10:04:07.298876 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-time' is not recognized [2022-02-07 10:04:07.298889 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-interval' is not recognized [2022-02-07 10:04:07.298902 +0000] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-volume1-client-0: option 'transport.socket.keepalive-count' is not recognized [2022-02-07 10:04:07.298925 +0000] D [fuse-bridge.c:6425:notify] 0-fuse: got event 12 on graph 0 [2022-02-07 10:04:07.298946 +0000] D [MSGID: 0] [afr-common.c:5843:__afr_launch_notify_timer] 0-volume1-replicate-0: Initiating child-down timer [2022-02-07 10:04:07.298961 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-0: parent translators are ready, attempting connect on transport [] [2022-02-07 10:04:07.301415 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.301436 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-0: disabling SSL for portmapper connection [2022-02-07 10:04:07.301610 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-1: parent translators are ready, attempting connect on transport [] [2022-02-07 10:04:07.301645 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.302347 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:04:07.302442 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-0: detected portmapper on server [2022-02-07 10:04:07.302536 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:04:07.302693 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-0: changing port to 49153 (from 0) [2022-02-07 10:04:07.302739 +0000] I 
[socket.c:849:__socket_shutdown] 0-volume1-client-0: intentional socket shutdown(12) [2022-02-07 10:04:07.302763 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL) [2022-02-07 10:04:07.302776 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_DISCONNECT [2022-02-07 10:04:07.302789 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-0: disconnected (skipped notify) [2022-02-07 10:04:07.304902 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.304922 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-1: disabling SSL for portmapper connection [2022-02-07 10:04:07.305013 +0000] I [MSGID: 114020] [client.c:2319:notify] 0-volume1-client-2: parent translators are ready, attempting connect on transport [] [2022-02-07 10:04:07.307407 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.307545 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.308002 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:04:07.308062 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-0: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.308532 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49153: ping timer event already removed [2022-02-07 10:04:07.308593 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-1: detected portmapper on server [2022-02-07 10:04:07.308669 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-0: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:04:07.308748 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:04:07.308772 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:04:07.308880 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-1: changing port to 49152 (from 0) [2022-02-07 10:04:07.308942 +0000] I [socket.c:849:__socket_shutdown] 0-volume1-client-1: intentional socket shutdown(13) [2022-02-07 10:04:07.308966 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:13) (non-SSL) [2022-02-07 10:04:07.308979 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_DISCONNECT [2022-02-07 10:04:07.308991 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-1: disconnected (skipped notify) [2022-02-07 10:04:07.310137 +0000] D [MSGID: 0] 
[common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.310158 +0000] D [socket.c:3306:socket_fix_ssl_opts] 0-volume1-client-2: disabling SSL for portmapper connection Final graph: +------------------------------------------------------------------------------+ 1: volume volume1-client-0 2: type protocol/client 3: option opversion 90000 4: option clnt-lk-version 1 5: option volfile-checksum 0 6: option volfile-key volume1 7: option client-version 9.4 8: option process-uuid CTX_ID:092093d8-046d-40e1-ba79-e73793bc9242-GRAPH_ID:0-PID:13786-HOST:-PC_NAME:volume1-client-0-RECON_NO:-0 9: option fops-version 1298437 10: option ping-timeout 42 11: option remote-host 12: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 13: option transport-type socket 14: option transport.address-family inet 15: option username 9971722f-d5f0-4d53-be81-f16da372717c 16: option password 7af51272-a23b-46ec-bb64-144f920fdf99 17: option transport.socket.ssl-enabled off 18: option transport.tcp-user-timeout 0 19: option transport.socket.keepalive-time 20 20: option transport.socket.keepalive-interval 2 21: option transport.socket.keepalive-count 9 22: option strict-locks off 23: option send-gids true 24: end-volume 25: 26: volume volume1-client-1 27: type protocol/client 28: option ping-timeout 42 29: option remote-host 30: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 31: option transport-type socket 32: option transport.address-family inet 33: option username 9971722f-d5f0-4d53-be81-f16da372717c 34: option password 7af51272-a23b-46ec-bb64-144f920fdf99 35: option transport.socket.ssl-enabled off 36: option transport.tcp-user-timeout 0 37: option transport.socket.keepalive-time 20 38: option transport.socket.keepalive-interval 2 39: option transport.socket.keepalive-count 9 40: option strict-locks off 41: option send-gids true 42: end-volume 43: 44: volume volume1-client-2 45: type protocol/client 46: option ping-timeout 42 47: option remote-host 48: option remote-subvolume /applis/apsu/data/glusterfs/volume1/brick1/brick 49: option transport-type socket 50: option transport.address-family inet 51: option username 9971722f-d5f0-4d53-be81-f16da372717c 52: option password 7af51272-a23b-46ec-bb64-144f920fdf99 53: option transport.socket.ssl-enabled off 54: option transport.tcp-user-timeout 0 55: option transport.socket.keepalive-time 20 56: option transport.socket.keepalive-interval 2 57: option transport.socket.keepalive-count 9 58: option strict-locks off 59: option send-gids true 60: end-volume 61: 62: volume volume1-replicate-0 63: type cluster/replicate 64: option afr-pending-xattr volume1-client-0,volume1-client-1,volume1-client-2 65: option volume-id c5a46a9e-f15f-40f1-a0f1-3482269262c6 66: option metadata-self-heal on 67: option data-self-heal on 68: option entry-self-heal on 69: option granular-entry-heal on 70: option use-compound-fops off 71: option use-anonymous-inode yes 72: subvolumes volume1-client-0 volume1-client-1 volume1-client-2 73: end-volume 74: 75: volume volume1-dht 76: type cluster/distribute 77: option lock-migration off 78: option force-migration off 79: subvolumes volume1-replicate-0 80: end-volume 81: 82: volume volume1-utime 83: type features/utime 84: option noatime on 85: subvolumes volume1-dht 86: end-volume 87: 88: volume volume1-write-behind 89: type performance/write-behind 90: subvolumes volume1-utime 91: end-volume 92: 93: volume volume1-open-behind 94: type 
performance/open-behind 95: subvolumes volume1-write-behind 96: end-volume 97: 98: volume volume1-quick-read 99: type performance/quick-read 100: subvolumes volume1-open-behind 101: end-volume 102: 103: volume volume1-md-cache 104: type performance/md-cache 105: option cache-posix-acl true 106: subvolumes volume1-quick-read 107: end-volume 108: 109: volume volume1 110: type debug/io-stats 111: option log-level INFO 112: option threads 16 113: option latency-measurement off 114: option count-fop-hits off 115: option global-threading off 116: subvolumes volume1-md-cache 117: end-volume 118: 119: volume posix-acl-autoload 120: type system/posix-acl 121: subvolumes volume1 122: end-volume 123: 124: volume gfid-access-autoload 125: type features/gfid-access 126: subvolumes posix-acl-autoload 127: end-volume 128: 129: volume meta-autoload 130: type meta 131: subvolumes gfid-access-autoload 132: end-volume 133: +------------------------------------------------------------------------------+ [2022-02-07 10:04:07.311188 +0000] D [glusterfsd-mgmt.c:3027:glusterfs_mgmt_pmap_signin] 0-fsd-mgmt: portmapper signin arguments not given [2022-02-07 10:04:07.311286 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-0: Connected, attached to remote volume [{conn-name=volume1-client-0}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:04:07.311305 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-0: No fds to open - notifying all parents child up [2022-02-07 10:04:07.311323 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:04:07.311338 +0000] I [MSGID: 108005] [afr-common.c:6065:__afr_handle_child_up_event] 0-volume1-replicate-0: Subvolume 'volume1-client-0' came back up; going online. 
[2022-02-07 10:04:07.311505 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.311685 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:04:07.312201 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :24007: ping timer event already removed [2022-02-07 10:04:07.312264 +0000] D [MSGID: 0] [client-handshake.c:1150:server_has_portmap] 0-volume1-client-2: detected portmapper on server [2022-02-07 10:04:07.312448 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:04:07.312649 +0000] I [rpc-clnt.c:1968:rpc_clnt_reconfig] 0-volume1-client-2: changing port to 49152 (from 0) [2022-02-07 10:04:07.312680 +0000] I [socket.c:849:__socket_shutdown] 0-volume1-client-2: intentional socket shutdown(12) [2022-02-07 10:04:07.312700 +0000] D [socket.c:2978:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:12) (non-SSL) [2022-02-07 10:04:07.312712 +0000] D [MSGID: 0] [client.c:2220:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_DISCONNECT [2022-02-07 10:04:07.312723 +0000] D [MSGID: 0] [client.c:2275:client_rpc_notify] 0-volume1-client-2: disconnected (skipped notify) [2022-02-07 10:04:07.312786 +0000] D [MSGID: 0] [common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.313045 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-1: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.313591 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:04:07.313663 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-1: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:04:07.313795 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:04:07.314320 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-1: Connected, attached to remote volume [{conn-name=volume1-client-1}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:04:07.314340 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-1: No fds to open - notifying all parents child up [2022-02-07 10:04:07.314352 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:04:07.314435 +0000] I [MSGID: 108002] [afr-common.c:6435:afr_notify] 0-volume1-replicate-0: Client-quorum is met [2022-02-07 10:04:07.314555 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:04:07.315897 +0000] D [MSGID: 0] 
[common-utils.c:545:gf_resolve_ip6] 0-resolver: returning ip- (port-24007) for hostname: and port: 24007 [2022-02-07 10:04:07.316120 +0000] D [MSGID: 0] [client.c:2211:client_rpc_notify] 0-volume1-client-2: got RPC_CLNT_CONNECT [2022-02-07 10:04:07.316613 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:04:07.316687 +0000] I [MSGID: 114057] [client-handshake.c:1128:select_server_supported_programs] 0-volume1-client-2: Using Program [{Program-name=GlusterFS 4.x v1}, {Num=1298437}, {Version=400}] [2022-02-07 10:04:07.316862 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:04:07.317550 +0000] I [MSGID: 114046] [client-handshake.c:857:client_setvolume_cbk] 0-volume1-client-2: Connected, attached to remote volume [{conn-name=volume1-client-2}, {remote_subvol=/applis/apsu/data/glusterfs/volume1/brick1/brick}] [2022-02-07 10:04:07.317567 +0000] D [MSGID: 0] [client-handshake.c:677:client_post_handshake] 0-volume1-client-2: No fds to open - notifying all parents child up [2022-02-07 10:04:07.317579 +0000] D [MSGID: 0] [afr-common.c:5978:afr_get_halo_latency] 0-volume1-replicate-0: Using halo latency 5 [2022-02-07 10:04:07.317810 +0000] D [MSGID: 0] [stack.h:510:copy_frame] 0-stack: groups is null (ngrps: 0) [Invalid argument] [2022-02-07 10:04:07.317912 +0000] D [fuse-bridge.c:6425:notify] 0-fuse: got event 5 on graph 0 [2022-02-07 10:04:07.318201 +0000] D [MSGID: 0] [dht-diskusage.c:90:dht_du_info_cbk] 0-volume1-dht: subvolume 'volume1-replicate-0': avail_percent is: 98.00 and avail_space is: 21214830592 and avail_inodes is: 99.00 [2022-02-07 10:04:07.318779 +0000] D [fuse-bridge.c:5958:fuse_get_mount_status] 0-fuse: mount status is 0 [2022-02-07 10:04:07.318922 +0000] D [fuse-bridge.c:5234:fuse_init] 0-glusterfs-fuse: Detected support for FUSE_AUTO_INVAL_DATA. Enabling fopen_keep_cache automatically. 
[2022-02-07 10:04:07.318963 +0000] I [fuse-bridge.c:5296:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23 [2022-02-07 10:04:07.318976 +0000] I [fuse-bridge.c:5924:fuse_graph_sync] 0-fuse: switched to graph 0 [2022-02-07 10:04:07.319702 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-0: resetting op_ret to 0 from 149 [2022-02-07 10:04:07.319724 +0000] I [MSGID: 108031] [afr-common.c:3203:afr_local_discovery_cbk] 0-volume1-replicate-0: selecting local read_child volume1-client-0 [2022-02-07 10:04:07.320020 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-1: resetting op_ret to 0 from 149 [2022-02-07 10:04:07.320069 +0000] D [MSGID: 0] [client-rpc-fops_v2.c:917:client4_0_getxattr_cbk] 0-volume1-client-2: resetting op_ret to 0 from 149 [2022-02-07 10:04:37.673272 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:04:37.678058 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:04:37.682064 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:04:49.308918 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:04:49.313937 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:04:49.316956 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:05:55.648239 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:05:55.648353 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:06:07.283346 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:06:07.283489 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:07:55.648501 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:07:55.648567 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:08:07.283616 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:08:07.283664 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:09:55.648714 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:09:55.648792 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5 [2022-02-07 10:10:07.283798 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:10:07.283848 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:11:55.648933 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:11:55.648989 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:12:07.284012 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:12:07.284067 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:13:55.649168 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:13:55.649313 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:14:07.284207 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:14:07.284257 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:15:55.649478 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:15:55.649584 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:16:07.284375 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:16:07.284438 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:17:55.649695 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:17:55.649764 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:18:07.284551 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:18:07.284618 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:19:55.649925 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:19:55.649986 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:20:07.284750 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:20:07.284799 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5 [2022-02-07 10:21:55.650114 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:21:55.650172 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:22:07.284953 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:22:07.285004 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:23:55.650315 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:23:55.650381 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:24:07.285127 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:24:07.285183 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:25:55.650757 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:25:55.650851 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:26:07.285302 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:26:07.285351 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:27:55.650983 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:27:55.651039 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:28:07.285491 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:28:07.285581 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:29:55.651161 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:29:55.651214 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:30:07.285741 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:30:07.285797 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:31:55.651329 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:31:55.651395 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5 [2022-02-07 10:32:07.285926 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:32:07.285975 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:33:55.651498 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:33:55.651566 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:34:07.286125 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:34:07.286188 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:35:55.651683 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:35:55.651732 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:36:07.286383 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:36:07.286452 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:37:55.651890 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:37:55.651987 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:38:07.286582 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:38:07.286630 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. 
Timeout = 120, current buf size = 5 [2022-02-07 10:39:38.916727 +0000] I [MSGID: 108031] [afr-common.c:3203:afr_local_discovery_cbk] 0-volume1-replicate-0: selecting local read_child volume1-client-0 [2022-02-07 10:39:38.918237 +0000] I [MSGID: 108031] [afr-common.c:3203:afr_local_discovery_cbk] 0-volume1-replicate-0: selecting local read_child volume1-client-0 [2022-02-07 10:39:38.918846 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49153: ping timer event already removed [2022-02-07 10:39:38.919006 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:39:38.919688 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:39:38.920081 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:39:38.921679 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7fb9d9f45ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7fb9d9cf066e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7fb9d9cf0e11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7fb9d9ced0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7fb9cb3577a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:39:38.921896 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:39:38.923590 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49153: ping timer event already removed [2022-02-07 10:39:38.923756 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-0: Ping latency is 0ms [2022-02-07 10:39:38.924103 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:39:38.924367 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-1: Ping latency is 0ms [2022-02-07 10:39:38.924596 +0000] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x137)[0x7f55c34a4ec7] (--> /lib64/libgfrpc.so.0(+0x1366e)[0x7f55c324f66e] (--> /lib64/libgfrpc.so.0(+0x13e11)[0x7f55c324fe11] (--> 
/lib64/libgfrpc.so.0(rpc_clnt_submit+0x3d7)[0x7f55c324c0b7] (--> /usr/lib64/glusterfs/9.4/xlator/protocol/client.so(+0x157a2)[0x7f55b40b57a2] ))))) 0-: :49152: ping timer event already removed [2022-02-07 10:39:38.924766 +0000] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-volume1-client-2: Ping latency is 0ms [2022-02-07 10:39:55.652153 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:39:55.652227 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:40:07.286771 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:40:07.286824 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:40:20.919132 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:40:20.920202 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:40:20.921997 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:40:20.923865 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-0: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:40:20.924460 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-1: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:40:20.924852 +0000] D [rpc-clnt-ping.c:317:rpc_clnt_start_ping] 0-volume1-client-2: returning as transport is already disconnected OR there are no frames (0 || 0) [2022-02-07 10:41:55.652350 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:41:55.652405 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:42:07.287001 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:42:07.287053 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:43:55.652566 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:43:55.652621 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:44:07.287200 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:44:07.287250 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:45:55.652750 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. 
About to flush outstanding messages if present [2022-02-07 10:45:55.652819 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:46:07.287382 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:46:07.287446 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:47:55.652957 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:47:55.653020 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 [2022-02-07 10:48:07.287582 +0000] D [logging.c:1735:gf_log_flush_timeout_cbk] 0-logging-infra: Log timer timed out. About to flush outstanding messages if present [2022-02-07 10:48:07.287636 +0000] D [logging.c:1697:__gf_log_inject_timer_event] 0-logging-infra: Starting timer now. Timeout = 120, current buf size = 5 ```

If anyone has any pointers or ideas, I'd be glad to hear them.

FSangouard commented 2 years ago

After some more digging, I believe the first problem is actually a consequence of the second one. What makes me think so is that when I run the gsyncd command directly on the slave as the georep user, I get the following error:

```
/usr/bin/python2 /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py slave volume1 georep@<ip_slave1>::volume1 --master-node <master1> --master-node-id d79016f4-d408-4ff4-b957-0039361685f0 --master-brick /applis/apsu/data/glusterfs/volume1/brick1/brick --local-node <slave1> --local-node-id 7b712811-dbff-4ff6-85f3-5980824e4bc5 --slave-timeout 120 --slave-log-level DEBUG --slave-gluster-log-level DEBUG --slave-gluster-command-dir /usr/sbin --master-dist-count 1
failure: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mnt19ABQh failed with status 1
failed with GsyncdError: cleaning up temp mountpoint /var/mountbroker-root/mb_hive/mnt19ABQh failed with status 1.
```

But if I run the same command as root, it completes fine and I don't see any temporary mounts left under /var/mountbroker-root. So I suppose these temporary mounts are created and removed regularly as part of the normal operation of the process, and since the non-root user cannot unmount them, we end up with more and more stale mounts, i.e. problem 1.
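
For illustration, this is roughly how the buildup can be observed on a slave node (the mount path comes from the error above; the unmount is exactly the step the non-root account cannot perform):

```
# List leftover gsyncd temp mounts under the mountbroker root:
mount | grep /var/mountbroker-root/mb_hive

# Clearing one requires root (or CAP_SYS_ADMIN), which is the whole problem:
sudo umount /var/mountbroker-root/mb_hive/mnt19ABQh
```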

I'm still no closer to a workaround or a resolution, so all suggestions are welcome.

stale[bot] commented 2 years ago

Thank you for your contributions. This issue has not had any activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.

stale[bot] commented 2 years ago

Closing this issue as there has been no update since my last comment. If this issue is still valid, feel free to reopen it.

jvleminc commented 1 year ago

Any further updates on this? We received vulnerability alerts in our system due to this. :-(

FSangouard commented 11 months ago

I still haven't managed to fully combine geo-replication with running everything as non-root.

I did at least manage to have the brick processes run as non-root, while the glusterd daemon and the rest still run as root, by inserting a wrapper script written in Python around the glusterfsd binary in /usr/sbin. This wrapper drops root privileges but retains the necessary capabilities before running the actual program. If you want more details, let me know.
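
To give an idea of the approach, here is a minimal sketch of such a wrapper (not my exact script). It assumes the real binary has been renamed to /usr/sbin/glusterfsd.real, that util-linux setpriv is available, and that a gluster user and group exist; the capability list is only an example and should be adapted to your environment:

```
#!/usr/bin/python2
# Sketch of a privilege-dropping wrapper installed in place of /usr/sbin/glusterfsd.
# Assumptions: the real binary was renamed to /usr/sbin/glusterfsd.real,
# util-linux setpriv is available, and a "gluster" user/group exists.
import os
import sys

REAL_BINARY = "/usr/sbin/glusterfsd.real"  # renamed original binary (assumed)
RUN_AS = "gluster"                         # hypothetical non-root service account
# Example capabilities a brick process may still need once it no longer runs
# as root; adjust to your environment.
CAPS = "+dac_override,+dac_read_search,+fowner,+chown"

cmd = [
    "setpriv",
    "--reuid", RUN_AS,
    "--regid", RUN_AS,
    "--init-groups",
    "--inh-caps", CAPS,       # keep the capabilities inheritable across the uid change
    "--ambient-caps", CAPS,   # raise them as ambient so the exec'd binary receives them
    REAL_BINARY,
] + sys.argv[1:]              # forward the arguments glusterd passed to "glusterfsd"

# Replace this wrapper with setpriv, which in turn execs the real glusterfsd.
os.execvp(cmd[0], cmd)
```

Because everything happens via exec, no extra process sits between glusterd and the brick, so pid files and ports are handled exactly as before.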

That's not completely satisfactory but that's all I could do. If anyone else has anything to contribute, I'm still interested.

FSangouard commented 11 months ago

Update:

I managed to get geo-replication working with only one process running as root. I dropped my wrapper script and instead configured glusterd to run as non-root as before, so the brick processes, the self-heal daemons, etc. all run as non-root. To avoid the stale-mount problem I described earlier, I run a kind of "clone" of the glusterd daemon as root, which is in charge of mounting/unmounting the local mounts for the geo-replication sessions.

To do that, I simply created a copy of /etc/glusterfs/glusterd.vol as /etc/glusterfs/georepd.vol, in which I changed the bind address to localhost and the socket file to /var/run/gluster/georepd.socket, and I start it with glusterd --volfile /etc/glusterfs/georepd.vol (turning that into a service is easy). Since in my glusterd.vol the bind address is the machine's network interface and the socket file is /var/run/gluster/glusterd.socket, the two daemons run in parallel without conflict. And because the gsyncd daemon uses localhost to contact the management daemon, it automatically sends its mount/unmount requests to the one running as root. A rough sketch of the setup is below.
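
For reference (paths match my environment; the volfile edit is described rather than shown, since the exact option names depend on your glusterd.vol):

```
# The regular, non-root glusterd keeps using /etc/glusterfs/glusterd.vol,
# bound to the node's real interface and /var/run/gluster/glusterd.socket.

# 1. Copy the volfile for the root-owned "clone" and edit the copy so it
#    binds to 127.0.0.1 and uses /var/run/gluster/georepd.socket:
cp /etc/glusterfs/glusterd.vol /etc/glusterfs/georepd.vol

# 2. Start the clone as root; it only serves the gsyncd mount/unmount
#    requests that arrive over localhost:
glusterd --volfile /etc/glusterfs/georepd.vol

# 3. Optionally wrap step 2 in a systemd unit so it survives reboots.
```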

That way, the only process running as root listens exclusively on localhost, which should be acceptable to our security team. It's not perfect, but it's satisfactory.

The only thing that bothers me is that the gluster-mountbroker command is hardcoded to modify /etc/glusterfs/glusterd.vol, so to make it act on my new georepd.vol file I have to edit the GLUSTERD_VOLFILE variable in /usr/libexec/glusterfs/peer_mountbroker.py directly, which is not ideal. I could simply add the relevant geo-replication options by hand (roughly the ones sketched below), but I would like to preserve the command's functionality for the benefit of our Ops team.
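
For completeness, these are roughly the options gluster-mountbroker would add to the management volume definition, following the standard mountbroker setup for geo-replication; the user (georep), volume (volume1) and mountbroker root match this thread, while the log group name is only a placeholder:

```
# Inside the "volume management" block of georepd.vol (sketch):
option mountbroker-root /var/mountbroker-root
option mountbroker-geo-replication.georep volume1
option geo-replication-log-group geogroup
option rpc-auth-allow-insecure on
```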

Again, if anyone has a better solution, I'm all ears.