gluster / glusterfs

Gluster Filesystem : Build your distributed storage in minutes
https://www.gluster.org
GNU General Public License v2.0

gluster volume set => "volume set: failed: option : timeout.pp does not exist" #3916

Open UweAtWork opened 1 year ago

UweAtWork commented 1 year ago

The exact command to reproduce the issue:

gluster volume set testvol auth.allow *

The full output of the command that failed:

volume set: failed: option : timeout.pp does not exist
Did you mean ctime.noatime?
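
A likely explanation, though not confirmed in this report: the unquoted * is expanded by the shell before gluster parses the command, so any files in the working directory (for example one named timeout.pp) get passed to gluster as option names and values. Quoting the wildcard passes it through literally, for example:

gluster volume set testvol auth.allow '*'
gluster volume set testvol auth.allow '192.168.1.*,127.0.0.1'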

Mandatory info:

- The output of the gluster volume info command:

Volume Name: Gluster-KC-DOC
Type: Replicate
Volume ID: 0d70cad6-9bf1-450b-b33a-6863e47f4ab9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node-i.domain.local:/gluster/KC-DOC
Brick2: node-ii.domain.local:/gluster/KC-DOC
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
auth.allow: 192.168.1.*,127.0.0.1,node-iii.domain.local

- The output of the gluster volume status command:

Status of volume: Gluster-KC-DOC

Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node-i.domain.local:/gluster/
KC-DOC                                      N/A       N/A        Y       1745
Brick node-ii.domain.local:/gluster
/KC-DOC                                     57009     0          Y       3232
Self-heal Daemon on localhost               N/A       N/A        N       N/A
Self-heal Daemon on node-ii                 N/A       N/A        Y       2076518

Task Status of Volume Gluster-KC-DOC
------------------------------------------------------------------------------
There are no active volume tasks

- The output of the gluster volume heal command:

gluster volume heal Gluster-KC-DOC info

Brick node-i.domain.local:/gluster/KC-DOC
Status: Connected
Number of entries: 0

Brick node-ii.domain.local:/gluster/KC-DOC
Status: Connected
Number of entries: 0

- Provide logs present on the following locations of client and server nodes (/var/log/glusterfs/glusterd.log):

[2022-12-06 08:04:31.042342 +0000] I [glusterfsd.c:2448:daemonize] 0-glusterfs: Pid of current running process is 2296
[2022-12-06 08:04:31.043952 +0000] W [MSGID: 101248] [gf-io-uring.c:406:gf_io_uring_setup] 0-io: Current kernel doesn't support I/O URing interface. [Function not implemented]
[2022-12-06 08:04:31.055570 +0000] W [MSGID: 106204] [glusterd-store.c:3173:glusterd_store_update_volinfo] 0-management: Unknown key: stripe_count 
[2022-12-06 08:04:31.055608 +0000] W [MSGID: 106204] [glusterd-store.c:3173:glusterd_store_update_volinfo] 0-management: Unknown key: brick-0 
[2022-12-06 08:04:31.055615 +0000] W [MSGID: 106204] [glusterd-store.c:3173:glusterd_store_update_volinfo] 0-management: Unknown key: brick-1 
[2022-12-06 08:04:31.059584 +0000] E [MSGID: 106187] [glusterd-store.c:4802:glusterd_resolve_all_bricks] 0-glusterd: Failed to resolve brick /gluster/KC-DOC with host node-i.domain.local of volume Gluster-KC-DOC in restore 
[2022-12-06 08:04:31.059629 +0000] E [MSGID: 101019] [xlator.c:641:xlator_init] 0-management: Initialization of volume failed. review your volfile again. [{name=management}] 
[2022-12-06 08:04:31.059639 +0000] E [MSGID: 101066] [graph.c:425:glusterfs_graph_init] 0-management: initializing translator failed 
[2022-12-06 08:04:31.059645 +0000] E [MSGID: 101176] [graph.c:766:glusterfs_graph_activate] 0-graph: init failed 
[2022-12-06 08:04:31.059735 +0000] W [glusterfsd.c:1459:cleanup_and_exit] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xa0) [0x55d8837e8220] -->/usr/sbin/glusterd(glusterfs_process_volfp+0x243) [0x55d8837e8163] -->/usr/sbin/glusterd(cleanup_and_exit+0x58) [0x55d8837e37f8] ) 0-: received signum (-1), shutting down 
[2022-12-06 08:04:31.059770 +0000] W [mgmt-pmap.c:132:rpc_clnt_mgmt_pmap_signout] 0-glusterfs: failed to create XDR payload
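
The "Failed to resolve brick" and subsequent init errors above mean glusterd could not resolve the brick host node-i.domain.local while restoring volume definitions at startup, so the management daemon shut down. As a rough check only (assuming name resolution or peer membership is involved, which this report does not confirm), the following could be run on the affected node:

getent hosts node-i.domain.local node-ii.domain.local
gluster peer status
gluster pool list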

- Is there any crash? Provide the backtrace and coredump: No

- The operating system / glusterfs version:

dnf list installed | grep -i gluster

centos-release-gluster10.noarch       1.0-1.el8                                  @extras
glusterfs.x86_64                      10.3-1.el8s                                @centos-gluster10
glusterfs-cli.x86_64                  10.3-1.el8s                                @centos-gluster10
glusterfs-client-xlators.x86_64       10.3-1.el8s                                @centos-gluster10
glusterfs-coreutils.x86_64            0.3.1-2.el8s                               @centos-gluster10
glusterfs-fuse.x86_64                 10.3-1.el8s                                @centos-gluster10
glusterfs-selinux.noarch              2.0.1-1.el8s                               @centos-gluster10
glusterfs-server.x86_64               10.3-1.el8s                                @centos-gluster10
gperftools-libs.x86_64                2.9.1-1.el8s                               @centos-gluster10
libgfapi0.x86_64                      10.3-1.el8s                                @centos-gluster10
libgfchangelog0.x86_64                10.3-1.el8s                                @centos-gluster10
libgfrpc0.x86_64                      10.3-1.el8s                                @centos-gluster10
libgfxdr0.x86_64                      10.3-1.el8s                                @centos-gluster10
libglusterd0.x86_64                   10.3-1.el8s                                @centos-gluster10
libglusterfs0.x86_64                  10.3-1.el8s                                @centos-gluster10
libunwind.x86_64                      1.4.0-5.el8s                               @centos-gluster10

cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"

uname -r
4.18.0-372.32.1.el8_6.x86_64
UweAtWork commented 1 year ago

We upgraded all nodes from GlusterFS version 3.6 and ran a heal on the volume afterwards.

stale[bot] commented 1 year ago

Thank you for your contributions. We noticed that this issue has had no activity in the last ~6 months, so we are marking it as stale. It will be closed in 2 weeks if no one responds with a comment here.