gluster / glusterfs

Gluster Filesystem: Build your distributed storage in minutes
https://www.gluster.org
GNU General Public License v2.0

gfapi: a program using gfapi hangs when stepped through in a debugger #1963

Closed tangyla closed 3 years ago

tangyla commented 3 years ago

Description of problem: in a program using gfapi, the gfapi calls hang as soon as the program is being debugged.

When single-stepping with gdb, the glfs_fini function hangs, and so do glfs_creat, glfs_lseek, glfs_close, and other calls.

Additional info: compile command: gcc -Wall -ggdb3 -c gfapi.c && gcc -o gfapi gfapi.o -lgfapi

The test code follows:

#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int create_file(struct glfs* glfs) {
    int ret;
    int64_t current_length;
    struct glfs_fd* fd = NULL;
    fd = glfs_creat(glfs, "test.qcow2", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    fprintf(stderr, "glfs_creat fd = %p\n", fd);

    if (fd) { /* only use fd if glfs_creat succeeded */
        current_length = glfs_lseek(fd, 0, SEEK_END);
        fprintf(stderr, "glfs_lseek ret = %ld\n", current_length);

        ret = glfs_close(fd);
        fprintf(stderr, "glfs_close ret = %d\n", ret);
    }

    return EXIT_SUCCESS;
}

int main(int argc, char** argv) {
    int ret = 0;
    char* volume = "datastore";
    char* server = "node0";
    int port = 24007;
    struct glfs* glfs;
    time_t start, end, b_fini, e_fini;

    start = time(NULL);
    fprintf(stderr, "%s start time: %ld\n", argv[0], start);

    glfs = glfs_new(volume);
    ret = glfs_set_volfile_server(glfs, "tcp", server, port);
    fprintf(stderr, "glfs_set_volfile_server ret = %d\n", ret);
    ret = glfs_set_logging(glfs, "/dev/stderr", 4);
    fprintf(stderr, "glfs_set_logging ret = %d\n", ret);
    ret = glfs_init(glfs);
    fprintf(stderr, "glfs_init ret = %d\n", ret);
    create_file(glfs);

    b_fini = time(NULL);
    fprintf(stderr, "glfs_fini start: %ld, before consume seconds: %ld\n", b_fini, b_fini - start);
    ret = glfs_fini(glfs);
    e_fini = time(NULL);
    fprintf(stderr, "glfs_fini end: %ld, consume seconds: %ld\n", e_fini, e_fini - b_fini);
    fprintf(stderr, "glfs_fini ret = %d\n", ret);

    end = time(NULL);
    fprintf(stderr, "%s end time: %ld, consume seconds: %ld\n", argv[0], end, (end - start));

    return EXIT_SUCCESS;
}
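For convenience, the build-and-debug steps above can be collected into one script. The volume name (datastore), server (node0), compile flags, and breakpoint location are taken from this report; adjust them for your environment:

```shell
# Build the reproducer with debug info (same flags as in the report).
gcc -Wall -ggdb3 -c gfapi.c && gcc -o gfapi gfapi.o -lgfapi

# Running it directly completes normally; the hang only appears under the debugger.
./gfapi

# Reproduce the hang: break in create_file, run, then single-step over glfs_creat.
gdb ./gfapi -ex 'break create_file' -ex run
```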
mohit84 commented 3 years ago

Hi, I have tried to reproduce this with glusterfs-7 on a local VM, and I am not seeing any issue.

/usr/local/sbin/gluster --version
glusterfs 7.8
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

[root@ip-172-31-10-141 ~]# ./gfapi
./gfapi start time: 1610005783
glfs_set_volfile_server ret = 0
glfs_set_logging ret = 0
glfs_init ret = 0
glfs_creat fd = 0x2028940
glfs_lseek ret = 0
glfs_close ret = 0
glfs_fini start: 1610005783, before consume seconds: 0
[2021-01-07 07:49:43.455566] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:49:43.455861] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:49:43.456022] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:49:53.320568] I [io-stats.c:4040:fini] 0-patchy: io-stats translator unloaded
glfs_fini end: 1610005794, consume seconds: 11
glfs_fini ret = 0
./gfapi end time: 1610005794, consume seconds: 11

[root@ip-172-31-10-141 ~]# ./gfapi
./gfapi start time: 1610005797
glfs_set_volfile_server ret = 0
glfs_set_logging ret = 0
glfs_init ret = 0
glfs_creat fd = 0x1389940
glfs_lseek ret = 0
glfs_close ret = 0
glfs_fini start: 1610005797, before consume seconds: 0
[2021-01-07 07:49:57.549342] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:49:57.549599] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:49:57.549758] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:07.415926] I [io-stats.c:4040:fini] 0-patchy: io-stats translator unloaded
glfs_fini end: 1610005807, consume seconds: 10
glfs_fini ret = 0
./gfapi end time: 1610005807, consume seconds: 10

[root@ip-172-31-10-141 ~]# ./gfapi
./gfapi start time: 1610005822
glfs_set_volfile_server ret = 0
glfs_set_logging ret = 0
glfs_init ret = 0
glfs_creat fd = 0x120f940
glfs_lseek ret = 0
glfs_close ret = 0
glfs_fini start: 1610005822, before consume seconds: 0
[2021-01-07 07:50:22.482248] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:22.482554] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:22.482729] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:32.340615] I [io-stats.c:4040:fini] 0-patchy: io-stats translator unloaded
glfs_fini end: 1610005832, consume seconds: 10
glfs_fini ret = 0
./gfapi end time: 1610005832, consume seconds: 10

[root@ip-172-31-10-141 ~]# ./gfapi
./gfapi start time: 1610005836
glfs_set_volfile_server ret = 0
glfs_set_logging ret = 0
glfs_init ret = 0
glfs_creat fd = 0x2317940
glfs_lseek ret = 0
glfs_close ret = 0
glfs_fini start: 1610005837, before consume seconds: 1
[2021-01-07 07:50:37.330411] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-0: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:37.330743] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-1: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:37.331111] E [MSGID: 108006] [afr-common.c:5308:afr_handle_child_down_event] 0-patchy-replicate-2: All subvolumes are down. Going offline until at least one of them comes back up.
[2021-01-07 07:50:47.194427] I [io-stats.c:4040:fini] 0-patchy: io-stats translator unloaded
glfs_fini end: 1610005848, consume seconds: 11
glfs_fini ret = 0
./gfapi end time: 1610005848, consume seconds: 12

Can you share a stack trace of the program while it is hung?

Thanks, Mohit Agrawal

tangyla commented 3 years ago

Executing gfapi directly does not hang. It hangs under gdb, as follows:

[root@node0 gfapi]# gdb gfapi
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-11.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from gfapi...done.
(gdb) b create_file
Breakpoint 1 at 0x400952: file gfapi.c, line 10.
(gdb) r
Starting program: /root/qemu/gfapi/gfapi
Missing separate debuginfos, use: yum debuginfo-install glibc-2.28-101.el8.x86_64
warning: Loadable section ".note.gnu.property" outside of ELF segments
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
warning: Loadable section ".note.gnu.property" outside of ELF segments
/root/qemu/gfapi/gfapi start time: 1610012561
[New Thread 0x7ffff433c700 (LWP 22829)]
[New Thread 0x7ffff303b700 (LWP 22830)]
[New Thread 0x7ffff283a700 (LWP 22831)]
[New Thread 0x7ffff1439700 (LWP 22832)]
[New Thread 0x7ffff0c38700 (LWP 22833)]
[New Thread 0x7ffff016c700 (LWP 22834)]
glfs_set_volfile_server ret = 0
glfs_set_logging ret = 0
[New Thread 0x7fffef768700 (LWP 22835)]
[New Thread 0x7fffeef67700 (LWP 22836)]
[New Thread 0x7fffee2b2700 (LWP 22837)]
glfs_init ret = 0

Thread 1 "gfapi" hit Breakpoint 1, create_file (glfs=0x603a80) at gfapi.c:10
10          struct glfs_fd* fd = NULL;
Missing separate debuginfos, use: yum debuginfo-install zlib-1.2.11-16.el8_2.x86_64
(gdb) n
11          fd = glfs_creat(glfs, "test.qcow2", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
(gdb) n
^C
Thread 1 "gfapi" received signal SIGINT, Interrupt.
0x00007ffff5e6f48c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
(gdb) where

#0  0x00007ffff5e6f48c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007ffff6ecc2bf in syncop_lookup (subvol=subvol@entry=0x7fffe0021a30, loc=loc@entry=0x7fffffffdb80, iatt=iatt@entry=0x7fffffffdbc0, parent=parent@entry=0x0, xdata_in=xdata_in@entry=0x690628, xdata_out=xdata_out@entry=0x0) at syncop.c:1135
#2  0x00007ffff7bc218e in glfs_resolve_component (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, parent=parent@entry=0x7fffdc00e2f8, component=<optimized out>, component@entry=0x7fffdc007590 "test.qcow2", iatt=iatt@entry=0x7fffffffdd60, force_lookup=<optimized out>) at glfs-resolve.c:405
#3  0x00007ffff7bc25d3 in priv_glfs_resolve_at (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, at=at@entry=0x7fffdc00e2f8, origpath=<optimized out>, origpath@entry=0x400c98 "test.qcow2", loc=loc@entry=0x7fffffffdf10, iatt=iatt@entry=0x7fffffffdf50, follow=1, reval=0) at glfs-resolve.c:517
#4  0x00007ffff7bc3f6b in glfs_resolve_path (fs=0x603a80, subvol=0x7fffe0021a30, origpath=0x400c98 "test.qcow2", loc=0x7fffffffdf10, iatt=0x7fffffffdf50, follow=1, reval=0) at glfs-resolve.c:640
#5  0x00007ffff7bc4025 in priv_glfs_resolve (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, origpath=origpath@entry=0x400c98 "test.qcow2", loc=loc@entry=0x7fffffffdf10, iatt=iatt@entry=0x7fffffffdf50, reval=reval@entry=0) at glfs-resolve.c:655
#6  0x00007ffff7bb8dad in pub_glfs_creat (fs=0x603a80, path=0x400c98 "test.qcow2", flags=577, mode=384) at glfs-fops.c:810
#7  0x0000000000400975 in create_file (glfs=0x603a80) at gfapi.c:11
#8  0x0000000000400b18 in main (argc=1, argv=0x7fffffffe1d8) at gfapi.c:45

Test environment details:

OS: CentOS Linux release 8.2.2004 (Core)

gluster: glusterfs 7.8
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY. It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

gcc: gcc (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Copyright (C) 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

gdb: GNU gdb (GDB) Red Hat Enterprise Linux 8.2-11.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.

tangyla commented 3 years ago

Other gfapi calls show the same problem when the program is being debugged, e.g. glfs_fini, glfs_lseek, and so on.

mohit84 commented 3 years ago

Please share the output of "thread apply all bt full" and the brick logs from the same time.
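For reference, a non-interactive way to capture that output could look like the following sketch; the pid lookup, output file names, and brick log path are placeholders, not taken from this report:

```shell
# Attach to the hung reproducer and dump all thread backtraces with locals.
gdb -p "$(pidof gfapi)" -batch -ex 'thread apply all bt full' > gfapi-bt.txt

# Grab the brick log from the same time window (the exact path depends on the brick).
tail -n 200 /var/log/glusterfs/bricks/*.log > brick-log-snippet.txt
```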

tangyla commented 3 years ago

(gdb) n
11          fd = glfs_creat(glfs, "test.qcow2", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
(gdb) ^C
Thread 1 "gfapi" received signal SIGINT, Interrupt.
0x00007ffff5e6f48c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
(gdb) thread apply all bt full

Thread 10 (Thread 0x7fffee2b2700 (LWP 23164)):

#0  0x00007ffff78db1b7 in epoll_wait () from /lib64/libc.so.6

No symbol table info available.

#1  0x00007ffff6eed774 in event_dispatch_epoll_worker (data=0x7fffe8000d60) at event-epoll.c:753

    event = {events = 1, data = {ptr = 0x100000003, fd = 3, u32 = 3, u64 = 4294967299}}
    ret = <optimized out>
    ev_data = 0x7fffe8000d60
    event_pool = 0x67a280
    myindex = 2
    timetodie = 0
    gen = 0
    poller_death_notify = {next = 0x0, prev = 0x0}
    slot = 0x0
    tmp = 0x0
    __FUNCTION__ = "event_dispatch_epoll_worker"

#2  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#3  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 9 (Thread 0x7fffeef67700 (LWP 23163)):

#0  0x00007ffff78db1b7 in epoll_wait () from /lib64/libc.so.6

No symbol table info available.

#1  0x00007ffff6eed774 in event_dispatch_epoll_worker (data=0x7fffe8000ba0) at event-epoll.c:753

    event = {events = 1, data = {ptr = 0x400000001, fd = 1, u32 = 1, u64 = 17179869185}}
    ret = <optimized out>
    ev_data = 0x7fffe8000ba0
    event_pool = 0x67a280
    myindex = 1
    timetodie = 0
    gen = 0
    poller_death_notify = {next = 0x0, prev = 0x0}
    slot = 0x0
    tmp = 0x0
    __FUNCTION__ = "event_dispatch_epoll_worker"

#2  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#3  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 8 (Thread 0x7fffef768700 (LWP 23162)):

#0  0x00007ffff5e6a7cd in __pthread_timedjoin_ex () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6eecfa7 in event_dispatch_epoll (event_pool=0x67a280) at event-epoll.c:848

    i = <optimized out>
    t_id = 140737189193472
    pollercount = 2
    ret = 0
    ev_data = 0x7fffe8000d60
    __FUNCTION__ = "event_dispatch_epoll"

#2  0x00007ffff7baa778 in glfs_poller (data=<optimized out>) at glfs.c:728

    fs = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 7 (Thread 0x7ffff016c700 (LWP 23161)):

#0  0x00007ffff5e6f878 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6e9aa31 in gf_timer_proc (data=0x688c70) at timer.c:141

    now = {tv_sec = 29443, tv_nsec = 987213188}
    reg = <optimized out>
    event = 0x6b5380
    tmp = 0x0
    old_THIS = <optimized out>

#2  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#3  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 6 (Thread 0x7ffff0c38700 (LWP 23160)):

#0  0x00007ffff5e6f7da in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6eca96e in syncenv_task (proc=proc@entry=0x67f770) at syncop.c:524

    env = 0x67f390
    task = 0x0
    sleep_till = {tv_sec = 1610013977, tv_nsec = 0}
    ret = <optimized out>

#2  0x00007ffff6ecb5f8 in syncenv_processor (thdata=0x67f770) at syncop.c:591

    env = 0x67f390
    proc = 0x67f770
    task = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 5 (Thread 0x7ffff1439700 (LWP 23159)):

#0  0x00007ffff5e6f7da in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6eca96e in syncenv_task (proc=proc@entry=0x67f390) at syncop.c:524

    env = 0x67f390
    task = 0x0
    sleep_till = {tv_sec = 1610013977, tv_nsec = 0}
    ret = <optimized out>

#2  0x00007ffff6ecb5f8 in syncenv_processor (thdata=0x67f390) at syncop.c:591

    env = 0x67f390
    proc = 0x67f390
    task = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 4 (Thread 0x7ffff283a700 (LWP 23158)):

#0  0x00007ffff5e6f7da in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6eca96e in syncenv_task (proc=proc@entry=0x640dc0) at syncop.c:524

    env = 0x6409e0
    task = 0x0
    sleep_till = {tv_sec = 1610013977, tv_nsec = 0}
    ret = <optimized out>

#2  0x00007ffff6ecb5f8 in syncenv_processor (thdata=0x640dc0) at syncop.c:591

    env = 0x6409e0
    proc = 0x640dc0
    task = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 3 (Thread 0x7ffff303b700 (LWP 23157)):

#0  0x00007ffff5e6f7da in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6eca96e in syncenv_task (proc=proc@entry=0x6409e0) at syncop.c:524

    env = 0x6409e0
    task = 0x0
    sleep_till = {tv_sec = 1610013977, tv_nsec = 0}
    ret = <optimized out>

#2  0x00007ffff6ecb5f8 in syncenv_processor (thdata=0x6409e0) at syncop.c:591

    env = 0x6409e0
    proc = 0x6409e0
    task = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 2 (Thread 0x7ffff433c700 (LWP 23156)):

#0  0x00007ffff78a7238 in nanosleep () from /lib64/libc.so.6

No symbol table info available.

#1  0x00007ffff78a713e in sleep () from /lib64/libc.so.6

No symbol table info available.

#2  0x00007ffff6eb5775 in pool_sweeper (arg=<optimized out>) at mem-pool.c:444

    state = {cold_lists = {0x0 <repeats 1024 times>}, n_cold_lists = 0}
    pool_list = <optimized out>
    i = <optimized out>
    pending = <optimized out>

#3  0x00007ffff5e692de in start_thread () from /lib64/libpthread.so.0

No symbol table info available.

#4  0x00007ffff78dae83 in clone () from /lib64/libc.so.6

No symbol table info available.

Thread 1 (Thread 0x7ffff7fd65c0 (LWP 23152)):

#0  0x00007ffff5e6f48c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0

No symbol table info available.

#1  0x00007ffff6ecc2bf in syncop_lookup (subvol=subvol@entry=0x7fffe0021a30, loc=loc@entry=0x7fffffffdb80, iatt=iatt@entry=0x7fffffffdbc0, parent=parent@entry=0x0, xdata_in=xdata_in@entry=0x690628, xdata_out=xdata_out@entry=0x0) at syncop.c:1135

    task = 0x0
    frame = 0x6d27e8
    args = {op_ret = 0, op_errno = 0, iatt1 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, 
        ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', 
            write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt2 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, 
        ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, 
        ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {
            read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, iatt3 = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, 
        ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {
            read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, xattr = 0x0, statvfs_buf = {f_bsize = 0, f_frsize = 0, f_blocks = 0, 
        f_bfree = 0, f_bavail = 0, f_files = 0, f_ffree = 0, f_favail = 0, f_fsid = 0, f_flag = 0, f_namemax = 0, __f_spare = {0, 0, 0, 0, 0, 0}}, vector = 0x0, count = 0, iobref = 0x0, buffer = 0x0, xdata = 0x0, flock = {l_type = 0, l_whence = 0, l_start = 0, 
        l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, lease = {cmd = 0, lease_type = NONE, lease_id = '\000' <repeats 15 times>, lease_flags = 0}, dict_out = 0x0, uuid = '\000' <repeats 15 times>, errstr = 0x0, dict = 0x0, 
      lock_dict = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, barrier = {initialized = false, guard = {__data = {
            __lock = 0, __count = 0, __owner = 0, __nusers = 0, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 39 times>, __align = 0}, cond = {__data = {{__wseq = 0, __wseq32 = {__low = 0, __high = 0}}, {
              __g1_start = 0, __g1_start32 = {__low = 0, __high = 0}}, __g_refs = {0, 0}, __g_size = {0, 0}, __g1_orig_size = 0, __wrefs = 0, __g_signals = {0, 0}}, __size = '\000' <repeats 47 times>, __align = 0}, waitq = {next = 0x0, prev = 0x0}, count = 0, 
        waitfor = 0}, task = 0x0, mutex = {__data = {__lock = 0, __count = 0, __owner = 0, __nusers = 1, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = '\000' <repeats 12 times>, "\001", '\000' <repeats 26 times>, 
        __align = 0}, cond = {__data = {{__wseq = 2, __wseq32 = {__low = 2, __high = 0}}, {__g1_start = 0, __g1_start32 = {__low = 0, __high = 0}}, __g_refs = {2, 0}, __g_size = {0, 0}, __g1_orig_size = 0, __wrefs = 8, __g_signals = {0, 0}}, 
        __size = "\002", '\000' <repeats 15 times>, "\002", '\000' <repeats 19 times>, "\b\000\000\000\000\000\000\000\000\000\000", __align = 2}, done = 0, entries = {{list = {next = 0x0, prev = 0x0}, {next = 0x0, prev = 0x0}}, d_ino = 0, d_off = 0, d_len = 0, 
        d_type = 0, d_stat = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, 
          ia_ctime_nsec = 0, ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', 
              exec = 0 '\000'}, group = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}, dict = 0x0, inode = 0x0, d_name = 0x7fffffffd6c0 ""}, offset = 0, locklist = {list = {next = 0x0, 
          prev = 0x0}, flock = {l_type = 0, l_whence = 0, l_start = 0, l_len = 0, l_pid = 0, l_owner = {len = 0, data = '\000' <repeats 1023 times>}}, client_uid = 0x0, lk_flags = 0}}
    __FUNCTION__ = "syncop_lookup"

#2  0x00007ffff7bc218e in glfs_resolve_component (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, parent=parent@entry=0x7fffdc00cd58, component=<optimized out>, component@entry=0x7fffdc006150 "test.qcow2", iatt=iatt@entry=0x7fffffffdd60, force_lookup=<optimized out>) at glfs-resolve.c:405
    loc = {path = 0x6da260 "/test.qcow2", name = 0x6da261 "test.qcow2", inode = 0x694958, parent = 0x7fffdc00cd58, gfid = '\000' <repeats 15 times>, pargfid = '\000' <repeats 15 times>, "\001"}
    inode = 0x0
    temp_parent = <optimized out>
    reval = 0
    ret = <optimized out>
    glret = <optimized out>
    ciatt = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, 
      ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
          read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
    gfid = ")\271\262\366s\325O\221\216X\314-m\332\310;"
    xattr_req = 0x690628
    ctx_value = 2
    __FUNCTION__ = "glfs_resolve_component"

#3  0x00007ffff7bc25d3 in priv_glfs_resolve_at (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, at=at@entry=0x7fffdc00cd58, origpath=<optimized out>, origpath@entry=0x400c98 "test.qcow2", loc=loc@entry=0x7fffffffdf10, iatt=iatt@entry=0x7fffffffdf50, follow=1, reval=0) at glfs-resolve.c:517
    inode = 0x7fffdc00cd58
    parent = 0x7fffdc00cd58
    saveptr = 0x7fffdc00615a ""
    path = 0x7fffdc006150 "test.qcow2"
    component = <optimized out>
    next_component = <optimized out>
    ret = <optimized out>
    ciatt = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, 
      ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {

          read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
    old_THIS = 0x6892c0

#4  0x00007ffff7bc3f6b in glfs_resolve_path (fs=0x603a80, subvol=0x7fffe0021a30, origpath=0x400c98 "test.qcow2", loc=0x7fffffffdf10, iatt=0x7fffffffdf50, follow=1, reval=0) at glfs-resolve.c:640

    ret = -1
    cwd = 0x7fffdc00cd58
    __FUNCTION__ = "glfs_resolve_path"

#5  0x00007ffff7bc4025 in priv_glfs_resolve (fs=fs@entry=0x603a80, subvol=subvol@entry=0x7fffe0021a30, origpath=origpath@entry=0x400c98 "test.qcow2", loc=loc@entry=0x7fffffffdf10, iatt=iatt@entry=0x7fffffffdf50, reval=reval@entry=0) at glfs-resolve.c:655

    ret = -1

#6  0x00007ffff7bb8dad in pub_glfs_creat (fs=0x603a80, path=0x400c98 "test.qcow2", flags=577, mode=384) at glfs-fops.c:810

    ret = <optimized out>
    glfd = 0x6d7400
    subvol = 0x7fffe0021a30
    loc = {path = 0x0, name = 0x0, inode = 0x0, parent = 0x0, gfid = '\000' <repeats 15 times>, pargfid = '\000' <repeats 15 times>}
    iatt = {ia_flags = 0, ia_ino = 0, ia_dev = 0, ia_rdev = 0, ia_size = 0, ia_nlink = 0, ia_uid = 0, ia_gid = 0, ia_blksize = 0, ia_blocks = 0, ia_atime = 0, ia_mtime = 0, ia_ctime = 0, ia_btime = 0, ia_atime_nsec = 0, ia_mtime_nsec = 0, ia_ctime_nsec = 0, 
      ia_btime_nsec = 0, ia_attributes = 0, ia_attributes_mask = 0, ia_gfid = '\000' <repeats 15 times>, ia_type = IA_INVAL, ia_prot = {suid = 0 '\000', sgid = 0 '\000', sticky = 0 '\000', owner = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, group = {
          read = 0 '\000', write = 0 '\000', exec = 0 '\000'}, other = {read = 0 '\000', write = 0 '\000', exec = 0 '\000'}}}
    gfid = "\332\026\357\200\322qF?\240̢\344]\302\002z"
    xattr_req = 0x6af488
    reval = 0
    old_THIS = 0x7ffff717a3e0 <global_xlator>
    __FUNCTION__ = "pub_glfs_creat"

#7  0x0000000000400975 in create_file (glfs=0x603a80) at gfapi.c:11

    ret = 0
    current_length = 140737488347600
    fd = 0x0

#8  0x0000000000400b18 in main (argc=1, argv=0x7fffffffe1d8) at gfapi.c:45

    ret = 0
    volume = 0x400ce2 "datastore"
    server = 0x400cec "node0"
    port = 24007
    glfs = 0x603a80
    start = 1610013377
    end = 6304912
    b_fini = 4197453
    e_fini = 120

(gdb)

tangyla commented 3 years ago

[root@node0 bricks]# truncate --size -0 virtstack-storage-disk-datastore.log; tail -f virtstack-storage-disk-datastore.log
[2021-01-07 09:42:09.520186] I [addr.c:54:compare_addr_and_update] 0-/virtstack/storage/disk/datastore: allowed = "*", received addr = "172.16.88.30"
[2021-01-07 09:42:09.520227] I [login.c:110:gf_auth] 0-auth/login: allowed user names: ddf8d1fc-6bd7-4e5b-a791-25d11479fe50
[2021-01-07 09:42:09.520240] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-datastore-server: accepted client from CTX_ID:f1bafd56-bcbf-4786-b816-dc61c4628011-GRAPH_ID:0-PID:22785-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 (version: 7.8) with subvol /virtstack/storage/disk/datastore
[2021-01-07 09:42:25.473566] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-datastore-server: disconnecting connection from CTX_ID:f1bafd56-bcbf-4786-b816-dc61c4628011-GRAPH_ID:0-PID:22785-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0
[2021-01-07 09:42:25.474004] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-datastore-server: Shutting down connection CTX_ID:f1bafd56-bcbf-4786-b816-dc61c4628011-GRAPH_ID:0-PID:22785-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0
[2021-01-07 09:42:41.688486] I [addr.c:54:compare_addr_and_update] 0-/virtstack/storage/disk/datastore: allowed = "*", received addr = "172.16.88.30"
[2021-01-07 09:42:41.688511] I [login.c:110:gf_auth] 0-auth/login: allowed user names: ddf8d1fc-6bd7-4e5b-a791-25d11479fe50
[2021-01-07 09:42:41.688517] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-datastore-server: accepted client from CTX_ID:81fef853-c2cf-4cfc-b80a-f3fbebc1bb04-GRAPH_ID:0-PID:22825-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 (version: 7.8) with subvol /virtstack/storage/disk/datastore
[2021-01-07 09:55:47.956527] I [MSGID: 115036] [server.c:501:server_rpc_notify] 0-datastore-server: disconnecting connection from CTX_ID:81fef853-c2cf-4cfc-b80a-f3fbebc1bb04-GRAPH_ID:0-PID:22825-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0
[2021-01-07 09:55:47.956930] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-datastore-server: Shutting down connection CTX_ID:81fef853-c2cf-4cfc-b80a-f3fbebc1bb04-GRAPH_ID:0-PID:22825-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0
^C
[root@node0 bricks]# clear; truncate --size -0 virtstack-storage-disk-datastore.log; tail -f virtstack-storage-disk-datastore.log
[2021-01-07 09:56:17.237499] I [addr.c:54:compare_addr_and_update] 0-/virtstack/storage/disk/datastore: allowed = "*", received addr = "172.16.88.30"
[2021-01-07 09:56:17.237558] I [login.c:110:gf_auth] 0-auth/login: allowed user names: ddf8d1fc-6bd7-4e5b-a791-25d11479fe50
[2021-01-07 09:56:17.237572] I [MSGID: 115029] [server-handshake.c:552:server_setvolume] 0-datastore-server: accepted client from CTX_ID:87f024d3-5f2c-4433-a3db-a83c071a2fef-GRAPH_ID:0-PID:23152-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 (version: 7.8) with subvol /virtstack/storage/disk/datastore

tangyla commented 3 years ago

If you need any other info, please tell me.

tangyla commented 3 years ago

A related problem is described in this issue: https://github.com/gluster/glusterfs/issues/1962

The gfapi cleanup function `glfs_fini` takes about 11 seconds even when no operations have been performed. As a result, when libvirt creates a qemu VM or image (qemu built with gfapi) on a GlusterFS storage backend accessed through gfapi rather than a FUSE mount, it is very slow: starting a VM with two images takes about 1 minute.

mohit84 commented 3 years ago

As per the backtrace it seems a lookup is stuck. Can you please share the below info:

1. `gluster v info`
2. Set the highest log level and a log file name in the `glfs_set_logging` API in your C program and share the log file. For example: `ret = glfs_set_logging(glfs, "/var/log/glusterfs/gfapi.log", 9);`
3. Generate a brick statedump: send a SIGUSR1 signal to the brick process, and it will automatically write a statedump to the /var/run/gluster location.
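Step 3 above can be scripted. A minimal sketch, assuming the brick runs locally as a `glusterfsd` process whose command line contains the volume name (the `pgrep` pattern is illustrative, not taken from this thread):

```shell
#!/bin/sh
# Hypothetical helper: ask a gluster brick process for a statedump.
# glusterfsd writes the dump into /var/run/gluster on SIGUSR1.
trigger_statedump() {
    pid="$1"
    kill -USR1 "$pid"   # request the statedump; does not stop the brick
}

# Usage on the gluster node (illustrative pattern):
#   trigger_statedump "$(pgrep -f 'glusterfsd.*datastore')"
#   ls /var/run/gluster/
```

Unlike most signals, SIGUSR1 is handled by glusterfsd itself, so the brick keeps serving I/O while the dump is written.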

tangyla commented 3 years ago

1. volume info

[root@node0 ~]# gluster v info datastore

Volume Name: datastore
Type: Replicate
Volume ID: 040a6510-ccfc-4406-ae63-db93456cb330
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node0:/virtstack/storage/disk/datastore
Brick2: node1:/virtstack/storage/disk/datastore
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off

2. log: the log file is empty, nothing was output

3. brick process status dump

DUMP-START-TIME: 2021-01-07 11:11:30.551436

[mallinfo] mallinfo_arena=6959104 mallinfo_ordblks=151 mallinfo_smblks=42 mallinfo_hblks=17 mallinfo_hblkhd=17350656 mallinfo_usmblks=0 mallinfo_fsmblks=4144 mallinfo_uordblks=1314336 mallinfo_fordblks=5644768 mallinfo_keepcost=121696

[global.glusterfs - Memory usage] num_types=127

[global.glusterfs - usage-type gf_common_mt_dnscache6 memusage] size=16 num_allocs=1 max_size=16 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_event_pool memusage] size=20696 num_allocs=3 max_size=20696 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_xlator_t memusage] size=74592 num_allocs=18 max_size=74592 max_num_allocs=18 total_allocs=18

[global.glusterfs - usage-type gf_common_mt_xlator_list_t memusage] size=544 num_allocs=34 max_size=544 max_num_allocs=34 total_allocs=34

[global.glusterfs - usage-type gf_common_mt_volume_opt_list_t memusage] size=864 num_allocs=36 max_size=864 max_num_allocs=36 total_allocs=38

[global.glusterfs - usage-type gf_common_mt_gf_timer_t memusage] size=128 num_allocs=2 max_size=192 max_num_allocs=3 total_allocs=10

[global.glusterfs - usage-type gf_common_mt_gf_timer_registry_t memusage] size=120 num_allocs=1 max_size=120 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_iobuf memusage] size=200816 num_allocs=8 max_size=200816 max_num_allocs=8 total_allocs=8

[global.glusterfs - usage-type gf_common_mt_iobuf_arena memusage] size=2592 num_allocs=9 max_size=2592 max_num_allocs=9 total_allocs=9

[global.glusterfs - usage-type gf_common_mt_iobuf_pool memusage] size=1776 num_allocs=1 max_size=1776 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_asprintf memusage] size=43 num_allocs=2 max_size=686 max_num_allocs=4 total_allocs=34

[global.glusterfs - usage-type gf_common_mt_strdup memusage] size=3287 num_allocs=128 max_size=3401 max_num_allocs=131 total_allocs=316

[global.glusterfs - usage-type gf_common_mt_socket_private_t memusage] size=1992 num_allocs=3 max_size=1992 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_char memusage] size=1391 num_allocs=71 max_size=24820 max_num_allocs=72 total_allocs=79

[global.glusterfs - usage-type gf_common_mt_mem_pool memusage] size=640 num_allocs=10 max_size=640 max_num_allocs=10 total_allocs=10

[global.glusterfs - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[global.glusterfs - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcsvc_program_t memusage] size=279248 num_allocs=2 max_size=279248 max_num_allocs=2 total_allocs=2

[global.glusterfs - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_t memusage] size=328 num_allocs=1 max_size=328 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_savedframe_t memusage] size=200 num_allocs=1 max_size=200 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpc_trans_t memusage] size=3840 num_allocs=3 max_size=3840 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_glusterfs_graph_t memusage] size=296 num_allocs=1 max_size=296 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_cb_program_t memusage] size=88 num_allocs=1 max_size=88 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_cliententry_t memusage] size=2048 num_allocs=1 max_size=2048 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_clienttable_t memusage] size=72 num_allocs=1 max_size=72 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_ereg memusage] size=114688 num_allocs=1 max_size=114688 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_syncenv memusage] size=16040 num_allocs=1 max_size=16040 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_tw_ctx memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_volfile_t memusage] size=320 num_allocs=1 max_size=320 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_server_cmdline_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gfd_mt_xlator_cmdline_option_t memusage] size=80 num_allocs=2 max_size=80 max_num_allocs=2 total_allocs=2

[global.glusterfs - usage-type gfd_mt_char memusage] size=53 num_allocs=4 max_size=53 max_num_allocs=4 total_allocs=4

[global.glusterfs - usage-type gfd_mt_call_pool_t memusage] size=88 num_allocs=1 max_size=88 max_num_allocs=1 total_allocs=1

[mempool]
-----=----- pool-name=fd_t active-count=0 sizeof-type=120 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=dentry_t active-count=1 sizeof-type=56 padded-sizeof=128 size=128 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=inode_t active-count=5 sizeof-type=168 padded-sizeof=256 size=1280 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=fd_t active-count=0 sizeof-type=120 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=dentry_t active-count=0 sizeof-type=56 padded-sizeof=128 size=0 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=inode_t active-count=1 sizeof-type=168 padded-sizeof=256 size=256 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=trash_local_t active-count=0 sizeof-type=8688 padded-sizeof=16384 size=0 shared-pool=0x7f9ea7d681b8
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f9ea7d68168
-----=----- pool-name=changelog_local_t active-count=0 sizeof-type=88 padded-sizeof=128 size=0 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=br_stub_local_t active-count=0 sizeof-type=56 padded-sizeof=128 size=0 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=pl_local_t active-count=0 sizeof-type=184 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=read_only_priv_t active-count=1 sizeof-type=40 padded-sizeof=128 size=128 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=upcall_local_t active-count=0 sizeof-type=152 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=selinux_priv_t active-count=0 sizeof-type=1 padded-sizeof=128 size=0 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=marker_local_t active-count=0 sizeof-type=504 padded-sizeof=1024 size=0 shared-pool=0x7f9ea7d68118
-----=----- pool-name=index_local_t active-count=0 sizeof-type=16 padded-sizeof=128 size=0 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=quota_local_t active-count=0 sizeof-type=496 padded-sizeof=1024 size=0 shared-pool=0x7f9ea7d68118
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f9ea7d68168
-----=----- pool-name=struct saved_frame active-count=0 sizeof-type=96 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=struct rpc_req active-count=0 sizeof-type=560 padded-sizeof=1024 size=0 shared-pool=0x7f9ea7d68118
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f9ea7d68168
-----=----- pool-name=log_buf_t active-count=5 sizeof-type=112 padded-sizeof=256 size=1280 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=data_t active-count=193 sizeof-type=72 padded-sizeof=128 size=24704 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=data_pair_t active-count=172 sizeof-type=48 padded-sizeof=128 size=22016 shared-pool=0x7f9ea7d680a0
-----=----- pool-name=dict_t active-count=32 sizeof-type=160 padded-sizeof=256 size=8192 shared-pool=0x7f9ea7d680c8
-----=----- pool-name=call_stub_t active-count=0 sizeof-type=6376 padded-sizeof=8192 size=0 shared-pool=0x7f9ea7d68190
-----=----- pool-name=call_stack_t active-count=0 sizeof-type=1864 padded-sizeof=2048 size=0 shared-pool=0x7f9ea7d68140
-----=----- pool-name=call_frame_t active-count=0 sizeof-type=184 padded-sizeof=256 size=0 shared-pool=0x7f9ea7d680c8

[iobuf.global] iobuf_pool=0x55f48b6e2680 iobuf_pool.default_page_size=131072 iobuf_pool.arena_size=12976128 iobuf_pool.arena_cnt=8 iobuf_pool.request_misses=0

[purge.1] purge.1.mem_base=0x7f9ea7f54000 purge.1.active_cnt=0 purge.1.passive_cnt=1024 purge.1.alloc_cnt=72 purge.1.max_active=4 purge.1.page_size=128

[purge.2] purge.2.mem_base=0x7f9ea7f14000 purge.2.active_cnt=0 purge.2.passive_cnt=512 purge.2.alloc_cnt=31 purge.2.max_active=2 purge.2.page_size=512

[purge.3] purge.3.mem_base=0x7f9ea7e14000 purge.3.active_cnt=0 purge.3.passive_cnt=512 purge.3.alloc_cnt=9 purge.3.max_active=2 purge.3.page_size=2048

[purge.4] purge.4.mem_base=0x7f9ea4817000 purge.4.active_cnt=0 purge.4.passive_cnt=128 purge.4.alloc_cnt=1 purge.4.max_active=1 purge.4.page_size=8192

[arena.5] arena.5.mem_base=0x7f9ea4617000 arena.5.active_cnt=0 arena.5.passive_cnt=64 arena.5.alloc_cnt=0 arena.5.max_active=0 arena.5.page_size=32768

[arena.6] arena.6.mem_base=0x7f9ea4217000 arena.6.active_cnt=0 arena.6.passive_cnt=32 arena.6.alloc_cnt=0 arena.6.max_active=0 arena.6.page_size=131072

[arena.7] arena.7.mem_base=0x7f9ea4017000 arena.7.active_cnt=0 arena.7.passive_cnt=8 arena.7.alloc_cnt=0 arena.7.max_active=0 arena.7.page_size=262144

[arena.8] arena.8.mem_base=0x7f9ea3e17000 arena.8.active_cnt=0 arena.8.passive_cnt=2 arena.8.alloc_cnt=0 arena.8.max_active=0 arena.8.page_size=1048576

[global.callpool] callpool_address=0x55f48b719eb0 callpool.cnt=0

[dict] max-pairs-per-dict=23 total-pairs-used=240 total-dicts-used=54 average-pairs-per-dict=4

[active graph - 1]

[datastore-server.latency] datastore-server.latency.STATFS=176764.000,1,176764.000 datastore-server.latency.GETXATTR=1519118.167,6,9114709.000 datastore-server.latency.OPENDIR=31156.667,3,93470.000 datastore-server.latency.LOOKUP=297870.200,10,2978702.000 datastore-server.latency.READDIR=1258438.167,6,7550629.000

[protocol/server.datastore-server - Memory usage] num_types=130

[protocol/server.datastore-server - usage-type gf_common_mt_fdentry_t memusage] size=10240 num_allocs=5 max_size=10240 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_common_mt_fdtable_t memusage] size=400 num_allocs=5 max_size=400 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_common_mt_inode_ctx memusage] size=3040 num_allocs=5 max_size=3040 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_common_mt_list_head memusage] size=1273488 num_allocs=2 max_size=1273488 max_num_allocs=2 total_allocs=2

[protocol/server.datastore-server - usage-type gf_common_mt_inode_table_t memusage] size=240 num_allocs=1 max_size=240 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_volume_opt_list_t memusage] size=48 num_allocs=2 max_size=72 max_num_allocs=3 total_allocs=3

[protocol/server.datastore-server - usage-type gf_common_mt_auth_handle_t memusage] size=48 num_allocs=2 max_size=48 max_num_allocs=2 total_allocs=2

[protocol/server.datastore-server - usage-type gf_common_mt_memdup memusage] size=5720 num_allocs=108 max_size=5720 max_num_allocs=108 total_allocs=108

[protocol/server.datastore-server - usage-type gf_common_mt_asprintf memusage] size=40 num_allocs=1 max_size=586 max_num_allocs=13 total_allocs=120

[protocol/server.datastore-server - usage-type gf_common_mt_strdup memusage] size=2133 num_allocs=57 max_size=2371 max_num_allocs=60 total_allocs=111

[protocol/server.datastore-server - usage-type gf_common_mt_socket_private_t memusage] size=3984 num_allocs=6 max_size=3984 max_num_allocs=6 total_allocs=6

[protocol/server.datastore-server - usage-type gf_common_mt_char memusage] size=2036 num_allocs=115 max_size=3497 max_num_allocs=143 total_allocs=330

[protocol/server.datastore-server - usage-type gf_common_mt_mem_pool memusage] size=256 num_allocs=4 max_size=256 max_num_allocs=4 total_allocs=4

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_program_t memusage] size=558496 num_allocs=4 max_size=558496 max_num_allocs=4 total_allocs=4

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_rpc_trans_t memusage] size=7680 num_allocs=6 max_size=7680 max_num_allocs=6 total_allocs=6

[protocol/server.datastore-server - usage-type gf_common_mt_client_t memusage] size=1200 num_allocs=10 max_size=1200 max_num_allocs=10 total_allocs=10

[protocol/server.datastore-server - usage-type gf_common_mt_client_ctx memusage] size=640 num_allocs=5 max_size=640 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_server_mt_server_conf_t memusage] size=49632 num_allocs=6 max_size=49632 max_num_allocs=6 total_allocs=6

[protocol/server.datastore-server - usage-type gf_server_mt_child_status memusage] size=64 num_allocs=2 max_size=64 max_num_allocs=2 total_allocs=2

[xlator.protocol.server.priv] server.total-bytes-read=9956 server.total-bytes-write=8380 conn.0.bound_xl./virtstack/storage/disk/datastore.hashsize=14057 conn.0.bound_xl./virtstack/storage/disk/datastore.name=/virtstack/storage/disk/datastore/inode conn.0.bound_xl./virtstack/storage/disk/datastore.lru_limit=16384 conn.0.bound_xl./virtstack/storage/disk/datastore.active_size=1 conn.0.bound_xl./virtstack/storage/disk/datastore.lru_size=4 conn.0.bound_xl./virtstack/storage/disk/datastore.purge_size=0 conn.0.bound_xl./virtstack/storage/disk/datastore.invalidate_size=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.active.1] gfid=00000000-0000-0000-0000-000000000001 nlookup=0 fd-count=0 active-fd-count=0 ref=1 invalidate-sent=0 ia_type=2 ref_by_xl:.datastore-io-threads=1

[xlator.features.locks.datastore-locks.inode] path=/ mandatory=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.1] gfid=0436f216-2db7-479d-961d-981495b99a59 nlookup=2 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.2] gfid=75e88255-3205-4188-bc9c-e9b564b80447 nlookup=2 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.3] gfid=8ccfbe33-d4a3-4b3c-83df-59cfd83afa33 nlookup=2 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.4] gfid=6b237ba9-6505-4c56-aaac-d1d934e5a2ee nlookup=1 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=1

[xlator.features.locks.datastore-locks.inode]
Unable to print lock state=(Lock acquisition failure) 6b237ba9-6505-4c56-aaac-d1d934e5a2ee
conn.1.id=CTX_ID:11f87458-ecaa-4d24-8ef3-3d322bd94201-GRAPH_ID:0-PID:1456-HOST:node1-PC_NAME:datastore-client-0-RECON_NO:-1 conn.1.ref=1 conn.1.bound_xl=/virtstack/storage/disk/datastore
conn.2.id=CTX_ID:d8f9f419-5073-4583-bb37-7d338340e5bf-GRAPH_ID:0-PID:1550-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-1 conn.2.ref=1 conn.2.bound_xl=/virtstack/storage/disk/datastore
conn.3.id=CTX_ID:d5f3484a-a6be-4047-80a6-b4fe0585060f-GRAPH_ID:0-PID:5949-HOST:node1-PC_NAME:datastore-client-0-RECON_NO:-1 conn.3.ref=1 conn.3.bound_xl=/virtstack/storage/disk/datastore
conn.4.id=CTX_ID:be72522a-c81c-4432-9912-3f88f7560352-GRAPH_ID:0-PID:24838-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 conn.4.ref=1 conn.4.bound_xl=/virtstack/storage/disk/datastore

[/virtstack/storage/disk/datastore.latency]

[debug/io-stats./virtstack/storage/disk/datastore - Memory usage] num_types=128

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_common_mt_dnscache memusage] size=16 num_allocs=1 max_size=16 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_conf memusage] size=6808 num_allocs=1 max_size=6808 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_stat memusage] size=320 num_allocs=10 max_size=320 max_num_allocs=10 total_allocs=10

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_sample_buf memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_sample memusage] size=9437040 num_allocs=1 max_size=9437040 max_num_allocs=1 total_allocs=1

[datastore-quota.latency] datastore-quota.latency.GETXATTR=1464239.833,6,8785439.000

[features/quota.datastore-quota - Memory usage] num_types=133

[features/quota.datastore-quota - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/quota.datastore-quota - usage-type gf_quota_mt_quota_priv_t memusage] size=136 num_allocs=1 max_size=136 max_num_allocs=1 total_allocs=1

[datastore-index.latency] datastore-index.latency.GETXATTR=1460659.500,6,8763957.000 datastore-index.latency.OPENDIR=3059.667,3,9179.000 datastore-index.latency.LOOKUP=206263.538,13,2681426.000 datastore-index.latency.READDIR=1236285.000,6,7417710.000

[features/index.datastore-index - Memory usage] num_types=126

[features/index.datastore-index - usage-type gf_common_mt_asprintf memusage] size=6 num_allocs=3 max_size=8 max_num_allocs=4 total_allocs=11

[features/index.datastore-index - usage-type gf_common_mt_char memusage] size=116 num_allocs=6 max_size=153 max_num_allocs=7 total_allocs=20

[features/index.datastore-index - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/index.datastore-index - usage-type gf_index_mt_priv_t memusage] size=288 num_allocs=1 max_size=288 max_num_allocs=1 total_allocs=1

[features/index.datastore-index - usage-type gf_index_inode_ctx_t memusage] size=112 num_allocs=2 max_size=112 max_num_allocs=2 total_allocs=2

[datastore-barrier.latency]

[features/barrier.datastore-barrier - Memory usage] num_types=123

[features/barrier.datastore-barrier - usage-type gf_barrier_mt_priv_t memusage] size=96 num_allocs=1 max_size=96 max_num_allocs=1 total_allocs=1

[xlator.features.barrier.priv] barrier.enabled=0 barrier.timeout=120

[datastore-marker.latency] datastore-marker.latency.GETXATTR=242583.000,3,727749.000 datastore-marker.latency.LOOKUP=278319.571,7,1948237.000

[features/marker.datastore-marker - Memory usage] num_types=131

[features/marker.datastore-marker - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/marker.datastore-marker - usage-type gf_marker_mt_marker_conf_t memusage] size=120 num_allocs=1 max_size=120 max_num_allocs=1 total_allocs=1

[datastore-selinux.latency] datastore-selinux.latency.GETXATTR=233534.000,3,700602.000

[features/selinux.datastore-selinux - Memory usage] num_types=123

[features/selinux.datastore-selinux - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/selinux.datastore-selinux - usage-type gf_selinux_mt_selinux_priv_t memusage] size=1 num_allocs=1 max_size=1 max_num_allocs=1 total_allocs=1

[datastore-io-threads.latency] datastore-io-threads.latency.STATFS=158123.000,1,158123.000 datastore-io-threads.latency.GETXATTR=230372.667,3,691118.000 datastore-io-threads.latency.LOOKUP=265826.857,7,1860788.000

[performance/io-threads.datastore-io-threads - Memory usage] num_types=124

[performance/io-threads.datastore-io-threads - usage-type gf_iot_mt_iot_conf_t memusage] size=472 num_allocs=1 max_size=472 max_num_allocs=1 total_allocs=1

[performance/io-threads.datastore-io-threads - usage-type gf_iot_mt_client_ctx_t memusage] size=384 num_allocs=3 max_size=384 max_num_allocs=3 total_allocs=3

[performance/io-threads.datastore-io-threads] maximum_threads_count=16 current_threads_count=1 sleep_count=1 idle_time=120 stack_size=262144 max_high_priority_threads=16 max_normal_priority_threads=16 max_low_priority_threads=16 max_least_priority_threads=1 current_high_priority_threads=0 current_normal_priority_threads=0 current_low_priority_threads=0 current_least_priority_threads=0

[datastore-upcall.latency] datastore-upcall.latency.STATFS=35296.000,1,35296.000 datastore-upcall.latency.GETXATTR=81645.333,3,244936.000 datastore-upcall.latency.LOOKUP=148659.143,7,1040614.000

[features/upcall.datastore-upcall - Memory usage] num_types=126

[features/upcall.datastore-upcall - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/upcall.datastore-upcall - usage-type gf_upcall_mt_private_t memusage] size=96 num_allocs=1 max_size=96 max_num_allocs=1 total_allocs=1

[datastore-leases.latency]

[features/leases.datastore-leases - Memory usage] num_types=130

[features/leases.datastore-leases - usage-type gf_leases_mt_private_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[datastore-read-only.latency]

[features/read-only.datastore-read-only - Memory usage] num_types=123

[features/read-only.datastore-read-only - usage-type gf_read_only_mt_priv_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[datastore-worm.latency]

[features/worm.datastore-worm - Memory usage] num_types=123

[features/worm.datastore-worm - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[datastore-locks.latency] datastore-locks.latency.STATFS=32145.000,1,32145.000 datastore-locks.latency.GETXATTR=59153.000,3,177459.000 datastore-locks.latency.LOOKUP=140539.714,7,983778.000

[features/locks.datastore-locks - Memory usage] num_types=131

[features/locks.datastore-locks - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/locks.datastore-locks - usage-type gf_locks_mt_pl_inode_t memusage] size=272 num_allocs=1 max_size=272 max_num_allocs=1 total_allocs=1

[features/locks.datastore-locks - usage-type gf_locks_mt_posix_locks_private_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[datastore-access-control.latency] datastore-access-control.latency.GETXATTR=53387.333,3,160162.000 datastore-access-control.latency.LOOKUP=123028.857,7,861202.000

[features/access-control.datastore-access-control - Memory usage] num_types=126

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_ctx_t memusage] size=64 num_allocs=2 max_size=64 max_num_allocs=2 total_allocs=2

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_posix_ace_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_conf_t memusage] size=56 num_allocs=1 max_size=56 max_num_allocs=1 total_allocs=1

[datastore-bitrot-stub.latency] datastore-bitrot-stub.latency.GETXATTR=50546.667,3,151640.000 datastore-bitrot-stub.latency.LOOKUP=111637.143,7,781460.000

[features/bitrot-stub.datastore-bitrot-stub - Memory usage] num_types=137

[features/bitrot-stub.datastore-bitrot-stub - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/bitrot-stub.datastore-bitrot-stub - usage-type gf_br_stub_mt_private_t memusage] size=9480 num_allocs=1 max_size=9480 max_num_allocs=1 total_allocs=1

[datastore-changelog.latency]

[features/changelog.datastore-changelog - Memory usage] num_types=136

[features/changelog.datastore-changelog - usage-type gf_common_mt_strdup memusage] size=241 num_allocs=8 max_size=255 max_num_allocs=10 total_allocs=10

[features/changelog.datastore-changelog - usage-type gf_common_mt_socket_private_t memusage] size=664 num_allocs=1 max_size=664 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_char memusage] size=180 num_allocs=8 max_size=180 max_num_allocs=8 total_allocs=8

[features/changelog.datastore-changelog - usage-type gf_common_mt_mem_pool memusage] size=128 num_allocs=2 max_size=128 max_num_allocs=2 total_allocs=2

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_program_t memusage] size=279248 num_allocs=2 max_size=279248 max_num_allocs=2 total_allocs=2

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpc_trans_t memusage] size=1280 num_allocs=1 max_size=1280 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rbuf_t memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rlist_t memusage] size=896 num_allocs=4 max_size=896 max_num_allocs=4 total_allocs=4

[features/changelog.datastore-changelog - usage-type gf_common_mt_rvec_t memusage] size=4194432 num_allocs=4 max_size=4194432 max_num_allocs=4 total_allocs=4

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_priv_t memusage] size=1224 num_allocs=1 max_size=1224 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_rt_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_ev_dispatcher_t memusage] size=24 num_allocs=1 max_size=24 max_num_allocs=1 total_allocs=1

[datastore-trash.latency]

[features/trash.datastore-trash - Memory usage] num_types=126

[features/trash.datastore-trash - usage-type gf_common_mt_inode_ctx memusage] size=608 num_allocs=1 max_size=608 max_num_allocs=1 total_allocs=1

[features/trash.datastore-trash - usage-type gf_common_mt_list_head memusage] size=1273488 num_allocs=2 max_size=1273488 max_num_allocs=2 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_inode_table_t memusage] size=240 num_allocs=1 max_size=240 max_num_allocs=1 total_allocs=1

[features/trash.datastore-trash - usage-type gf_common_mt_asprintf memusage] size=22 num_allocs=1 max_size=119 max_num_allocs=1 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_strdup memusage] size=46 num_allocs=2 max_size=46 max_num_allocs=2 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_mem_pool memusage] size=256 num_allocs=4 max_size=256 max_num_allocs=4 total_allocs=4

[features/trash.datastore-trash - usage-type gf_trash_mt_trash_private_t memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[datastore-posix.latency] datastore-posix.latency.STATFS=26466.000,1,26466.000 datastore-posix.latency.GETXATTR=46410.000,3,139230.000 datastore-posix.latency.LOOKUP=107713.429,7,753994.000

[storage/posix.datastore-posix - Memory usage] num_types=129

[storage/posix.datastore-posix - usage-type gf_common_mt_strdup memusage] size=34 num_allocs=1 max_size=34 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_common_mt_char memusage] size=256 num_allocs=1 max_size=316 max_num_allocs=4 total_allocs=9

[storage/posix.datastore-posix - usage-type gf_common_mt_tw_timer_list memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_posix_private memusage] size=576 num_allocs=1 max_size=576 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_trash_path memusage] size=54 num_allocs=1 max_size=54 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_mdata_attr memusage] size=128 num_allocs=2 max_size=192 max_num_allocs=3 total_allocs=8

[storage/posix.datastore-posix] base_path=/virtstack/storage/disk/datastore base_path_length=33 max_read=0 max_write=0

DUMP-END-TIME: 2021-01-07 11:11:30.553967
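The statedump above is a flat text format: `[section]` headers followed by space-separated `key=value` fields. A small sketch for pulling a named counter out of a dump file; the sample data in the usage below is made up for illustration and the helper name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: print every value of a given key in a
# gluster-style statedump (handles space-separated key=value fields).
dump_key() {
    key="$1"; file="$2"
    tr ' ' '\n' < "$file" | grep "^${key}=" | cut -d= -f2
}

# Illustrative usage on a real dump:
#   dump_key num_allocs /var/run/gluster/<brick>.dump.<timestamp>
```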

mohit84 commented 3 years ago

Can you please share the latest C program along with the terminal output? I want to see why a log file has not been generated. I assume the directory for the log file exists.

tangyla commented 3 years ago

Sorry @mohit84, my mistake. I have changed the log level from 4 to 9 and the log file from /dev/stderr to /var/log/glusterfs/gfapi.log; below is the new code.

```c
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <glusterfs/api/glfs.h>

int create_file(struct glfs* glfs) {
    int ret;
    int64_t current_length;
    struct glfs_fd* fd = NULL;
    fd = glfs_creat(glfs, "test.qcow2", O_WRONLY | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
    fprintf(stderr, "glfs_creat fd = %p\n", fd);

    current_length = glfs_lseek(fd, 0, SEEK_END);
    fprintf(stderr, "glfs_lseek ret = %ld\n", current_length);

    if (fd) {
        ret = glfs_close(fd);
        fprintf(stderr, "glfs_close ret = %d\n", ret);
    }

    return EXIT_SUCCESS;
}

int main(int argc, char** argv) {
    int ret = 0;
    char* volume = "datastore";
    char* server = "node0";
    int port = 24007;
    struct glfs* glfs;
    time_t start, end, b_fini, e_fini;

    start = time(NULL);
    fprintf(stderr, "%s start time: %ld\n", argv[0], start);

    glfs = glfs_new(volume);
    ret = glfs_set_logging(glfs, "/var/log/glusterfs/gfapi.log", 9);
    fprintf(stderr, "glfs_set_logging ret = %d\n", ret);

    ret = glfs_set_volfile_server(glfs, "tcp", server, port);
    fprintf(stderr, "glfs_set_volfile_server ret = %d\n", ret);

    ret = glfs_init(glfs);
    fprintf(stderr, "glfs_init ret = %d\n", ret);
    create_file(glfs);

    b_fini = time(NULL);
    fprintf(stderr, "glfs_fini start: %ld, before consume seconds: %ld\n", b_fini, b_fini - start);
    ret = glfs_fini(glfs);
    e_fini = time(NULL);
    fprintf(stderr, "glfs_fini end: %ld, consume seconds: %ld\n", e_fini, e_fini - b_fini);
    fprintf(stderr, "glfs_fini ret = %d\n", ret);

    end = time(NULL);
    fprintf(stderr, "%s end time: %ld, consume seconds: %ld\n", argv[0], end, (end - start));

    return EXIT_SUCCESS;
}
```
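Since gfapi runs several internal threads (epoll workers, a timer thread, syncenv workers) that glfs_fini has to wind down, a stack dump of every thread at the moment of the hang would show which one is stuck. A minimal gdb sequence for that (an illustrative session, assuming the process is already stopped under gdb at the hang, as in the original report):

```
(gdb) info threads
(gdb) thread apply all bt full
```

Attaching that output alongside the log would make it easier to see whether the hang is in a thread join, a condition wait, or an RPC reply that never arrives.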

tangyla commented 3 years ago

Here are the new log and dump status.

1. volume info

[root@node1 ~]# gluster volume info datastore

```
Volume Name: datastore
Type: Replicate
Volume ID: 040a6510-ccfc-4406-ae63-db93456cb330
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node0:/virtstack/storage/disk/datastore
Brick2: node1:/virtstack/storage/disk/datastore
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
```

2. log

[2021-01-08 10:10:46.665750] D [rpc-clnt.c:1016:rpc_clnt_connection_init] 0-gfapi: defaulting frame-timeout to 30mins [2021-01-08 10:10:46.665934] D [rpc-clnt.c:1028:rpc_clnt_connection_init] 0-gfapi: disable ping-timeout [2021-01-08 10:10:46.665955] D [rpc-transport.c:296:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/7.8/rpc-transport/socket.so [2021-01-08 10:10:46.720150] D [MSGID: 101002] [options.c:961:xl_opt_validate] 0-gfapi: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction [2021-01-08 10:10:46.720198] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-gfapi: no range check required for 'option remote-port 24007' [2021-01-08 10:10:46.721484] D [socket.c:4633:socket_init] 0-gfapi: Configued transport.tcp-user-timeout=42 [2021-01-08 10:10:46.721516] D [socket.c:4651:socket_init] 0-gfapi: Reconfigued transport.keepalivecnt=9 [2021-01-08 10:10:46.721523] D [socket.c:4347:ssl_setup_connection_params] 0-gfapi: SSL support on the I/O path is NOT enabled [2021-01-08 10:10:46.721527] D [socket.c:4350:ssl_setup_connection_params] 0-gfapi: SSL support for glusterd is NOT enabled [2021-01-08 10:10:46.721537] D [rpc-clnt.c:1591:rpcclnt_cbk_program_register] 0-gfapi: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2021-01-08 10:10:46.721541] T [rpc-clnt.c:398:rpc_clnt_reconnect] 0-gfapi: attempting reconnect [2021-01-08 10:10:46.721545] T [socket.c:3474:socket_connect] 0-gfapi: connecting 0x68e890, sock=-1 [2021-01-08 10:10:46.727626] D [dict.c:1176:data_to_uint16] (-->/usr/lib64/glusterfs/7.8/rpc-transport/socket.so(+0x7172) [0x7fffee54e172] -->/usr/lib64/glusterfs/7.8/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0xe3) [0x7fffee555b33] -->/lib64/libglusterfs.so.0(data_to_uint16+0xa6) [0x7ffff6e82076] ) 0-dict: key null, unsigned integer type asked, has integer type [Invalid argument] [2021-01-08 10:10:46.727680] T [MSGID: 0] 
[common-utils.c:491:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: node0 [2021-01-08 10:10:46.743382] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-172.16.88.30 (port-24007) for hostname: node0 and port: 24007 [2021-01-08 10:10:46.743424] D [socket.c:3392:socket_fix_ssl_opts] 0-gfapi: disabling SSL for portmapper connection [2021-01-08 10:10:46.743472] T [socket.c:1045:__socket_nodelay] 0-gfapi: NODELAY enabled for socket 8 [2021-01-08 10:10:46.743483] T [socket.c:1134:socket_keepalive] 0-gfapi: Keep-alive enabled for socket: 8, (idle: 20, interval: 2, max-probes: 9, timeout: 42) [2021-01-08 10:10:46.743513] T [socket.c:3589:socket_connect] 0-gfapi: >>> connect() with non-blocking IO for ALL [2021-01-08 10:10:46.743523] T [socket.c:226:socket_dump_info] 0-gfapi: $$$ client: connecting to (af:2,sock:8) 172.16.88.30 non-SSL (errno:0:Success) [2021-01-08 10:10:46.743807] T [socket.c:3000:socket_event_handler] 0-gfapi: client (sock:8) in:0, out:4, err:0 [2021-01-08 10:10:46.743858] T [socket.c:3007:socket_event_handler] 0-gfapi: client (sock:8) socket is not connected, completing connection [2021-01-08 10:10:46.743915] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 148, payload: 84, rpc hdr: 64 [2021-01-08 10:10:46.744049] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 0, XID: 0x2, Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) to rpc-transport (gfapi) [2021-01-08 10:10:46.744078] D [rpc-clnt-ping.c:298:rpc_clnt_start_ping] 0-gfapi: ping timeout is 0, returning [2021-01-08 10:10:46.744088] T [socket.c:2905:socket_handle_client_connection_attempt] 0-gfapi: socket_connect_finish() returned 0 [2021-01-08 10:10:46.744166] T [socket.c:3014:socket_event_handler] 0-gfapi: (sock:8) socket_complete_connection() returned 1 [2021-01-08 10:10:46.744184] T [socket.c:3000:socket_event_handler] 0-gfapi: client (sock:8) in:0, out:4, err:0 [2021-01-08 
10:10:46.744224] T [socket.c:3026:socket_event_handler] 0-gfapi: Client socket (8) is already connected [2021-01-08 10:10:46.744235] T [socket.c:3036:socket_event_handler] 0-gfapi: (sock:8) socket_event_poll_out returned 0 [2021-01-08 10:10:46.744198] T [socket.c:3018:socket_event_handler] 0-gfapi: (sock:8) returning to wait on socket [2021-01-08 10:10:46.744609] T [socket.c:3000:socket_event_handler] 0-gfapi: client (sock:8) in:1, out:0, err:0 [2021-01-08 10:10:46.744669] T [socket.c:3026:socket_event_handler] 0-gfapi: Client socket (8) is already connected [2021-01-08 10:10:46.744678] T [socket.c:574:socket_ssl_readv] 0-gfapi: reading over non-SSL [2021-01-08 10:10:46.744685] T [socket.c:574:__socket_ssl_readv] 0-gfapi: reading over non-SSL [2021-01-08 10:10:46.744700] T [socket.c:574:__socket_ssl_readv] 0-gfapi: * reading over non-SSL [2021-01-08 10:10:46.744716] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-gfapi: received rpc message (RPC XID: 0x2 Program: GlusterFS Handshake, ProgVers: 2, Proc: 2) from rpc-transport (gfapi) [2021-01-08 10:10:46.744981] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-client-0' [2021-01-08 10:10:46.745009] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/protocol/client.so [2021-01-08 10:10:46.766152] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-client-0: method missing (dump_metrics) [2021-01-08 10:10:46.766198] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-client-0: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.766208] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-client-0:protocol/client [2021-01-08 10:10:46.766217] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:send-gids:true [2021-01-08 10:10:46.766244] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.socket.keepalive-count:9 
[2021-01-08 10:10:46.766249] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.socket.keepalive-interval:2 [2021-01-08 10:10:46.766254] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.socket.keepalive-time:20 [2021-01-08 10:10:46.766259] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.tcp-user-timeout:0 [2021-01-08 10:10:46.766264] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.socket.ssl-enabled:off [2021-01-08 10:10:46.766269] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:password:4377c65b-4052-4133-8066-d2edfc9281d1 [2021-01-08 10:10:46.766276] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:username:ddf8d1fc-6bd7-4e5b-a791-25d11479fe50 [2021-01-08 10:10:46.766281] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport.address-family:inet [2021-01-08 10:10:46.766290] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:transport-type:tcp [2021-01-08 10:10:46.766295] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:remote-subvolume:/virtstack/storage/disk/datastore [2021-01-08 10:10:46.766300] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:remote-host:node0 [2021-01-08 10:10:46.766304] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-0:ping-timeout:42 [2021-01-08 10:10:46.766308] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-client-0 [2021-01-08 10:10:46.766316] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-client-1' [2021-01-08 10:10:46.766321] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/protocol/client.so [2021-01-08 10:10:46.766337] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-client-1: 
method missing (dump_metrics) [2021-01-08 10:10:46.766355] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-client-1: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.766369] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-client-1:protocol/client [2021-01-08 10:10:46.766374] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:send-gids:true [2021-01-08 10:10:46.766382] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.socket.keepalive-count:9 [2021-01-08 10:10:46.766387] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.socket.keepalive-interval:2 [2021-01-08 10:10:46.766392] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.socket.keepalive-time:20 [2021-01-08 10:10:46.766396] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.tcp-user-timeout:0 [2021-01-08 10:10:46.766401] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.socket.ssl-enabled:off [2021-01-08 10:10:46.766406] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:password:4377c65b-4052-4133-8066-d2edfc9281d1 [2021-01-08 10:10:46.766411] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:username:ddf8d1fc-6bd7-4e5b-a791-25d11479fe50 [2021-01-08 10:10:46.766418] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport.address-family:inet [2021-01-08 10:10:46.766423] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:transport-type:tcp [2021-01-08 10:10:46.766428] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:remote-subvolume:/virtstack/storage/disk/datastore [2021-01-08 10:10:46.766435] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:remote-host:node1 [2021-01-08 
10:10:46.766440] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-client-1:ping-timeout:42 [2021-01-08 10:10:46.766444] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-client-1 [2021-01-08 10:10:46.766451] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-replicate-0' [2021-01-08 10:10:46.766457] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/cluster/replicate.so [2021-01-08 10:10:46.790004] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-replicate-0: method missing (dump_metrics) [2021-01-08 10:10:46.790059] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-replicate-0: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.790069] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-replicate-0:cluster/replicate [2021-01-08 10:10:46.790079] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-replicate-0:use-compound-fops:off [2021-01-08 10:10:46.790085] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-replicate-0:afr-pending-xattr:datastore-client-0,datastore-client-1 [2021-01-08 10:10:46.790091] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-replicate-0->datastore-client-0 [2021-01-08 10:10:46.790100] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-replicate-0->datastore-client-1 [2021-01-08 10:10:46.790104] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-replicate-0 [2021-01-08 10:10:46.790117] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-dht' [2021-01-08 10:10:46.790123] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/cluster/distribute.so [2021-01-08 10:10:46.812011] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-dht: method missing (dump_metrics) [2021-01-08 10:10:46.812061] T [MSGID: 0] 
[graph.y:219:volume_type] 0-parser: Type:datastore-dht:cluster/distribute [2021-01-08 10:10:46.812074] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-dht:force-migration:off [2021-01-08 10:10:46.812079] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-dht:lock-migration:off [2021-01-08 10:10:46.812104] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-dht->datastore-replicate-0 [2021-01-08 10:10:46.812112] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-dht [2021-01-08 10:10:46.812122] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-utime' [2021-01-08 10:10:46.812128] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/features/utime.so [2021-01-08 10:10:46.817323] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-utime: method missing (dump_metrics) [2021-01-08 10:10:46.817370] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-utime: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.817381] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-utime:features/utime [2021-01-08 10:10:46.817389] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore-utime:noatime:on [2021-01-08 10:10:46.817395] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-utime->datastore-dht [2021-01-08 10:10:46.817399] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-utime [2021-01-08 10:10:46.817409] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-write-behind' [2021-01-08 10:10:46.817415] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/write-behind.so [2021-01-08 10:10:46.823746] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: datastore-write-behind: method missing (notify) [2021-01-08 10:10:46.823793] T [MSGID: 0] 
[xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-write-behind: method missing (dump_metrics) [2021-01-08 10:10:46.823801] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-write-behind: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.823808] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-write-behind:performance/write-behind [2021-01-08 10:10:46.823821] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-write-behind->datastore-utime [2021-01-08 10:10:46.823826] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-write-behind [2021-01-08 10:10:46.823834] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-read-ahead' [2021-01-08 10:10:46.823841] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/read-ahead.so [2021-01-08 10:10:46.828720] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: datastore-read-ahead: method missing (notify) [2021-01-08 10:10:46.828767] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-read-ahead: method missing (dump_metrics) [2021-01-08 10:10:46.828775] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-read-ahead: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.828782] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-read-ahead:performance/read-ahead [2021-01-08 10:10:46.828789] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-read-ahead->datastore-write-behind [2021-01-08 10:10:46.828794] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-read-ahead [2021-01-08 10:10:46.828805] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-io-cache' [2021-01-08 10:10:46.828811] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/io-cache.so [2021-01-08 10:10:46.835255] 
T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: datastore-io-cache: method missing (notify) [2021-01-08 10:10:46.835305] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-io-cache: method missing (dump_metrics) [2021-01-08 10:10:46.835313] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-io-cache: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.835327] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-io-cache:performance/io-cache [2021-01-08 10:10:46.835335] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-io-cache->datastore-read-ahead [2021-01-08 10:10:46.835340] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-io-cache [2021-01-08 10:10:46.835351] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-open-behind' [2021-01-08 10:10:46.835356] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/open-behind.so [2021-01-08 10:10:46.841968] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: datastore-open-behind: method missing (notify) [2021-01-08 10:10:46.842015] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore-open-behind: method missing (dump_metrics) [2021-01-08 10:10:46.842023] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-open-behind: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.842030] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-open-behind:performance/open-behind [2021-01-08 10:10:46.842051] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-open-behind->datastore-io-cache [2021-01-08 10:10:46.842056] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-open-behind [2021-01-08 10:10:46.842068] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-quick-read' [2021-01-08 10:10:46.842074] T [MSGID: 0] 
[xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/quick-read.so [2021-01-08 10:10:46.847447] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-quick-read: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.847496] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-quick-read:performance/quick-read [2021-01-08 10:10:46.847507] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-quick-read->datastore-open-behind [2021-01-08 10:10:46.847512] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-quick-read [2021-01-08 10:10:46.847525] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore-md-cache' [2021-01-08 10:10:46.847532] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/performance/md-cache.so [2021-01-08 10:10:46.854499] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore-md-cache: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.854548] T [MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore-md-cache:performance/md-cache [2021-01-08 10:10:46.854560] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore-md-cache->datastore-quick-read [2021-01-08 10:10:46.854565] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore-md-cache [2021-01-08 10:10:46.854580] T [MSGID: 0] [graph.y:188:new_volume] 0-parser: New node for 'datastore' [2021-01-08 10:10:46.854586] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/debug/io-stats.so [2021-01-08 10:10:46.863334] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: datastore: method missing (dump_metrics) [2021-01-08 10:10:46.863380] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: datastore: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.863391] T 
[MSGID: 0] [graph.y:219:volume_type] 0-parser: Type:datastore:debug/io-stats [2021-01-08 10:10:46.863400] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore:global-threading:off [2021-01-08 10:10:46.863405] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore:count-fop-hits:off [2021-01-08 10:10:46.863410] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore:latency-measurement:off [2021-01-08 10:10:46.863422] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore:threads:16 [2021-01-08 10:10:46.863427] T [MSGID: 0] [graph.y:254:volume_option] 0-parser: Option:datastore:log-level:INFO [2021-01-08 10:10:46.863432] T [MSGID: 0] [graph.y:308:volume_sub] 0-parser: child:datastore->datastore-md-cache [2021-01-08 10:10:46.863436] T [MSGID: 0] [graph.y:325:volume_end] 0-parser: end:datastore [2021-01-08 10:10:46.863494] T [MSGID: 0] [xlator.c:369:xlator_dynload] 0-xlator: attempt to load file /usr/lib64/glusterfs/7.8/xlator/meta.so [2021-01-08 10:10:46.873360] T [MSGID: 0] [xlator.c:286:xlator_dynload_apis] 0-xlator: meta-autoload: method missing (reconfigure) [2021-01-08 10:10:46.873407] T [MSGID: 0] [xlator.c:290:xlator_dynload_apis] 0-xlator: meta-autoload: method missing (notify) [2021-01-08 10:10:46.873416] T [MSGID: 0] [xlator.c:294:xlator_dynload_apis] 0-xlator: meta-autoload: method missing (dumpops) [2021-01-08 10:10:46.873420] T [MSGID: 0] [xlator.c:305:xlator_dynload_apis] 0-xlator: meta-autoload: method missing (dump_metrics) [2021-01-08 10:10:46.873424] T [MSGID: 0] [xlator.c:313:xlator_dynload_apis] 0-xlator: meta-autoload: method missing (pass_through_fops), falling back to default [2021-01-08 10:10:46.873464] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore: no range check required for 'option threads 16' [2021-01-08 10:10:46.873597] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-meta-autoload: option meta-dir-name using default value .meta [2021-01-08 10:10:46.873739] T 
[MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-md-cache: option md-cache-timeout using default value 1 [2021-01-08 10:10:46.873769] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-selinux using default value false [2021-01-08 10:10:46.873815] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-capability-xattrs using default value true [2021-01-08 10:10:46.873826] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-ima-xattrs using default value true [2021-01-08 10:10:46.873832] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-posix-acl using default value false [2021-01-08 10:10:46.873837] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-glusterfs-acl using default value false [2021-01-08 10:10:46.873843] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-swift-metadata using default value true [2021-01-08 10:10:46.873848] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-samba-metadata using default value false [2021-01-08 10:10:46.873857] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option force-readdirp using default value true [2021-01-08 10:10:46.873863] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option cache-invalidation using default value false [2021-01-08 10:10:46.873870] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option global-cache-invalidation using default value true [2021-01-08 10:10:46.873884] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option pass-through using default value false [2021-01-08 10:10:46.873901] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-md-cache: option md-cache-statfs using default value off 
[2021-01-08 10:10:46.873915] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-md-cache: option xattr-cache-list using default value
[2021-01-08 10:10:46.874022] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-quick-read: option max-file-size using default value 64KB [2021-01-08 10:10:46.874084] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-quick-read: option cache-timeout using default value 1 [2021-01-08 10:10:46.874094] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-quick-read: no range check required for 'option cache-timeout 1' [2021-01-08 10:10:46.874109] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-quick-read: option cache-invalidation using default value false [2021-01-08 10:10:46.874124] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-quick-read: option cache-size using default value 128MB [2021-01-08 10:10:46.874180] D [MSGID: 0] [quick-read.c:1187:check_cache_size_ok] 0-datastore-quick-read: Max cache size is 3934281728 [2021-01-08 10:10:46.874191] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-quick-read: option ctime-invalidation using default value false [2021-01-08 10:10:46.874258] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-open-behind: option use-anonymous-fd using default value no [2021-01-08 10:10:46.874317] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-open-behind: option lazy-open using default value yes [2021-01-08 10:10:46.874334] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-open-behind: option read-after-open using default value yes [2021-01-08 10:10:46.874343] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-open-behind: option pass-through using default value false [2021-01-08 10:10:46.874366] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-io-cache: option pass-through using default value false [2021-01-08 10:10:46.874389] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-io-cache: option cache-size using default value 
32MB [2021-01-08 10:10:46.874397] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-io-cache: option cache-timeout using default value 1 [2021-01-08 10:10:46.874403] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-io-cache: option min-file-size using default value 0 [2021-01-08 10:10:46.874407] T [MSGID: 0] [options.c:141:xlator_option_validate_sizet] 0-datastore-io-cache: no range check required for 'option min-file-size 0' [2021-01-08 10:10:46.874412] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-io-cache: option max-file-size using default value 0 [2021-01-08 10:10:46.874416] T [MSGID: 0] [options.c:141:xlator_option_validate_sizet] 0-datastore-io-cache: no range check required for 'option max-file-size 0' [2021-01-08 10:10:46.874425] D [MSGID: 0] [io-cache.c:1669:check_cache_size_ok] 0-datastore-io-cache: Max cache size is 3934281728 [2021-01-08 10:10:46.874445] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-read-ahead: option page-size using default value 131072 [2021-01-08 10:10:46.874466] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-read-ahead: option page-count using default value 4 [2021-01-08 10:10:46.874510] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-read-ahead: option force-atime-update using default value false [2021-01-08 10:10:46.874553] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-read-ahead: option pass-through using default value false [2021-01-08 10:10:46.874616] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-write-behind: option aggregate-size using default value 128KB [2021-01-08 10:10:46.874659] T [MSGID: 0] [options.c:141:xlator_option_validate_sizet] 0-datastore-write-behind: no range check required for 'option aggregate-size 128KB' [2021-01-08 10:10:46.874671] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-write-behind: option 
cache-size using default value 1MB [2021-01-08 10:10:46.874678] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-write-behind: option flush-behind using default value on [2021-01-08 10:10:46.874683] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-write-behind: option trickling-writes using default value on [2021-01-08 10:10:46.874688] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-write-behind: option strict-O_DIRECT using default value off [2021-01-08 10:10:46.874737] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-write-behind: option strict-write-ordering using default value off [2021-01-08 10:10:46.874778] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-write-behind: option resync-failed-syncs-after-fsync using default value off [2021-01-08 10:10:46.874826] D [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-utime: option noatime using set value on [2021-01-08 10:10:46.874851] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option lookup-optimize using default value on [2021-01-08 10:10:46.874860] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option unhashed-sticky-bit using default value off [2021-01-08 10:10:46.874866] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option use-readdirp using default value on [2021-01-08 10:10:46.874879] T [MSGID: 0] [options.c:1216:xlator_option_init_percent_or_size] 0-datastore-dht: option min-free-disk using default value 10% [2021-01-08 10:10:46.874895] T [MSGID: 0] [options.c:1215:xlator_option_init_percent] 0-datastore-dht: option min-free-inodes using default value 5% [2021-01-08 10:10:46.874905] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-dht: option directory-layout-spread not set [2021-01-08 10:10:46.874914] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option assert-no-child-down using default 
value off [2021-01-08 10:10:46.874978] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option readdir-optimize using default value off [2021-01-08 10:10:46.875019] D [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option lock-migration using set value off [2021-01-08 10:10:46.875040] D [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option force-migration using set value off [2021-01-08 10:10:46.875133] D [MSGID: 0] [dht-shared.c:358:dht_init_regex] 0-datastore-dht: using regex rsync-hash-regex = ^.(.+).[^.]+$ [2021-01-08 10:10:46.875262] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option randomize-hash-range-by-gfid using default value off [2021-01-08 10:10:46.875306] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-dht: option xattr-name using default value trusted.glusterfs.dht [2021-01-08 10:10:46.875319] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-dht: option weighted-rebalance using default value on [2021-01-08 10:10:46.875418] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option arbiter-count not set [2021-01-08 10:10:46.875462] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option thin-arbiter not set [2021-01-08 10:10:46.875523] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option afr-dirty-xattr using default value trusted.afr.dirty [2021-01-08 10:10:46.875545] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option metadata-splitbrain-forced-heal using default value off [2021-01-08 10:10:46.875555] T [MSGID: 0] [options.c:1218:xlator_option_init_xlator] 0-datastore-replicate-0: option read-subvolume not set [2021-01-08 10:10:46.875567] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-replicate-0: option read-subvolume-index using default value -1 [2021-01-08 10:10:46.875583] T 
[MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-replicate-0: no range check required for 'option read-subvolume-index -1' [2021-01-08 10:10:46.875593] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option choose-local using default value true [2021-01-08 10:10:46.875612] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option read-hash-mode using default value 1 [2021-01-08 10:10:46.875640] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option favorite-child-policy using default value none [2021-01-08 10:10:46.875662] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option shd-max-threads using default value 1 [2021-01-08 10:10:46.875676] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option shd-wait-qlength using default value 1024 [2021-01-08 10:10:46.875683] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option background-self-heal-count using default value 8 [2021-01-08 10:10:46.875689] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option heal-wait-queue-length using default value 128 [2021-01-08 10:10:46.875695] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option data-self-heal using default value off [2021-01-08 10:10:46.875703] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option data-self-heal-algorithm not set [2021-01-08 10:10:46.875710] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option data-self-heal-window-size using default value 1 [2021-01-08 10:10:46.875718] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option metadata-self-heal using default value off [2021-01-08 10:10:46.875725] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option 
entry-self-heal using default value off [2021-01-08 10:10:46.875731] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option halo-shd-max-latency using default value 99999 [2021-01-08 10:10:46.875737] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option halo-max-latency using default value 5 [2021-01-08 10:10:46.875744] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option halo-max-replicas using default value 99999 [2021-01-08 10:10:46.875750] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option halo-min-replicas using default value 2 [2021-01-08 10:10:46.875757] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option halo-enabled using default value False [2021-01-08 10:10:46.875815] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option halo-nfsd-max-latency using default value 5 [2021-01-08 10:10:46.875829] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option iam-nfs-daemon using default value off [2021-01-08 10:10:46.875838] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option optimistic-change-log using default value on [2021-01-08 10:10:46.875892] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option pre-op-compat using default value on [2021-01-08 10:10:46.875910] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option locking-scheme using default value full [2021-01-08 10:10:46.875941] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option full-lock using default value yes [2021-01-08 10:10:46.876006] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option granular-entry-heal using default value no [2021-01-08 10:10:46.876019] T [MSGID: 0] 
[options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option eager-lock using default value on [2021-01-08 10:10:46.876034] T [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option quorum-type using default value none [2021-01-08 10:10:46.876142] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option quorum-count not set [2021-01-08 10:10:46.876159] T [MSGID: 0] [options.c:1214:xlator_option_init_size_uint64] 0-datastore-replicate-0: option self-heal-readdir-size using default value 1KB [2021-01-08 10:10:46.876175] T [MSGID: 0] [options.c:1211:xlator_option_init_uint32] 0-datastore-replicate-0: option post-op-delay-secs using default value 1 [2021-01-08 10:10:46.876258] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option ensure-durability using default value on [2021-01-08 10:10:46.876302] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option self-heal-daemon using default value on [2021-01-08 10:10:46.876335] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option iam-self-heal-daemon using default value off [2021-01-08 10:10:46.876346] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-replicate-0: option heal-timeout using default value 600 [2021-01-08 10:10:46.876358] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option consistent-metadata using default value no [2021-01-08 10:10:46.876370] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-replicate-0: option consistent-io using default value no [2021-01-08 10:10:46.876381] D [MSGID: 0] [options.c:1208:xlator_option_init_str] 0-datastore-replicate-0: option afr-pending-xattr using set value datastore-client-0,datastore-client-1 [2021-01-08 10:10:46.877202] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-1: option event-threads using default value 2 
[2021-01-08 10:10:46.877244] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-1: option frame-timeout using default value 1800 [2021-01-08 10:10:46.877254] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-1: option remote-port not set [2021-01-08 10:10:46.877260] D [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-1: option ping-timeout using set value 42 [2021-01-08 10:10:46.877266] D [MSGID: 0] [options.c:1219:xlator_option_init_path] 0-datastore-client-1: option remote-subvolume using set value /virtstack/storage/disk/datastore [2021-01-08 10:10:46.877278] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-client-1: option filter-O_DIRECT using default value disable [2021-01-08 10:10:46.877285] D [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-client-1: option send-gids using set value true [2021-01-08 10:10:46.877292] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-client-1: option testing.old-protocol using default value off [2021-01-08 10:10:46.877316] D [rpc-clnt.c:1016:rpc_clnt_connection_init] 0-datastore-client-1: defaulting frame-timeout to 30mins [2021-01-08 10:10:46.877320] D [rpc-clnt.c:1024:rpc_clnt_connection_init] 0-datastore-client-1: setting ping-timeout to 42 [2021-01-08 10:10:46.877332] D [rpc-transport.c:296:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/7.8/rpc-transport/socket.so [2021-01-08 10:10:46.877405] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-1: no range check required for 'option transport.tcp-user-timeout 0' [2021-01-08 10:10:46.877431] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-1: no range check required for 'option transport.socket.keepalive-time 20' [2021-01-08 10:10:46.877444] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-1: no range check required for 'option 
transport.socket.keepalive-interval 2' [2021-01-08 10:10:46.877453] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-1: no range check required for 'option transport.socket.keepalive-count 9' [2021-01-08 10:10:46.877466] D [socket.c:4633:socket_init] 0-datastore-client-1: Configued transport.tcp-user-timeout=0 [2021-01-08 10:10:46.877470] D [socket.c:4651:socket_init] 0-datastore-client-1: Reconfigued transport.keepalivecnt=9 [2021-01-08 10:10:46.877474] D [socket.c:4347:ssl_setup_connection_params] 0-datastore-client-1: SSL support on the I/O path is NOT enabled [2021-01-08 10:10:46.877478] D [socket.c:4350:ssl_setup_connection_params] 0-datastore-client-1: SSL support for glusterd is NOT enabled [2021-01-08 10:10:46.877484] D [rpc-clnt.c:1591:rpcclnt_cbk_program_register] 0-datastore-client-1: New program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2021-01-08 10:10:46.877530] D [MSGID: 0] [client.c:2627:client_init_rpc] 0-datastore-client-1: client init successful [2021-01-08 10:10:46.877589] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-0: option event-threads using default value 2 [2021-01-08 10:10:46.877638] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-0: option frame-timeout using default value 1800 [2021-01-08 10:10:46.877676] T [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-0: option remote-port not set [2021-01-08 10:10:46.877689] D [MSGID: 0] [options.c:1212:xlator_option_init_int32] 0-datastore-client-0: option ping-timeout using set value 42 [2021-01-08 10:10:46.877696] D [MSGID: 0] [options.c:1219:xlator_option_init_path] 0-datastore-client-0: option remote-subvolume using set value /virtstack/storage/disk/datastore [2021-01-08 10:10:46.877702] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-client-0: option filter-O_DIRECT using default value disable [2021-01-08 10:10:46.877709] D [MSGID: 0] 
[options.c:1217:xlator_option_init_bool] 0-datastore-client-0: option send-gids using set value true [2021-01-08 10:10:46.877716] T [MSGID: 0] [options.c:1217:xlator_option_init_bool] 0-datastore-client-0: option testing.old-protocol using default value off [2021-01-08 10:10:46.877724] D [rpc-clnt.c:1016:rpc_clnt_connection_init] 0-datastore-client-0: defaulting frame-timeout to 30mins [2021-01-08 10:10:46.877728] D [rpc-clnt.c:1024:rpc_clnt_connection_init] 0-datastore-client-0: setting ping-timeout to 42 [2021-01-08 10:10:46.877734] D [rpc-transport.c:296:rpc_transport_load] 0-rpc-transport: attempt to load file /usr/lib64/glusterfs/7.8/rpc-transport/socket.so [2021-01-08 10:10:46.877790] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-0: no range check required for 'option transport.tcp-user-timeout 0' [2021-01-08 10:10:46.877857] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-0: no range check required for 'option transport.socket.keepalive-time 20' [2021-01-08 10:10:46.877876] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-0: no range check required for 'option transport.socket.keepalive-interval 2' [2021-01-08 10:10:46.877886] T [MSGID: 0] [options.c:79:xlator_option_validate_int] 0-datastore-client-0: no range check required for 'option transport.socket.keepalive-count 9' [2021-01-08 10:10:46.877905] D [socket.c:4633:socket_init] 0-datastore-client-0: Configued transport.tcp-user-timeout=0 [2021-01-08 10:10:46.877910] D [socket.c:4651:socket_init] 0-datastore-client-0: Reconfigued transport.keepalivecnt=9 [2021-01-08 10:10:46.877914] D [socket.c:4347:ssl_setup_connection_params] 0-datastore-client-0: SSL support on the I/O path is NOT enabled [2021-01-08 10:10:46.877917] D [socket.c:4350:ssl_setup_connection_params] 0-datastore-client-0: SSL support for glusterd is NOT enabled [2021-01-08 10:10:46.877949] D [rpc-clnt.c:1591:rpcclnt_cbk_program_register] 0-datastore-client-0: New 
program registered: GlusterFS Callback, Num: 52743234, Ver: 1 [2021-01-08 10:10:46.877960] D [MSGID: 0] [client.c:2627:client_init_rpc] 0-datastore-client-0: client init successful [2021-01-08 10:10:46.878082] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.address-family' is not recognized [2021-01-08 10:10:46.878158] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.socket.ssl-enabled' is not recognized [2021-01-08 10:10:46.878187] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.tcp-user-timeout' is not recognized [2021-01-08 10:10:46.878243] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.socket.keepalive-time' is not recognized [2021-01-08 10:10:46.878313] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.socket.keepalive-interval' is not recognized [2021-01-08 10:10:46.878340] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-1: option 'transport.socket.keepalive-count' is not recognized [2021-01-08 10:10:46.878362] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.address-family' is not recognized [2021-01-08 10:10:46.878421] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.socket.ssl-enabled' is not recognized [2021-01-08 10:10:46.878437] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.tcp-user-timeout' is not recognized [2021-01-08 10:10:46.878446] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.socket.keepalive-time' is not recognized [2021-01-08 10:10:46.878492] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.socket.keepalive-interval' is not recognized [2021-01-08 
10:10:46.878510] D [MSGID: 101174] [graph.c:460:_log_if_unknown_option] 0-datastore-client-0: option 'transport.socket.keepalive-count' is not recognized [2021-01-08 10:10:46.878520] I [MSGID: 104045] [glfs-master.c:80:notify] 0-gfapi: New graph 6e6f6465-302d-3636-3536-38302d323032 (0) coming up [2021-01-08 10:10:46.878540] D [MSGID: 0] [afr-common.c:5016:afr_launch_notify_timer] 0-datastore-replicate-0: Initiating child-down timer [2021-01-08 10:10:46.878603] I [MSGID: 114020] [client.c:2436:notify] 0-datastore-client-0: parent translators are ready, attempting connect on transport [2021-01-08 10:10:46.878622] T [rpc-clnt.c:398:rpc_clnt_reconnect] 0-datastore-client-0: attempting reconnect [2021-01-08 10:10:46.878632] T [socket.c:3474:socket_connect] 0-datastore-client-0: connecting 0x7fffe004c5f0, sock=-1 [2021-01-08 10:10:46.878642] T [name.c:239:af_inet_client_get_remote_sockaddr] 0-datastore-client-0: option remote-port missing in volume datastore-client-0. Defaulting to 24007 [2021-01-08 10:10:46.880715] D [socket.c:3392:socket_fix_ssl_opts] 0-datastore-client-0: disabling SSL for portmapper connection [2021-01-08 10:10:46.880760] T [socket.c:1045:socket_nodelay] 0-datastore-client-0: NODELAY enabled for socket 10 [2021-01-08 10:10:46.880771] T [socket.c:1134:socket_keepalive] 0-datastore-client-0: Keep-alive enabled for socket: 10, (idle: 20, interval: 2, max-probes: 9, timeout: 0) [2021-01-08 10:10:46.880804] T [socket.c:3589:socket_connect] 0-datastore-client-0: >>> connect() with non-blocking IO for ALL [2021-01-08 10:10:46.880813] T [socket.c:226:socket_dump_info] 0-datastore-client-0: $$$ client: connecting to (af:2,sock:10) 172.16.88.30 non-SSL (errno:0:Success) [2021-01-08 10:10:46.880908] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. 
About to flush least recently used log message to disk [2021-01-08 10:10:46.880713] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-172.16.88.30 (port-24007) for hostname: node0 and port: 24007 [2021-01-08 10:10:46.880906] I [MSGID: 114020] [client.c:2436:notify] 0-datastore-client-1: parent translators are ready, attempting connect on transport [2021-01-08 10:10:46.880993] T [rpc-clnt.c:398:rpc_clnt_reconnect] 0-datastore-client-1: attempting reconnect [2021-01-08 10:10:46.881003] T [socket.c:3474:socket_connect] 0-datastore-client-1: connecting 0x7fffe0048bf0, sock=-1 [2021-01-08 10:10:46.881008] T [name.c:239:af_inet_client_get_remote_sockaddr] 0-datastore-client-1: option remote-port missing in volume datastore-client-1. Defaulting to 24007 [2021-01-08 10:10:46.881013] T [MSGID: 0] [common-utils.c:491:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: node1 [2021-01-08 10:10:46.880954] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:10) in:0, out:4, err:0 [2021-01-08 10:10:46.881045] T [socket.c:3007:socket_event_handler] 0-datastore-client-0: client (sock:10) socket is not connected, completing connection [2021-01-08 10:10:46.881119] D [MSGID: 0] [client.c:2323:client_rpc_notify] 0-datastore-client-0: got RPC_CLNT_CONNECT [2021-01-08 10:10:46.881165] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 72, payload: 8, rpc hdr: 64 [2021-01-08 10:10:46.881244] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 1, XID: 0x2, Program: GF-DUMP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.881480] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /lib64/libgfrpc.so.0(+0x130c8)[0x7ffff73aa0c8] (--> /lib64/libgfrpc.so.0(+0x138c1)[0x7ffff73aa8c1] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x338)[0x7ffff73a6b18] (--> 
/usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x1381d)[0x7fffed20781d] ))))) 0-: 172.16.88.30:24007: ping timer event already removed
[2021-01-08 10:10:46.881525] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 64, payload: 0, rpc hdr: 64
[2021-01-08 10:10:46.881580] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 2, XID: 0x3, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (datastore-client-0)
[2021-01-08 10:10:46.881636] T [socket.c:2905:socket_handle_client_connection_attempt] 0-datastore-client-0: socket_connect_finish() returned 0
[2021-01-08 10:10:46.881648] T [socket.c:3014:socket_event_handler] 0-datastore-client-0: (sock:10) socket_complete_connection() returned 1
[2021-01-08 10:10:46.881655] T [socket.c:3018:socket_event_handler] 0-datastore-client-0: (sock:10) returning to wait on socket
[2021-01-08 10:10:46.881662] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:10) in:1, out:4, err:0
[2021-01-08 10:10:46.881667] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (10) is already connected
[2021-01-08 10:10:46.881674] T [socket.c:3036:socket_event_handler] 0-datastore-client-0: (sock:10) socket_event_poll_out returned 0
[2021-01-08 10:10:46.881680] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.881692] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.881714] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x2 Program: GF-DUMP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-0)
[2021-01-08 10:10:46.881726] D [MSGID: 0] [client-handshake.c:1398:server_has_portmap] 0-datastore-client-0: detected portmapper on server
[2021-01-08 10:10:46.881753] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 104, payload: 40, rpc hdr: 64
[2021-01-08 10:10:46.881800] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 3, XID: 0x4, Program: PORTMAP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-0)
[2021-01-08 10:10:46.881838] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:10) socket_event_poll_in returned 0
[2021-01-08 10:10:46.881854] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:10) in:1, out:0, err:0
[2021-01-08 10:10:46.881859] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (10) is already connected
[2021-01-08 10:10:46.881864] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.881875] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.881888] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x3 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (datastore-client-0)
[2021-01-08 10:10:46.881895] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-datastore-client-0: Ping latency is 0ms
[2021-01-08 10:10:46.881950] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:10) socket_event_poll_in returned 0
[2021-01-08 10:10:46.881971] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:10) in:1, out:0, err:0
[2021-01-08 10:10:46.881979] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (10) is already connected
[2021-01-08 10:10:46.881986] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.881996] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL
[2021-01-08 10:10:46.882011] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x4 Program: PORTMAP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-0)
[2021-01-08 10:10:46.882025] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-datastore-client-0: changing port to 49152 (from 0)
[2021-01-08 10:10:46.882034] T [socket.c:902:socket_disconnect] 0-datastore-client-0: disconnecting 0x7fffe004c5f0, sock=10
[2021-01-08 10:10:46.882219] T [socket.c:906:socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /usr/lib64/glusterfs/7.8/rpc-transport/socket.so(+0x5f2f)[0x7fffee54cf2f] (--> /usr/lib64/glusterfs/7.8/rpc-transport/socket.so(+0x6523)[0x7fffee54d523] (--> /usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x33c68)[0x7fffed227c68] (--> /lib64/libgfrpc.so.0(+0xf315)[0x7ffff73a6315] ))))) 0-datastore-client-0: tearing down socket connection
[2021-01-08 10:10:46.882316] I [socket.c:865:socket_shutdown] 0-datastore-client-0: intentional socket shutdown(10)
[2021-01-08 10:10:46.882338] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:10) socket_event_poll_in returned 0
[2021-01-08 10:10:46.882351] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:10) in:1, out:0, err:16
[2021-01-08 10:10:46.882367] T [socket.c:226:socket_dump_info] 0-datastore-client-0: $$$ client: disconnecting from (af:2,sock:10) 172.16.88.30 non-SSL (errno:0:Success)
[2021-01-08 10:10:46.882379] D [socket.c:3062:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:10) (non-SSL)
[2021-01-08 10:10:46.882392] D [MSGID: 0] [client.c:2334:client_rpc_notify] 0-datastore-client-0: got RPC_CLNT_DISCONNECT
[2021-01-08 10:10:46.882408] D [MSGID: 0] [client.c:2394:client_rpc_notify] 0-datastore-client-0: disconnected (skipped notify)
[2021-01-08 10:10:46.882422] T [rpc-clnt.c:398:rpc_clnt_reconnect] 0-datastore-client-0: attempting reconnect
[2021-01-08 10:10:46.882430] T [socket.c:3474:socket_connect] 0-datastore-client-0: connecting 0x7fffe004c5f0, sock=-1
[2021-01-08 10:10:46.882440] T [name.c:239:af_inet_client_get_remote_sockaddr] 0-datastore-client-0: option remote-port missing in volume datastore-client-0. Defaulting to 24007
[2021-01-08 10:10:46.882457] T [MSGID: 0] [common-utils.c:483:gf_resolve_ip6] 0-resolver: flushing DNS cache
[2021-01-08 10:10:46.884070] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [common-utils.c:491:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: node0" repeated 2 times between [2021-01-08 10:10:46.727680] and [2021-01-08 10:10:46.882468]
[2021-01-08 10:10:46.884068] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-172.16.88.31 (port-24007) for hostname: node1 and port: 24007
[2021-01-08 10:10:46.884130] D [socket.c:3392:socket_fix_ssl_opts] 0-datastore-client-1: disabling SSL for portmapper connection
[2021-01-08 10:10:46.884145] T [socket.c:1045:__socket_nodelay] 0-datastore-client-1: NODELAY enabled for socket 11
[2021-01-08 10:10:46.884153] T [socket.c:1134:socket_keepalive] 0-datastore-client-1: Keep-alive enabled for socket: 11, (idle: 20, interval: 2, max-probes: 9, timeout: 0)
[2021-01-08 10:10:46.884193] T [socket.c:3589:socket_connect] 0-datastore-client-1: >>> connect() with non-blocking IO for ALL
[2021-01-08 10:10:46.884220] T [socket.c:226:socket_dump_info] 0-datastore-client-1: $$$ client: connecting to (af:2,sock:11) 172.16.88.31 non-SSL
(errno:0:Success) Final graph: +------------------------------------------------------------------------------+ 1: volume datastore-client-0 2: type protocol/client 3: option ping-timeout 42 4: option remote-host node0 5: option remote-subvolume /virtstack/storage/disk/datastore 6: option transport-type socket 7: option transport.address-family inet 8: option username ddf8d1fc-6bd7-4e5b-a791-25d11479fe50 9: option password 4377c65b-4052-4133-8066-d2edfc9281d1 10: option transport.socket.ssl-enabled off 11: option transport.tcp-user-timeout 0 12: option transport.socket.keepalive-time 20 13: option transport.socket.keepalive-interval 2 14: option transport.socket.keepalive-count 9 15: option send-gids true 16: end-volume 17:
18: volume datastore-client-1 19: type protocol/client 20: option ping-timeout 42 21: option remote-host node1 22: option remote-subvolume /virtstack/storage/disk/datastore 23: option transport-type socket 24: option transport.address-family inet 25: option username ddf8d1fc-6bd7-4e5b-a791-25d11479fe50 26: option password 4377c65b-4052-4133-8066-d2edfc9281d1 27: option transport.socket.ssl-enabled off 28: option transport.tcp-user-timeout 0 29: option transport.socket.keepalive-time 20 30: option transport.socket.keepalive-interval 2 31: option transport.socket.keepalive-count 9 [2021-01-08 10:10:46.884710] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-172.16.88.30 (port-24007) for hostname: node0 and port: 24007 32: option send-gids true 33: end-volume 34:
35: volume datastore-replicate-0 36: type cluster/replicate 37: option afr-pending-xattr datastore-client-0,datastore-client-1 38: option use-compound-fops off 39: subvolumes datastore-client-0 datastore-client-1 40: end-volume 41:
42: volume datastore-dht 43: type cluster/distribute 44: option lock-migration off 45: option force-migration off [2021-01-08 10:10:46.884753] T [socket.c:1045:__socket_nodelay] 0-datastore-client-0: NODELAY enabled for socket 12 [2021-01-08 10:10:46.884919] T [socket.c:1134:socket_keepalive] 0-datastore-client-0: Keep-alive enabled for socket: 12, (idle: 20, interval: 2, max-probes: 9, timeout: 0) [2021-01-08 10:10:46.885010] T [socket.c:3589:socket_connect] 0-datastore-client-0: >>> connect() with non-blocking IO for ALL [2021-01-08 10:10:46.885056] T [socket.c:226:socket_dump_info] 0-datastore-client-0: $$$ client: connecting to (af:2,sock:12) 172.16.88.30 non-SSL (errno:0:Success) 46: subvolumes datastore-replicate-0 47: end-volume 48:
49: volume datastore-utime 50: type features/utime 51: option noatime on 52: subvolumes datastore-dht 53: end-volume 54:
55: volume datastore-write-behind 56: type performance/write-behind [2021-01-08 10:10:46.885181] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:11) in:0, out:4, err:0 57: subvolumes datastore-utime 58: end-volume 59:
[2021-01-08 10:10:46.885452] T [socket.c:3007:socket_event_handler] 0-datastore-client-1: client (sock:11) socket is not connected, completing connection [2021-01-08 10:10:46.885546] D [MSGID: 0] [client.c:2323:client_rpc_notify] 0-datastore-client-1: got RPC_CLNT_CONNECT [2021-01-08 10:10:46.885578] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 72, payload: 8, rpc hdr: 64 [2021-01-08 10:10:46.885667] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 4, XID: 0x2, Program: GF-DUMP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-1) 60: volume datastore-read-ahead 61: type performance/read-ahead 62: subvolumes datastore-write-behind 63: end-volume 64:
65: volume datastore-io-cache 66: type performance/io-cache 67: subvolumes datastore-read-ahead 68: end-volume 69:
70: volume datastore-open-behind 71: type performance/open-behind 72: subvolumes datastore-io-cache 73: end-volume 74:
75: volume datastore-quick-read 76: type performance/quick-read 77: subvolumes datastore-open-behind 78: end-volume 79:
80: volume datastore-md-cache 81: type performance/md-cache 82: subvolumes datastore-quick-read 83: end-volume 84:
85: volume datastore 86: type debug/io-stats 87: option log-level INFO 88: option threads 16 89: option latency-measurement off 90: option count-fop-hits off 91: option global-threading off 92: subvolumes datastore-md-cache [2021-01-08 10:10:46.886059] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /lib64/libgfrpc.so.0(+0x130c8)[0x7ffff73aa0c8] (--> /lib64/libgfrpc.so.0(+0x138c1)[0x7ffff73aa8c1] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x338)[0x7ffff73a6b18] (--> /usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x1381d)[0x7fffed20781d] ))))) 0-: 172.16.88.31:24007: ping timer event already removed 93: end-volume 94:
95: volume meta-autoload 96: type meta 97: subvolumes datastore 98: end-volume [2021-01-08 10:10:46.886325] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 64, payload: 0, rpc hdr: 64 99:
+------------------------------------------------------------------------------+ [2021-01-08 10:10:46.886483] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 5, XID: 0x3, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.886500] T [socket.c:2905:socket_handle_client_connection_attempt] 0-datastore-client-1: socket_connect_finish() returned 0 [2021-01-08 10:10:46.886461] T [socket.c:3044:socket_event_handler] 0-gfapi: (sock:8) socket_event_poll_in returned 0 [2021-01-08 10:10:46.886511] T [socket.c:3014:socket_event_handler] 0-datastore-client-1: (sock:11) socket_complete_connection() returned 1 [2021-01-08 10:10:46.886710] T [socket.c:3018:socket_event_handler] 0-datastore-client-1: (sock:11) returning to wait on socket [2021-01-08 10:10:46.886724] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:11) in:1, out:4, err:0 [2021-01-08 10:10:46.886730] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (11) is already connected [2021-01-08 10:10:46.886738] T [socket.c:3036:socket_event_handler] 0-datastore-client-1: (sock:11) socket_event_poll_out returned 0 [2021-01-08 10:10:46.886744] T [socket.c:574:
__socket_ssl_readv] 0-datastore-client-1: *****
reading over non-SSL [2021-01-08 10:10:46.886756] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.886776] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x2 Program: GF-DUMP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.886788] D [MSGID: 0] [client-handshake.c:1398:server_has_portmap] 0-datastore-client-1: detected portmapper on server [2021-01-08 10:10:46.886813] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 104, payload: 40, rpc hdr: 64 [2021-01-08 10:10:46.886790] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:0, out:4, err:0 [2021-01-08 10:10:46.886836] T [socket.c:3007:socket_event_handler] 0-datastore-client-0: client (sock:12) socket is not connected, completing connection [2021-01-08 10:10:46.886886] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 6, XID: 0x4, Program: PORTMAP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.887258] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:11) socket_event_poll_in returned 0 [2021-01-08 10:10:46.887301] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:11) in:1, out:0, err:0 [2021-01-08 10:10:46.887309] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (11) is already connected [2021-01-08 10:10:46.887325] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.887333] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.887255] D [MSGID: 0] [client.c:2323:client_rpc_notify] 0-datastore-client-0: got RPC_CLNT_CONNECT [2021-01-08 10:10:46.887396] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 72, payload: 8, rpc hdr: 64 [2021-01-08 10:10:46.887346] T 
[rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x3 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.887487] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-datastore-client-1: Ping latency is 1ms [2021-01-08 10:10:46.887506] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:11) socket_event_poll_in returned 0 [2021-01-08 10:10:46.887557] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:11) in:1, out:0, err:0 [2021-01-08 10:10:46.887566] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (11) is already connected [2021-01-08 10:10:46.887573] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.887584] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.887600] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 7, XID: 0x5, Program: GF-DUMP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.887607] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x4 Program: PORTMAP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.887625] I [rpc-clnt.c:1963:rpc_clnt_reconfig] 0-datastore-client-1: changing port to 49152 (from 0) [2021-01-08 10:10:46.887635] T [socket.c:902:__socket_disconnect] 0-datastore-client-1: disconnecting 0x7fffe0048bf0, sock=11 [2021-01-08 10:10:46.887939] T [socket.c:906:socket_disconnect] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /usr/lib64/glusterfs/7.8/rpc-transport/socket.so(+0x5f2f)[0x7fffee54cf2f] (--> /usr/lib64/glusterfs/7.8/rpc-transport/socket.so(+0x6523)[0x7fffee54d523] (--> /usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x33c68)[0x7fffed227c68] (--> /lib64/libgfrpc.so.0(+0xf315)[0x7ffff73a6315] ))))) 0-datastore-client-1: 
tearing down socket connection [2021-01-08 10:10:46.888073] I [socket.c:865:socket_shutdown] 0-datastore-client-1: intentional socket shutdown(11) [2021-01-08 10:10:46.888092] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:11) socket_event_poll_in returned 0 [2021-01-08 10:10:46.888091] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /lib64/libgfrpc.so.0(+0x130c8)[0x7ffff73aa0c8] (--> /lib64/libgfrpc.so.0(+0x138c1)[0x7ffff73aa8c1] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x338)[0x7ffff73a6b18] (--> /usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x1381d)[0x7fffed20781d] ))))) 0-: 172.16.88.30:49152: ping timer event already removed [2021-01-08 10:10:46.888102] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:11) in:1, out:0, err:16 [2021-01-08 10:10:46.888170] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 64, payload: 0, rpc hdr: 64 [2021-01-08 10:10:46.888238] T [socket.c:226:socket_dump_info] 0-datastore-client-1: $$$ client: disconnecting from (af:2,sock:11) 172.16.88.31 non-SSL (errno:0:Success) [2021-01-08 10:10:46.888315] D [socket.c:3062:socket_event_handler] 0-transport: EPOLLERR - disconnecting (sock:11) (non-SSL) [2021-01-08 10:10:46.888317] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 8, XID: 0x6, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.888496] D [MSGID: 0] [client.c:2334:client_rpc_notify] 0-datastore-client-1: got RPC_CLNT_DISCONNECT [2021-01-08 10:10:46.888546] D [MSGID: 0] [client.c:2394:client_rpc_notify] 0-datastore-client-1: disconnected (skipped notify) [2021-01-08 10:10:46.888566] T [rpc-clnt.c:398:rpc_clnt_reconnect] 0-datastore-client-1: attempting reconnect [2021-01-08 10:10:46.888576] T [socket.c:3474:socket_connect] 0-datastore-client-1: connecting 0x7fffe0048bf0, sock=-1 [2021-01-08 
10:10:46.888588] T [name.c:239:af_inet_client_get_remote_sockaddr] 0-datastore-client-1: option remote-port missing in volume datastore-client-1. Defaulting to 24007 [2021-01-08 10:10:46.888601] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk [2021-01-08 10:10:46.888599] T [MSGID: 0] [common-utils.c:483:gf_resolve_ip6] 0-resolver: flushing DNS cache [2021-01-08 10:10:46.888600] T [MSGID: 0] [common-utils.c:491:gf_resolve_ip6] 0-resolver: DNS cache not present, freshly probing hostname: node1 [2021-01-08 10:10:46.888739] T [socket.c:2905:socket_handle_client_connection_attempt] 0-datastore-client-0: socket_connect_finish() returned 0 [2021-01-08 10:10:46.888777] T [socket.c:3014:socket_event_handler] 0-datastore-client-0: (sock:12) socket_complete_connection() returned 1 [2021-01-08 10:10:46.888787] T [socket.c:3018:socket_event_handler] 0-datastore-client-0: (sock:12) returning to wait on socket [2021-01-08 10:10:46.888870] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:4, err:0 [2021-01-08 10:10:46.889093] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (12) is already connected [2021-01-08 10:10:46.889217] T [socket.c:3036:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_out returned 0 [2021-01-08 10:10:46.889253] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.889269] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.889285] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x5 Program: GF-DUMP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.889330] I [MSGID: 114057] [client-handshake.c:1375:select_server_supported_programs] 0-datastore-client-0: Using Program GlusterFS 4.x v1, Num 
(1298437), Version (400) [2021-01-08 10:10:46.889423] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 940, payload: 860, rpc hdr: 80 [2021-01-08 10:10:46.889788] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 9, XID: 0x7, Program: GlusterFS Handshake, ProgVers: 2, Proc: 1) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.889830] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.890013] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:0, err:0 [2021-01-08 10:10:46.890027] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (12) is already connected [2021-01-08 10:10:46.890035] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.890047] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL [2021-01-08 10:10:46.890067] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x6 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.890102] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-datastore-client-0: Ping latency is 1ms [2021-01-08 10:10:46.890119] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.890519] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:0, err:0 [2021-01-08 10:10:46.890602] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (12) is already connected [2021-01-08 10:10:46.890615] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.890623] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.890638] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received 
rpc message (RPC XID: 0x7 Program: GlusterFS Handshake, ProgVers: 2, Proc: 1) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.890667] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-datastore-client-0: Connected to datastore-client-0, attached to remote volume '/virtstack/storage/disk/datastore'. [2021-01-08 10:10:46.890677] D [MSGID: 0] [client-handshake.c:945:client_post_handshake] 0-datastore-client-0: No fds to open - notifying all parents child up [2021-01-08 10:10:46.890685] D [MSGID: 0] [afr-common.c:5147:afr_get_halo_latency] 0-datastore-replicate-0: Using halo latency 5 [2021-01-08 10:10:46.890690] I [MSGID: 108005] [afr-common.c:5231:afr_handle_child_up_event] 0-datastore-replicate-0: Subvolume 'datastore-client-0' came back up; going online. [2021-01-08 10:10:46.891167] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.891451] D [MSGID: 0] [common-utils.c:532:gf_resolve_ip6] 0-resolver: returning ip-172.16.88.31 (port-24007) for hostname: node1 and port: 24007 [2021-01-08 10:10:46.891532] T [socket.c:1045:__socket_nodelay] 0-datastore-client-1: NODELAY enabled for socket 9 [2021-01-08 10:10:46.891581] T [socket.c:1134:socket_keepalive] 0-datastore-client-1: Keep-alive enabled for socket: 9, (idle: 20, interval: 2, max-probes: 9, timeout: 0) [2021-01-08 10:10:46.891647] T [socket.c:3589:socket_connect] 0-datastore-client-1: >>> connect() with non-blocking IO for ALL [2021-01-08 10:10:46.891671] T [socket.c:226:socket_dump_info] 0-datastore-client-1: $$$ client: connecting to (af:2,sock:9) 172.16.88.31 non-SSL (errno:0:Success) [2021-01-08 10:10:46.892176] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:0, out:4, err:0 [2021-01-08 10:10:46.892207] T [socket.c:3007:socket_event_handler] 0-datastore-client-1: client (sock:9) socket is not connected, completing connection [2021-01-08 10:10:46.892230] D [MSGID: 0] 
[client.c:2323:client_rpc_notify] 0-datastore-client-1: got RPC_CLNT_CONNECT [2021-01-08 10:10:46.892258] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 72, payload: 8, rpc hdr: 64 [2021-01-08 10:10:46.892359] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 10, XID: 0x5, Program: GF-DUMP, ProgVers: 1, Proc: 1) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.892590] D [rpc-clnt-ping.c:96:rpc_clnt_remove_ping_timer_locked] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x133)[0x7ffff6e8da93] (--> /lib64/libgfrpc.so.0(+0x130c8)[0x7ffff73aa0c8] (--> /lib64/libgfrpc.so.0(+0x138c1)[0x7ffff73aa8c1] (--> /lib64/libgfrpc.so.0(rpc_clnt_submit+0x338)[0x7ffff73a6b18] (--> /usr/lib64/glusterfs/7.8/xlator/protocol/client.so(+0x1381d)[0x7fffed20781d] ))))) 0-: 172.16.88.31:49152: ping timer event already removed [2021-01-08 10:10:46.892643] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 64, payload: 0, rpc hdr: 64 [2021-01-08 10:10:46.892747] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 11, XID: 0x6, Program: GF-DUMP, ProgVers: 1, Proc: 2) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.892768] T [socket.c:2905:socket_handle_client_connection_attempt] 0-datastore-client-1: socket_connect_finish() returned 0 [2021-01-08 10:10:46.892801] T [socket.c:3014:socket_event_handler] 0-datastore-client-1: (sock:9) socket_complete_connection() returned 1 [2021-01-08 10:10:46.892819] T [socket.c:3018:socket_event_handler] 0-datastore-client-1: (sock:9) returning to wait on socket [2021-01-08 10:10:46.892832] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:4, err:0 [2021-01-08 10:10:46.892861] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected [2021-01-08 10:10:46.892871] T [socket.c:3036:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_out 
returned 0 [2021-01-08 10:10:46.893137] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.893155] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.893195] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x5 Program: GF-DUMP, ProgVers: 1, Proc: 1) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.893300] I [MSGID: 114057] [client-handshake.c:1375:select_server_supported_programs] 0-datastore-client-1: Using Program GlusterFS 4.x v1, Num (1298437), Version (400) [2021-01-08 10:10:46.893216] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:0, err:0 [2021-01-08 10:10:46.893422] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected [2021-01-08 10:10:46.893433] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.893446] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.893453] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 940, payload: 860, rpc hdr: 80 [2021-01-08 10:10:46.893466] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x6 Program: GF-DUMP, ProgVers: 1, Proc: 2) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.893475] D [rpc-clnt-ping.c:195:rpc_clnt_ping_cbk] 0-datastore-client-1: Ping latency is 0ms [2021-01-08 10:10:46.893601] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 12, XID: 0x7, Program: GlusterFS Handshake, ProgVers: 2, Proc: 1) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.893678] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_in returned 0 [2021-01-08 10:10:46.893692] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) 
socket_event_poll_in returned 0 [2021-01-08 10:10:46.894671] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:0, err:0 [2021-01-08 10:10:46.894731] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected [2021-01-08 10:10:46.894744] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.894760] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.894786] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x7 Program: GlusterFS Handshake, ProgVers: 2, Proc: 1) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.894814] I [MSGID: 114046] [client-handshake.c:1105:client_setvolume_cbk] 0-datastore-client-1: Connected to datastore-client-1, attached to remote volume '/virtstack/storage/disk/datastore'. [2021-01-08 10:10:46.894827] D [MSGID: 0] [client-handshake.c:945:client_post_handshake] 0-datastore-client-1: No fds to open - notifying all parents child up [2021-01-08 10:10:46.894839] D [MSGID: 0] [afr-common.c:5147:afr_get_halo_latency] 0-datastore-replicate-0: Using halo latency 5 [2021-01-08 10:10:46.894874] T [MSGID: 0] [dht-diskusage.c:136:dht_get_du_info_for_subvol] 0-stack-trace: stack-address: 0x7fffe4002cd8, winding from datastore-dht to datastore-replicate-0 [2021-01-08 10:10:46.895005] T [MSGID: 0] [afr-common.c:4369:afr_statfs] 0-stack-trace: stack-address: 0x7fffe4002cd8, winding from datastore-replicate-0 to datastore-client-0 [2021-01-08 10:10:46.895069] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 108, payload: 28, rpc hdr: 80 [2021-01-08 10:10:46.895171] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 13, XID: 0x8, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.895218] T [MSGID: 0] [afr-common.c:4369:afr_statfs] 
0-stack-trace: stack-address: 0x7fffe4002cd8, winding from datastore-replicate-0 to datastore-client-1 [2021-01-08 10:10:46.895236] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 108, payload: 28, rpc hdr: 80 [2021-01-08 10:10:46.895351] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 13, XID: 0x8, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.895857] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:0, err:0 [2021-01-08 10:10:46.895926] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (12) is already connected [2021-01-08 10:10:46.895939] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL [2021-01-08 10:10:46.895961] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.895986] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x8 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.895998] T [MSGID: 0] [client-rpc-fops_v2.c:626:client4_0_statfs_cbk] 0-stack-trace: stack-address: 0x7fffe4002cd8, datastore-client-0 returned 0 [2021-01-08 10:10:46.896018] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.896232] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_in returned 0 [2021-01-08 10:10:46.896290] T [MSGID: 0] [syncop.c:1136:syncop_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from gfapi to meta-autoload [2021-01-08 10:10:46.896364] T [MSGID: 0] [io-stats.c:2699:io_stats_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore to datastore-md-cache [2021-01-08 10:10:46.896389] T [MSGID: 0] [md-cache.c:631:mdc_inode_iatt_get] 0-md-cache: mdc_inode_ctx_get failed 
(00000000-0000-0000-0000-000000000001) [2021-01-08 10:10:46.896418] T [MSGID: 0] [md-cache.c:1324:mdc_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-md-cache to datastore-quick-read [2021-01-08 10:10:46.896458] T [MSGID: 0] [quick-read.c:671:qr_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-quick-read to datastore-open-behind [2021-01-08 10:10:46.896476] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-open-behind to datastore-io-cache [2021-01-08 10:10:46.896428] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:0, err:0 [2021-01-08 10:10:46.896507] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected [2021-01-08 10:10:46.896517] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.896532] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.896528] T [MSGID: 0] [io-cache.c:347:ioc_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-io-cache to datastore-read-ahead [2021-01-08 10:10:46.896608] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-read-ahead to datastore-write-behind [2021-01-08 10:10:46.896562] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x8 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 14) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.896625] T [MSGID: 0] [write-behind.c:2481:wb_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-write-behind to datastore-utime [2021-01-08 10:10:46.896733] T [MSGID: 0] [utime.c:261:gf_utime_lookup] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-utime to datastore-dht [2021-01-08 10:10:46.896760] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: 
stack-address: 0x6d4348, winding from datastore-dht to datastore-replicate-0 [2021-01-08 10:10:46.896805] T [MSGID: 0] [client-rpc-fops_v2.c:626:client4_0_statfs_cbk] 0-stack-trace: stack-address: 0x7fffe4002cd8, datastore-client-1 returned 0 [2021-01-08 10:10:46.896994] T [MSGID: 0] [afr-common.c:4332:afr_statfs_cbk] 0-stack-trace: stack-address: 0x7fffe4002cd8, datastore-replicate-0 returned 0 [2021-01-08 10:10:46.897025] D [MSGID: 0] [dht-diskusage.c:90:dht_du_info_cbk] 0-datastore-dht: subvolume 'datastore-replicate-0': avail_percent is: 94.00 and avail_space is: 4993212416 and avail_inodes is: 99.00 [2021-01-08 10:10:46.896810] T [MSGID: 0] [afr-common.c:3178:afr_discover_do] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-replicate-0 to datastore-client-0 [2021-01-08 10:10:46.897065] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_in returned 0 [2021-01-08 10:10:46.897121] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 656, payload: 572, rpc hdr: 84 [2021-01-08 10:10:46.897200] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 14, XID: 0x9, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.897235] T [MSGID: 0] [afr-common.c:3178:afr_discover_do] 0-stack-trace: stack-address: 0x6d4348, winding from datastore-replicate-0 to datastore-client-1 [2021-01-08 10:10:46.897267] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 656, payload: 572, rpc hdr: 84 [2021-01-08 10:10:46.897386] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 14, XID: 0x9, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (datastore-client-1) [2021-01-08 10:10:46.897825] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:0, err:0 [2021-01-08 10:10:46.897925] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: 
Client socket (12) is already connected [2021-01-08 10:10:46.897950] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.897964] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.897988] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received rpc message (RPC XID: 0x9 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.898016] T [MSGID: 0] [client-rpc-fops_v2.c:2641:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-client-0 returned 0 [2021-01-08 10:10:46.898034] T [MSGID: 0] [afr-common.c:2560:afr_attempt_local_discovery] 0-stack-trace: stack-address: 0x7fffe4002cd8, winding from datastore-replicate-0 to datastore-client-0 [2021-01-08 10:10:46.898060] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 144, payload: 64, rpc hdr: 80 [2021-01-08 10:10:46.898139] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 15, XID: 0xa, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (datastore-client-0) [2021-01-08 10:10:46.898161] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.898521] T [socket.c:3000:socket_event_handler] 0-datastore-client-0: client (sock:12) in:1, out:0, err:0 [2021-01-08 10:10:46.898562] T [socket.c:3026:socket_event_handler] 0-datastore-client-0: Client socket (12) is already connected [2021-01-08 10:10:46.898574] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: ***** reading over non-SSL [2021-01-08 10:10:46.898586] T [socket.c:574:socket_ssl_readv] 0-datastore-client-0: reading over non-SSL [2021-01-08 10:10:46.898587] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:0, err:0 [2021-01-08 10:10:46.898605] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-0: received 
rpc message (RPC XID: 0xa Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (datastore-client-0) [2021-01-08 10:10:46.898634] D [MSGID: 0] [client-rpc-fops_v2.c:926:client4_0_getxattr_cbk] 0-datastore-client-0: resetting op_ret to 0 from 83 [2021-01-08 10:10:46.898606] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected [2021-01-08 10:10:46.898665] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL [2021-01-08 10:10:46.898680] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: ***** reading over non-SSL [2021-01-08 10:10:46.898647] T [MSGID: 0] [client-rpc-fops_v2.c:930:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x7fffe4002cd8, datastore-client-0 returned 0 [2021-01-08 10:10:46.898703] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0x9 Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) from rpc-transport (datastore-client-1) [2021-01-08 10:10:46.898707] I [MSGID: 108031] [afr-common.c:2533:afr_local_discovery_cbk] 0-datastore-replicate-0: selecting local read_child datastore-client-0 [2021-01-08 10:10:46.898761] T [socket.c:3044:socket_event_handler] 0-datastore-client-0: (sock:12) socket_event_poll_in returned 0 [2021-01-08 10:10:46.898824] T [MSGID: 0] [client-rpc-fops_v2.c:2641:client4_0_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-client-1 returned 0 [2021-01-08 10:10:46.899016] T [MSGID: 0] [afr-common.c:2560:afr_attempt_local_discovery] 0-stack-trace: stack-address: 0x7fffe004f008, winding from datastore-replicate-0 to datastore-client-1 [2021-01-08 10:10:46.899049] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 144, payload: 64, rpc hdr: 80 [2021-01-08 10:10:46.899246] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 16, XID: 0xa, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) to rpc-transport (datastore-client-1) [2021-01-08 
10:10:46.899309] T [MSGID: 0] [afr-common.c:3008:afr_discover_unwind] 0-stack-trace: stack-address: 0x6d4348, datastore-replicate-0 returned 0
[2021-01-08 10:10:46.899337] T [MSGID: 0] [utime.c:219:gf_utime_set_mdata_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-utime returned 0
[2021-01-08 10:10:46.899345] T [MSGID: 0] [write-behind.c:2439:wb_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-write-behind returned 0
[2021-01-08 10:10:46.899355] T [MSGID: 0] [ioc-inode.c:184:ioc_inode_create] 0-datastore-io-cache: locked table(0x7fffe00354d0)
[2021-01-08 10:10:46.899361] T [MSGID: 0] [ioc-inode.c:190:ioc_inode_create] 0-datastore-io-cache: unlocked table(0x7fffe00354d0)
[2021-01-08 10:10:46.899367] T [MSGID: 0] [ioc-inode.c:192:ioc_inode_create] 0-datastore-io-cache: adding to inode_lru[1]
[2021-01-08 10:10:46.899373] T [MSGID: 0] [io-cache.c:258:ioc_inode_update] 0-datastore-io-cache: locked inode(0x7fffe0053300)
[2021-01-08 10:10:46.899378] T [MSGID: 0] [io-cache.c:267:ioc_inode_update] 0-datastore-io-cache: unlocked inode(0x7fffe0053300)
[2021-01-08 10:10:46.899383] T [MSGID: 0] [io-cache.c:275:ioc_inode_update] 0-datastore-io-cache: locked table(0x7fffe00354d0)
[2021-01-08 10:10:46.899388] T [MSGID: 0] [io-cache.c:280:ioc_inode_update] 0-datastore-io-cache: unlocked table(0x7fffe00354d0)
[2021-01-08 10:10:46.899395] T [MSGID: 0] [io-cache.c:318:ioc_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-io-cache returned 0
[2021-01-08 10:10:46.899403] T [MSGID: 0] [quick-read.c:631:qr_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-quick-read returned 0
[2021-01-08 10:10:46.899455] T [md-cache.c:595:mdc_inode_iatt_set_validate] (-->/usr/lib64/glusterfs/7.8/xlator/performance/quick-read.so(+0x6adc) [0x7fffec031adc] -->/usr/lib64/glusterfs/7.8/xlator/performance/md-cache.so(+0xda45) [0x7fffdfdeba45] -->/usr/lib64/glusterfs/7.8/xlator/performance/md-cache.so(+0x9b4e) [0x7fffdfde7b4e] ) 0-md-cache: Updated iatt(00000000-0000-0000-0000-000000000001) time:1610100646 generation=0
[2021-01-08 10:10:46.899496] T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements
[2021-01-08 10:10:46.899576] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.glusterfs.mdata doesn't satisfy caching requirements" repeated 2 times between [2021-01-08 10:10:46.899496] and [2021-01-08 10:10:46.899573]
[2021-01-08 10:10:46.899574] T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.afr.datastore-client-1 doesn't satisfy caching requirements
[2021-01-08 10:10:46.899637] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key trusted.afr.datastore-client-1 doesn't satisfy caching requirements" repeated 2 times between [2021-01-08 10:10:46.899574] and [2021-01-08 10:10:46.899635]
[2021-01-08 10:10:46.899636] T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.entrylk-count doesn't satisfy caching requirements
[2021-01-08 10:10:46.899658] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.entrylk-count doesn't satisfy caching requirements" repeated 2 times between [2021-01-08 10:10:46.899636] and [2021-01-08 10:10:46.899656]
[2021-01-08 10:10:46.899657] T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.inodelk-count doesn't satisfy caching requirements
[2021-01-08 10:10:46.899681] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key glusterfs.inodelk-count doesn't satisfy caching requirements" repeated 2 times between [2021-01-08 10:10:46.899657] and [2021-01-08 10:10:46.899678]
[2021-01-08 10:10:46.899680] T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key link-count doesn't satisfy caching requirements
[2021-01-08 10:10:46.899794] T [socket.c:3000:socket_event_handler] 0-datastore-client-1: client (sock:9) in:1, out:0, err:0
[2021-01-08 10:10:46.899903] T [socket.c:3026:socket_event_handler] 0-datastore-client-1: Client socket (9) is already connected
[2021-01-08 10:10:46.899915] T [socket.c:574:socket_ssl_readv] 0-datastore-client-1: reading over non-SSL
[2021-01-08 10:10:46.899928] T [socket.c:574:__socket_ssl_readv] 0-datastore-client-1: reading over non-SSL
[2021-01-08 10:10:46.899937] D [logging.c:1883:_gf_msg_internal] 0-logging-infra: Buffer overflow of a buffer whose size limit is 5. About to flush least recently used log message to disk
The message "T [MSGID: 0] [md-cache.c:695:is_mdc_key_satisfied] 0-md-cache: xattr key link-count doesn't satisfy caching requirements" repeated 2 times between [2021-01-08 10:10:46.899680] and [2021-01-08 10:10:46.899931]
[2021-01-08 10:10:46.899954] T [rpc-clnt.c:662:rpc_clnt_reply_init] 0-datastore-client-1: received rpc message (RPC XID: 0xa Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 18) from rpc-transport (datastore-client-1)
[2021-01-08 10:10:46.899936] T [MSGID: 0] [md-cache.c:790:mdc_inode_xatt_set] 0-md-cache: xatt cache set for (00000000-0000-0000-0000-000000000001) time:1610100646
[2021-01-08 10:10:46.899987] T [MSGID: 0] [md-cache.c:1256:mdc_lookup_cbk] 0-stack-trace: stack-address: 0x6d4348, datastore-md-cache returned 0
[2021-01-08 10:10:46.899986] D [MSGID: 0] [client-rpc-fops_v2.c:926:client4_0_getxattr_cbk] 0-datastore-client-1: resetting op_ret to 0 from 83
[2021-01-08 10:10:46.900014] T [MSGID: 0] [client-rpc-fops_v2.c:930:client4_0_getxattr_cbk] 0-stack-trace: stack-address: 0x7fffe004f008, datastore-client-1 returned 0
[2021-01-08 10:10:46.900035] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_in returned 0
[2021-01-08 10:10:46.900069] D [MSGID: 0] [glfs-resolve.c:49:glfs_first_lookup_safe] 0-meta-autoload: first lookup complete 0
[2021-01-08 10:10:46.900087] I [MSGID: 104041] [glfs-resolve.c:976:__glfs_active_subvol] 0-datastore: switched to graph 6e6f6465-302d-3636-3536-38302d323032 (0)
[2021-01-08 10:10:46.900110] T [MSGID: 0] [syncop.c:1136:syncop_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from gfapi to meta-autoload
[2021-01-08 10:10:46.900124] T [MSGID: 0] [io-stats.c:2699:io_stats_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore to datastore-md-cache
[2021-01-08 10:10:46.900127] T [socket.c:3044:socket_event_handler] 0-datastore-client-1: (sock:9) socket_event_poll_in returned 0
[2021-01-08 10:10:46.900172] T [MSGID: 0] [md-cache.c:1313:mdc_lookup] 0-stack-trace: stack-address: 0x6dd378, datastore-md-cache returned 0
[2021-01-08 10:10:52.828825] T [MSGID: 0] [syncop.c:1136:syncop_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from gfapi to meta-autoload
[2021-01-08 10:10:52.828965] T [MSGID: 0] [io-stats.c:2699:io_stats_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore to datastore-md-cache
[2021-01-08 10:10:52.829005] T [MSGID: 0] [md-cache.c:1324:mdc_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-md-cache to datastore-quick-read
[2021-01-08 10:10:52.829033] T [MSGID: 0] [quick-read.c:671:qr_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-quick-read to datastore-open-behind
[2021-01-08 10:10:52.829044] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-open-behind to datastore-io-cache
[2021-01-08 10:10:52.829057] T [MSGID: 0] [io-cache.c:347:ioc_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-io-cache to datastore-read-ahead
[2021-01-08 10:10:52.829067] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-read-ahead to datastore-write-behind
[2021-01-08 10:10:52.829079] T [MSGID: 0] [write-behind.c:2481:wb_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-write-behind to datastore-utime
[2021-01-08 10:10:52.829093] T [MSGID: 0] [utime.c:261:gf_utime_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-utime to datastore-dht
[2021-01-08 10:10:52.829109] T [MSGID: 0] [defaults.c:3027:default_lookup] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-dht to datastore-replicate-0
[2021-01-08 10:10:52.829244] T [MSGID: 0] [afr-common.c:3280:afr_lookup_do] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-replicate-0 to datastore-client-0
[2021-01-08 10:10:52.829373] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 668, payload: 584, rpc hdr: 84
[2021-01-08 10:10:52.829514] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 18, XID: 0xb, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (datastore-client-0)
[2021-01-08 10:10:52.829595] T [MSGID: 0] [afr-common.c:3280:afr_lookup_do] 0-stack-trace: stack-address: 0x6dd378, winding from datastore-replicate-0 to datastore-client-1
[2021-01-08 10:10:52.829666] T [rpc-clnt.c:1444:rpc_clnt_record_build_header] 0-rpc-clnt: Request fraglen 668, payload: 584, rpc hdr: 84
[2021-01-08 10:10:52.829819] T [rpc-clnt.c:1731:rpc_clnt_submit] 0-rpc-clnt: submitted request (unique: 18, XID: 0xb, Program: GlusterFS 4.x v1, ProgVers: 400, Proc: 27) to rpc-transport (datastore-client-1)

3. brick process status dump
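For reference, a brick statedump like the one below is normally produced with the gluster CLI, or by sending SIGUSR1 to the process directly (useful for the hung gfapi client itself). This is a sketch assuming the volume name `datastore` from the test program; the dump directory can differ depending on the build's `--localstatedir`.

```shell
# Ask glusterd to dump state of all brick processes of the volume
gluster volume statedump datastore

# Alternatively, signal a specific glusterfsd (or the gfapi client) directly;
# gluster processes write a statedump on SIGUSR1
kill -USR1 <pid-of-glusterfsd>

# Dumps typically land under /var/run/gluster/ as <brick-path>.<pid>.dump.<timestamp>
ls /var/run/gluster/
```

Taking a dump of the hung client process as well (via SIGUSR1) can show which frames are still pending when glfs_fini blocks.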

DUMP-START-TIME: 2021-01-08 10:13:34.447912

[mallinfo] mallinfo_arena=7131136 mallinfo_ordblks=281 mallinfo_smblks=68 mallinfo_hblks=17 mallinfo_hblkhd=17350656 mallinfo_usmblks=0 mallinfo_fsmblks=6800 mallinfo_uordblks=1377728 mallinfo_fordblks=5753408 mallinfo_keepcost=121696

[global.glusterfs - Memory usage] num_types=127

[global.glusterfs - usage-type gf_common_mt_dnscache6 memusage] size=16 num_allocs=1 max_size=16 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_event_pool memusage] size=20696 num_allocs=3 max_size=20696 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_xlator_t memusage] size=74592 num_allocs=18 max_size=74592 max_num_allocs=18 total_allocs=18

[global.glusterfs - usage-type gf_common_mt_xlator_list_t memusage] size=544 num_allocs=34 max_size=544 max_num_allocs=34 total_allocs=34

[global.glusterfs - usage-type gf_common_mt_volume_opt_list_t memusage] size=864 num_allocs=36 max_size=864 max_num_allocs=36 total_allocs=38

[global.glusterfs - usage-type gf_common_mt_gf_timer_t memusage] size=128 num_allocs=2 max_size=192 max_num_allocs=3 total_allocs=3812

[global.glusterfs - usage-type gf_common_mt_gf_timer_registry_t memusage] size=120 num_allocs=1 max_size=120 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_iobuf memusage] size=200816 num_allocs=8 max_size=200816 max_num_allocs=8 total_allocs=8

[global.glusterfs - usage-type gf_common_mt_iobuf_arena memusage] size=2592 num_allocs=9 max_size=2592 max_num_allocs=9 total_allocs=9

[global.glusterfs - usage-type gf_common_mt_iobuf_pool memusage] size=1776 num_allocs=1 max_size=1776 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_asprintf memusage] size=43 num_allocs=2 max_size=686 max_num_allocs=6 total_allocs=34

[global.glusterfs - usage-type gf_common_mt_strdup memusage] size=2546 num_allocs=116 max_size=3401 max_num_allocs=131 total_allocs=316

[global.glusterfs - usage-type gf_common_mt_socket_private_t memusage] size=1992 num_allocs=3 max_size=1992 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_char memusage] size=1391 num_allocs=71 max_size=24820 max_num_allocs=72 total_allocs=79

[global.glusterfs - usage-type gf_common_mt_mem_pool memusage] size=640 num_allocs=10 max_size=640 max_num_allocs=10 total_allocs=10

[global.glusterfs - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[global.glusterfs - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcsvc_program_t memusage] size=279248 num_allocs=2 max_size=279248 max_num_allocs=2 total_allocs=2

[global.glusterfs - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_t memusage] size=328 num_allocs=1 max_size=328 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_savedframe_t memusage] size=200 num_allocs=1 max_size=200 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpc_trans_t memusage] size=3840 num_allocs=3 max_size=3840 max_num_allocs=3 total_allocs=3

[global.glusterfs - usage-type gf_common_mt_glusterfs_graph_t memusage] size=296 num_allocs=1 max_size=296 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_rpcclnt_cb_program_t memusage] size=88 num_allocs=1 max_size=88 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_cliententry_t memusage] size=2048 num_allocs=1 max_size=2048 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_clienttable_t memusage] size=72 num_allocs=1 max_size=72 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_ereg memusage] size=114688 num_allocs=1 max_size=114688 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_syncenv memusage] size=16040 num_allocs=1 max_size=16040 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_tw_ctx memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_volfile_t memusage] size=320 num_allocs=1 max_size=320 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gf_common_mt_server_cmdline_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[global.glusterfs - usage-type gfd_mt_xlator_cmdline_option_t memusage] size=80 num_allocs=2 max_size=80 max_num_allocs=2 total_allocs=2

[global.glusterfs - usage-type gfd_mt_char memusage] size=53 num_allocs=4 max_size=53 max_num_allocs=4 total_allocs=4

[global.glusterfs - usage-type gfd_mt_call_pool_t memusage] size=88 num_allocs=1 max_size=88 max_num_allocs=1 total_allocs=1

[mempool]
-----=----- pool-name=fd_t active-count=0 sizeof-type=120 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=dentry_t active-count=4 sizeof-type=56 padded-sizeof=128 size=512 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=inode_t active-count=8 sizeof-type=168 padded-sizeof=256 size=2048 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=fd_t active-count=0 sizeof-type=120 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=dentry_t active-count=0 sizeof-type=56 padded-sizeof=128 size=0 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=inode_t active-count=1 sizeof-type=168 padded-sizeof=256 size=256 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=trash_local_t active-count=0 sizeof-type=8688 padded-sizeof=16384 size=0 shared-pool=0x7f7823e5d1b8
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f7823e5d168
-----=----- pool-name=changelog_local_t active-count=0 sizeof-type=88 padded-sizeof=128 size=0 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=br_stub_local_t active-count=0 sizeof-type=56 padded-sizeof=128 size=0 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=pl_local_t active-count=0 sizeof-type=184 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=read_only_priv_t active-count=1 sizeof-type=40 padded-sizeof=128 size=128 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=upcall_local_t active-count=0 sizeof-type=152 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=selinux_priv_t active-count=0 sizeof-type=1 padded-sizeof=128 size=0 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=marker_local_t active-count=0 sizeof-type=504 padded-sizeof=1024 size=0 shared-pool=0x7f7823e5d118
-----=----- pool-name=index_local_t active-count=0 sizeof-type=16 padded-sizeof=128 size=0 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=quota_local_t active-count=0 sizeof-type=496 padded-sizeof=1024 size=0 shared-pool=0x7f7823e5d118
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f7823e5d168
-----=----- pool-name=struct saved_frame active-count=0 sizeof-type=96 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=struct rpc_req active-count=0 sizeof-type=560 padded-sizeof=1024 size=0 shared-pool=0x7f7823e5d118
-----=----- pool-name=rpcsvc_request_t active-count=0 sizeof-type=2824 padded-sizeof=4096 size=0 shared-pool=0x7f7823e5d168
-----=----- pool-name=log_buf_t active-count=0 sizeof-type=112 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=data_t active-count=193 sizeof-type=72 padded-sizeof=128 size=24704 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=data_pair_t active-count=172 sizeof-type=48 padded-sizeof=128 size=22016 shared-pool=0x7f7823e5d0a0
-----=----- pool-name=dict_t active-count=32 sizeof-type=160 padded-sizeof=256 size=8192 shared-pool=0x7f7823e5d0c8
-----=----- pool-name=call_stub_t active-count=0 sizeof-type=6376 padded-sizeof=8192 size=0 shared-pool=0x7f7823e5d190
-----=----- pool-name=call_stack_t active-count=0 sizeof-type=1864 padded-sizeof=2048 size=0 shared-pool=0x7f7823e5d140
-----=----- pool-name=call_frame_t active-count=0 sizeof-type=184 padded-sizeof=256 size=0 shared-pool=0x7f7823e5d0c8

[iobuf.global] iobuf_pool=0x5570ebeee680 iobuf_pool.default_page_size=131072 iobuf_pool.arena_size=12976128 iobuf_pool.arena_cnt=8 iobuf_pool.request_misses=0

[purge.1] purge.1.mem_base=0x7f7824049000 purge.1.active_cnt=0 purge.1.passive_cnt=1024 purge.1.alloc_cnt=2495 purge.1.max_active=4 purge.1.page_size=128

[purge.2] purge.2.mem_base=0x7f7824009000 purge.2.active_cnt=0 purge.2.passive_cnt=512 purge.2.alloc_cnt=1232 purge.2.max_active=2 purge.2.page_size=512

[purge.3] purge.3.mem_base=0x7f7823f09000 purge.3.active_cnt=0 purge.3.passive_cnt=512 purge.3.alloc_cnt=57 purge.3.max_active=3 purge.3.page_size=2048

[purge.4] purge.4.mem_base=0x7f782090c000 purge.4.active_cnt=0 purge.4.passive_cnt=128 purge.4.alloc_cnt=1 purge.4.max_active=1 purge.4.page_size=8192

[arena.5] arena.5.mem_base=0x7f782070c000 arena.5.active_cnt=0 arena.5.passive_cnt=64 arena.5.alloc_cnt=0 arena.5.max_active=0 arena.5.page_size=32768

[arena.6] arena.6.mem_base=0x7f782030c000 arena.6.active_cnt=0 arena.6.passive_cnt=32 arena.6.alloc_cnt=0 arena.6.max_active=0 arena.6.page_size=131072

[purge.7] purge.7.mem_base=0x7f782010c000 purge.7.active_cnt=0 purge.7.passive_cnt=8 purge.7.alloc_cnt=3 purge.7.max_active=1 purge.7.page_size=262144

[arena.8] arena.8.mem_base=0x7f781ff0c000 arena.8.active_cnt=0 arena.8.passive_cnt=2 arena.8.alloc_cnt=0 arena.8.max_active=0 arena.8.page_size=1048576

[global.callpool] callpool_address=0x5570ebf25eb0 callpool.cnt=0

[dict] max-pairs-per-dict=23 total-pairs-used=2161 total-dicts-used=1106 average-pairs-per-dict=1

[active graph - 1]

[datastore-server.latency] datastore-server.latency.OPEN=233828.833,6,1402973.000 datastore-server.latency.READ=5070276.667,3,15210830.000 datastore-server.latency.STATFS=210523.700,10,2105237.000 datastore-server.latency.FLUSH=101597.333,6,609584.000 datastore-server.latency.GETXATTR=371777.731,245,91085544.000 datastore-server.latency.OPENDIR=43565.067,179,7798147.000 datastore-server.latency.LOOKUP=376790.461,217,81763530.000 datastore-server.latency.READDIR=1071744.596,354,379397587.000 datastore-server.latency.READDIRP=1213586.000,2,2427172.000

[protocol/server.datastore-server - Memory usage] num_types=130

[protocol/server.datastore-server - usage-type gf_common_mt_fdentry_t memusage] size=10240 num_allocs=5 max_size=12288 max_num_allocs=6 total_allocs=20

[protocol/server.datastore-server - usage-type gf_common_mt_fdtable_t memusage] size=400 num_allocs=5 max_size=400 max_num_allocs=5 total_allocs=10

[protocol/server.datastore-server - usage-type gf_common_mt_inode_ctx memusage] size=2432 num_allocs=4 max_size=3040 max_num_allocs=5 total_allocs=12

[protocol/server.datastore-server - usage-type gf_common_mt_list_head memusage] size=1273488 num_allocs=2 max_size=1273488 max_num_allocs=2 total_allocs=2

[protocol/server.datastore-server - usage-type gf_common_mt_inode_table_t memusage] size=240 num_allocs=1 max_size=240 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_volume_opt_list_t memusage] size=48 num_allocs=2 max_size=72 max_num_allocs=3 total_allocs=3

[protocol/server.datastore-server - usage-type gf_common_mt_auth_handle_t memusage] size=48 num_allocs=2 max_size=48 max_num_allocs=2 total_allocs=2

[protocol/server.datastore-server - usage-type gf_common_mt_memdup memusage] size=5722 num_allocs=108 max_size=5722 max_num_allocs=108 total_allocs=218

[protocol/server.datastore-server - usage-type gf_common_mt_asprintf memusage] size=40 num_allocs=1 max_size=588 max_num_allocs=25 total_allocs=1043

[protocol/server.datastore-server - usage-type gf_common_mt_strdup memusage] size=1680 num_allocs=52 max_size=2613 max_num_allocs=69 total_allocs=479

[protocol/server.datastore-server - usage-type gf_common_mt_socket_private_t memusage] size=3984 num_allocs=6 max_size=3984 max_num_allocs=6 total_allocs=11

[protocol/server.datastore-server - usage-type gf_common_mt_char memusage] size=2036 num_allocs=115 max_size=3497 max_num_allocs=143 total_allocs=2804

[protocol/server.datastore-server - usage-type gf_common_mt_mem_pool memusage] size=256 num_allocs=4 max_size=256 max_num_allocs=4 total_allocs=4

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_program_t memusage] size=558496 num_allocs=4 max_size=558496 max_num_allocs=4 total_allocs=4

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[protocol/server.datastore-server - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=64 max_num_allocs=2 total_allocs=6

[protocol/server.datastore-server - usage-type gf_common_mt_rpc_trans_t memusage] size=7680 num_allocs=6 max_size=7680 max_num_allocs=6 total_allocs=11

[protocol/server.datastore-server - usage-type gf_common_mt_client_t memusage] size=1200 num_allocs=10 max_size=1200 max_num_allocs=10 total_allocs=20

[protocol/server.datastore-server - usage-type gf_common_mt_client_ctx memusage] size=640 num_allocs=5 max_size=640 max_num_allocs=5 total_allocs=10

[protocol/server.datastore-server - usage-type gf_server_mt_server_conf_t memusage] size=49632 num_allocs=6 max_size=49632 max_num_allocs=6 total_allocs=11

[protocol/server.datastore-server - usage-type gf_server_mt_child_status memusage] size=64 num_allocs=2 max_size=64 max_num_allocs=2 total_allocs=2

[xlator.protocol.server.priv] server.total-bytes-read=189000 server.total-bytes-write=236188 conn.0.bound_xl./virtstack/storage/disk/datastore.hashsize=14057 conn.0.bound_xl./virtstack/storage/disk/datastore.name=/virtstack/storage/disk/datastore/inode conn.0.bound_xl./virtstack/storage/disk/datastore.lru_limit=16384 conn.0.bound_xl./virtstack/storage/disk/datastore.active_size=1 conn.0.bound_xl./virtstack/storage/disk/datastore.lru_size=7 conn.0.bound_xl./virtstack/storage/disk/datastore.purge_size=0 conn.0.bound_xl./virtstack/storage/disk/datastore.invalidate_size=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.active.1] gfid=00000000-0000-0000-0000-000000000001 nlookup=1 fd-count=0 active-fd-count=0 ref=1 invalidate-sent=0 ia_type=2 ref_by_xl:.datastore-io-threads=1

[xlator.features.locks.datastore-locks.inode] path=/ mandatory=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.1] gfid=e76e9307-02cb-4ff7-b840-5e24d4f5d97d nlookup=1 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.2] gfid=2e868bb4-c9a9-4e8a-87dd-d8750a210b7d nlookup=3 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=1

[xlator.features.locks.datastore-locks.inode] path=/CentOS-7.6-x86_64.qcow2 mandatory=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.3] gfid=b931f8a7-7a9a-4471-885e-79f4c123bcc7 nlookup=3 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=1

[xlator.features.locks.datastore-locks.inode] path=/3_k.qcow2 mandatory=0

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.4] gfid=742bb4b1-cc48-4355-b3ad-f579d1b073fa nlookup=60 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.5] gfid=492c8b2d-d062-4a3f-9c19-9f030ba7118a nlookup=60 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.6] gfid=40df7887-7e68-4a71-ad25-bf148c4453a8 nlookup=60 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=2

[conn.0.bound_xl./virtstack/storage/disk/datastore.lru.7] gfid=6b237ba9-6505-4c56-aaac-d1d934e5a2ee nlookup=7 fd-count=0 active-fd-count=0 ref=0 invalidate-sent=0 ia_type=1

[xlator.features.locks.datastore-locks.inode] path=/test.qcow2 mandatory=0
conn.1.id=CTX_ID:2adad87b-d657-4995-8292-508fdd4b4248-GRAPH_ID:0-PID:1522-HOST:node1-PC_NAME:datastore-client-0-RECON_NO:-0 conn.1.ref=1 conn.1.bound_xl=/virtstack/storage/disk/datastore
conn.2.id=CTX_ID:fd5d0fb3-4ff8-4cf9-900c-3099009232c2-GRAPH_ID:0-PID:4242-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 conn.2.ref=1 conn.2.bound_xl=/virtstack/storage/disk/datastore
conn.3.id=CTX_ID:0a083ca9-e676-4a37-867b-f88fe7550fe4-GRAPH_ID:0-PID:3762-HOST:node1-PC_NAME:datastore-client-0-RECON_NO:-0 conn.3.ref=1 conn.3.bound_xl=/virtstack/storage/disk/datastore
conn.4.id=CTX_ID:cee3be9c-21c1-402c-9471-737379f6f277-GRAPH_ID:0-PID:665680-HOST:node0-PC_NAME:datastore-client-0-RECON_NO:-0 conn.4.ref=1 conn.4.bound_xl=/virtstack/storage/disk/datastore

[/virtstack/storage/disk/datastore.latency]

[debug/io-stats./virtstack/storage/disk/datastore - Memory usage] num_types=128

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_common_mt_strdup memusage] size=48 num_allocs=3 max_size=73 max_num_allocs=4 total_allocs=9

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_common_mt_dnscache memusage] size=16 num_allocs=1 max_size=16 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_conf memusage] size=6808 num_allocs=1 max_size=6808 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_stat memusage] size=872 num_allocs=13 max_size=872 max_num_allocs=13 total_allocs=13

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_stat_list memusage] size=256 num_allocs=8 max_size=256 max_num_allocs=8 total_allocs=8

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_sample_buf memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[debug/io-stats./virtstack/storage/disk/datastore - usage-type gf_io_stats_mt_ios_sample memusage] size=9437040 num_allocs=1 max_size=9437040 max_num_allocs=1 total_allocs=1

[datastore-quota.latency] datastore-quota.latency.GETXATTR=319228.665,245,78211023.000

[features/quota.datastore-quota - Memory usage] num_types=133

[features/quota.datastore-quota - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/quota.datastore-quota - usage-type gf_quota_mt_quota_priv_t memusage] size=136 num_allocs=1 max_size=136 max_num_allocs=1 total_allocs=1

[datastore-index.latency] datastore-index.latency.GETXATTR=312434.061,245,76546345.000 datastore-index.latency.OPENDIR=6070.112,179,1086550.000 datastore-index.latency.LOOKUP=325514.759,220,71613247.000 datastore-index.latency.READDIR=1044778.353,354,369851537.000

[features/index.datastore-index - Memory usage] num_types=126

[features/index.datastore-index - usage-type gf_common_mt_asprintf memusage] size=6 num_allocs=3 max_size=8 max_num_allocs=4 total_allocs=213

[features/index.datastore-index - usage-type gf_common_mt_char memusage] size=116 num_allocs=6 max_size=153 max_num_allocs=7 total_allocs=396

[features/index.datastore-index - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/index.datastore-index - usage-type gf_index_mt_priv_t memusage] size=288 num_allocs=1 max_size=288 max_num_allocs=1 total_allocs=1

[features/index.datastore-index - usage-type gf_index_inode_ctx_t memusage] size=224 num_allocs=4 max_size=280 max_num_allocs=5 total_allocs=12

[datastore-barrier.latency]

[features/barrier.datastore-barrier - Memory usage] num_types=123

[features/barrier.datastore-barrier - usage-type gf_barrier_mt_priv_t memusage] size=96 num_allocs=1 max_size=96 max_num_allocs=1 total_allocs=1

[xlator.features.barrier.priv] barrier.enabled=0 barrier.timeout=120

[datastore-marker.latency] datastore-marker.latency.GETXATTR=486719.441,68,33096922.000 datastore-marker.latency.LOOKUP=451629.400,40,18065176.000 datastore-marker.latency.READDIRP=1150809.000,2,2301618.000

[features/marker.datastore-marker - Memory usage] num_types=131

[features/marker.datastore-marker - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/marker.datastore-marker - usage-type gf_marker_mt_marker_conf_t memusage] size=120 num_allocs=1 max_size=120 max_num_allocs=1 total_allocs=1

[datastore-selinux.latency] datastore-selinux.latency.GETXATTR=431391.632,68,29334631.000

[features/selinux.datastore-selinux - Memory usage] num_types=123

[features/selinux.datastore-selinux - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/selinux.datastore-selinux - usage-type gf_selinux_mt_selinux_priv_t memusage] size=1 num_allocs=1 max_size=1 max_num_allocs=1 total_allocs=1

[datastore-io-threads.latency] datastore-io-threads.latency.OPEN=209632.667,6,1257796.000 datastore-io-threads.latency.READ=5045972.667,3,15137918.000 datastore-io-threads.latency.STATFS=177046.300,10,1770463.000 datastore-io-threads.latency.FLUSH=86186.500,6,517119.000 datastore-io-threads.latency.GETXATTR=416959.574,68,28353251.000 datastore-io-threads.latency.OPENDIR=214768.000,2,429536.000 datastore-io-threads.latency.LOOKUP=444559.525,40,17782381.000 datastore-io-threads.latency.READDIRP=1145458.500,2,2290917.000

[performance/io-threads.datastore-io-threads - Memory usage] num_types=124

[performance/io-threads.datastore-io-threads - usage-type gf_iot_mt_iot_conf_t memusage] size=472 num_allocs=1 max_size=472 max_num_allocs=1 total_allocs=1

[performance/io-threads.datastore-io-threads - usage-type gf_iot_mt_client_ctx_t memusage] size=640 num_allocs=5 max_size=640 max_num_allocs=5 total_allocs=10

[performance/io-threads.datastore-io-threads] maximum_threads_count=16 current_threads_count=1 sleep_count=1 idle_time=120 stack_size=262144 max_high_priority_threads=16 max_normal_priority_threads=16 max_low_priority_threads=16 max_least_priority_threads=1 current_high_priority_threads=0 current_normal_priority_threads=0 current_low_priority_threads=0 current_least_priority_threads=0

[datastore-upcall.latency] datastore-upcall.latency.OPEN=91659.667,6,549958.000 datastore-upcall.latency.READ=4839245.000,3,14517735.000 datastore-upcall.latency.STATFS=42177.500,10,421775.000 datastore-upcall.latency.GETXATTR=165648.676,68,11264110.000 datastore-upcall.latency.OPENDIR=79878.500,2,159757.000 datastore-upcall.latency.LOOKUP=234719.600,40,9388784.000 datastore-upcall.latency.READDIRP=1007473.000,2,2014946.000

[features/upcall.datastore-upcall - Memory usage] num_types=126

[features/upcall.datastore-upcall - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/upcall.datastore-upcall - usage-type gf_upcall_mt_private_t memusage] size=96 num_allocs=1 max_size=96 max_num_allocs=1 total_allocs=1

[datastore-leases.latency] datastore-leases.latency.OPEN=86604.167,6,519625.000 datastore-leases.latency.READ=4829027.000,3,14487081.000 datastore-leases.latency.FLUSH=13517.167,6,81103.000

[features/leases.datastore-leases - Memory usage] num_types=130

[features/leases.datastore-leases - usage-type gf_leases_mt_private_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[datastore-read-only.latency] datastore-read-only.latency.OPEN=84511.333,6,507068.000

[features/read-only.datastore-read-only - Memory usage] num_types=123

[features/read-only.datastore-read-only - usage-type gf_read_only_mt_priv_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[datastore-worm.latency]

[features/worm.datastore-worm - Memory usage] num_types=123

[features/worm.datastore-worm - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[datastore-locks.latency] datastore-locks.latency.OPEN=81981.500,6,491889.000 datastore-locks.latency.READ=4827139.333,3,14481418.000 datastore-locks.latency.STATFS=32105.900,10,321059.000 datastore-locks.latency.FLUSH=10021.833,6,60131.000 datastore-locks.latency.GETXATTR=152484.853,68,10368970.000 datastore-locks.latency.OPENDIR=64589.500,2,129179.000 datastore-locks.latency.LOOKUP=227954.075,40,9118163.000 datastore-locks.latency.READDIRP=978930.500,2,1957861.000

[features/locks.datastore-locks - Memory usage] num_types=131

[features/locks.datastore-locks - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/locks.datastore-locks - usage-type gf_locks_mt_pl_inode_t memusage] size=1088 num_allocs=4 max_size=1088 max_num_allocs=4 total_allocs=4

[features/locks.datastore-locks - usage-type gf_locks_mt_posix_locks_private_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[datastore-access-control.latency] datastore-access-control.latency.OPEN=76409.167,6,458455.000 datastore-access-control.latency.GETXATTR=122374.971,68,8321498.000 datastore-access-control.latency.OPENDIR=59721.500,2,119443.000 datastore-access-control.latency.LOOKUP=191895.650,40,7675826.000 datastore-access-control.latency.READDIRP=974339.500,2,1948679.000

[features/access-control.datastore-access-control - Memory usage] num_types=126

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_ctx_t memusage] size=160 num_allocs=5 max_size=192 max_num_allocs=6 total_allocs=6

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_posix_ace_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[features/access-control.datastore-access-control - usage-type gf_posix_acl_mt_conf_t memusage] size=56 num_allocs=1 max_size=56 max_num_allocs=1 total_allocs=1

[datastore-bitrot-stub.latency] datastore-bitrot-stub.latency.OPEN=74522.333,6,447134.000 datastore-bitrot-stub.latency.GETXATTR=108882.029,68,7403978.000 datastore-bitrot-stub.latency.OPENDIR=57741.500,2,115483.000 datastore-bitrot-stub.latency.LOOKUP=175621.850,40,7024874.000 datastore-bitrot-stub.latency.READDIRP=963914.500,2,1927829.000

[features/bitrot-stub.datastore-bitrot-stub - Memory usage] num_types=137

[features/bitrot-stub.datastore-bitrot-stub - usage-type gf_common_mt_mem_pool memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/bitrot-stub.datastore-bitrot-stub - usage-type gf_br_stub_mt_private_t memusage] size=9480 num_allocs=1 max_size=9480 max_num_allocs=1 total_allocs=1

[datastore-changelog.latency] datastore-changelog.latency.OPEN=69458.167,6,416749.000

[features/changelog.datastore-changelog - Memory usage] num_types=136

[features/changelog.datastore-changelog - usage-type gf_common_mt_strdup memusage] size=241 num_allocs=8 max_size=255 max_num_allocs=10 total_allocs=10

[features/changelog.datastore-changelog - usage-type gf_common_mt_socket_private_t memusage] size=664 num_allocs=1 max_size=664 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_char memusage] size=180 num_allocs=8 max_size=180 max_num_allocs=8 total_allocs=8

[features/changelog.datastore-changelog - usage-type gf_common_mt_mem_pool memusage] size=128 num_allocs=2 max_size=128 max_num_allocs=2 total_allocs=2

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_auth_list memusage] size=360 num_allocs=5 max_size=360 max_num_allocs=5 total_allocs=5

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_t memusage] size=216 num_allocs=1 max_size=216 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_program_t memusage] size=279248 num_allocs=2 max_size=279248 max_num_allocs=2 total_allocs=2

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_listener_t memusage] size=160 num_allocs=1 max_size=160 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpcsvc_wrapper_t memusage] size=32 num_allocs=1 max_size=32 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rpc_trans_t memusage] size=1280 num_allocs=1 max_size=1280 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rbuf_t memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_common_mt_rlist_t memusage] size=896 num_allocs=4 max_size=896 max_num_allocs=4 total_allocs=4

[features/changelog.datastore-changelog - usage-type gf_common_mt_rvec_t memusage] size=4194432 num_allocs=4 max_size=4194432 max_num_allocs=4 total_allocs=4

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_priv_t memusage] size=1224 num_allocs=1 max_size=1224 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_rt_t memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[features/changelog.datastore-changelog - usage-type gf_changelog_mt_ev_dispatcher_t memusage] size=24 num_allocs=1 max_size=24 max_num_allocs=1 total_allocs=1

[datastore-trash.latency]

[features/trash.datastore-trash - Memory usage] num_types=126

[features/trash.datastore-trash - usage-type gf_common_mt_inode_ctx memusage] size=608 num_allocs=1 max_size=608 max_num_allocs=1 total_allocs=1

[features/trash.datastore-trash - usage-type gf_common_mt_list_head memusage] size=1273488 num_allocs=2 max_size=1273488 max_num_allocs=2 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_inode_table_t memusage] size=240 num_allocs=1 max_size=240 max_num_allocs=1 total_allocs=1

[features/trash.datastore-trash - usage-type gf_common_mt_asprintf memusage] size=22 num_allocs=1 max_size=119 max_num_allocs=1 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_strdup memusage] size=46 num_allocs=2 max_size=46 max_num_allocs=2 total_allocs=2

[features/trash.datastore-trash - usage-type gf_common_mt_mem_pool memusage] size=256 num_allocs=4 max_size=256 max_num_allocs=4 total_allocs=4

[features/trash.datastore-trash - usage-type gf_trash_mt_trash_private_t memusage] size=64 num_allocs=1 max_size=64 max_num_allocs=1 total_allocs=1

[datastore-posix.latency] datastore-posix.latency.OPEN=66892.667,6,401356.000 datastore-posix.latency.READ=4819455.333,3,14458366.000 datastore-posix.latency.STATFS=24601.400,10,246014.000 datastore-posix.latency.FLUSH=4569.167,6,27415.000 datastore-posix.latency.GETXATTR=95687.824,68,6506772.000 datastore-posix.latency.OPENDIR=45531.000,2,91062.000 datastore-posix.latency.LOOKUP=171211.850,40,6848474.000 datastore-posix.latency.READDIRP=934919.000,2,1869838.000

[storage/posix.datastore-posix - Memory usage] num_types=129

[storage/posix.datastore-posix - usage-type gf_common_mt_inode_ctx memusage] size=2432 num_allocs=4 max_size=3040 max_num_allocs=5 total_allocs=5

[storage/posix.datastore-posix - usage-type gf_common_mt_strdup memusage] size=34 num_allocs=1 max_size=34 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_common_mt_char memusage] size=256 num_allocs=1 max_size=316 max_num_allocs=4 total_allocs=117

[storage/posix.datastore-posix - usage-type gf_common_mt_tw_timer_list memusage] size=40 num_allocs=1 max_size=40 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_posix_private memusage] size=576 num_allocs=1 max_size=576 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_trash_path memusage] size=54 num_allocs=1 max_size=54 max_num_allocs=1 total_allocs=1

[storage/posix.datastore-posix - usage-type gf_posix_mt_mdata_attr memusage] size=256 num_allocs=4 max_size=320 max_num_allocs=5 total_allocs=3201

[storage/posix.datastore-posix] base_path=/virtstack/storage/disk/datastore base_path_length=33 max_read=262144 max_write=0

DUMP-END-TIME: 2021-01-08 10:13:34.451610

mohit84 commented 3 years ago

Hi,

Thanks for sharing the log; I have not found any issue in it. Actually, I missed your initial comment: the gfapi program hangs only while running under gdb. gdb does not work well with syncop, so this is expected behavior.

Thanks, Mohit Agrawal

tangyla commented 3 years ago

Hi @mohit84, thanks for your patient explanation. Is there a gluster volume option or a gdb setting that avoids the gfapi hang? Will syncop be fixed in the future so that it works well with gdb?

Recently I have been developing a high-availability VM solution, using qemu as the hypervisor and glusterfs as the storage backend. Because the qemu/block/gluster layer needs to open and close gluster volumes and images multiple times, and the cleanup function glfs_fini takes about 11 seconds with glusterfs version 7/8, I want to know whether this duration is expected.

PS: with gluster version 4.1, glfs_fini took only about 1 second.

I have reported this as a separate issue: https://github.com/gluster/glusterfs/issues/1962

Can you give me some advice and guidance? Thank you.

tangyla commented 3 years ago

I have verified that this issue is a gdb bug; after updating gdb, you can debug the program normally.