I use the plugin through the system functions (pg_logical_slot_peek_binary_changes and pg_logical_slot_get_binary_changes) since Postgres lives behind pgbouncer. Sometimes (on one of our DB servers) the consumer process gets stuck for a really long time with no apparent reason.
I was not able to cancel the backend (via pg_cancel_backend), but after some time (a couple of hours) it died.
I was only able to collect a stacktrace; maybe it can be helpful.
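For context, the polling query (visible in frame #21 of the trace below) boils down to a call like the following sketch. The slot name and LSN are copied from that frame; the 'add-tables' list is truncated in the trace, so only the tables visible there are shown, purely as an illustration:

SELECT count(*)
FROM pg_logical_slot_get_binary_changes(
    'cdc_replica',      -- replication slot that uses the wal2json plugin
    '68CB/D7657010',    -- upto_lsn
    NULL,               -- upto_nchanges (no limit)
    'add-tables', 'public.users,ad.order,ad.order_payment_log,ad.payment_log'
);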
# bt -p 941944
Wed Jul 24 12:56:27 2019
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fac1bdbdd43 in ?? () from /usr/lib/postgresql/10/lib/wal2json.so
#0 0x00007fac1bdbdd43 in ?? () from /usr/lib/postgresql/10/lib/wal2json.so
#1 0x0000561ca1b98619 in change_cb_wrapper (cache=<optimized out>, txn=<optimized out>, relation=<optimized out>, change=<optimized out>) at ./build/../src/backend/replication/logical/logical.c:716
#2 0x0000561ca1ba0f55 in ReorderBufferCommit (rb=0x561ca359ed88, xid=xid@entry=1953261668, commit_lsn=commit_lsn@entry=115224700423768, end_lsn=end_lsn@entry=115224700424976, commit_time=commit_time@entry=617248545781491, origin_id=origin_id@entry=0, origin_lsn=0) at ./build/../src/backend/replication/logical/reorderbuffer.c:1586
#3 0x0000561ca1b96aa0 in DecodeCommit (xid=1953261668, parsed=0x7ffd548e80a0, buf=<synthetic pointer>, ctx=0x561ca35b7c28) at ./build/../src/backend/replication/logical/decode.c:611
#4 DecodeXactOp (buf=<synthetic pointer>, ctx=0x561ca35b7c28) at ./build/../src/backend/replication/logical/decode.c:241
#5 LogicalDecodingProcessRecord (ctx=ctx@entry=0x561ca35b7c28, record=<optimized out>) at ./build/../src/backend/replication/logical/decode.c:113
#6 0x0000561ca1b99f8d in pg_logical_slot_get_changes_guts (fcinfo=0x7ffd548e83e0, confirm=<optimized out>, binary=<optimized out>) at ./build/../src/backend/replication/logical/logicalfuncs.c:329
#7 0x0000561ca1ac6dd4 in ExecMakeTableFunctionResult (setexpr=0x561ca35ec090, econtext=0x561ca35ebe30, argContext=<optimized out>, expectedDesc=<optimized out>, randomAccess=<optimized out>) at ./build/../src/backend/executor/execSRF.c:231
#8 0x0000561ca1ad2f97 in FunctionNext (node=node@entry=0x561ca35ebd20) at ./build/../src/backend/executor/nodeFunctionscan.c:94
#9 0x0000561ca1ac5f1a in ExecScanFetch (recheckMtd=0x561ca1ad2cd0 <FunctionRecheck>, accessMtd=0x561ca1ad2d00 <FunctionNext>, node=0x561ca35ebd20) at ./build/../src/backend/executor/execScan.c:97
#10 ExecScan (node=0x561ca35ebd20, accessMtd=0x561ca1ad2d00 <FunctionNext>, recheckMtd=0x561ca1ad2cd0 <FunctionRecheck>) at ./build/../src/backend/executor/execScan.c:147
#11 0x0000561ca1acc0bc in ExecProcNode (node=0x561ca35ebd20) at ./build/../src/include/executor/executor.h:250
#12 fetch_input_tuple (aggstate=aggstate@entry=0x561ca35eb678) at ./build/../src/backend/executor/nodeAgg.c:695
#13 0x0000561ca1ace388 in agg_retrieve_direct (aggstate=0x561ca35eb678) at ./build/../src/backend/executor/nodeAgg.c:2362
#14 ExecAgg (pstate=0x561ca35eb678) at ./build/../src/backend/executor/nodeAgg.c:2173
#15 0x0000561ca1abff45 in ExecProcNode (node=0x561ca35eb678) at ./build/../src/include/executor/executor.h:250
#16 ExecutePlan (execute_once=<optimized out>, dest=0x561ca35dfbe0, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x561ca35eb678, estate=0x561ca35eb468) at ./build/../src/backend/executor/execMain.c:1723
#17 standard_ExecutorRun (queryDesc=0x561ca3572478, direction=<optimized out>, count=0, execute_once=<optimized out>) at ./build/../src/backend/executor/execMain.c:364
#18 0x00007faca5a610c5 in pgss_ExecutorRun (queryDesc=0x561ca3572478, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at ./build/../contrib/pg_stat_statements/pg_stat_statements.c:889
#19 0x0000561ca1bf7716 in PortalRunSelect (portal=portal@entry=0x561ca32a1638, forward=forward@entry=1 '\001', count=0, count@entry=9223372036854775807, dest=dest@entry=0x561ca35dfbe0) at ./build/../src/backend/tcop/pquery.c:932
#20 0x0000561ca1bf8d38 in PortalRun (portal=portal@entry=0x561ca32a1638, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=1 '\001', run_once=run_once@entry=1 '\001', dest=dest@entry=0x561ca35dfbe0, altdest=altdest@entry=0x561ca35dfbe0, completionTag=0x7ffd548e8cb0 "") at ./build/../src/backend/tcop/pquery.c:773
#21 0x0000561ca1bf4840 in exec_simple_query (query_string=0x561ca376d6f8 "\nSELECT count(*) FROM pg_logical_slot_get_binary_changes(\n\t\t\t\t\t('cdc_replica'),\n\t\t\t\t\t('68CB/D7657010'),\n\t\t\t\t\tNULL,\n\t\t\t\t\t'add-tables', ('public.users,ad.order,ad.order_payment_log,ad.payment_log,geoadv"...) at ./build/../src/backend/tcop/postgres.c:1122
#22 0x0000561ca1bf6811 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x561ca32a7a38, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4117
#23 0x0000561ca193473c in BackendRun (port=0x561ca3278820) at ./build/../src/backend/postmaster/postmaster.c:4402
#24 BackendStartup (port=0x561ca3278820) at ./build/../src/backend/postmaster/postmaster.c:4074
#25 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1756
#26 0x0000561ca1b85561 in PostmasterMain (argc=9, argv=0x561ca322b0e0) at ./build/../src/backend/postmaster/postmaster.c:1364
#27 0x0000561ca1936432 in main (argc=9, argv=0x561ca322b0e0) at ./build/../src/backend/main/main.c:228