ppavlov39 opened 1 month ago
You can use ClickHouse's introspection functions to dump where the threads are stuck:
SET allow_introspection_functions = 1;
SELECT
    thread_name,
    thread_id,
    query_id,
    arrayStringConcat(`all`, '\n') AS res
FROM system.stack_trace
FORMAT Vertical;
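(If your version's system.stack_trace has no `all` column, the same output can be built from the raw trace column; a sketch along the lines of the documentation example, assuming the binary has symbols available:)

WITH arrayMap(x -> demangle(addressToSymbol(x)), trace) AS all
SELECT
    thread_name,
    thread_id,
    query_id,
    arrayStringConcat(all, '\n') AS res
FROM system.stack_trace
LIMIT 1
FORMAT Vertical;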
Hello! Today we encountered the problem again. I tried two queries and got very strange results. The first one:
SELECT
    thread_name,
    thread_id,
    query_id,
    arrayStringConcat(`trace`, '\n') AS res -- in our version the table has no 'all' column, just 'trace'
FROM system.stack_trace
SETTINGS allow_introspection_functions = 1
FORMAT Vertical;
The output of the query was empty.
The second one:
SELECT
    arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS trace_functions,
    count()
FROM system.stack_trace
GROUP BY trace_functions
ORDER BY count() DESC
SETTINGS allow_introspection_functions = 1
FORMAT Vertical;
That query returned this: [screenshot]
After restarting the server, these queries started returning the following results: [screenshots]
When I ran these queries during the issue, they took minutes to complete (up to 10), whereas after a restart they took less than 10 seconds.
Check sudo dmesg -T for CPU lockup messages.
I am 90% sure it's a Linux kernel bug, and you need to upgrade the kernel.
The output of the sudo dmesg -T command contains many lines that look like this:
[Sat Sep 28 14:49:58 2024] audit: type=1300 audit(1727524293.135:20815699): arch=c000003e syscall=42 success=no exit=-115 a0=3 a1=7ffdc79d9230 a2=10 a3=7ffdc79d84a0 items=0 ppid=2095 pid=23543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="<cmd>" exe="<cmd_path>" key=(null)
I've checked the count of audit messages:
$ sudo dmesg -T | grep audit | wc -l
2263
$ sudo dmesg -T | grep -v audit | wc -l
0
There are no messages related to the CPU.
And during today's issue I tried checking the stack_trace table and saw the same output as before: it was empty, as described in my earlier message.
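A quick sanity check that avoids the introspection functions entirely (a minimal sketch) can show whether the table returns no rows at all, or whether it is the symbolization that hangs:

SELECT count() FROM system.stack_trace;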
I tried to check the system calls with strace. The stuck thread gave the following output:
$ sudo strace -p 31507
strace: Process 31507 attached
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x2468, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[QUIT PIPE TERM]}) = 202
futex(0x7f63697020f8, FUTEX_WAIT_PRIVATE, 2, NULL^Cstrace: Process 31507 detached
<detached ...>
And another thread:
$ sudo strace -p 1374
strace: Process 1374 attached
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGUSR1 {si_signo=SIGUSR1, si_code=SI_TIMER, si_timerid=0x8ca, si_overrun=0, si_value={int=0, ptr=NULL}} ---
rt_sigreturn({mask=[PIPE]}) = 202
futex(0x187b78f8, FUTEX_WAIT_PRIVATE, 2, NULL^Cstrace: Process 1374 detached
<detached ...>
I recently got stack traces of two stuck threads with this command: gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p 15182.
Maybe you can see something useful that will help figure out the root cause of these problems.
The first one:
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f72c894254d in __lll_lock_wait () from /lib64/libpthread.so.0
Thread 1 (process 15182):
#0 0x00007f72c894254d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f72c893f1b2 in pthread_rwlock_rdlock () from /lib64/libpthread.so.0
#2 0x0000000018136fcb in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection(unsigned long, libunwind::UnwindInfoSections const&, unsigned int) ()
#3 0x0000000018134c57 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister(bool) ()
#4 0x00000000181342b2 in unw_step ()
#5 0x000000000db2d579 in StackTrace::StackTrace(ucontext_t const&) ()
#6 0x000000000db5290d in DB::(anonymous namespace)::writeTraceInfo(DB::TraceType, int, siginfo_t*, void*) ()
#7 <signal handler called>
#8 0x00007f72c894254b in __lll_lock_wait () from /lib64/libpthread.so.0
#9 0x00007f72c893f1b2 in pthread_rwlock_rdlock () from /lib64/libpthread.so.0
#10 0x0000000018136fcb in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection(unsigned long, libunwind::UnwindInfoSections const&, unsigned int) ()
#11 0x0000000018134c57 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister(bool) ()
#12 0x00000000181342b2 in unw_step ()
#13 0x000000000db2d66b in StackTrace::tryCapture() ()
#14 0x000000000db1c238 in MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) ()
#15 0x000000000db1c75d in MemoryTracker::allocImpl(long, bool, MemoryTracker*, double) ()
#16 0x000000000dad2a67 in Allocator<false, false>::realloc(void*, unsigned long, unsigned long, unsigned long) ()
#17 0x0000000007e5f10f in void DB::PODArrayBase<8ul, 4096ul, Allocator<false, false>, 63ul, 64ul>::realloc<>(unsigned long) ()
#18 0x0000000007a06835 in DB::SerializationDecimalBase<DB::Decimal<long> >::deserializeBinaryBulk(DB::IColumn&, DB::ReadBuffer&, unsigned long, double) const ()
#19 0x0000000011176cd4 in DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const ()
#20 0x00000000111ba93d in DB::SerializationNullable::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const ()
#21 0x0000000012ffbd54 in DB::NativeReader::read() ()
#22 0x0000000012eaa9d9 in DB::Connection::receiveDataImpl(DB::NativeReader&) ()
#23 0x0000000012ea9971 in DB::Connection::receivePacket() ()
#24 0x0000000012ee4de8 in DB::PacketReceiver::Task::run(std::__1::function<void (int, Poco::Timespan, DB::AsyncEventTimeoutType, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int)>, std::__1::function<void ()>) ()
#25 0x00000000110ce963 in void boost::context::detail::fiber_entry<boost::context::detail::fiber_record<boost::context::fiber, FiberStack&, Fiber::RoutineImpl<DB::AsyncTaskExecutor::Routine> > >(boost::context::detail::transfer_t) ()
#26 0x000000000693d8ef in make_fcontext ()
#27 0x0000000000000000 in ?? ()
[Inferior 1 (process 15182) detached]
And another thread:
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f72c894254d in __lll_lock_wait () from /lib64/libpthread.so.0
Thread 1 (process 2182):
#0 0x00007f72c894254d in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f72c893f1b2 in pthread_rwlock_rdlock () from /lib64/libpthread.so.0
#2 0x0000000018136fcb in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::getInfoFromDwarfSection(unsigned long, libunwind::UnwindInfoSections const&, unsigned int) ()
#3 0x0000000018134c57 in libunwind::UnwindCursor<libunwind::LocalAddressSpace, libunwind::Registers_x86_64>::setInfoBasedOnIPRegister(bool) ()
#4 0x00000000181342b2 in unw_step ()
#5 0x000000000db2d579 in StackTrace::StackTrace(ucontext_t const&) ()
#6 0x000000000db5290d in DB::(anonymous namespace)::writeTraceInfo(DB::TraceType, int, siginfo_t*, void*) ()
#7 <signal handler called>
#8 0x00000000111e6210 in DB::deserializeBinarySSE2<1> ()
#9 0x0000000011176cd4 in DB::ISerialization::deserializeBinaryBulkWithMultipleStreams(COW<DB::IColumn>::immutable_ptr<DB::IColumn>&, unsigned long, DB::ISerialization::DeserializeBinaryBulkSettings&, std::__1::shared_ptr<DB::ISerialization::DeserializeBinaryBulkState>&, std::__1::unordered_map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >, COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::__1::hash<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::equal_to<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > >, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const, COW<DB::IColumn>::immutable_ptr<DB::IColumn> > > >*) const ()
#10 0x0000000012ffbd54 in DB::NativeReader::read() ()
#11 0x0000000012eaa9d9 in DB::Connection::receiveDataImpl(DB::NativeReader&) ()
#12 0x0000000012ea9971 in DB::Connection::receivePacket() ()
#13 0x0000000012ee4de8 in DB::PacketReceiver::Task::run(std::__1::function<void (int, Poco::Timespan, DB::AsyncEventTimeoutType, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int)>, std::__1::function<void ()>) ()
#14 0x00000000110ce963 in void boost::context::detail::fiber_entry<boost::context::detail::fiber_record<boost::context::fiber, FiberStack&, Fiber::RoutineImpl<DB::AsyncTaskExecutor::Routine> > >(boost::context::detail::transfer_t) ()
#15 0x000000000693d8ef in make_fcontext ()
#16 0x0000000000000000 in ?? ()
[Inferior 1 (process 2182) detached]
The traces seem to be stuck in libunwind code while writing trace info. Try deactivating the query profiler.
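For a session this can be done by zeroing the profiler's sampling periods (a sketch; to make it permanent, the same settings would go into the relevant user profile):

SET query_profiler_real_time_period_ns = 0;
SET query_profiler_cpu_time_period_ns = 0;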
We disabled the trace log 2 weeks ago and the cluster has been running without any issues since then.
Describe the issue
After upgrading ClickHouse from 24.1.2.5 to 24.7.3.42, we faced periodic partial failures of cluster nodes. After some time, it was decided to upgrade to 24.8.4.13; this did not help. After that upgrade we also disabled the new query analyzer, since it caused many problems when executing previously stable queries (allow_experimental_analyzer: 0).
Most often, the failure does not occur immediately. It is preceded by growth in several metrics - RWLockActiveReaders, BackgroundMergesAndMutationsPoolTask (no mutations are running at that time) - and in the number of parallel queries. Essentially, the failure comes down to one of the cluster nodes hitting the limit set by the max_concurrent_queries parameter, at which point new queries stop being executed.
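For reference, the metrics mentioned above can be polled directly on the affected node (a minimal sketch; the metric names are as they appear in system.metrics):

SELECT metric, value
FROM system.metrics
WHERE metric IN ('RWLockActiveReaders', 'BackgroundMergesAndMutationsPoolTask', 'Query');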
The stuck queries seem to have nothing in common - they can be selects from system tables or user data tables, DDL queries, etc. The tables they use are different each time. The only solution we have found is to restart the server process.
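One way to enumerate the stuck queries is via system.processes, ordered by how long they have been running (a minimal sketch):

SELECT query_id, user, elapsed, query
FROM system.processes
ORDER BY elapsed DESC
LIMIT 20;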
On the metrics it looks like this (yellow and orange lines):
RWLockActiveReaders: [graph]
BackgroundMergesAndMutationsPoolTask: [graph]
ClickHouse runs on CentOS 7 with the ELRepo kernel 5.4.224-1.el7.elrepo.x86_64.
How to reproduce
We can't reproduce it; it happens unexpectedly.
How can we find out the root cause of this behavior?