This is a VDR issue caused by the debug logging of the locking sequence. I have disabled it in my branch of VDR: https://github.com/FernetMenta/VDR/commit/e474ce83ef0d60e3d074a4f0c18b13c080b7374a
thank you very much:) I will recompile your fork then
Are we sure this is fixed with your branch? I compiled your source and experienced the same behavior.
Yes, this particular logging feature is disabled in the master branch of my repo: https://github.com/FernetMenta/VDR/commit/e474ce83ef0d60e3d074a4f0c18b13c080b7374a
I can trigger the deadlock on VDR 2.3.8 by creating two recordings with the same end time. With the "Emergency exit" option disabled, VDR hangs at the end of the recordings.
A backtrace shows:
Thread 1 (Thread 0x7f1cd742d740 (LWP 867)):
#0 __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1 0x00007f1cd70a9aa5 in __GI___pthread_mutex_lock (mutex=0x7f1c88001710)
at ../nptl/pthread_mutex_lock.c:80
#2 0x00000000004bed89 in cMutex::Lock() ()
#3 0x00000000004bedff in cMutexLock::Lock(cMutex*) ()
#4 0x00007f1cd51b00e3 in cVNSIClient::Recording(cDevice const*, char const*, char const*, bool) ()
from /storage/.kodi/addons/service.multimedia.vdr-addon/plugin/libvdr-vnsiserver.so.2.3.8
#5 0x0000000000511b96 in cStatus::MsgRecording(cDevice const*, char const*, char const*, bool) ()
#6 0x00000000004ca5d2 in cRecordControl::Stop(bool) ()
#7 0x00000000004ca6dc in cRecordControl::~cRecordControl() ()
#8 0x00000000004ca6f9 in cRecordControl::~cRecordControl() ()
#9 0x00000000004ca559 in cRecordControls::Process(cTimers*, long) ()
#10 0x000000000045a5d0 in main ()
Thread 32 (Thread 0x7f1c7a7fc700 (LWP 1042)):
#0 0x00007f1cd70abc98 in futex_wait (private=0, expected=17,
futex_word=0x6286b8 <cTimers::timers+56>)
at ../sysdeps/unix/sysv/linux/futex-internal.h:61
#1 futex_wait_simple (private=0, expected=17,
futex_word=0x6286b8 <cTimers::timers+56>)
at ../sysdeps/nptl/futex-internal.h:135
#2 __pthread_rwlock_rdlock_slow (rwlock=0x6286b0 <cTimers::timers+48>)
at pthread_rwlock_rdlock.c:68
#3 0x00000000004bef68 in cRwLock::Lock(bool, int) ()
#4 0x000000000050ea7f in cStateLock::Lock(cStateKey&, bool, int) ()
#5 0x000000000050c757 in cTimers::GetTimersRead(cStateKey&, int) ()
#6 0x00007f1cd51b3349 in cVNSIClient::processTIMER_GetList(cRequestPacket&) ()
from /storage/.kodi/addons/service.multimedia.vdr-addon/plugin/libvdr-vnsiserver.so.2.3.8
#7 0x00007f1cd51afb0b in cVNSIClient::processRequest(cRequestPacket&) ()
from /storage/.kodi/addons/service.multimedia.vdr-addon/plugin/libvdr-vnsiserver.so.2.3.8
#8 0x00007f1cd51b0975 in cVNSIClient::Action() ()
from /storage/.kodi/addons/service.multimedia.vdr-addon/plugin/libvdr-vnsiserver.so.2.3.8
#9 0x00000000004bf86d in cThread::StartThread(cThread*) ()
#10 0x00007f1cd70a73f4 in start_thread (arg=0x7f1c7a7fc700)
at pthread_create.c:333
#11 0x00007f1cd5ef747f in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:105
Thread 1: cTimers::GetTimersWrite() was called in main(), and cVNSIClient::Recording() then tries to lock(&m_msgLock).
Thread 32: lock(&m_msgLock) was called in cVNSIClient::processRequest(), and cVNSIClient::processTIMER_GetList() then tries to call cTimers::GetTimersRead() (see the sketch below).
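For illustration only, here is a minimal, self-contained sketch of that ABBA lock-order inversion; `timersLock` and `msgLock` are hypothetical stand-ins for VDR's timers state lock and vnsiserver's `m_msgLock`, not the actual types:

```cpp
// Minimal sketch of the inverted lock order described above.
// Running this will usually hang, which is the point.
#include <chrono>
#include <mutex>
#include <thread>

std::mutex timersLock; // stand-in for VDR's cTimers state lock
std::mutex msgLock;    // stand-in for vnsiserver's m_msgLock

void vdrMainThread() {
    // Thread 1: VDR takes the timers lock, then calls into the plugin's
    // Recording() callback, which tries to take msgLock.
    std::lock_guard<std::mutex> timers(timersLock);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> msg(msgLock); // blocks while the client thread holds msgLock
}

void vnsiClientThread() {
    // Thread 32: the plugin takes msgLock in processRequest(), then the
    // timer list request asks VDR for the timers lock.
    std::lock_guard<std::mutex> msg(msgLock);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> timers(timersLock); // blocks while the main thread holds timersLock
}

int main() {
    std::thread a(vdrMainThread);
    std::thread b(vnsiClientThread);
    a.join();
    b.join();
}
```

Neither thread can make progress, because each one holds the lock the other is waiting for.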
The deadlock no longer occurred after removing the lock() at https://github.com/FernetMenta/vdr-plugin-vnsiserver/blob/master/vnsiclient.c#L302, but this is only a dirty hack for testing.
I'll have a look. VDR calls into plugin code while holding its own locks. This is really nasty of VDR.
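One common way to avoid blocking inside a status callback that VDR may invoke while holding its own locks is to only enqueue the event there and deliver it from the plugin's own thread. The sketch below is an assumption about how such a decoupling could look, with hypothetical names (RecordingEvent, EventQueue); it is not the actual vnsiserver fix:

```cpp
// Sketch: decouple VDR's status callback from the client's message lock by
// queueing the event instead of sending it synchronously.
// All names here are hypothetical, not the real vnsiserver classes.
#include <deque>
#include <mutex>
#include <string>

struct RecordingEvent {
    std::string name;     // recording name reported by VDR
    std::string fileName; // recording file name
    bool on;              // true = started, false = stopped
};

class EventQueue {
public:
    // Called from VDR's thread (e.g. inside the Recording() status callback).
    // Only this short-lived queue mutex is taken here, never m_msgLock.
    void Push(RecordingEvent ev) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_events.push_back(std::move(ev));
    }

    // Called from the client's own Action() loop, which is then free to take
    // m_msgLock and write to the socket without any VDR locks being held.
    bool Pop(RecordingEvent &ev) {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_events.empty())
            return false;
        ev = std::move(m_events.front());
        m_events.pop_front();
        return true;
    }

private:
    std::mutex m_mutex;
    std::deque<RecordingEvent> m_events;
};
```

Because the callback never waits on the client's message lock, the lock-order cycle shown above cannot form.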
indeed, fixed. thank you!
I'm not 100% sure whether this is related to VNSI, but since vnsiserver is mentioned in the stack trace and I saw this commit, I thought it would be appropriate to at least ask here.
From time to time, when a recording ends, the VDR server crashes and automatically restarts with the following stack trace:
This is my setup:
Anything I can do about it?