So the server shouldn't die when the client is killed; however, some driver implementations find a way to segfault anyway. So in this case, it could be a bug in SoapyPlutoSDR.
For example, rtl-sdr was crashing the server when the client was killed because the server was trying to clean up the stream, but the closeStream call segfaulted if it didn't first stop the streaming threads in deactivateStream. See this commit: https://github.com/pothosware/SoapyRTLSDR/commit/b77a9fc82cca1397d1b21e3cb92f80da92cdcb2b
So it could be something similar: the server only tries to close the stream, which should be safe, so closeStream in the Pluto driver should cleanly shut down anything that needs to be shut down. Just a hint, but if not, see if you can run the server in gdb and see which call is crashing it.
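For what it's worth, here is a minimal sketch of that defensive pattern; the class and member names are hypothetical, not the actual SoapyPlutoSDR code. The idea is that the stop/join logic is idempotent and runs from both the deactivate path and the destructor, so closing a still-active stream never destroys a joinable thread:

```cpp
// Hypothetical sketch of a defensive streamer teardown (not the actual
// SoapyPlutoSDR code): stop() is idempotent and is called from both the
// deactivate path and the destructor, so closeStream() alone is safe even
// when the client was killed before deactivateStream() was ever called.
#include <atomic>
#include <chrono>
#include <thread>

class rx_streamer_sketch
{
public:
    rx_streamer_sketch()
        : running(true),
          worker([this] {
              // Placeholder receive loop standing in for the real RX work.
              while (running)
                  std::this_thread::sleep_for(std::chrono::milliseconds(10));
          })
    {}

    // Equivalent of the deactivateStream() path: signal and join; harmless
    // if the worker has already been stopped.
    void stop()
    {
        running = false;
        if (worker.joinable()) worker.join();
    }

    // Equivalent of what closeStream()/the destructor must guarantee:
    // never let a joinable std::thread be destroyed.
    ~rx_streamer_sketch() { stop(); }

private:
    std::atomic<bool> running;
    std::thread worker;
};

int main()
{
    rx_streamer_sketch s; // destroyed without an explicit stop(): still safe
}
```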
Running through GDB would help indeed, I will try this and post the results.
Here is the first backtrace
[Thread 0xb61ff050 (LWP 1670) exited]
[Thread 0xb3bfe050 (LWP 1674) exited]
[Thread 0xb43fe050 (LWP 1673) exited]
terminate called without an active exception

Thread 5 "SoapySDRServer" received signal SIGABRT, Aborted.
[Switching to Thread 0xb4ef0050 (LWP 1669)]
0xb6c36794 in raise () from /lib/libc.so.6
(gdb) bt
#0  0xb6c36794 in raise () from /lib/libc.so.6
#1  0xb6c37b38 in abort () from /lib/libc.so.6
#2  0xb6e4b944 in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/libstdc++.so.6
#3  0xb6e49770 in ?? () from /usr/lib/libstdc++.so.6
#4  0xb6e497e4 in std::terminate() () from /usr/lib/libstdc++.so.6
#5  0xb63a2e40 in ~thread () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/thread:143
#6  ~rx_streamer () at /home/steve/Desktop/SDR/Pluto/SoapyPlutoSDR/PlutoSDR_Streaming.cpp:227
#7  0xb63a6224 in _M_dispose () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/bits/shared_ptr_base.h:373
#8  0xb63a2c7c in _M_release () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/bits/shared_ptr_base.h:149
#9  ~__shared_count () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/bits/shared_ptr_base.h:666
#10 ~__shared_ptr () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/bits/shared_ptr_base.h:914
#11 ~shared_ptr () at /opt/Xilinx/SDK/2016.4/gnu/arm/lin/arm-xilinx-linux-gnueabi/include/c++/4.9.2/bits/shared_ptr.h:93
#12 ~PlutoSDRStream () at /home/steve/Desktop/SDR/Pluto/SoapyPlutoSDR/PlutoSDR_Streaming.cpp:9
#13 closeStream () at /home/steve/Desktop/SDR/Pluto/SoapyPlutoSDR/PlutoSDR_Streaming.cpp:72
#14 0x00016940 in ~SoapyClientHandler () at /home/steve/Desktop/SDR/Pluto/SoapyRemote/server/ClientHandler.cpp:44
#15 0x00015844 in handlerLoop () at /home/steve/Desktop/SDR/Pluto/SoapyRemote/server/ServerListener.cpp:53
#16 0xb6ea40a8 in ?? () from /usr/lib/libstdc++.so.6
#17 0xb6d48da4 in start_thread () from /lib/libpthread.so.0
#18 0xb6cda5b0 in ?? () from /lib/libc.so.6
You were right: it looks like a crash in the destructor of one of the stream instances in the SoapyPlutoSDR plugin. I'll open a ticket there, and I guess we can close this one if you're OK with that?
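For context, the "terminate called without an active exception" abort together with the ~thread frame inside ~rx_streamer is the standard symptom of a std::thread that is still joinable when its destructor runs. A minimal, driver-independent reproduction of that failure mode:

```cpp
// Minimal reproduction of the failure mode in the backtrace above:
// destroying a std::thread that is still joinable calls std::terminate(),
// which libstdc++ reports as "terminate called without an active exception"
// before aborting the process.
#include <chrono>
#include <thread>

int main()
{
    std::thread t([] {
        std::this_thread::sleep_for(std::chrono::seconds(1));
    });
    // t goes out of scope while still joinable -> std::terminate() -> abort.
}
```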
Closing the issue, since I want the driver to be able to clean up the stream nicely regardless of activateStream/deactivateStream, so it's easy for dumb applications to clean up.
I'm not sure if @jocover (?) is still maintaining SoapyPlutoSDR; there are other issues that have come up with discovery and timeouts too. I'm considering maintaining a fork here with various fixes if it comes down to it. Let me know if you are interested in helping to make and/or test fixes, since I have no way to test them myself.
I'm using SoapySDRServer with a somewhat unstable client.
On a regular basis, the soapy_power client, which is forked from a parent process, keeps running when I expect it to stop. I simply kill it (kill -9). Whenever the client is killed, the SoapySDRServer dies immediately, with the following traces:
Startup traces:
Client connection traces:
Client killed traces:
Although the client should be fixed, I believe the server should not die when the client is killed.
Let me know if you need more details.