Closed: iguanaonmystack closed this issue 6 years ago
Hm. I'm reminded of a previous issue I had regarding this back when I was still trying to get Mastodon support into BitlBee itself. I eventually closed it because things seemed to work well enough. Sadly, I found no way to solve it when I last looked at it last year. Patches welcome. Anyway, the key seems to be that memory can get freed twice, or that it can get freed even though there are still connections sending data. This is truly the dark side of C, and I'm not very familiar with the best approach to fixing these issues. Sorry. :(
I've been taking a look at this and I'm currently running my forked version with a few changes to it.
I think the problem is a race condition between the service being disconnected and some of the callbacks coming back. So I've added some protective code and I'll see how that copes after the next mastodon disconnect. I'm not entirely sure why mastodon suffers from this while twitter doesn't -- is mastodon making additional calls?
Yes, I think it does. I don't quite remember whether the Twitter code uses the streaming API or not; perhaps it uses polling? The Mastodon plugin uses the streaming API and opens a connection for the default timeline, plus one more for every hashtag channel and for the local and federated timelines (if you join those group chats/channels). Look for all the requests that specify `mastodon_http_stream` as a callback: they all set `req->flags |= HTTPC_STREAMING` and add themselves to `md->streams`.
At the time I thought that Twitter was simply not dropping the connection as often as my Mastodon instances and that is why the race condition was triggered. I hardly ever noticed bitlbee crashing, but then again, I run it as a user directly and maybe I just never saw it.
I think (part of) the problem was a couple of use-after-free bugs in some of the HTTP callbacks. I've further patched my fork and am now running that code. I'll hold off submitting a pull request until the bee can survive a mastodon drop :)
🐘 ☂️ 🐝
Sounds good to me. Thanks for looking into it.
I'm ready for patches. :)
Mastodon hasn't actually disconnected me since I updated to my latest HEAD, so I can't say whether the issue is fixed yet. I'm reasonably sure I've sorted out the most recent traceback, but I don't know if more are waiting in store. I suppose more testers would be useful; I can open a pull request if you like? :)
Sure, please do. Also, if the code quality is simply better, that's a win in itself.
Okay, I've opened pull request #11
Thanks!
FWIW, today I got:

```
Finishing HTTP request with status: 200 OK
*** Error in `/home/dx/bitlbee/bitlbee': double free or corruption (!prev): 0x000055555694e380 ***

Program received signal SIGABRT, Aborted.
0x00007ffff5ce58a0 in raise () from /usr/lib/libc.so.6
(gdb) bt
#0  0x00007ffff5ce58a0 in raise () at /usr/lib/libc.so.6
#1  0x0000000000000000 in ()
```
Not very useful, but inspecting the garbage on the stack shows mastodon reconnection messages, so I guess it's this issue. I was running the code from when this was still a BitlBee PR, so it's outdated as heck.
Any reason this isn't closed?
Note merge request #12?
I think we can close this now? Nothing new seems to have popped up.
I'm getting on really well with the mastodon plugin, but unfortunately every time Mastodon closes the stream (e.g. `mastodon - Error: Stream closed (200 OK)`) bitlbee segfaults.

BitlBee-3.5.1+20171123+master+30-g4a9c6b0-git
Mastodon from git at 4a0262752105eb094a8f5ecfc2708f5b7f9c4e64 (HEAD of master at the time of writing)
My current workaround is running bitlbee in valgrind, which prevents the segfault.
Valgrind output:
Is there anything else I can provide to help debug this?