martensitingale closed this issue 6 years ago
I believe this is caused by the callback object getting freed too early; could someone verify that the following patch fixes the problem?
diff --git a/src/NetManager.vala b/src/NetManager.vala
index 23ed95b..fc049cd 100644
--- a/src/NetManager.vala
+++ b/src/NetManager.vala
@@ -83,7 +84,7 @@ public class Tootle.NetManager : GLib.Object {
return yield session.websocket_connect_async (msg, null, null, null);
}
- public Soup.Message queue (Soup.Message msg, Soup.SessionCallback? cb = null) {
+ public Soup.Message queue (Soup.Message msg, owned Soup.SessionCallback? cb = null) {
requests_processing++;
started ();
I'll give it a test in a moment
I can't believe it works
For what it's worth, my debugging workflow here was actually prompted by working on the image cache, which was showing similar crashes. In GDB, the callback data object (lowered to a C variable called "data2") would show garbage values for its refcount and other fields when I inspected it at the point of the crash. I then noticed that the generated C code in the build dir followed a pattern of:
ValaGeneratedCallbackData43* _tmp0_ = g_slice_new0 (ValaGeneratedCallbackData43);
_tmp0_->self = foo;
_tmp0_->bar = baz;
/* etc. */
my_async_function_start (a, b, _tmp0_);
unref (_tmp0_);
which results in _tmp0_ being freed as soon as the async function is started, before it completes. Changing the async functions to accept owned callbacks, on the other hand, makes Vala's code generation insert an additional reference-count increment before the data is passed to the async function, so the callback data stays alive until the async function completes.
Either way, this one "owned" just owned my forever-uncommitted system for checking whether a pending request is still relevant and needs to be handled
Well played, sir
I get a variety of different crashes when clicking a toot and then pressing "back" before the toot has loaded. It seems the crashes occur when the toot does finish downloading.