aatxe / irc

the irc crate – usable, async IRC for Rust
Mozilla Public License 2.0

After long periods of time the program will crash with an assertion error #92

Closed ghost closed 7 years ago

ghost commented 7 years ago

I was trying to track this down for a while, so I made a reduced case (which is pretty much just the example on GitHub), and it happens for me.

extern crate irc;

use std::default::Default;
use irc::client::prelude::*;

fn main() {
    let cfg = Config {
        nickname: Some(format!("bot_test")),
        server: Some(format!("irc.mozilla.org")),
        channels: Some(vec![format!("#rust")]),
        .. Default::default()
    };
    let server = IrcServer::from_config(cfg).unwrap();
    server.identify().unwrap();
    server.for_each_incoming(|_| {
        // This shouldn't cause an issue, but it's what I tested with because I forgot it could just be blank
        let x = 4;
    })
}

crashes with:

[ayy@lmao test_irc]$ cargo run
   Compiling test_irc v0.1.0 (file:///home/ayy/Projects/Rust/test_irc)
warning: unused variable: `x`
  --> src/main.rs:16:13
   |
16 |         let x = 4;
   |             ^
   |
   = note: #[warn(unused_variables)] on by default

    Finished dev [unoptimized + debuginfo] target(s) in 2.56 secs
     Running `target/debug/test_irc`
thread 'main' panicked at 'assertion failed: self.remaining_mut() >= src.remaining()', /home/ayy/.cargo/registry/src/github.com-1ecc6299db9ec823/bytes-0.4.4/src/buf/buf_mut.rs:229
note: Run with `RUST_BACKTRACE=1` for a backtrace.

The mentioned line is located here: https://github.com/carllerche/bytes/blob/master/src/buf/buf_mut.rs#L204

And the only place that uses that method is here: https://github.com/aatxe/irc/blob/41632b10af58537476e6339a23772a15e426a651/src/proto/line.rs#L72

I couldn't track the issue much further than that, though.

It takes roughly ten hours to crash, and it does not appear to be related to any particular message from a user. I'm already using the #rust channel for something else, so I used it for the test to keep things the same, and the crash times don't line up with anything a user posts. I will run it with the backtrace enabled and see what it shows, but that will take a while. I'd be happy to test any further ideas you have on the issue.

aatxe commented 7 years ago

So, I think I have an idea for a fix, but what concerns me is that I'm still confused about the cause. It seems we could fix this either by using extend, or by checking the remaining bytes and reserving more only when necessary (which extend does internally).
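
For reference, here is a minimal sketch of the two approaches, assuming bytes 0.4 (the version in the panic message); the helper names are made up for illustration and are not code from the crate:

extern crate bytes;

use bytes::{BufMut, BytesMut};

// Panics with `assertion failed: self.remaining_mut() >= src.remaining()`
// whenever `line` is longer than the buffer's spare capacity.
fn append_unchecked(buf: &mut BytesMut, line: &[u8]) {
    buf.put(line);
}

// Grows the buffer first (extend does the same reservation internally),
// so the assertion can never fire.
fn append_checked(buf: &mut BytesMut, line: &[u8]) {
    buf.reserve(line.len());
    buf.put(line);
}

fn main() {
    let mut buf = BytesMut::with_capacity(64);
    append_checked(&mut buf, b"PING :irc.mozilla.org\r\n");
    append_unchecked(&mut buf, b"PONG :irc.mozilla.org\r\n");
    assert_eq!(buf.len(), 46);
}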

aatxe commented 7 years ago

I'm not sure to what extent this issue is tied to the timing. It's clear that the panic happens when the buffer (which is held internally by the Framed instance within the transport) is full, or would be more than full after adding the message. What's unclear to me is whether that's because you very rarely receive a very long message, or because of something about the state of that buffer, e.g. it not being emptied fast enough. Still, I think I'll change it to use extend, which should resolve your issue (though if the latter situation is the case, there may be memory usage issues).
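
To make that concrete, the following is a minimal sketch (not the crate's actual LineCodec; the type name is hypothetical) of an Encoder like the one Framed drives, assuming tokio-io 0.1 and bytes 0.4. The BytesMut passed to encode is Framed's internal write buffer, and reserving before writing is the checked path described above:

extern crate bytes;
extern crate tokio_io;

use std::io;

use bytes::{BufMut, BytesMut};
use tokio_io::codec::Encoder;

struct SketchLineCodec;

impl Encoder for SketchLineCodec {
    type Item = String;
    type Error = io::Error;

    fn encode(&mut self, line: String, dst: &mut BytesMut) -> io::Result<()> {
        // `dst` is the buffer Framed holds between flushes; if earlier
        // messages haven't been written to the socket yet, its spare
        // capacity may already be used up, so grow it explicitly before
        // copying the message in.
        dst.reserve(line.len() + 2);
        dst.put(line.as_bytes());
        dst.put(&b"\r\n"[..]);
        Ok(())
    }
}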

ghost commented 7 years ago

Okay, I will do some further testing with the extend change, keeping memory usage in mind. Thanks for the quick fix :+1:

aatxe commented 7 years ago

Most likely, there's nothing weird going on (just the buffer getting full for some ordinary reason, and then eventually being sent on the stream and flushed). So, this should be all that's necessary.

pshc commented 7 years ago

FWIW I've been running into this too. Thanks for the fix!