museun / twitchchat

interface to the irc portion of Twitch's chat
Apache License 2.0
103 stars 23 forks

Determine whether the Client should split long messages into 512-byte chunks #10

Closed museun closed 4 years ago

museun commented 5 years ago

IRC messages are limited to 512 bytes (including the trailing \r\n). When a user calls .send() or .raw(), there are no checks to make sure the message fits within those 512 bytes, which must also account for all of the Command and Target information.

One solution would be to provide a split function that the core .write() uses to break long lines into multiple lines. On top of this split function, a rate-limit method could be added that adheres to the Twitch guidelines for message-sending limitations.
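As a rough illustration of what that split function could look like (the name `split_message` and the exact signature are hypothetical, not the crate's API), a chunker has to respect the byte limit without cutting a UTF-8 code point in half:

```rust
/// Split `data` into chunks of at most `max` bytes, never splitting a
/// UTF-8 code point. Assumes `max` is at least 4 (the longest UTF-8
/// encoding of a single char), otherwise a chunk could end up empty.
fn split_message(data: &str, max: usize) -> Vec<String> {
    let mut out = Vec::new();
    let mut start = 0;
    while start < data.len() {
        let mut end = (start + max).min(data.len());
        // back up until we land on a char boundary
        while !data.is_char_boundary(end) {
            end -= 1;
        }
        out.push(data[start..end].to_string());
        start = end;
    }
    out
}

fn main() {
    // 512 minus the 2-byte "\r\n" terminator; a real implementation would
    // also subtract the command/target prefix length.
    let parts = split_message(&"a".repeat(1200), 510);
    assert_eq!(parts.len(), 3);
    assert!(parts.iter().all(|p| p.len() <= 510));
    println!("split into {} chunks", parts.len());
}
```

The real budget per chunk would be 512 minus the serialized command, target, and trailing \r\n, not a fixed constant.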

From the Twitch docs:

| Limit | Applies to … |
| --- | --- |
| 20 per 30 seconds | Users sending commands or messages to channels in which they do not have Moderator or Operator status |
| 100 per 30 seconds | Users sending commands or messages to channels in which they have Moderator or Operator status |
| 50 per 30 seconds | Known bots |
| 7500 per 30 seconds | Verified bots |

To implement rate limiting, a token bucket could be used -- or even a timer wheel. Other parts of the crate already need chrono for "time-like" operations. Tracking the last send per window would count against the available tokens.
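A minimal token-bucket sketch, using only the standard library (the `TokenBucket` type and its methods are hypothetical, not anything in the crate): tokens refill continuously at `cap` per window, and a send succeeds only while a whole token is available.

```rust
use std::time::{Duration, Instant};

/// Token bucket: `cap` sends allowed per `window`,
/// e.g. 20 per 30 seconds for a regular user.
struct TokenBucket {
    cap: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(cap: u64, window: Duration) -> Self {
        Self {
            cap: cap as f64,
            tokens: cap as f64,
            refill_per_sec: cap as f64 / window.as_secs_f64(),
            last: Instant::now(),
        }
    }

    /// Try to consume one token; returns false when rate-limited.
    fn try_send(&mut self) -> bool {
        let now = Instant::now();
        // refill proportionally to elapsed time, capped at `cap`
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.cap);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(20, Duration::from_secs(30));
    // fire 25 sends back-to-back: only the first 20 fit in the window
    let sent = (0..25).filter(|_| bucket.try_send()).count();
    assert_eq!(sent, 20);
    println!("sent {} of 25", sent);
}
```

A timer wheel would instead bucket sends into discrete slots, which trades the per-send float math for coarser granularity.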

A major design question is what to do once we've exceeded the limit. Do we block the thread?

Such as:

    let next_window = some_calculation();
    std::thread::sleep(std::time::Duration::from_millis(next_window));

Or do we have the caller decide when to 'try' again? By returning some "rate limit"-style result and buffering the message internally, we let the caller figure out what to do w.r.t. blocking and retrying.

    enum RateLimit {
        /// The message was buffered; try again later.
        Limited {
            reqs_remain: usize,
            preferred_time: std::time::Duration,
        },
        /// The message can go out immediately.
        Continue, // or None or something
    }

    fn try_send(&mut self, their_data: Message) -> Result<RateLimit, Error> {
        // drain the queue here first (FIFO; maybe recursively call this function)

        let next_window = some_calculation();
        if self.is_too_soon(next_window) {
            // too soon: buffer the message and tell the caller when to retry
            self.queue.push(their_data);
            return Ok(RateLimit::Limited {
                reqs_remain: self.queue.len(),
                preferred_time: self.next_best_time(),
            });
        }
        Ok(RateLimit::Continue)
    }
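To make the caller-decides option concrete, here is a self-contained sketch of how calling code might consume such a result. The `Client` stub, its `budget` field, and `send_blocking` are all hypothetical stand-ins, not the crate's API; the point is only the retry loop driven by the returned hint:

```rust
use std::time::Duration;

enum RateLimit {
    Limited { reqs_remain: usize, preferred_time: Duration },
    Continue,
}

// Stub standing in for the real client: allows `budget` sends,
// then reports Limited with a retry hint.
struct Client {
    budget: usize,
}

impl Client {
    fn send_raw(&mut self, _line: &str) -> RateLimit {
        if self.budget == 0 {
            RateLimit::Limited {
                reqs_remain: 1,
                preferred_time: Duration::from_millis(1),
            }
        } else {
            self.budget -= 1;
            RateLimit::Continue
        }
    }
}

/// One possible caller policy: sleep out the hint and retry.
/// Another caller might instead drop the message or surface an error.
fn send_blocking(client: &mut Client, line: &str) {
    loop {
        match client.send_raw(line) {
            RateLimit::Continue => break,
            RateLimit::Limited { preferred_time, .. } => {
                std::thread::sleep(preferred_time);
                client.budget = 1; // stub: pretend the window rolled over
            }
        }
    }
}

fn main() {
    let mut client = Client { budget: 2 };
    send_blocking(&mut client, "PRIVMSG #test :hello");
    send_blocking(&mut client, "PRIVMSG #test :world");
    send_blocking(&mut client, "PRIVMSG #test :again"); // hits Limited once, then retries
    println!("all sent");
}
```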
museun commented 5 years ago

https://github.com/museun/twitchchat/blob/8b5697b209a07fa77e1395cb876ef67914aee7de/src/twitch/client.rs#L553-L591 this could be reused.

Also, from another project: split

museun commented 5 years ago

see 5f545ec3f14d523b7aa5ed9e7e0de742b4de14b3