davidmoreno / onion

C library to create simple HTTP servers and Web Applications.
http://www.coralbits.com/libonion/

receiving more than 1500 bytes of https data #222

Open schenkus opened 6 years ago

schenkus commented 6 years ago

There is a problem with the current logic for receiving https data when more than 1500 bytes arrive at once: when the socket is ready for reading, onion_read_http_ready() gets called and then calls onion_https_read() with a 1500-byte buffer. onion_https_read() then calls gnutls_record_recv() with that buffer, but gnutls_record_recv() may read more than 1500 bytes from the socket into an internal buffer. This "unread data", sitting in the internal buffer of gnutls waiting for another call of gnutls_record_recv(), only gets read when more data arrives or the poller times out after 5 seconds.

This can be solved by using a different "read_ready" function for https that keeps reading until gnutls_record_check_pending() reports no more buffered data:

```c
/**
 * @short Read all available data from the https interface (not just the first 1500 bytes)
 * @memberof onion_https_t
 * @ingroup https
 */
static int onion_https_read_ready(onion_request *con) {
    gnutls_session_t session = (gnutls_session_t)con->connection.user_data;
    onion_connection_status st;

    do {
        char buffer[1500];
        ssize_t len = con->connection.listen_point->read(con, buffer, sizeof(buffer));
        if (len <= 0)
            return OCS_CLOSE_CONNECTION;

        st = onion_request_write(con, buffer, len);
    } while ((st == OCS_NEED_MORE_DATA) && (gnutls_record_check_pending(session) > 0));

    if (st != OCS_NEED_MORE_DATA) {
        if (st == OCS_REQUEST_READY)
            st = onion_request_process(con); // May give error to the connection, or yield, or whatever.
        if (st < 0)
            return st;
    }

    return OCS_PROCESSED;
}
```
davidmoreno commented 6 years ago

Sounds good. As I understand it, this changes the use of http_read_ready to check for this case.

Another option would be to change the original http_read_ready to check whether there is more data in the case that a full 1500 bytes were returned. IIRC the read socket is set to non-blocking, so in that case we can read again and, if there is more, feed it to the parser, or else continue. So it would mean changing the current http_read_ready to yours, but with len >= 1500 instead of your gnutls_record_check_pending.

When using https, I guess it returns as much data as available, so it should work in that case too.

Both versions look good to me. I slightly prefer mine for being more generic, but yours is ok.

So unless you want to spend some more time reworking the patch, I feel ok with your approach.

Can you prepare a pull request? That way it's easier to check the full extent of changes, and give proper attribution :).

Extra points if there is a test case that forces that case.

Regards, David.