webrecorder / warcio

Streaming WARC/ARC library for fast web archive IO
https://pypi.python.org/pypi/warcio
Apache License 2.0

warcio mangles non-ASCII HTTP headers #128

Open JustAnotherArchivist opened 3 years ago

JustAnotherArchivist commented 3 years ago

I discovered today that warcio mangles the HTTP header data when it isn't pure ASCII. Specifically, I am dealing with a server that returns ISO-8859-1 headers.

As far as I can tell, this behaviour was introduced by #45 in warcio 1.6.0. I suspect that it's fine for reading WARCs, but it's absolutely horrible for writing because the WARCs will not contain the data as sent by the server (nor even data that would be considered equivalent per HTTP!), hence violating the WARC specification.

Example:

import io
import warcio

output = io.BytesIO()
writer = warcio.warcwriter.WARCWriter(output, gzip=False)
payload = io.BytesIO()
# Raw HTTP response with a single non-ASCII (0xFF, ISO-8859-1) header value byte
payload.write(b'HTTP/1.1 200 OK\r\nDate: Thu, 27 May 2021 22:03:54 GMT\r\nContent-Length: 0\r\nX-non-utf8-value: \xff\r\n\r\n')
payload.seek(0)
record = writer.create_warc_record('http://example.org/', 'response', payload=payload)
writer.write_record(record)
print(output.getvalue())

Expected output for the relevant header: X-non-utf8-value: <0xFF> (where <0xFF> is that literal byte)
Actual output: X-non-utf8-value: %C3%BF
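
A quick check on the serialized record (reusing output from the snippet above) makes the mangling visible:

data = output.getvalue()
# The literal 0xFF byte from the original response is gone...
assert b'X-non-utf8-value: \xff' not in data
# ...and only its percent-encoded (UTF-8) form remains:
assert b'X-non-utf8-value: %C3%BF' in data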

ikreymer commented 3 years ago

Yeah, this is tricky...

First, the API isn't quite correct, it should really be:

headers = warcio.StatusAndHeaders("HTTP/1.0 200 OK",
          {b"Date": b"Thu, 27 May 2021 22:03:54 GMT",
           b"Content-Length": b"0",
           b"X-non-utf8-value": b"\xff"
          })
record = writer.create_warc_record('http://example.org/', 'response', http_headers=headers, payload=payload)

but it seems like the result is the same.

The %-encoding was implemented to deal with ambiguity: even though ISO-8859-1 is supposed to be the encoding for headers, servers can and do use UTF-8, and probably other encodings too. In Python 3 it's standard to represent headers as Unicode strings, so this is an issue; the %-encoding was designed to remove that ambiguity when serializing headers.

The default %-encoding via urllib.parse.quote uses UTF-8 though, so that converts it to what you're seeing:

urllib.parse.quote('\xff')
'%C3%BF'

It seems like the implementation was geared towards Content-Disposition and similar uses of UTF-8, following https://datatracker.ietf.org/doc/html/rfc5987#section-3.2.2 and https://datatracker.ietf.org/doc/html/rfc8187#section-3.2.3

One option is to add the charset prefix, but I'm not sure this is correct either (this syntax is for extended parameter values, I think):

X-non-utf8-value: *=UTF-8''%C3%BF

or

X-non-utf8-value: *=ISO-8859-1''%FF

Probably it should just be:

X-non-utf8-value: %FF

Maybe an encoding param is also needed here...
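
For reference, urllib.parse.quote already takes an encoding argument, so the bare %FF form is what quoting against ISO-8859-1 (rather than the UTF-8 default) would produce:

urllib.parse.quote('\xff', encoding='iso-8859-1')
'%FF'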

JustAnotherArchivist commented 3 years ago

First, the API isn't quite correct, it should really be: [using http_headers]

Hmm, yeah, but there is even a test for a single payload with the headers included in the test suite, so I assumed that usage was also fine: https://github.com/webrecorder/warcio/blob/aa702cb321621b233c6e5d2a4780151282a778be/test/test_writer.py#L356-L368

Unfortunately, the documentation for warcio's APIs is lacking, so it's often hard to tell what is and isn't supported usage.

[ambiguity with headers, encoding the value somehow]

I disagree. Yes, there is ambiguity, but that only matters when trying to consume HTTP. The data written to WARC response records is 'the full HTTP response received over the network' (or '... request sent ...' for request records). As such, no manipulation should ever happen to that data. Your proposed percent encoding would be equivalent per the HTTP spec, but it still wouldn't be what was sent/received over the network. In other words, representing the headers like that in the StatusAndHeaders mapping for further consumption by the client is fine, but having that in the WARC itself is a problem.

ikreymer commented 3 years ago

First, the API isn't quite correct, it should really be: [using http_headers]

Hmm, yeah, but there is even a test for a single payload with the headers included in the test suite, so I assumed that usage was also fine:

You're right, sorry, I forgot about that support, it's been a while :) ...

[ambiguity with headers, encoding the value somehow]

I disagree. Yes, there is ambiguity, but that only matters when trying to consume HTTP. The data written to WARC response records is 'the full HTTP response received over the network' (or '... request sent ...' for request records). As such, no manipulation should ever happen to that data. Your proposed percent encoding would be equivalent per the HTTP spec, but it still wouldn't be what was sent/received over the network. In other words, representing the headers like that in the StatusAndHeaders mapping for further consumption by the client is fine, but having that in the WARC itself is a problem.

Yeah, it's tricky because warcio serves a dual role here: parsing data from existing WARCs, where the HTTP semantics are important, as well as writing raw data. An option could be to add a flag, say a raw mode, that uses to_bytes instead of to_ascii_bytes at https://github.com/webrecorder/warcio/blob/master/warcio/statusandheaders.py#L118. I'm not sure if it should be the default; I don't want to break the fix for #38.
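
To sketch the effect (not a proposed patch, and assuming to_bytes() still accepts a filter_func and an encoding argument as in the linked statusandheaders.py), a raw-mode header object could simply bypass the ASCII/percent-encoding step:

from warcio.statusandheaders import StatusAndHeaders

class RawStatusAndHeaders(StatusAndHeaders):
    def to_ascii_bytes(self, filter_func=None):
        # Re-encode the parsed header strings as ISO-8859-1 instead of
        # percent-encoding anything that fails ASCII encoding, so the
        # original bytes are written back out unchanged.
        return self.to_bytes(filter_func, 'iso-8859-1')

Whether ISO-8859-1 is the right charset to re-encode with is of course the same ambiguity as above, so an encoding param would probably still be needed.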

I don't have a lot of time to focus on this right now unfortunately, but if you have other suggestions or want to open a PR, I can review it.

JustAnotherArchivist commented 3 years ago

I'd argue that #45 does not actually fix #38 at all. Reading a WARC record and then writing it again should produce identical record body data (though it may change the WARC headers to an equivalent representation), i.e. the HTTP headers should not be reencoded in that scenario either. Essentially, reading the response record from a WARC is equivalent to reading from a network connection with the actual server.

I think the only sane approach is to avoid the round trip to the mapping and back to bytes entirely. I haven't thought this through completely, but my idea right now would be to have StatusAndHeadersParser record the raw data read from .readline() and then set the StatusAndHeaders object's headers_buff attribute directly to the concatenation of that. That would solve #35 (properly), this, and #129 in one go, and it should also prevent any further bugs like this on future header parsing changes.
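
Something along these lines, sketched from outside the parser (RecordingStream is a made-up wrapper just for illustration; the real change would live inside StatusAndHeadersParser):

import io
from warcio.statusandheaders import StatusAndHeadersParser

class RecordingStream:
    # Pass every readline() through unchanged while remembering the raw bytes.
    def __init__(self, stream):
        self.stream = stream
        self.raw = b''

    def readline(self, *args):
        line = self.stream.readline(*args)
        self.raw += line
        return line

    def __getattr__(self, name):
        return getattr(self.stream, name)

raw_block = b'HTTP/1.1 200 OK\r\nX-non-utf8-value: \xff\r\n\r\n'
stream = RecordingStream(io.BytesIO(raw_block))
parsed = StatusAndHeadersParser(['HTTP/1.0', 'HTTP/1.1']).parse(stream)
# stream.raw now holds the header block exactly as read; the idea is to keep
# that (e.g. as headers_buff) instead of re-serializing the parsed headers.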

manueldeprada commented 3 years ago

@JustAnotherArchivist is right. Warcio fails to decode and re-encode this file, for example: test.warc.gz. In particular, a record from that file has these headers:

HTTP/1.1 200 OK
Date: Mon, 26 Mar 2012 07:43:46 GMT
Server: Apache/2.0.63 (Unix) mod_ssl/2.0.63 OpenSSL/0.9.7a mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 PHP/5.2.13
X-Powered-By: PHP/5.2.13
Set-Cookie: ³ÒÚÍ×=%96%A4n%9Bu%B0%A9f%A5%7Cl%7D%98; expires=Sun, 06-May-2012 23:43:46 GMT; path=/
Content-Length: 7881
Connection: close
Content-Type: text/html

Notice the Set-Cookie header, where the cookie name before the "=" has non-ASCII characters. I agree this is neither compliant nor common, but warcio should be able to deal with all sorts of WARC files archived in different manners. Instead, this error happens:

'ascii' codec can't encode character '\ufeff' in position 146: ordinal not in range(128)

in https://github.com/webrecorder/warcio/blob/aa702cb321621b233c6e5d2a4780151282a778be/warcio/statusandheaders.py#L179
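
For context, the failure shows up in a plain read-and-rewrite loop like this (out.warc.gz is just a placeholder name):

from warcio.archiveiterator import ArchiveIterator
from warcio.warcwriter import WARCWriter

# Copy every record from the attached file into a new WARC; on the record
# shown above, re-serializing the parsed HTTP headers raises the
# UnicodeEncodeError quoted above.
with open('test.warc.gz', 'rb') as inp, open('out.warc.gz', 'wb') as out:
    writer = WARCWriter(out, gzip=True)
    for record in ArchiveIterator(inp):
        writer.write_record(record)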

wumpus commented 3 years ago

I agree that this is a real bug. I don't think I have this tested in the "warcio check" branch, either! If you look at the WARC standard, it's a bit complicated to figure out what you're really supposed to do in this case, so it should be no surprise that there are buggy implementations out there. But it's very obvious what is meant.

ikreymer commented 3 years ago

Yeah, I think as @JustAnotherArchivist points out in https://github.com/webrecorder/warcio/issues/128#issuecomment-850067348, probably the only real solution is to avoid changing the encoding at all, always treat the headers as bytes, and let the browser handle it on replay. The initial idea was to be in line with how Python 3 treats headers (as strings) and %-encode any non-ASCII characters, but that clearly will not work if we want to keep invalid headers exactly as they are, which probably does make sense.

manueldeprada commented 3 years ago

I created a pull request with a temporary fix so at least WARCIO doesn't terminate when this happens.

ssairanen commented 2 years ago

Yes, this is a serious problem. I've encountered warcio breaking when trying to convert old ARCs to WARCs; issue #88 is one example of how warcio breaks. It happens in https://github.com/webrecorder/warcio/blob/aa702cb321621b233c6e5d2a4780151282a778be/warcio/statusandheaders.py#L203-L206: the .sub happens, but the quoting afterwards doesn't (because curr_value isn't new_value), and the header then breaks with a UnicodeError on https://github.com/webrecorder/warcio/blob/aa702cb321621b233c6e5d2a4780151282a778be/warcio/statusandheaders.py#L179

I also suggest that headers should just stay as they are, because it's not warcio's job to fix the multitude of problems some web servers might cause in the headers. Warcio can of course be used to fix headers when there are functional problems in playback, but in normal usage it shouldn't try to fix everything.

(Even though RFCs defining HTTP headers have stuff like "Historically, HTTP has allowed field content with text in the ISO-8859-1 charset [ISO-8859-1], supporting other charsets only through use of [RFC2047] encoding. In practice, most HTTP header field values use only a subset of the US-ASCII charset [USASCII]. Newly defined header fields SHOULD limit their field values to US-ASCII octets. A recipient SHOULD treat other octets in field content (obs-text) as opaque data.")