whatwg / url

URL Standard
https://url.spec.whatwg.org/

Parser generates invalid URLs #379

Open LEW21 opened 6 years ago

LEW21 commented 6 years ago

https://url.spec.whatwg.org/commit-snapshots/a1b789c6b6c36fcdb16311da5abd177e84151fca/#url-parsing

For each byte in buffer:

If byte is less than 0x21 (!), greater than 0x7E (~), or is 0x22 ("), 0x23 (#), 0x3C (<), or 0x3E (>), append byte, percent encoded, to url’s query.

Otherwise, append a code point whose value is byte to url’s query.

This leads to the creation of invalid URLs: ones that contain [, \, ], ^, `, {, |, or }, which are neither URL code points nor U+0025 (%), and so trigger validation errors:

Otherwise:

If c is not a URL code point and not U+0025 (%), validation error.
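
To make the mismatch concrete, here is a minimal sketch of the quoted query-state step (an illustration only, not spec text; encodeQueryByte is a hypothetical helper name):

// Percent-encode a byte only if it is outside 0x21-0x7E or is " # < >,
// mirroring the quoted step.
encodeQueryByte = (byte) =>
  byte < 0x21 || byte > 0x7e || [0x22, 0x23, 0x3c, 0x3e].includes(byte)
    ? '%' + byte.toString(16).toUpperCase().padStart(2, '0')
    : String.fromCharCode(byte);

encodeQueryByte(0x23) // "%23" (# is in the set)
[...'[]^`{|}'].map((c) => encodeQueryByte(c.charCodeAt(0))).join('')
// "[]^`{|}" (all pass through unencoded, though none is a URL code point)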

I think that either:

Related issues: #378, #17

LEW21 commented 6 years ago

Actually, there is a similar problem for other URL parts, too.

For example:

LEW21 commented 6 years ago

I'd also consider excluding ` from the fragment percent-encode set, as it was excluded from the query set in #17, so that the fragment percent-encode set is contained within the query percent-encode set.
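
For illustration, here is how the asymmetry shows up per my reading of the percent-encode sets (an assumption worth verifying against a conforming implementation):

u = new URL('http://localhost')
u.search = 'a`b' // ` is not in the query percent-encode set
u.hash = 'a`b'   // ` is in the fragment percent-encode set
u.href // expected per spec: "http://localhost/?a`b#a%60b"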

annevk commented 6 years ago

I don't think we're in a position to change these, unless implementations vary enough for a particular code point that there's some leeway.

What we should maybe do is document more clearly when the parser doesn't "fix up" an invalid URL.

domenic commented 6 years ago

I think it would definitely be worth attempting to change these if at all possible, or else expanding the definition of valid URL.

annevk commented 6 years ago

I don't think we can change most of these due to compatibility, but if someone wants to have another attempt at doing the research I'm willing to assist.

I don't think we should change the definition of what's valid either, though. Validity in part helps signal problems you might face, including with legacy software. (Same as with HTML.)

domenic commented 6 years ago

It seems extraordinarily strange that we're not only giving developers the tools to create invalid URLs, but we're also encouraging other standards to produce invalid URLs and pass them on to the rest of the ecosystem (e.g., over HTTP). In that case I'd question why we're calling such URLs invalid at all. At that point they're just "URLs produced by all software that follows the URL Standard", and validity doesn't buy us much.

I don't believe you can parse/serialize any string through the HTML parser and get invalid HTML, for example. (Maybe some edge cases exist, but these characters are hardly edge cases.)

annevk commented 6 years ago

Other standards? I don't see how this is different from #118.

domenic commented 6 years ago

Sure. Every standard that uses the URL parser on user input is currently producing invalid URLs, right? Including standards that then use the URL serializer and send the result to the network or elsewhere.

#118 stemmed from suggestions that the parser should fail when given an invalid URL. This issue is a concern about the parser producing an invalid URL.

annevk commented 6 years ago

@domenic they can produce invalid URLs, sure. Just like the HTML parser can produce invalid HTML.

annevk commented 6 years ago

#118 was precisely about this: that the syntax (valid URLs) conflicts with the parser.

LEW21 commented 6 years ago

@annevk, I'd agree that this is a case of Garbage-In-Garbage-Out if it only happened when parsing invalid URLs. However, it also happens when changing components of a URL object:

x = new URL('http://localhost')
x.search = 'a#{}'
x.href // "http://localhost/?a%23{}"

Also, browsers don't follow the spec 100% here; for example, Chrome escapes ^ and | in paths, while the standard says it should not. I'll do more tests to check how browsers behave for all characters in all positions.
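
As a quick check of that discrepancy (the expected output below is my reading of the spec's path percent-encode set, not what every browser produces, which is the point):

u = new URL('http://localhost')
u.pathname = 'a^b|c' // neither ^ nor | is in the path percent-encode set
u.href // spec-expected: "http://localhost/a^b|c" (Chrome reportedly yields %5E and %7C)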

LEW21 commented 6 years ago

OK, these are the results for Chrome, Firefox, Edge: https://docs.google.com/spreadsheets/d/1mSl2N2Wrc7ZdKy2ArhLg0t3EHI2DOdHaDyWMZQByHE4/edit?usp=sharing

Note: I'm not sure if I've set the spec behavior of \ correctly.

I was testing with:

U = () => new URL('http://localhost') // fresh URL object for each test
s = 'x "#%<>?[\\]^`{|}$'              // probe string covering the disputed characters
u = U(); u.pathname = s; u.href
u = U(); u.search = s; u.href
u = U(); u.hash = s; u.href

On Edge, u.pathname = s and u.search = s were throwing errors, so I had to check these character-by-character with loops:

for (let c of s) {try { u = U(); u.pathname = 'x' + c + 'x'; console.log(c, u.href); } catch (e) { console.log(c, e) }}
for (let c of s) {try { u = U(); u.search   = 'x' + c + 'x'; console.log(c, u.href); } catch (e) { console.log(c, e) }}

LEW21 commented 6 years ago

Looking at the results, I think that:

The only confusing one is:

annevk commented 4 years ago

@LEW21 I had another look.

alwinb commented 3 years ago

I've added a number of test cases covering the distinct percent-encode sets. The PR (for wpt) is here.

Some observations:

Firefox and Chrome deviate from the spec in the following:

It's easy to make mistakes with this, so additional eyes would be good.

nox commented 2 years ago

FWIW, as part of my work I've found some requests using unencoded curly braces and double quotes in URIs' paths. I think it's an NVIDIA update tool or something.

See that test in the Rust crate httparse: https://github.com/seanmonstar/httparse/blob/8bd42db76543dee9f7e172d3f14a6666530f220c/tests/uri.rs#L3682-L3693