w3c / spec-generator

Service to automatically generate specs from various source formats
MIT License

Response delay of ~20+ seconds is way too long #11

Closed sideshowbarker closed 9 years ago

sideshowbarker commented 9 years ago

When generating the WebDriver spec from https://labs.w3.org/spec-generator/?type=respec&url=https://w3c.github.io/webdriver/webdriver-spec.html?specStatus=WD;shortName=webdriver the spec generator just hangs for ~20 seconds or more before starting to send any HTTP response. This makes me angry. It also means that if you feed that URL to the validator to check it, the validator also has to sit there for ~20 seconds waiting for it to respond. Previously I had the timeout for the validator set to ~15 seconds, which is still a long time. An HTTP server should start to return a response within 3 seconds or less. Many types of HTTP clients are not going to just sit for 20 seconds waiting for a response.
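The client side of this complaint can be illustrated with a minimal sketch (not the validator's actual code): a dummy server below stands in for a spec generator that accepts the connection but hangs before responding, and a 3-second socket timeout stands in for the validator's setting. The port and timing values are hypothetical.

```python
# Hedged sketch: why clients enforce short response timeouts.
# The dummy server accepts the connection but never sends a response,
# simulating a spec generator that hangs ~20 s before replying.
import socket
import threading

def slow_server(listener):
    conn, _ = listener.accept()
    threading.Event().wait(10)  # pretend to "generate the spec" indefinitely
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=slow_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.settimeout(3.0)  # give up if no bytes arrive within 3 seconds
client.sendall(b"GET / HTTP/1.0\r\nHost: localhost\r\n\r\n")
try:
    client.recv(1024)
    timed_out = False
except socket.timeout:
    timed_out = True
print(timed_out)  # the client bails out instead of waiting ~20 s
```

With a 3-second timeout the client frees its socket after 3 seconds; with a 20-second one it sits blocked for the full delay.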

I understand that @deniak is looking into fixing this, but I’m just raising it here in the meantime so we have a record of it and it doesn’t fall through the cracks.

tripu commented 9 years ago

@sideshowbarker, I merged @deniak's PR :)

One question, out of curiosity. I understand 20″ is probably way too much, in all cases. But where do the other limits (15″, 3″) come from? And wouldn’t those prevent some legitimate developers from using the checkers, especially in developing countries or remote regions (users whose web sites sit on slow, shared machines behind high-latency network connections, etc.)…?

darobin commented 9 years ago

IIRC browsers time out at 300s; but it's true that 20s is already a looooong time to wait.

Part of the problem is that we could send some headers early, but that would obfuscate errors that can happen late in the process.

I wonder: can we return 100 response codes while processing is ongoing or will that break things?
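Interim 1xx responses are allowed by HTTP, but how clients handle them varies, which is exactly the "will that break things?" question. Below is a hedged sketch (a hypothetical toy server, not the spec generator's code) in which Python's `http.client` happens to skip interim `100 Continue` responses and only surfaces the final 200; other status codes (e.g. `102 Processing`) or other client libraries may not be tolerated the same way.

```python
# Hedged sketch of the idea: send interim 1xx responses while the real
# work is still running, so the connection doesn't look dead.
import http.client
import socket
import threading
import time

def server(listener):
    conn, _ = listener.accept()
    conn.recv(1024)  # read the request (contents ignored in this sketch)
    for _ in range(2):
        conn.sendall(b"HTTP/1.1 100 Continue\r\n\r\n")  # "still working..."
        time.sleep(0.1)  # stand-in for slow spec generation
    body = b"<p>generated spec</p>"
    conn.sendall(
        b"HTTP/1.1 200 OK\r\nContent-Length: "
        + str(len(body)).encode()
        + b"\r\nConnection: close\r\n\r\n"
        + body
    )
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

h = http.client.HTTPConnection("127.0.0.1", port)
h.request("GET", "/")
resp = h.getresponse()       # http.client skips the interim 100s
body_text = resp.read().decode()
print(resp.status, body_text)  # the client only ever sees the final 200
```

So for clients like this one the trick works, but it would need testing against the actual clients people point at the service.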

sideshowbarker commented 9 years ago

But, where do the other limits (15″, 3″) come from?

The 15-second one is just what I set it to the last time I got a report from somebody running a badly configured site, and I caved and upped the timeout. (I think in between I’d had it at 10 for a while.)

I think the 3-second setting may have come from use with the markup validator and CSS validator. Regardless, that’s in the same range as the default for the HTML checker (validator.nu), which is 5 seconds.

Maybe @ylafon would know more about what the thinking of the systeam is on this stuff. But my impression is that the systeam, based on experience of running tools like this over the years, has preferred shorter HTTP-response timeouts.

As far as why we should use shorter timeouts: We obviously don’t have an infinite number of sockets we can keep open for our services. So the rationale for being conservative with timeouts is that if we use longer, more-liberal timeouts, each open connection is a resource that’s costing us for the whole time it’s hanging open—a socket we could be using to serve requests from other users that are waiting.

So it’s somewhat a choice between trying to help more users and serve more requests in the same amount of time, versus having particular users consume far more of our resources than others (and further facilitating the brokenness of those users’ systems by accommodating them instead of failing, so that they’d have more incentive to fix their stuff).
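The resource argument can be made concrete with back-of-the-envelope arithmetic (the pool size below is hypothetical, not a measured W3C number): with a fixed pool of sockets, worst-case throughput is the pool size divided by how long each connection is held open.

```python
# Back-of-the-envelope illustration of the socket-resource argument.
# POOL is a hypothetical number of sockets the service can keep open.
POOL = 200
for hold_seconds in (3, 15, 20):
    req_per_sec = POOL / hold_seconds
    print(f"timeout {hold_seconds:>2}s -> at worst ~{req_per_sec:.0f} requests/s")
```

If every connection hung until the timeout, a 20-second limit caps the service at roughly a seventh of the throughput a 3-second limit would allow.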

And, wouldn't those prevent some legit developers from using the checkers, specially from developing countries or remote regions (users whose web sites are sitting in slow, shared machines, served by high-latency network connections, etc)…?

I haven’t personally seen any actual data or evidence that the Web servers from which we get long response delays tend to be in developing countries or remote regions. For the HTML checker, I never got many user reports about the timeouts, even when I had them down at 3 seconds. But the few reports I did get were about servers running in North America or Europe on ordinary third-party-hosted machines.

For example, see https://github.com/validator/validator/issues/124. The site reported there, http://www.divesitenet.com, is hosted by Cloudflare. So the guy developing that site really must be doing something bad/wasteful on the backend to cause that response to be slowed down as much as it is.

tripu commented 9 years ago

@sideshowbarker: OK; I guess this is a non-issue, then.

I wondered what a reasonable timeout looks like to someone at the end of a really bad link. But if you are not aware of a significant number of users with that problem (or the ones you have seen are simply suffering the consequences of their own bad engineering), then I agree there are no good reasons to allow longer sessions.

(Assigning this issue to you to validate @deniak's solution.)

sideshowbarker commented 9 years ago

Yeah, this is way faster now that the change from #12 is deployed. It’s currently ~8 seconds, which seems pretty good to me.