Closed iamacarpet closed 6 months ago
The difficulty in adding default timeouts is determining what's reasonable (not necessarily a coding problem). A 1GB file could take days to download on a slow enough connection, for example, and that's not necessarily a failure.
What might make sense is to detect whether there was no progress in downloading a given file in "X" amount of time, and consider that a failure.
Yes, that's 100% what I was thinking: in Go terminology, a read deadline that is reset with every block of data received from the remote / written to disk, or something along those lines, so you detect a stall rather than setting a total time allowed for a particular download.
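For what it's worth, Python's stdlib already behaves close to the "read deadline" described above: the `timeout` argument to `urllib.request.urlopen()` sets the socket timeout, which applies to each blocking read rather than to the whole transfer. A rough sketch (the helper names here are hypothetical, not glazier's actual code):

```python
import socket
import urllib.request

STALL_SECONDS = 30  # illustrative value; tune per environment
CHUNK_SIZE = 64 * 1024

def copy_stream(response, out_file, chunk_size=CHUNK_SIZE):
    """Copies a file-like `response` to `out_file` in chunks.

    Each read() on a urlopen() response blocks for at most the socket
    timeout, so a stalled connection raises socket.timeout here instead
    of hanging forever. Returns the number of bytes written.
    """
    total = 0
    while True:
        chunk = response.read(chunk_size)  # raises socket.timeout on a stall
        if not chunk:
            return total
        out_file.write(chunk)
        total += len(chunk)

def download(url, path, stall_seconds=STALL_SECONDS):
    """Hypothetical wrapper: fails if no data arrives for `stall_seconds`."""
    try:
        with urllib.request.urlopen(url, timeout=stall_seconds) as resp:
            with open(path, 'wb') as f:
                return copy_stream(resp, f)
    except socket.timeout:
        raise RuntimeError(
            'download of %s stalled for %ds' % (url, stall_seconds))
```

Because the timeout is per-read, a slow-but-progressing 1GB download never trips it; only a true stall does.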
Thanks @TsekNet , hopefully that looks acceptable :).
We ran across a problem today on a slow network, where a download stalled (from UpdateMSU).
We waited for ~30 minutes and it never timed out, so someone had to manually intervene.
Looking through the code, I can see it's using urllib internally, but I can't see anywhere that timeouts are being configured:
https://github.com/google/glazier/blob/master/glazier/lib/actions/updates.py#L51
https://github.com/google/glazier/blob/master/glazier/lib/actions/files.py#L111
https://github.com/google/glazier/blob/master/glazier/lib/download.py#L395
I'm by no means very good at Python, so I'm not sure of my ability to implement this, but do you think adding some default timeouts to prevent infinite stalling would be a good idea?
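One coarse stopgap (an illustration, not a claim about how glazier should fix this): `socket.setdefaulttimeout()` makes every new socket that doesn't set its own timeout raise `socket.timeout` after N seconds of blocking, which also covers urllib calls buried inside library code that never expose a timeout parameter.

```python
import socket

# Process-wide default: any socket created without an explicit timeout
# (including those opened internally by urllib) will raise socket.timeout
# after 30 seconds of blocking on a single operation.
socket.setdefaulttimeout(30)
```

The downside is that it's global state affecting the whole process, so plumbing an explicit `timeout=` through the download code paths would be cleaner.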