Closed: sebastian-nagel closed this issue 2 years ago
The question is: is this expected behaviour or not? It's invalid input and you would want some sort of error thrown.
EDIT: ah, I see. The content reader hangs. That shouldn't happen.
Fixed together with some other issues. New binaries should be on PyPi in a few minutes.
The goal as I understood it is not just resilience of large-scale data processing jobs with respect to, e.g., extreme or invalid HTML files, but also resilience against errors occurring in other parts of the processing pipeline. It would be wasteful if a job processing a million WARC files failed because of a single corrupt one.
At any rate, can recoverable errors be logged (on demand)?
EDIT: This comment relates to the previous one on whether this was expected behavior.
Of course. But resilience also means that you should be able to react to errors. With the fix, the processing pipeline just continues without errors, even if the GZip stream is truncated, which is fine I believe (it shouldn't hang in any case, which is one of the major issues I've had with previous pipelines and the whole reason Resiliparse has TimeGuard and MemoryGuard). In fact, I wonder if this error should be logged at all or if it should be up to the user to detect this kind of issue. As a user, you could compare the stream content length with the Content-Length header or verify the record digests if you worry about truncated records. So yes, throwing an unexpected exception wouldn't be desirable here, I would say.
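The digest check suggested above can be sketched in plain Python. This is a minimal illustration, not FastWARC's actual API: `verify_block_digest` is a hypothetical helper, and the digest format follows the common WARC convention of `sha1:<base32-encoded hash>`.

```python
import base64
import hashlib

def verify_block_digest(headers: dict, block: bytes):
    """Check a record block against its WARC-Block-Digest header.

    Returns True/False, or None if the record carries no digest.
    Digests conventionally look like 'sha1:<base32-encoded hash>'.
    """
    digest = headers.get('WARC-Block-Digest')
    if digest is None:
        return None  # nothing to verify against
    algo, _, expected = digest.partition(':')
    h = hashlib.new(algo)  # e.g. 'sha1'
    h.update(block)
    actual = base64.b32encode(h.digest()).decode('ascii')
    return actual == expected

# A truncated record fails the check:
block = b'HTTP/1.1 200 OK\r\n\r\nhello'
digest = 'sha1:' + base64.b32encode(hashlib.sha1(block).digest()).decode('ascii')
print(verify_block_digest({'WARC-Block-Digest': digest}, block))       # True
print(verify_block_digest({'WARC-Block-Digest': digest}, block[:-5]))  # False
```

This keeps truncation detection entirely on the user's side: a record cut short by a truncated GZip stream will simply fail its digest check.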
Regarding logging, I guess we should not have expectations about whatever goes on in the various operations involved in processing WARC files at scale. Rather, if we have knowledge of an error, then it makes sense to tell the user about it, albeit maybe only on demand.
So, what's the most common wish users have from their tools? Silent by default, and noisy on demand? Or the other way around?
If more extensive logging is introduced, it creates lots of extra plumbing (e.g., where does the tool store the logs and can this be adjusted, logging server connections in case of distributed usage, etc.). But in the long run, such facilities might be asked for, anyway, given the professional context of resilient large-scale processing that is the target audience of this tool.
For performance reasons, I would refrain from adding intensive logging at the moment.
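For reference, the usual "silent by default, noisy on demand" pattern in Python libraries is a named module logger with a `NullHandler`, which costs almost nothing unless the user opts in. The logger name `fastwarc` here is an assumption, and `on_truncated_stream` is a hypothetical hook, not existing library code.

```python
import logging

# Library side: named logger, no output unless the user attaches handlers.
logger = logging.getLogger('fastwarc')  # hypothetical logger name
logger.addHandler(logging.NullHandler())

def on_truncated_stream():
    # Recoverable error: log it instead of raising, let iteration end cleanly.
    logger.warning('gzip stream truncated, stopping record iteration')

# User side: opt in to diagnostics with a single line.
logging.basicConfig(level=logging.WARNING)
on_truncated_stream()  # now emits a warning to stderr
```

With this convention there is no extra plumbing inside the library itself: log storage, rotation, and shipping to a logging server all stay in the user's standard `logging` configuration.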
The ArchiveIterator, resp. the underlying `stream_io.BufferedReader`, hangs when reading a truncated gzipped WARC file (e.g. an incomplete download). The issue can be reproduced when reading clipped.warc.gz, see iipc/jwarc#17. During the hangup, the stack shows `ftell` on top of `_refill_working_buf()`; I've also observed `stream_io.FileStream.read()` (instead of `ftell`) on top of `_refill_working_buf()`.
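The truncation condition itself is easy to reproduce with the standard-library `gzip` module (independent of FastWARC); this only demonstrates the error a reader has to surface instead of hanging:

```python
import gzip

# Compress a WARC-like payload, then cut the stream short to
# simulate an incomplete download.
data = gzip.compress(b'WARC/1.1\r\n' + b'x' * 4096)
truncated = data[:len(data) // 2]

try:
    gzip.decompress(truncated)
except (EOFError, gzip.BadGzipFile) as e:
    print('truncated stream detected:', e)
```

A stream reader that loops on refilling its buffer without checking for end-of-input on the underlying file can spin forever on exactly this input, which matches the `_refill_working_buf()` frames seen in the stack above.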