Closed: dholth closed this issue 6 months ago.
`read()` used to set `download_size` to `self._chunk_size` for small reads. We can probably ignore this for our precise usage of `lazy_wheel`, but it would be good to understand the underlying bug.
```diff
 def read(self, size: int = -1) -> bytes:
-    download_size = max(size, self._chunk_size)
+    download_size = size
```
Downloading less than the chunk size during prefetch, then the full chunk size during the actual `read()`, caused an extra request.
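A minimal sketch of the failure mode described above (the `RangeTracker` class and its methods are illustrative, not the real `lazy_wheel` API): if the prefetch fetches exactly a member's size, but `read()` later rounds its request up to `max(size, chunk_size)`, the rounded-up range is not fully covered and a second range request goes out.

```python
# Hypothetical model of downloaded-range bookkeeping; not the real
# lazy_wheel implementation.
CHUNK_SIZE = 10_000


class RangeTracker:
    """Records which byte ranges have been downloaded."""

    def __init__(self):
        self.ranges = []   # list of (start, end) already fetched
        self.requests = 0  # number of simulated HTTP range requests

    def ensure(self, start, end):
        # Issue a new request only if [start, end) is not fully covered
        # by a single already-downloaded range (simplified coverage check).
        if not any(s <= start and end <= e for s, e in self.ranges):
            self.requests += 1
            self.ranges.append((start, end))


tracker = RangeTracker()

# Prefetch downloads exactly the member's size: 3 kB at offset 0.
tracker.ensure(0, 3_000)

# Old behavior: read() rounds the same 3 kB up to the 10 kB chunk size,
# an uncovered range, hence an extra request. With download_size = size
# the second ensure() would be a no-op.
tracker.ensure(0, max(3_000, CHUNK_SIZE))
print(tracker.requests)  # -> 2
```

With the fix (`download_size = size`), the second call asks for the same 3 kB range and no extra request is made.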
How did the prefetch `read()` wind up downloading less than the `read()` from `ZipFile`? `zipfile` is correct and never reads past the file you are trying to extract; `lazy_wheel` might not have been marking the downloaded ranges correctly?

It's good to download exactly the `info` even when it is < 10 kB, but it seems likely a bug remains in the `lazy_wheel` logic...

We prefetch because `ZipFile` would otherwise make several tiny reads for the header, then read the compressed body of that file.
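The tiny-reads behavior is easy to observe with the stdlib alone. This sketch (the `LoggingBytesIO` wrapper is illustrative) counts the separate `read()` calls `ZipFile` makes while extracting a single member; on a lazily-downloaded wheel, each uncached read would be its own HTTP range request, which is what the prefetch avoids.

```python
import io
import zipfile


class LoggingBytesIO(io.BytesIO):
    """BytesIO that counts read() calls, standing in for a lazy remote file."""

    def __init__(self, data):
        super().__init__(data)
        self.reads = 0

    def read(self, size=-1):
        self.reads += 1
        return super().read(size)


# Build a small in-memory zip with one member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("hello.txt", b"hello" * 100)

# Extract that member through the logging wrapper.
src = LoggingBytesIO(buf.getvalue())
with zipfile.ZipFile(src) as zf:
    data = zf.open("hello.txt").read()

# Several separate reads for one member: end-of-central-directory,
# central directory, local file header, then the compressed body.
print(src.reads)
```

Without prefetching, each of those reads that misses the already-downloaded ranges turns into a network round trip.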