Open racinmat opened 1 year ago
```python
with fs.open(a_file_gz, 'wb', compression='gzip', fixed_key_metadata={'content_encoding': 'gzip'}) as f:
    f.write(b'abcd')
```
this is not correct. The content encoding is not the same as the MIME type, which would be "application/gzip". If you wanted to use content encoding like this, then the appropriate compression is actually none.
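Concretely, the suggestion seems to be: compress the payload yourself and upload the raw gzip bytes with no fsspec-side compression, keeping only the Content-Encoding metadata. A minimal sketch, assuming `fs` and `a_file_gz` are the gcsfs filesystem and bucket path from the snippet above (the GCS call is left commented out):

```python
import gzip

# Sketch of the "compression none" suggestion: gzip the payload manually,
# so the bytes handed to GCS are already a valid gzip stream.
payload = gzip.compress(b'abcd')

# with fs.open(a_file_gz, 'wb', compression=None,
#              fixed_key_metadata={'content_encoding': 'gzip'}) as f:
#     f.write(payload)

# The stored object decompresses back to the original data:
assert gzip.decompress(payload) == b'abcd'
```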
I don't exactly follow what your code snippet is trying to achieve: what behaviour are you after?
I am trying to use gzip transcoding: https://cloud.google.com/storage/docs/transcoding
The documentation there literally says `Content-Encoding: gzip`.
The code I have used properly encodes the data into gzip format. On Google Cloud it is stored in gzip-compressed format, and when I download it, it is automatically decompressed, as the documentation states, so I'm not sure why it's not correct when it's doing what it should. When I look at the object in the bucket browser, it shows the correct encoding.
Yes, but in that case, fsspec must not attempt to decompress it, because the transport library (aiohttp) should have done it already. Also note that the size reported for the file might be wrong.
I know, but AFAIK fsspec does not attempt to decompress it: the `compression='gzip'` is only on the `'wb'` side, because GCP needs to receive the data already compressed and does not compress it on its own. During the `'rb'` there is no compression; I am just adding the header, because without it the code throws an error. And apparently it works, because fsspec correctly obtains the decompressed data.
I found out that if I read the whole file, it works. The size of the gzipped file is 24 bytes:

```python
assert fs.read_range(a_file_gz, 0, 23) == b'abcd'
```
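The 24-byte figure can be checked locally with the stdlib alone: the gzip stream for `b'abcd'` is a 10-byte header, a 6-byte deflate block, and an 8-byte CRC32/length trailer, hence the 0-23 range.

```python
import gzip

# Verify the size quoted above: gzip output for b'abcd' is 24 bytes,
# so reading bytes 0-23 covers the whole compressed object.
data = gzip.compress(b'abcd')
assert len(data) == 24
assert gzip.decompress(data) == b'abcd'
```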
But when I read only part of it, it does not work; it looks like it tried to decode the partial bytes.
I found out the problem, it's in the headers. I can replicate the error in curl:

```bash
curl --location --request GET 'https://storage.googleapis.com/download/storage/v1/b/our-temp/o/tmp_bong%2Fa_test.gz?alt=media' \
--header '... \
--header 'Range: bytes=1-5' \
--header 'Accept-Encoding: gzip, deflate, br'
```
errors out, but when using only
```bash
curl --location --request GET 'https://storage.googleapis.com/download/storage/v1/b/our-temp/o/tmp_bong%2Fa_test.gz?alt=media' \
--header '... \
--header 'Range: bytes=1-5' \
--header 'Accept-Encoding: deflate, br'
```

without the `gzip`, it works and returns the whole contents, according to the docs. And there is no way to pass a custom `Accept-Encoding` header to the underlying `GET` call.
This is not unexpected. You can only get specific offsets within the bytestream after decompression, this is a limitation of gzip. I expect the server is really returning the byte range you request out of the original compressed data, but that no longer is a valid gzip stream and so causes the error. If you save your data as gzip, you cannot expect random access of uncompressed data.
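The point about random access can be seen with the stdlib: a byte range sliced out of the middle of a gzip stream is not itself a valid gzip stream, so any attempt to decode it fails.

```python
import gzip

data = gzip.compress(b'abcd')

# Bytes 1-5 of the compressed stream (the range requested above) lack the
# gzip magic header, so decoding them as gzip raises an error:
try:
    gzip.decompress(data[1:6])
except gzip.BadGzipFile:
    print("not a valid gzip stream")
```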
The server decompresses the data and returns the whole contents. The GCP documentation I linked states that the whole file contents are returned, decoded.
Well, in the first place the documentation says you shouldn't ever do this; in the second, that the header key will be ignored. And thirdly, we have found that the documentation is incorrect. I don't think there's anything gcsfs can do about this.
Reading a gzipped file using transcoding works when you use `fs.open`, but not when using `fs.cat_file`. Here is an example uploading 2 files, 1 plaintext, 1 gzipped; both files are then read using `open`, and then using `cat_file`. This part works:
This errors out, throwing this error:
My guess is that it's because the header is not passed in https://github.com/fsspec/gcsfs/blob/main/gcsfs/core.py#L859-L863
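For reference, the `open`-versus-`cat_file` difference in the example above can be sketched locally with fsspec's in-memory filesystem. This is illustrative only: the paths are made up and a memory filesystem does not reproduce GCS transcoding, but it shows that `open` can decode the compression while `cat_file` hands back the raw stored bytes.

```python
import gzip
import fsspec

fs = fsspec.filesystem('memory')

# Write one plaintext and one gzip-compressed file, as in the example.
with fs.open('/a_test.txt', 'wb') as f:
    f.write(b'abcd')
with fs.open('/a_test.gz', 'wb', compression='gzip') as f:
    f.write(b'abcd')

# open() can be told about the compression and decodes transparently...
with fs.open('/a_test.gz', 'rb', compression='gzip') as f:
    assert f.read() == b'abcd'

# ...while cat_file() returns the raw stored bytes, here a gzip stream:
raw = fs.cat_file('/a_test.gz')
assert raw[:2] == b'\x1f\x8b'
assert gzip.decompress(raw) == b'abcd'
```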