lordtangent opened 8 years ago
So, I did a quick proof of concept and was able to mount GCS, but my changes aren't complete: I can't list directories properly. Here is a link to my hack. Any pointers to getting directory listing working would be appreciated.
https://github.com/lordtangent/yas3fs/commit/ea264449d05c8c06263db8d9613e3293297ea6a2
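For context, this is roughly the kind of connection the hack needs to establish. A minimal standalone sketch (placeholder bucket name and HMAC interoperability keys, nothing taken from the actual commit) of pointing boto's S3 client at the GCS S3-compatibility endpoint:

```python
# Minimal standalone sketch, not yas3fs code: connect boto's S3 client to the
# GCS S3-compatibility ("interoperability") endpoint. The bucket name and
# HMAC keys below are placeholders.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

conn = S3Connection(
    aws_access_key_id='GOOG_HMAC_ACCESS_KEY',    # GCS interoperability access key
    aws_secret_access_key='GOOG_HMAC_SECRET',    # GCS interoperability secret
    host='storage.googleapis.com',               # GCS S3-compatible endpoint
    calling_format=OrdinaryCallingFormat())      # keep the bucket name in the path

# validate=False skips the extra listing round trip on connect
bucket = conn.get_bucket('my-test-bucket', validate=False)
print(bucket.get_key('test_dir/some_object'))
```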
I get this error in the shell when I do an ls:
ls: reading directory .: Bad address
It doesn't matter if I provide a fully qualified path. I get the same error.
Where is the metadata for directories stored? I don't see any obvious place where it is kept.
Here is the debug info that dumped out when I tried to list directories:
```
2016-03-10 09:16:54,525 DEBUG readdir '/test_dir' '0' S3 list 'test_dir/'
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/fuse.py", line 414, in _wrapper
    return func(*args, **kwargs) or 0
  File "/usr/lib/python2.7/site-packages/fuse.py", line 602, in readdir
    fip.contents.fh):
  File "/usr/lib/python2.7/site-packages/fuse.py", line 881, in __call__
    ret = getattr(self, op)(path, *args)
  File "/usr/lib/python2.7/site-packages/yas3fs/__init__.py", line 1699, in readdir
    for k in key_list:
  File "/usr/lib/python2.7/site-packages/boto/s3/bucketlistresultset.py", line 34, in bucket_lister
    encoding_type=encoding_type)
  File "/usr/lib/python2.7/site-packages/boto/s3/bucket.py", line 473, in get_all_keys
    '', headers, **params)
  File "/usr/lib/python2.7/site-packages/boto/s3/bucket.py", line 411, in _get_all
    response.status, response.reason, body)
S3ResponseError: S3ResponseError: 400 Bad Request
```
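The call that blows up is the bucket listing readdir performs, so one way to isolate the problem is to run the same prefix/delimiter listing directly with boto (sketch below, using the same placeholder connection details as above and the 'test_dir/' prefix from the debug line) and see whether GCS still returns a 400 outside of yas3fs:

```python
# Isolation sketch, not yas3fs code: run the same kind of prefix/delimiter
# listing that yas3fs readdir performs, directly against the GCS bucket.
# Connection details are placeholders, as in the sketch above.
from boto.s3.connection import S3Connection, OrdinaryCallingFormat
from boto.s3.prefix import Prefix

conn = S3Connection('GOOG_HMAC_ACCESS_KEY', 'GOOG_HMAC_SECRET',
                    host='storage.googleapis.com',
                    calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket('my-test-bucket', validate=False)

# 'test_dir/' matches the prefix shown in the readdir debug line above.
for entry in bucket.list(prefix='test_dir/', delimiter='/'):
    if isinstance(entry, Prefix):
        print('dir:  ' + entry.name)   # CommonPrefixes -> "subdirectories"
    else:
        print('file: ' + entry.name)   # plain keys under the prefix
```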
I was hoping to use yas3fs with Google Cloud Storage. I've had luck with other S3 mounter tools I've tested, thanks to the S3 compatibility mode on GCS.
But I was disappointed to find that yas3fs enforces a URL syntax that makes it impossible to mount buckets from https://storage.googleapis.com/ (rather than s3://).
Would it be possible to add an argument for explicitly setting the URL prefix rather than assuming everything is mounted from s3://? Based on my experience with other S3 mounters, I'm fairly certain yas3fs would work on GCS with no other changes if only the URL could be fed to it correctly.
thank you
Here is the error message when using https://storage.googleapis.com/ :
ERROR The S3 path to mount must be in URL format: s3://BUCKET/PATH, use -h for help.
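For what it's worth, here is a rough sketch of the kind of option I have in mind (hypothetical flag and option names, not the actual yas3fs argument parser):

```python
# Hypothetical sketch of an endpoint override, not the actual yas3fs parser:
# accept gs:// (or any scheme) plus an explicit host instead of hard-coding s3://.
import argparse
from urlparse import urlparse  # Python 2, matching the traceback above

parser = argparse.ArgumentParser()
parser.add_argument('storage_url',
                    help='bucket to mount, e.g. s3://BUCKET/PATH or gs://BUCKET/PATH')
parser.add_argument('--endpoint', default='s3.amazonaws.com',
                    help='storage host, e.g. storage.googleapis.com for GCS')
args = parser.parse_args(['gs://my-test-bucket/some/prefix',
                          '--endpoint', 'storage.googleapis.com'])

url = urlparse(args.storage_url)
if url.scheme not in ('s3', 'gs'):
    parser.error('The path to mount must look like s3://BUCKET/PATH or gs://BUCKET/PATH')
print('bucket=%s path=%s endpoint=%s' % (url.netloc, url.path, args.endpoint))
```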