Closed mauricoder closed 7 years ago
Is that the actual command you used? If so, get rid of the http:// bit:
yas3fs s3://[my-bucket-name].s3.amazonaws.com/ web/uploads
Thanks very much for the help!
I don't know how the http:// got there...
Just trying again... if I do
yas3fs s3://[my-bucket-name].s3.amazonaws.com/ web/uploads
I get:
ERROR Uncaught Exception: <class 'ssl.CertificateError'> hostname 'yoreparo-staging-uploads.s3.amazonaws.com.s3.amazonaws.com' doesn't match either of '*.s3.amazonaws.com', 's3.amazonaws.com' <traceback object at 0x7f810e9aea28>
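The doubled hostname in that error arises because yas3fs takes everything after s3:// as the bucket name and appends the S3 endpoint itself. A minimal illustration of the effect (the `s3_host` helper is hypothetical, not yas3fs's actual code):

```python
# Hypothetical helper mimicking how a bucket name becomes an S3 endpoint
# hostname; yas3fs's real code differs, but the doubling effect is the same.
def s3_host(bucket_arg):
    bucket = bucket_arg.rstrip('/')
    return bucket + '.s3.amazonaws.com'

# Passing the full endpoint as the "bucket" doubles the suffix, which is
# exactly the hostname the certificate error complains about:
print(s3_host('my-bucket.s3.amazonaws.com'))  # my-bucket.s3.amazonaws.com.s3.amazonaws.com
print(s3_host('my-bucket'))                   # my-bucket.s3.amazonaws.com
```

This is why passing only the bare bucket name after s3:// avoids the certificate mismatch.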
So I do
yas3fs s3://[my-bucket-name] web/uploads
and the mount succeeds. However, somehow I can't read any file in the bucket or write to it.
The output of yas3fs s3://[my-bucket-name] web/uploads is:
2017-04-10 23:50:15,457 INFO Version: 2.3.2
2017-04-10 23:50:15,458 INFO s3-retries: '3'
2017-04-10 23:50:15,458 INFO s3-retries-sleep: '1' seconds
2017-04-10 23:50:15,458 INFO S3 bucket: 'yoreparo-staging-uploads'
2017-04-10 23:50:15,459 INFO S3 prefix (can be empty): ''
2017-04-10 23:50:15,459 INFO Cache entries: '100000'
2017-04-10 23:50:15,459 INFO Cache memory size (in bytes): '134217728'
2017-04-10 23:50:15,459 INFO Cache disk size (in bytes): '1073741824'
2017-04-10 23:50:15,460 INFO Cache on disk if file size greater than (in bytes): '0'
2017-04-10 23:50:15,460 INFO Cache check interval (in seconds): '5'
2017-04-10 23:50:15,460 INFO Cache ENOENT rechecks S3: False
2017-04-10 23:50:15,460 INFO AWS Managed Encryption enabled: False
2017-04-10 23:50:15,460 INFO AWS Managed Encryption enabled: False
2017-04-10 23:50:15,461 INFO Number of parallel S3 threads (0 to disable writeback): '32'
2017-04-10 23:50:15,461 INFO Number of parallel downloading threads: '4'
2017-04-10 23:50:15,461 INFO Number download retry attempts: '60'
2017-04-10 23:50:15,461 INFO Download retry sleep time seconds: '1'
2017-04-10 23:50:15,461 INFO Number read retry attempts: '10'
2017-04-10 23:50:15,461 INFO Read retry sleep time seconds: '1'
2017-04-10 23:50:15,462 INFO Number of parallel prefetching threads: '2'
2017-04-10 23:50:15,462 INFO Download buffer size (in KB, 0 to disable buffering): '10485760'
2017-04-10 23:50:15,462 INFO Number of buffers to prefetch: '0'
2017-04-10 23:50:15,462 INFO Write metadata (file system attr/xattr) on S3: 'True'
2017-04-10 23:50:15,462 INFO Download prefetch: 'False'
2017-04-10 23:50:15,463 INFO Multipart size: '104857600'
2017-04-10 23:50:15,463 INFO Multipart maximum number of parallel threads: '4'
2017-04-10 23:50:15,463 INFO Multipart maximum number of retries per part: '3'
2017-04-10 23:50:15,463 INFO Default expiration for signed URLs via xattrs: '2592000'
2017-04-10 23:50:15,463 INFO S3 Request Payer: 'False'
2017-04-10 23:50:15,463 INFO Cache path (on disk): '/tmp/yas3fs/yoreparo-staging-uploads'
2017-04-10 23:50:15,613 INFO Unique node ID: '8bd3ddae-dc35-4d02-a293-4992b9499779'
The output of mount shows the mount is done:
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=271572k,nr_inodes=67893,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime)
/dev/xvda1 on / type ext4 (rw,noatime,data=ordered)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
yas3fs on /var/app/current/web/uploads type fuse (rw,nosuid,nodev,relatime,user_id=500,group_id=500,allow_other,max_read=131072)
However, if I try a simple "ls" on the mounted folder I get an error (screenshot: https://cloud.githubusercontent.com/assets/1232233/24887099/d1db1c50-1e2f-11e7-8049-afec534f2afa.png).
Thanks in advance!
Please try again with the foreground and debug flags, something like yas3fs -fd s3://[my-bucket-name] web/uploads. You may need to umount /var/app/current/web/uploads first.
Thanks very much!
With debug turned on I could see it was a permissions problem on each object. I got this kind of output:
2017-04-11 12:01:49,714 DEBUG readdir '/' '0'
2017-04-11 12:01:49,714 DEBUG readdir '/' '0' no cache
2017-04-11 12:01:49,714 DEBUG readdir '/' '0' S3 list ''
2017-04-11 12:01:49,773 DEBUG readdir '/' '0' S3 list key '.'
2017-04-11 12:01:49,773 DEBUG readdir '/' '0' S3 list key 'FALTA.txt'
2017-04-11 12:01:49,773 DEBUG readdir '/' '0' S3 list key 'funny-guitar-pig-msyugioh123-32713183-500-445.jpg'
2017-04-11 12:01:49,774 DEBUG readdir '/' '0' '[u'.', u'..', u'FALTA.txt', u'funny-guitar-pig-msyugioh123-32713183-500-445.jpg']'
2017-04-11 12:01:49,774 DEBUG getattr -> '/FALTA.txt' 'None'
2017-04-11 12:01:49,774 DEBUG get_metadata -> '/FALTA.txt' 'attr' 'None'
2017-04-11 12:01:49,774 DEBUG get_key /FALTA.txt
2017-04-11 12:01:49,775 DEBUG get_key from S3 #1 '/FALTA.txt'
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/fuse.py", line 495, in _wrapper
return func(*args, **kwargs) or 0
File "/usr/local/lib/python2.7/site-packages/fuse.py", line 511, in getattr
return self.fgetattr(path, buf, None)
File "/usr/local/lib/python2.7/site-packages/fuse.py", line 759, in fgetattr
attrs = self.operations('getattr', self._decode_optional_path(path), fh)
File "/usr/local/lib/python2.7/site-packages/fuse.py", line 972, in __call__
ret = getattr(self, op)(path, *args)
File "/usr/local/lib/python2.7/site-packages/yas3fs/__init__.py", line 1646, in getattr
attr = self.get_metadata(path, 'attr')
File "/usr/local/lib/python2.7/site-packages/yas3fs/__init__.py", line 1502, in get_metadata
key = self.get_key(path)
File "/usr/local/lib/python2.7/site-packages/yas3fs/__init__.py", line 1470, in get_key
key = self.s3_bucket.get_key(self.join_prefix(path).encode('utf-8'), headers=self.default_headers)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 193, in get_key
key, resp = self._get_key_internal(key_name, headers, query_args_l)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 231, in _get_key_internal
response.status, response.reason, '')
S3ResponseError: S3ResponseError: 403 Forbidden
So I made the objects public and the problem was fixed.
I'll keep digging as I don't want this bucket or objects to be public.
This is running on Elastic Beanstalk. Should yas3fs be able to access the bucket if the aws-elasticbeanstalk-service-role has a Role Policy that allows it to access the bucket? Or is authorization handled some other way?
Thanks!
Fixed! It was not aws-elasticbeanstalk-service-role that needed the permissions, but aws-elasticbeanstalk-ec2-role.
Thanks for such kind help!
😄
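For reference, an inline policy attached to aws-elasticbeanstalk-ec2-role that grants access without making objects public might look like the sketch below. The bucket name is taken from the thread, but the specific actions are an assumption about what the mount needs (list, read, write, delete), not something stated here:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::yoreparo-staging-uploads"
    },
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::yoreparo-staging-uploads/*"
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN while the object actions apply to the `/*` object ARN; mixing these up is a common cause of 403s like the one in the traceback above.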
No problem
Hello
I'm getting an error while trying to mount a bucket.
ERROR Uncaught Exception: <class 'socket.gaierror'> [Errno -2] Name or service not known <traceback object at 0x7fb563a4a8c0>
I installed yas3fs on a fresh Amazon Linux AMI release 2016.09; I followed these steps to install it:
Then I created a public bucket (to be sure it's not a permissions problem) and a folder to mount it to...
Did:
yas3fs s3://http://[my-bucket-name].s3.amazonaws.com/ web/uploads
and I got:
2017-04-10 19:30:13,881 ERROR Uncaught Exception: <class 'socket.gaierror'> [Errno -2] Name or service not known <traceback object at 0x7f4a542fd908>
Can you please guide me on how to debug it?
Forgot to mention: the EC2 instance is at us-east-1, it's created by Elastic Beanstalk, and the bucket is at us-east-1 too.
Thanks!
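The gaierror in the command above is a pure name-resolution failure: with the http:// remnant, the string yas3fs treats as the bucket host contains slashes and can never resolve. A quick standard-library check of that assumption (illustrative, not part of yas3fs):

```python
import socket

# The stray "http://" leaves slash characters in what gets treated as the
# bucket hostname, so DNS lookup fails before any S3 request is made.
bad_host = 'http//my-bucket-name.s3.amazonaws.com'
try:
    socket.getaddrinfo(bad_host, 443)
except socket.gaierror as e:
    print('resolution failed:', e)
```

Dropping the scheme so that only the bare bucket name follows s3:// gives a resolvable hostname.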