xiongxu / s3fs

Automatically exported from code.google.com/p/s3fs
GNU General Public License v2.0

IAM user permissions issue #153

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Create a user with the following permissions; I am able to mount the bucket
without problem (I have verified that the credentials are correct):
AdminsGroupPolicy
{
   "Statement":[{
      "Effect":"Allow",
      "Action":"*",
      "Resource":"*"
      }
   ]
}

2. Create a user with permissions limited to a specific S3 bucket; I receive
the following errors:
s3fs: CURLE_HTTP_RETURNED_ERROR
s3fs: HTTP Error Code: 403
s3fs: AWS Error Code: AccessDenied
s3fs: AWS Message: Access Denied

Permissions:
BackupProjectPolicy
{
   "Statement":[{
      "Effect":"Allow",
      "Action":["s3:*"],
      "Resource":["arn:aws:s3:::data-folder/*",
      "arn:aws:s3:::data-folder"]
      },
      {
      "Effect":"Deny",
      "Action":["s3:*"],
      "NotResource":["arn:aws:s3:::data-folder/*",
      "arn:aws:s3:::data-folder"]
      }
   ]
}

What is the expected output? What do you see instead?
Are there permissions other than S3 access to the bucket required to mount a 
bucket as a file system? I have been able to use these credentials to upload 
files using Ruby AWS/s3 so I believe they work correctly.

What version of the product are you using? On what operating system?
* Linux 2.6.35-25-virtual #44-Ubuntu SMP x86_64 GNU/Linux (mounted on an EC2
instance)
* Amazon Simple Storage Service File System 1.35

Please provide any additional information below.
Simply changing the credentials in the .passwd-s3fs file from the admin user
to the other user causes the error.

Original issue reported on code.google.com by cris.fl...@gmail.com on 9 Feb 2011 at 10:00

GoogleCodeExporter commented 9 years ago
Probably not a defect but has to do with the usage model.

There are several ways that you can get the AWS credentials into s3fs (in order
of precedence):

1. From a password file specified on the command line
   - must be readable by the effective user; cannot be readable by group/other
2. From environment variables
   - AWSACCESSKEYID
   - AWSSECRETACCESSKEY
3. From the user's ${HOME}/.passwd-s3fs
   - same permission restrictions as #1
4. From /etc/passwd-s3fs
   - can be group readable, but not other readable

It looks like what you might want to do is create a special group for your
backup folks and make /etc/passwd-s3fs owned by that group and group readable.

If you want a user to use the central /etc/passwd-s3fs file, then they
shouldn't need a $HOME/.passwd-s3fs file. They can still have one, but the
credentials in it need to be correct -- be careful with default credentials in
these files; you might want to use the explicit format.
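As a rough illustration of that setup (the bucket names, group name, and keys
below are placeholders, not values from this issue), a shared /etc/passwd-s3fs
using the explicit bucket-specific line format might look like:

  # /etc/passwd-s3fs -- explicit format: bucketname:accessKeyId:secretAccessKey
  data-folder:AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  backups:AKIAI44QH8DHBEXAMPLE:je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

and could be made group readable (but not world readable) with:

  chgrp backup /etc/passwd-s3fs
  chmod 640 /etc/passwd-s3fs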

If the user executing the 

Original comment by dmoore4...@gmail.com on 10 Feb 2011 at 4:40

GoogleCodeExporter commented 9 years ago
The situation relates to #3, using the passwd-s3fs file for a single user.
1) log in 
2) su to root
3) cd /root/
4) cp .admin_credentials.txt .passwd-s3fs
5) emacs .passwd-s3fs (edit so it is in correct format)
6) mount s3fs data-folder /mnt/tmp
  - everything works
7) umount /mnt/tmp
8) cp .limited_user_credentials.txt .passwd-s3fs
9) emacs .passwd-s3fs (edit so it is in correct format)
10) mount s3fs data-folder /mnt/tmp
  -  AWS Error Code: AccessDenied

Do I need IAM permissions for anything other than the s3 bucket (as shown 
above)?  I have the correct format in the .passwd-s3fs file.  The s3fs setup is 
correct since I can access it with the admin credentials.  The limited user 
credentials are correct since I have used them to access the s3 bucket from 
other software.  I have tried a policy allowing access to ALL of S3 with the
limited user and that works (again showing the credentials are correct).  I am
looking for a solution whereby I can set a policy for a user allowing the user
to only mount one of several buckets stored on S3.
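Spelled out as a single command, the failing step is essentially the following
(the passwd_file option name is an assumption here; the exact invocation may
differ by version):

  # mount the bucket using the limited user's credentials file
  s3fs data-folder /mnt/tmp -o passwd_file=/root/.passwd-s3fs
  # -> s3fs: HTTP Error Code: 403, AWS Error Code: AccessDenied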

Original comment by cris.fl...@gmail.com on 10 Feb 2011 at 7:58

GoogleCodeExporter commented 9 years ago
Admittedly, I'm not up on the whole policy thing. It appears that policies
can be set on a per-bucket basis.  I'm afraid that I can provide very little
support in this area -- hopefully there's another user who can. 

Original comment by dmoore4...@gmail.com on 10 Feb 2011 at 8:42

GoogleCodeExporter commented 9 years ago
Evidently there is a call to s3:ListAllMyBuckets 
(http://docs.amazonwebservices.com/IAM/latest/UserGuide/UsingWithS3.html) that 
is required to determine if the bucket requested exists before attempting to 
mount.  Adding the following policy to the user allowed me to mount the bucket:

{
   "Statement":[{
      "Effect":"Allow",
      "Action":"s3:ListAllMyBuckets",
      "Resource":"arn:aws:s3:::*"
       }
    ]
}
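In other words, a limited user that also needs to mount via s3fs can combine
the bucket-scoped statement from the original report with the listing
permission above; a sketch of such a combined policy (reusing the data-folder
bucket named earlier) would be:

{
   "Statement":[{
      "Effect":"Allow",
      "Action":"s3:ListAllMyBuckets",
      "Resource":"arn:aws:s3:::*"
      },
      {
      "Effect":"Allow",
      "Action":["s3:*"],
      "Resource":["arn:aws:s3:::data-folder/*",
      "arn:aws:s3:::data-folder"]
      }
   ]
}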

Original comment by cris.fl...@gmail.com on 10 Feb 2011 at 9:15

GoogleCodeExporter commented 9 years ago
You may want to consider only checking the list of all buckets if mounting a
specific bucket fails.  That would allow a grant of access to a single bucket
to be enough to mount that bucket.  Additionally, it would clear the way for
'directories' on S3 (such as data-folder/person1) to be mounted by one user and
differing locations (data-folder/person2) by another user.
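At the S3 REST level, the difference is between a request on the service root,
which maps to s3:ListAllMyBuckets, and a request scoped to the bucket itself,
which only needs permissions on that bucket (authentication headers omitted in
this sketch):

  # needs s3:ListAllMyBuckets on arn:aws:s3:::*
  GET / HTTP/1.1
  Host: s3.amazonaws.com

  # needs only s3:ListBucket on arn:aws:s3:::data-folder
  GET / HTTP/1.1
  Host: data-folder.s3.amazonaws.com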

Original comment by cris.fl...@gmail.com on 10 Feb 2011 at 9:18

GoogleCodeExporter commented 9 years ago
Cris,

Let's revisit this and see if there is something in the behavior of s3fs that 
can be changed that will support your use model. First assume that I know 
nothing of this IAM feature or how to use it.

- should s3fs retrieve the bucket's policy and parse it for pertinent 
information?  If so, what info should we look for and how is it pertinent?

Dan

Original comment by dmoore4...@gmail.com on 7 Apr 2011 at 2:38

GoogleCodeExporter commented 9 years ago
As far as I can tell,  the only issue is an attempt to retrieve the listing
of all buckets prior to connecting to a bucket.  If the bucket name is
known, is a listing of all buckets required?

Cris

Original comment by cris.fl...@gmail.com on 8 Apr 2011 at 12:08

GoogleCodeExporter commented 9 years ago
I just ran into this same issue. It would be nice if s3fs supported IAM with
regard to limiting accounts to specific buckets.

Any idea when this usage model will be supported? I'm currently using 1.59

Original comment by dennis.p...@zemoga.com on 18 Aug 2011 at 7:51

GoogleCodeExporter commented 9 years ago
I had an immediate need to make s3fs support IAM policies. I chopped out the
code that listed all of the buckets available to the access key, and it now
works for me.

https://s3.amazonaws.com/dportello/s3fs-1.61.iam.patch.bz2

Original comment by dennis.p...@zemoga.com on 30 Aug 2011 at 6:50

GoogleCodeExporter commented 9 years ago
I don't really need to list all of the buckets so long as there's a mechanism 
to gracefully handle permission errors. Thanks for the patch, I'll take a look 
and see if we can get this pushed out in the next release.

Original comment by ben.lema...@gmail.com on 30 Aug 2011 at 7:05

GoogleCodeExporter commented 9 years ago
Hey Dennis, I just committed r374 which simplifies s3fs_check_service quite a 
bit. Can you give it a go and see if it takes care of your issue?

Original comment by ben.lema...@gmail.com on 30 Aug 2011 at 10:10

GoogleCodeExporter commented 9 years ago
Hi Ben, thanks for taking a look so quickly. I will check it out first thing
tomorrow. It's not just listing files, but service operations in general. If
you set a resource mask limiting operations to specific buckets, general
service operations will give access denied errors.

Original comment by dennis.p...@zemoga.com on 31 Aug 2011 at 2:37

GoogleCodeExporter commented 9 years ago
Hi Ben,

I tested and it works for me!

Original comment by dennis.p...@zemoga.com on 31 Aug 2011 at 8:49

GoogleCodeExporter commented 9 years ago
Great! I'll leave it in for now; hopefully any issues will pop up before the
next release. Hopefully I'll get some time to fully integrate IAM before too
long.

Original comment by ben.lema...@gmail.com on 31 Aug 2011 at 8:53

GoogleCodeExporter commented 9 years ago
I tested this (using r383, also r374) and about 80% of the time I get 
"Input/output error" when listing a directory (although it does fix the initial 
issue with getting a 403 when using IAM credentials).

These files have been in this bucket forever, so it's not an eventual 
consistency thing.

Using the -d option only prints the init message to syslog:
Nov  2 23:09:55 slice4 s3fs: init $Rev: 382 $  [sic]

Using:
fuse-2.8.6 (patched to fix the --no-canonicalize issue in mount: Issue 228)
CentOS release 5.7 (Final)

What else can I do to help test?

Original comment by darkcont...@gmail.com on 2 Nov 2011 at 11:17

GoogleCodeExporter commented 9 years ago
However, creating a new bucket, with the same bucket policy and IAM user, gives 
no errors after creating 50 files (the other bucket only had 33), and listing 
them never gives the IO error.

Original comment by darkcont...@gmail.com on 2 Nov 2011 at 11:25

GoogleCodeExporter commented 9 years ago
I'm getting the "Input/output" error mentioned above. Without the newest 
release however I cannot mount due to restricted IAM permissions.

Original comment by emcl...@gmail.com on 11 Oct 2012 at 1:31

GoogleCodeExporter commented 9 years ago
Hi,

Were you able to solve this problem?
I will close this issue because it concerns an old version and no one has
replied for a while.

Some bugs have been fixed in the latest version, so please use the latest
version.
And please open a new issue if your problem still does not seem to be fixed.

Thanks in advance for your help.

Original comment by ggta...@gmail.com on 29 Aug 2013 at 8:57