aws-amplify / amplify-cli

The AWS Amplify CLI is a toolchain for simplifying serverless web and mobile development.
Apache License 2.0
2.83k stars 823 forks

Support for multiple buckets? #5836

Open ffxsam opened 6 years ago

ffxsam commented 6 years ago

In the documentation examples:

Amplify.configure({
    Auth: {
        identityPoolId: 'XX-XXXX-X:XXXXXXXX-XXXX-1234-abcd-1234567890ab', // REQUIRED - Amazon Cognito Identity Pool ID
        region: 'XX-XXXX-X', // REQUIRED - Amazon Cognito Region
        userPoolId: 'XX-XXXX-X_abcd1234', // OPTIONAL - Amazon Cognito User Pool ID
        userPoolWebClientId: 'XX-XXXX-X_abcd1234', // OPTIONAL - Amazon Cognito Web Client ID
    },
    Storage: {
        bucket: '', // REQUIRED - Amazon S3 bucket
        region: 'XX-XXXX-X', // OPTIONAL - Amazon service region
    }
});

What if the web app needs to interact with more than one bucket? It would be nice to have a system where we could specify several and interact with them via their names.

    Storage: {
      bucketOne: {
        bucket: '', // REQUIRED - Amazon S3 bucket
        region: 'XX-XXXX-X', // OPTIONAL - Amazon service region
      },
      bucketTwo: { ... }
    }
tim-thompson commented 6 years ago

I don't know what progress has been made on this, but since it is still open I thought I would comment. I hit this same restriction today, and after some digging found that it is possible to pass a bucket option into various calls, as follows:

Storage.vault.get(key, {bucket: 'alternative-bucket-name'});

Using this, I've successfully used multiple buckets in the same app. If the bucket option is not specified, it falls back to the bucket in Amplify's global configuration.
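A minimal sketch of this per-call override pattern (the `withBucket` helper and the bucket names are illustrative, not part of Amplify's API):

```javascript
// Hypothetical helper: merge a bucket override into the per-call options
// object that Storage.get/put/list accept.
function withBucket(bucket, options = {}) {
  return { ...options, bucket };
}

// Usage (sketch, with the real Amplify Storage module):
// Storage.vault.get('photo.jpg', withBucket('alternative-bucket-name'));
// Storage.put('report.csv', file,
//   withBucket('reports-bucket', { contentType: 'text/csv' }));
```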

jnreynoso commented 6 years ago

Hi @tim-thompson, what is the value of Storage.vault?

annjawn commented 6 years ago

Is there any update on this? Support for multiple buckets is a really desirable feature.

tim-thompson commented 6 years ago

@annjawn I posted a solution further up this page that works for all my scenarios. If you need more info then I've written about it on my blog in more detail - http://tim-thompson.co.uk/aws-amplify-multiple-buckets.

annjawn commented 6 years ago

@tim-thompson I have tried the Storage.vault method, but it did not work for me for some reason. Also, it looks like only get works with Storage.vault, although the code suggests otherwise. I've found a solution, by the way: I call Storage.configure() before each operation, setting the appropriate bucket name. It's less than efficient, but it gets the job done.
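That reconfigure-before-each-call workaround can be sketched as a small wrapper. The `onBucket` helper is illustrative, not Amplify API; in an app you would pass the real Storage module as the `storage` argument:

```javascript
// Hypothetical wrapper: point Storage at a bucket, run one operation, and
// return its result. `storage` is anything exposing configure() and the op.
async function onBucket(storage, bucket, region, op) {
  storage.configure({ AWSS3: { bucket, region } });
  return op(storage);
}

// Usage (sketch, with the real Amplify Storage module):
// const url = await onBucket(Storage, 'reports-bucket', 'us-east-1',
//   s => s.get('report.csv'));
```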

rizerzero commented 6 years ago

@annjawn Hi, do you have a blog post on your method ? thanks in advance 👍

10ky commented 5 years ago

If you are able to get content off of a bucket using this statement:

Storage.vault.get(key, {bucket: 'alternative-bucket-name'});

it would be a security issue, unless you allow it in an IAM role attached to the user. I believe Amplify uses the role "s3amplify...". This role should be modified automatically according to your aws-exports.js file when you run amplify push. I don't see how the statement above would affect amplify push.

10ky commented 5 years ago

@mlabieniec was this feature request removed from the aws-amplify milestone on Jul 19? I thought this was a good feature to have. I have a use case where all my resized photos in S3 could live in a separate bucket. Right now, the S3Image and album components resize photos on the client side; if my photo files are very large, that is not desirable. And if the resized file is put in the same private user directory, a Lambda trigger would not work, because S3 triggers do not support regular-expression prefix matching.

ngocketit commented 5 years ago

It would be very convenient to have this supported. Currently, I have to call Amplify.configure() with the new bucket every time I want to do something with a non-default bucket.

hoang-innomize commented 5 years ago

We are also looking for this feature. We are building an app that requires access to multiple buckets, so it would be better if we didn't have to specify a single bucket when configuring Amplify (or could fall back to a default bucket). Some APIs also need to let us specify the bucket, such as getting a pre-signed URL.

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

dylan-westbury commented 5 years ago

Something similar exists when configuring API: you can specify an array of endpoints. An array of buckets would be nice.

DmitriWolf commented 5 years ago

I also would like to see a feature added to Amplify to support the use of multiple buckets. Great work. Thank you all, contributors.

Off2Race commented 5 years ago

I used @tim-thompson's suggestion and it worked for me as well. The documentation for Storage.get probably needs to be updated but the following works fine:

Storage.get(key1, {bucket: 'bucket-1'});  
Storage.get(key2, {bucket: 'bucket-2'});  

I've only tried it for "public" access (any authenticated user of my app), but looking at the code I don't see a reason why it wouldn't work in other scenarios too. In effect, the bucket you specify during Amplify.configure appears to be a default that can be overridden.

jimji1005 commented 4 years ago

The above only works if the buckets are in the same region, unfortunately. 🤦‍♂

Ramesh-Chathuranga commented 4 years ago

If you want to add more S3 buckets to your project, reconfigure Storage before uploading the file. This is example code for multiple S3 buckets:

const uploadFile = (fileName, file, isUser = false) => {
  // Point Storage at the right bucket before uploading.
  const bucket = isUser ? 'bucketA' : 'bucketB';
  Storage.configure({
    AWSS3: {
      bucket,
      region: 'us-exxxx',
    },
  });
  return Storage.put(fileName, file, {
    contentType: file.type,
  });
};
dtelaroli commented 4 years ago

I use one bucket and event trigger per environment/account. Native CLI support for that would be great.
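A per-environment bucket choice can be sketched in application code (the bucket names and the environment mapping here are hypothetical):

```javascript
// Hypothetical mapping from deploy environment to bucket name.
const BUCKETS = {
  dev: 'my-app-uploads-dev',
  prod: 'my-app-uploads-prod',
};

// Pick the bucket for the current environment, defaulting to dev.
function bucketForEnv(env = process.env.NODE_ENV) {
  return BUCKETS[env] || BUCKETS.dev;
}

// Usage (sketch):
// Storage.configure({ AWSS3: { bucket: bucketForEnv() } });
```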

aelbokl commented 4 years ago

I am just commenting to keep the bot from killing this thread. This feature is much needed and has many use cases.

KesleyDavid commented 4 years ago

Also needing this feature

cody1024d commented 4 years ago

Adding my need for this functionality too

PatrykMilewski commented 4 years ago

Would be nice to have that

sammartinez commented 4 years ago

cc @renebrandel: the Amplify CLI needs to do the implementation first, before anything can happen on the Amplify JS side.

r0zar commented 4 years ago

Does https://github.com/aws-amplify/amplify-cli/issues/3977 solve for this use case? I imagine that it would.

stleon commented 3 years ago

amplify import storage
Scanning for plugins...
Plugin scan successful
? Please select from one of the below mentioned services: S3 bucket - Content (Images, audio, video, etc.)
Amazon S3 storage was already added to your project.

It would be very useful if there were any support for multiple storages.

arealmaas commented 3 years ago

Ping 🙋

santhedan commented 3 years ago

This is a must-have feature, similar to how you allow multiple APIs to be invoked. A workaround exists for web, but not for iOS or Android.

nathanagez commented 3 years ago

Any status on this feature ? :)

johnrusch commented 2 years ago

Yeah, I'm currently unable to list objects from another bucket using this workaround. Would love support for this feature.

majirosstefan commented 2 years ago

It would be nice if the Amplify CLI supported importing multiple buckets (even from different regions), and maybe it could also ask which bucket should be "the default one" (that is, which bucket is used when calling Storage methods without the "bucket" param).

I also think it would be really nice if there were official blog posts, Twitch streams, or YouTube videos about how to implement this ourselves (e.g. via patching), or about how to create pull requests for the AWS Amplify CLI in general (some kind of walkthrough of the Amplify CLI).

Looking at https://github.com/aws-amplify/amplify-cli/tree/master/packages/amplify-cli, it's not obvious to me (a person outside the Amplify team / a mobile dev) how the CLI actually works, and I am not sure I want to spend time on this.

rmjwilbur commented 2 years ago

Agreed. It makes sense that one doesn't want to put all their eggs in one bucket. I would use this feature; I'll explore workarounds for now. Thanks.

dragosiordachioaia commented 1 year ago

How is this still not solved 4 years after the issue was created? So many real-life projects need more than one S3 bucket...

macsinte commented 1 year ago

+1 to this. I'm starting to think that Amplify is very, very limited when it comes to some real-world scenarios, and I truly don't understand why these issues don't get surfaced sooner. We like to brag about how scalable Amplify can be and how it could be used by large companies, but the more I use it, the more I realize that, other than the authentication mechanism (which is a pain in the ass to do yourself), it is not scalable and is more suited to startup companies.

macsinte commented 1 year ago

It is also baffling that, as @dragosiordachioaia mentioned above, this has not been addressed in the 4 years since the issue was created. I go back to Amazon's leadership principles of "Customer Obsession" and "Bias for Action", which are clearly not being taken seriously here. :)

annjawn commented 1 year ago

I think it's a relatively straightforward implementation. In React projects, I create a reusable configuration class that I initialize every time I need to use Storage with the bucket and storage level I want. Something like this:

// StorageService.js
import { Storage } from 'aws-amplify';
// Environment-specific Amplify config JSON (see note below).
import configProdData from 'src/config/amplify.prod.json';
import configDevData from 'src/config/amplify.dev.json';

export default class StorageService {
    constructor(props) {
        const bucket = (props && props.bucket) ? props.bucket : '<default_bucket>';
        this.prefix = (props && props.prefix) ? props.prefix : '';
        const config = (process.env.NODE_ENV === 'production') ? configProdData : configDevData;
        Storage.configure({
            bucket,
            level: (props && props.level) ? props.level : 'public', // can be overridden at list/put/get level
            region: config.Auth.region,
            identityPoolId: config.Auth.identityPoolId,
        });
    }

    // Other class methods for storage list, put, get, remove, etc.
    // You don't have to define these here, since Storage.configure
    // overrides the configuration globally, but I like to keep everything
    // pertaining to Storage together.
    async list(key) {
        return Storage.list(key, {
            // You can also override the protected and private prefixes. Keeping
            // all three as '' means your app can access the bucket's root and
            // ALL prefixes, not just public/, private/, protected/.
            // NOTE: make sure your auth and unauth IAM policies are set properly.
            customPrefix: { public: this.prefix },
            pageSize: 'ALL',
        });
    }

    async upload(/* ... */) { /* ... */ }
    // ...
}

In the above, configProdData and configDevData are basically the Amplify configure JSON (typically amplify.prod.json and amplify.dev.json, imported in StorageService.js). This is also where I define all my actions pertaining to Amplify Storage (list, get, put, etc.).

Now, whenever I want to use this storage service in a component or a custom Hook, all I do is:

import StorageService from 'src/services/storage_services';

//initializes storage with <default_bucket>
const storage = new StorageService(); 
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at public level or potentially all prefixes not just public/

//initializes storage with <default_bucket>
const storage = new StorageService({ level: 'protected' }); 
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at protected level

//initializes storage with <some_other_bucket>
const storage = new StorageService({ bucket: '<some_other_bucket>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> at public level or potentially all prefixes not just public/

//Or
const storage = new StorageService({ bucket: '<some_other_bucket>', prefix: '<my_custom_prefix>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> under <my_custom_prefix>

and so on...

Another important thing to note: all the buckets you want your app to access must be included in the IAM policies of the Auth_Role and Unauth_Role for the Cognito identity pool, otherwise it won't work (as defined here). Also, if your app will access buckets at other prefixes, or even at a bucket's root as set via customPrefix, your IAM policy must cover those too: not just public/, private/, and protected/ as the documentation shows, but all the prefixes your app expects to access, or no prefix at all if you want your app to access ALL prefixes under a bucket (though that's not recommended, for security reasons).
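A hedged sketch of what such an identity-pool role policy might look like (bucket names and prefixes are placeholders; check the S3 and IAM documentation for the exact shape your setup needs, since object-level actions and ListBucket attach to different ARNs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::default-bucket/public/*",
        "arn:aws:s3:::some-other-bucket/my-custom-prefix/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::default-bucket",
        "arn:aws:s3:::some-other-bucket"
      ]
    }
  ]
}
```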

Overall, I like this level of control, rather than Amplify having an opinionated way of handling multiple buckets; I'm sure people would have differing opinions about any native implementation. This method keeps the implementation flexibility with me, so I can manage the Storage state at any point simply by passing the bucket name and level. Some may disagree, but it has worked for me. I use this storage service class in pretty much all my React projects; it has evolved over time, but its implementation and usage are consistent across my projects.

majirosstefan commented 1 year ago

One year after my first post, I got it working for mobile projects with React Native ("aws-amplify": "^4.3.11", "aws-amplify-react-native": "^6.0.2").

It took me a few minutes to figure out that I needed to edit the roles, but it's working for the public/ prefix (in this case I did not need to call the Storage.configure method):

const RESOURCE_STORAGE_CONFIG = {
  level: "public" as StorageAccessLevel,
  bucket: RESOURCE_BUCKET_NAME,
  region: "us-east-2",
  expires: 60 * RESOURCE_EXPIRES_MINUTES,
};

const resourceUrl = await Storage.get(resourceS3Url, RESOURCE_STORAGE_CONFIG);

charlieforward9 commented 1 year ago

Just read through this from start to finish. I see there are some good workarounds, but it would definitely be nice to have more up-to-date and flexible documentation, rather than this daunting quote:

Amplify projects are limited to exactly one S3 bucket.