ffxsam opened 6 years ago
Don't know what the progress on this has been, but as it is still open I thought I would comment. I ran into this same restriction today and, after some digging, found that it is possible to pass a bucket option into various calls as follows:
Storage.vault.get(key, {bucket: 'alternative-bucket-name'});
Using this, I've managed to successfully use multiple buckets in the same app. If the bucket is not specified, it defaults back to the bucket in the global Amplify configuration.
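For reference, a minimal sketch of that per-call override (the bucket name and helper names are made up; the same option is confirmed on plain Storage.get further down this thread):

import { Storage } from 'aws-amplify';

async function getFromAltBucket(key) {
  // the bucket option overrides the default bucket for this call only
  return Storage.vault.get(key, { bucket: 'alternative-bucket-name' });
}

// the plain (public-level) API accepts the same override
async function getPublicFromAltBucket(key) {
  return Storage.get(key, { bucket: 'alternative-bucket-name' });
}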
Hi @tim-thompson, what is Storage.vault?
Is there any update on this? Support for multiple buckets is a really desirable feature.
@annjawn I posted a solution further up this page that works for all my scenarios. If you need more info then I've written about it on my blog in more detail - http://tim-thompson.co.uk/aws-amplify-multiple-buckets.
@tim-thompson I have tried the Storage.vault method but it did not work for me for some reason. Also, it looks like only get works with Storage.vault, although the code suggests otherwise. I've found a solution, by the way: I call Storage.configure() before each operation, setting the appropriate bucket name. It's less than efficient, but it gets the job done.
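A minimal sketch of that pattern (the helper name and bucket value are hypothetical):

import { Storage } from 'aws-amplify';

async function getFromBucket(key, bucket) {
  // point the global Storage configuration at the desired bucket first
  Storage.configure({ AWSS3: { bucket } });
  return Storage.get(key);
}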
@annjawn Hi, do you have a blog post on your method? Thanks in advance 👍
If you are able to get content off of a bucket using this statement:
Storage.vault.get(key, {bucket: 'alternative-bucket-name'});
It would be a security issue, unless you allow it in an IAM role attached to the user. I believe Amplify uses a role named "s3amplify...". This role should be modified automatically according to your aws-exports.js file when you do amplify push. I don't see how the above statement would affect amplify push.
@mlabieniec was this feature request removed from the aws-amplify milestone on Jul 19? I thought this was a good feature to have. I have a use case where all my resized photos in S3 could live in a separate bucket. Right now, the S3Image and album components resize photos on the client side. If my photo files are very large, that is not desirable. And if the resized file is put in the same directory as the private user directory, a Lambda trigger would not work, because S3 triggers do not support regular-expression prefix matching.
It would be very convenient to have this supported. Currently, I have to call Amplify.configure() with the new bucket every time I want to do something with a non-default bucket.
We are also looking for this feature. We are building an app that requires access to multiple buckets, so it would be better if we didn't have to specify the bucket when configuring Amplify (or could fall back to a default bucket). Some APIs also need to allow us to specify a bucket, such as getting a pre-signed URL.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Something similar exists for configuring API: you can specify an array of endpoints. An array of buckets would be nice.
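For comparison, the multi-endpoint API configuration looks roughly like this (the endpoint names and URLs here are made up):

import { Amplify, API } from 'aws-amplify';

Amplify.configure({
  API: {
    endpoints: [
      { name: 'ordersApi', endpoint: 'https://orders.example.com' },
      { name: 'usersApi', endpoint: 'https://users.example.com' },
    ],
  },
});

// each call then names the endpoint it wants, e.g.:
// const orders = await API.get('ordersApi', '/orders', {});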
I also would like to see a feature added to Amplify to support the use of multiple buckets. Great work. Thank you all, contributors.
I used @tim-thompson's suggestion and it worked for me as well. The documentation for Storage.get probably needs to be updated, but the following works fine:
Storage.get(key1, {bucket: 'bucket-1'});
Storage.get(key2, {bucket: 'bucket-2'});
I've only tried it for "public" access (any authenticated user of my app), but looking at the code I don't see a reason why it wouldn't work in other scenarios too. In effect, the bucket you specify during Amplify.configure appears to be a default that can be overridden per call.
The above only works if the buckets are in the same region, unfortunately. 🤦‍♂️
If you want to add more S3 buckets to your project, call Storage.configure when uploading the file. Here is example code for multiple S3 buckets:
import { Storage } from 'aws-amplify';

const uploadFile = (fileName, file, isUser = false) => {
  if (isUser) {
    // point Storage at bucketA before uploading
    Storage.configure({
      AWSS3: {
        bucket: 'bucketA',
        region: 'us-exxxx',
      },
    });
    return Storage.put(fileName, file, {
      contentType: file.type,
    });
  } else {
    // point Storage at bucketB before uploading
    Storage.configure({
      AWSS3: {
        bucket: 'bucketB',
        region: 'us-exxxx',
      },
    });
    return Storage.put(fileName, file, {
      contentType: file.type,
    });
  }
};
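Usage then looks something like this (file being a File object from an input):

uploadFile('avatar.png', file, true);  // goes to bucketA
uploadFile('report.pdf', file, false); // goes to bucketB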
I'm used to using one bucket and event trigger per environment/account. Native CLI support for that would be great.
I am just commenting to keep the bot from killing this thread. This feature is much needed and has many use cases.
Also needing this feature
Adding my need for this functionality too
Would be nice to have that
cc @renebrandel: we need Amplify CLI to do the implementation first before we do anything on Amplify JS.
Does https://github.com/aws-amplify/amplify-cli/issues/3977 solve for this use case? I imagine that it would.
amplify import storage
Scanning for plugins...
Plugin scan successful
? Please select from one of the below mentioned services: S3 bucket - Content (Images, audio, video, etc.)
Amazon S3 storage was already added to your project.
It would be very useful if there were support for multiple storage buckets.
Ping 🙋
This is a must-have feature, similar to how you allow multiple APIs to be invoked. A workaround exists for web, but not for iOS and Android.
Any status on this feature? :)
Yeah, I'm unable to list objects from another bucket using this workaround currently. Would love support for this feature.
It would be nice if Amplify CLI supported importing multiple buckets (even from different regions) and maybe it would also ask which bucket should be "the default one" (I mean which bucket should be used when calling Storage methods without "bucket" param)
I also think that it would be really nice if there were some kind of official blog posts, Twitch streams, Youtube videos (whatever) about how to implement it ourselves (e.g. via patching) or how to create pull requests for AWS Amplify CLI in general (some kind of walkthrough of Amplify CLI).
Looking at https://github.com/aws-amplify/amplify-cli/tree/master/packages/amplify-cli, it's not straightforward to me (a person outside the Amplify team / mobile dev) how the CLI actually works, and I am not sure if I want to spend time on this.
Agreed. It makes sense that one doesn't want to put all their eggs in one bucket. I would use this feature. Will explore workarounds for now. Thanks
How is this still not solved 4 years after the issue was created? So many real-life projects need more than one S3 bucket...
+1 to this. I'm starting to think that Amplify is very, very limited when it comes to some real-world scenarios, and I truly don't understand why these issues don't get surfaced sooner. We like to brag about how scalable Amplify can be and how it could be used by large companies, but the more I use it, the more I realize that, other than the authentication mechanism (which is a pain in the ass to do yourself), it is not scalable and is more suited to startup companies.
It is also baffling that, just as @dragosiordachioaia mentioned above, the issue has not been addressed in the 4 years since it was created. I go back to Amazon's leadership principles of "Customer Obsession" and "Bias for Action", which are clearly not taken seriously here. :)
I think it's a relatively straightforward implementation. In React projects I create a reusable configuration class which I initialize every time I need to use Storage, with the bucket and storage level I want. Something like this....
//StorageService.js
import { Storage } from 'aws-amplify';
import configProdData from 'src/amplify.prod.json';
import configDevData from 'src/amplify.dev.json';

export default class StorageService {
  constructor(props) {
    const bucket = (props && props.bucket) ? props.bucket : '<default_bucket>';
    this.prefix = (props && props.prefix) ? props.prefix : '';
    const config = (process.env.NODE_ENV === 'production') ? configProdData : configDevData;
    Storage.configure({
      bucket,
      level: (props && props.level) ? props.level : 'public', // this can be overridden at list, put, get level
      region: config.Auth.region,
      identityPoolId: config.Auth.identityPoolId,
    });
  }

  // Other class methods for storage list, put, get, remove, etc.
  // You don't have to define these storage action methods here, since Storage.configure
  // overrides the configuration globally, but I like to keep everything pertaining to Storage together.
  async list(key) {
    return Storage.list(key, {
      // You can also override the protected and private prefixes. Keeping all three as ''
      // means your app will have access to the bucket's root and ALL prefixes, not just
      // public/, private/, protected/. NOTE: make sure your auth and unauth IAM policies are set accordingly.
      customPrefix: { public: this.prefix },
      pageSize: 'ALL',
    });
  }

  async upload(/* ... */) { /* ... */ }
  // ...
}
In the above, configProdData and configDevData are basically the Amplify configure JSON (typically amplify.prod.json and amplify.dev.json, which you import in StorageService.js). This is also where I define all my actions pertaining to Amplify Storage (list, get, put, etc.).
Now whenever I want to use "Storage Service" in a component or a custom Hook, all I do is
import StorageService from 'src/services/storage_services';
//initializes storage with <default_bucket>
const storage = new StorageService();
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at public level or potentially all prefixes not just public/
//initializes storage with <default_bucket> at protected level
const storage = new StorageService({ level: 'protected' });
const listKeys = await storage.list(prefix); //gets the list of objects from <default_bucket> at protected level
//initializes storage with <some_other_bucket>
const storage = new StorageService({ bucket: '<some_other_bucket>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> at public level or potentially all prefixes not just public/
//Or
const storage = new StorageService({ bucket: '<some_other_bucket>', prefix: '<my_custom_prefix>' });
const listKeys = await storage.list(prefix); //gets the list of objects from <some_other_bucket> under <my_custom_prefix>
and so on...
Another important thing to note is that all the buckets you want your app to access must be included in the IAM policies for the Auth_Role and Unauth_Role of the Cognito Identity Pool, otherwise it won't work (as defined here). Also, if you plan for your app to access the bucket at other prefixes, or even at the bucket's root level as defined by customPrefix, your IAM policy should be set appropriately: not just with public/, private/, and protected/ as the documentation shows, but with all the prefixes your app expects to access, or with no prefix at all if you want your app to be able to access ALL prefixes under a bucket (although that is not recommended, for security reasons).
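A minimal sketch of what the identity pool role policy might look like for two buckets (the bucket names and the exact action list are assumptions; tailor them to your app):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::my-default-bucket/public/*",
        "arn:aws:s3:::my-second-bucket/public/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::my-default-bucket",
        "arn:aws:s3:::my-second-bucket"
      ]
    }
  ]
}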
Overall, I like this level of control, rather than Amplify having an opinionated way of handling multiple buckets; I am sure many will have different opinions about what a native implementation should look like. This method keeps the implementation flexibility on me, so I can manage the Storage state at any given point simply by passing the bucket name and level. Some may disagree, but it has worked for me. I use this Storage service configuration class in pretty much all my React projects; it has evolved over time, but its implementation and usage have stayed consistent across my projects.
One year after my first post, I got it working for mobile projects with React Native ("aws-amplify": "^4.3.11", "aws-amplify-react-native": "^6.0.2"). It took me a few minutes to figure out that I needed to edit the roles, but it's working for the public/ prefix (in this case I did not need to call the Storage.configure method).
const RESOURCE_STORAGE_CONFIG = {
level: "public" as StorageAccessLevel,
bucket: RESOURCE_BUCKET_NAME,
region: "us-east-2",
expires: 60 * RESOURCE_EXPIRES_MINUTES,
};
const resourceUrl = await Storage.get(resourceS3Url, RESOURCE_STORAGE_CONFIG);
Just read through this start to finish. I see there are some good workarounds, but it would definitely be nice to have some more up-to-date & flexible documentation rather than this daunting quote:
Amplify projects are limited to exactly one S3 bucket.
In the documentation examples, what if the web app needs to interact with more than one bucket? It would be nice to have a system where we could specify several and interact with them via their names.