Closed: mirkoscotti closed this issue 3 days ago
This makes sense. BucketAlreadyExistsException
should also be caught in a similar way - i.e. when the bucket is owned by someone else (bucket names are globally unique).
There's also unique behavior in us-east-1 (from the SDK docs):
Amazon S3 returns this error in all Amazon Web Services Regions except in the North Virginia Region. For legacy compatibility, if you re-create an existing bucket that you already own in the North Virginia Region, Amazon S3 returns 200 OK and resets the bucket access control lists (ACLs).
Hey, thanks for the fast fix! Just a tiny addition: attach the original exception as the cause, e.g. build the FileSystemAlreadyExistsException
from e.getCause().getMessage() and set e as its cause via initCause(e) (FileSystemAlreadyExistsException has no (String, Throwable) constructor).
This makes it possible to inspect the original cause and distinguish the buckets owned by you (in this case one could catch the FileSystemAlreadyExistsException,
ignore the error, and return the result of getFileSystem)
from the others (in this case the FileSystemAlreadyExistsException
is definitely thrown).
I wasn't sure if you wanted the library to do this automatically or if a library user would do it.
For the former ...
I'm not sure this is a good idea, as the Java NIO SPI has both newFileSystem
and getFileSystem
methods. The former involves creating a whole new filesystem, while the latter gets an existing filesystem. Translated to S3, the first means "create a bucket" and the second means "get me a representation of an already existing bucket". I don't think the former should automatically do the latter, otherwise it might give you the impression that you have made a brand new bucket when you actually haven't. Of course, a user could catch the exception and subsequently call getFileSystem
.
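The catch-and-fall-back pattern discussed above can be sketched generically. The Supplier-based helper below is an illustration of the idea, not the library's API; the names `createOrGet`, `newFileSystem`, and `getFileSystem` parameters are assumptions for the sketch:

```java
import java.nio.file.FileSystemAlreadyExistsException;
import java.util.function.Supplier;

public class CreateOrGet {
    // If creation fails because the filesystem (bucket) already exists,
    // fall back to fetching the existing one instead of failing outright.
    static <T> T createOrGet(Supplier<T> newFileSystem, Supplier<T> getFileSystem) {
        try {
            return newFileSystem.get();
        } catch (FileSystemAlreadyExistsException e) {
            // In real code you might first inspect e.getCause() to confirm
            // the bucket is owned by you before falling back.
            return getFileSystem.get();
        }
    }

    public static void main(String[] args) {
        // Simulate a creation attempt that fails because the bucket exists.
        String fs = createOrGet(
                () -> { throw new FileSystemAlreadyExistsException("my-bucket"); },
                () -> "existing filesystem for my-bucket");
        System.out.println(fs); // prints "existing filesystem for my-bucket"
    }
}
```

This keeps the SPI semantics intact: the library's newFileSystem still refuses to pretend it created a bucket it didn't, and the fallback decision stays with the caller.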
For the latter...
One thing I will do is add the S3 exception as the cause of the FileSystemAlreadyExistsException
to make it easier for users to code logic for what to do.
The cause of the exception is now added, so developers can inspect it if needed.
When I try to create an already existing bucket through
S3FileSystemProvider.newFileSystem
, a BucketAlreadyOwnedByYouException
is raised by the AWS library and wrapped in an ExecutionException
. The file system provider should detect this exception and convert it to a FileSystemAlreadyExistsException
, according to the FileSystemProvider's Javadoc. I suggest modifying the ExecutionException
catch block as follows:
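The original snippet is not shown here. A minimal, self-contained sketch of the translation being suggested might look like the following; note that the class below named BucketAlreadyOwnedByYouException is a local stub standing in for the AWS SDK's class of the same name, and the method name `createBucketOrThrow` is a hypothetical placeholder, not the provider's actual code:

```java
import java.nio.file.FileSystemAlreadyExistsException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class CatchBlockSketch {
    // Stub for software.amazon.awssdk.services.s3.model.BucketAlreadyOwnedByYouException,
    // so this sketch runs without the AWS SDK on the classpath.
    static class BucketAlreadyOwnedByYouException extends RuntimeException {
        BucketAlreadyOwnedByYouException(String msg) { super(msg); }
    }

    // Translate the async client's ExecutionException into the exception
    // the FileSystemProvider contract expects.
    static void createBucketOrThrow(CompletableFuture<Void> pendingCreate) {
        try {
            pendingCreate.get();
        } catch (ExecutionException e) {
            if (e.getCause() instanceof BucketAlreadyOwnedByYouException) {
                // FileSystemAlreadyExistsException has no (String, Throwable)
                // constructor, so the cause is attached via initCause.
                FileSystemAlreadyExistsException fse =
                        new FileSystemAlreadyExistsException(e.getCause().getMessage());
                fse.initCause(e);
                throw fse;
            }
            throw new RuntimeException(e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        }
    }
}
```

Callers can then catch FileSystemAlreadyExistsException and walk its cause chain to tell "owned by you" apart from "owned by someone else".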