
LinShare
https://www.linshare.org/
GNU Affero General Public License v3.0

Guide on S3 Integration #177

Open fir3wall opened 3 years ago

fir3wall commented 3 years ago

Hi,

Is there a document somewhere that describes the integration with S3 storage in detail (not just an example properties file)?

This is what I've figured out, but it does not work:

linshare.documents.storage.mode=s3
linshare.documents.storage.bucket=MY_S3_BUCKET
linshare.documents.storage.user.name=API KEY
linshare.documents.storage.credential=API SECRET
linshare.documents.storage.endpoint=https://s3.AWS_REGION_CODE.amazonaws.com

But I get this:

Caused by: org.jclouds.http.HttpResponseException: request: HEAD https://s3.AWS_REGION_CODE.amazonaws.com/MY_S3_BUCKET HTTP/1.1 failed with response: HTTP/1.1 400 Bad Request

Please note that the instance is in AWS and has an IAM role attached, but without credentials in the file it complained about missing parameters, so I created an API key/secret too (not sure whether they are needed).

Can somebody point me to official documentation or offer some guidance, please?

Thanks D

fmartin-linagora commented 3 years ago

Hi,

Sorry, I don't have the answer right now. The S3 API support was developed and tested against MinIO at the time. I have a slight doubt about region support; it seems it was only implemented for OpenStack Swift. Sorry. Regards, Fred

tanandy commented 2 years ago

Since we are using the jclouds toolkit, AWS is supported: https://jclouds.apache.org/guides/aws/ We will try to provide a small guide for AWS S3.

fmartin-linagora commented 2 years ago

The last time I used the S3 API with MinIO, I used the following keys:

linshare.documents.storage.mode=s3
linshare.documents.storage.identity=${AWS_ACCESS_KEY_ID}
linshare.documents.storage.credential=${AWS_SECRET_ACCESS_KEY}
linshare.documents.storage.endpoint=${AWS_AUTH_URL}
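For concreteness, a minimal sketch of a MinIO-backed configuration using those keys (the bucket name, endpoint, and credential values below are hypothetical placeholders, not values from this thread):

```properties
# Hypothetical example values; adjust to your own MinIO deployment.
linshare.documents.storage.mode=s3
linshare.documents.storage.bucket=linshare-data
linshare.documents.storage.identity=minio-access-key
linshare.documents.storage.credential=minio-secret-key
linshare.documents.storage.endpoint=http://minio.example.local:9000
```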

Can you share more logs ?

ghyster commented 1 year ago

I've opened an issue on the same topic in #51; here's what I suggested then: using aws-s3 instead of s3 as the storage provider resolves the issue. I'd suggest replacing s3 with aws-s3 in the supported providers.
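For reference, the suggested fix applied to the configuration from the original report might look like this (a sketch only; the property names are taken from this thread, and the bucket, region, and credential values are placeholders):

```properties
# Switch the jclouds provider from the generic "s3" to "aws-s3".
linshare.documents.storage.mode=aws-s3
linshare.documents.storage.bucket=MY_S3_BUCKET
linshare.documents.storage.identity=${AWS_ACCESS_KEY_ID}
linshare.documents.storage.credential=${AWS_SECRET_ACCESS_KEY}
linshare.documents.storage.endpoint=https://s3.AWS_REGION_CODE.amazonaws.com
```

A plausible explanation for the 400 Bad Request: jclouds' generic s3 provider signs requests with AWS Signature Version 2, while current AWS regions generally require Signature Version 4, which the aws-s3 provider supports.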