Closed Centzilius closed 6 years ago
I can't find it either. It probably means custom servers are supported in the code but not exposed in the UI.
@alanedwardes would you like to consider adding it, or is it too much work?
Hi,
Can you tell me a bit more about your use case?
Thanks
On Sun, Dec 27, 2015 at 10:16 AM -0800, "Jaex" <notifications@github.com> wrote:
@alanedwardes (Not sure if you meant me) I'd like to use an OpenStack Swift server, which, if I'm not mistaken, has an S3-compatible API.
@Centzilius Thanks. If there are other people wanting to do this I think it should be considered, but I'm afraid your use case is very specific, so I don't think it's worth adding to ShareX.
Out of interest, why do you want to host your own S3? It feels a bit crazy considering the reliability, scale and cost-effectiveness Amazon gives you. You don't have to manage infrastructure!
@alanedwardes I was planning to use OVH's Public Cloud, powered by OpenStack, and it's pretty cheap in my opinion: https://www.ovh.de/cloud/storage/object-storage.xml (German) https://www.ovh.co.uk/cloud/storage/object-storage.xml (English)
+1 from me. There are a few services out there that use an S3 compatible API. Being able to enter a custom host would allow ShareX to work with all of them.
https://en.wikipedia.org/wiki/Amazon_S3#S3_API_and_competing_services
@grvr Which service are you using?
I am using ObjSpace from Delimiter.
I'm in the same boat as grvr, so yeah, throwing some support behind custom S3 end points.
I am using a self-hosted solution which is entirely S3 compatible. I'd love to be able to use it in ShareX.
Because I didn't write this uploader, I can't add it, especially since even if I managed to add it, I couldn't test it.
I can provide you with access to my S3 instance, if you would need to test. But it doesn't sound like you know how to go about it, unless I misunderstood you.
Oh well, it would have just been a nice thing to have :)
Is this still happening? If so, could you please add the AWS Ohio region? It doesn't seem to be available. Region code is us-east-2
Had a look through the code and couldn't seem to find the defs.
This should just need an AWS SDK NuGet package update, the region list is coming from there.
@jaex would you be able to bump the package version? If not, or if there was a breaking change, I can try to get to this within the next few days.
Cheers!
If Amazon S3 were free, I would write it from scratch to avoid the external library dependency and support all custom hosts, but I can't even try its free trial because it ended years ago.
But doesn't ShareX already support DreamObjects? https://github.com/ShareX/ShareX/blob/master/ShareX.UploadersLib/FileUploaders/AmazonS3.cs
BTW, if you need an S3-compatible instance, I could host one for you.
S3 is dirt cheap, anyways. It'd cost you a few cents to test on a region close to where you live.
If you're writing your own implementation of the S3 uploader, please support both S3 v2 and v4 signature types on uploads to custom origins. v2 is easier to implement than v4 if I recall correctly (especially for multi-chunk uploads, I think), and some S3-compatible APIs only support v2 signatures (like pithos).
S3 v2 signatures are documented here, and S3 v4 signatures are documented here. You'll definitely have to implement v4 signatures, as newer AWS S3 regions don't support v2 signed requests.
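To make the v2/v4 distinction concrete, here is a minimal sketch of the v4 signing-key derivation, which is the part v2 doesn't have: the secret key is never used directly but is folded through the date, region, and service. The credential values below are the placeholder examples from AWS's documentation, not real secrets.

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive the AWS Signature Version 4 signing key.

    `date` is the request date in YYYYMMDD form; the resulting key is scoped
    to date/region/service, so it can be cached and reused within a day.
    """
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# Placeholder credentials from the AWS docs, not a real key:
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                         "20150830", "us-east-1", "s3")
print(len(key))  # 32-byte HMAC-SHA256 signing key
```

The region-scoping is also why newer regions can refuse v2: a v2 signature is just `HMAC(secret, string_to_sign)` with no scope at all.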
I checked my Amazon S3 account now, and its free trial ended years ago, but I can still use it for free; I'm not sure how.
Therefore I'm trying to implement it from scratch now without using the Amazon SDK, but this v4 signature looks overwhelmingly difficult. The Amazon SDK source code for signature creation is literally thousands of lines :(
Edit: I managed to fix signature issue now.
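For anyone curious what makes v4 so involved: before any signing key is applied, the request must be reduced to a "canonical request" and then a "string to sign". This sketch shows the overall shape (it omits the URI/query percent-encoding rules the real spec requires, which is where much of the SDK's bulk goes):

```python
import hashlib
from datetime import datetime, timezone

def string_to_sign(method, uri, query, headers, payload_hash,
                   region, service, now=None):
    """Build the SigV4 canonical request and string to sign (a sketch;
    real implementations must also percent-encode the path and query)."""
    now = now or datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    scope = f"{now.strftime('%Y%m%d')}/{region}/{service}/aws4_request"
    # Headers are lowercased, trimmed, and sorted by name.
    names = sorted(headers, key=str.lower)
    signed_headers = ";".join(n.lower() for n in names)
    canonical_headers = "".join(f"{n.lower()}:{headers[n].strip()}\n" for n in names)
    canonical_request = "\n".join(
        [method, uri, query, canonical_headers, signed_headers, payload_hash]
    )
    # String to sign: algorithm, timestamp, credential scope,
    # and the SHA-256 hash of the canonical request.
    return "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode("utf-8")).hexdigest(),
    ])
```

Any mismatch between the client's and server's canonicalization (header ordering, trailing whitespace, encoding) produces the `SignatureDoesNotMatch` errors seen later in this thread.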
Now I've written the Amazon S3 implementation from scratch and also added custom server support, like this:
It supports signature v4 currently.
But I don't have any custom server (with an account) to test it, so I'm not even sure whether there are any servers that can work with the Amazon S3 API without requiring specific changes to the current code.
I also couldn't test DreamObjects, which existed in the previous implementation, because it requires a credit card even to register for a trial account. So I sent an email to their support and requested a test account.
If you want I could give you access to a test instance
On Mar 17, 2017, 12:54 AM, "Jaex" <notifications@github.com> wrote:
I've tested with the latest artifact:
It appears to be adding an unwanted prefix (the bucket name).
Note: here https://play.minio.io:9000/minio/ you can play with a public instance.
@neurogas: That means ShareX is using virtual-hosted addressing to upload, which should be supported by most S3 implementations anyway (and would probably work with the implementation you're using if you enabled wildcard DNS for *.storage.ennetech.me). Would it be difficult to add a checkbox to disable virtual host uploads, @Jaex?
The Amazon S3 specification requires the bucket name in the subdomain. Otherwise, how is the storage service going to know which bucket it is?
Paths... I know path-style is supported at least for v2; I don't have any experience with v4 at all. But storage.ennetech.me/sharex should work just as well as sharex.storage.ennetech.me, apart from some minor changes to the signature (at least for v2).
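The two addressing styles being debated here differ only in where the bucket name goes. A tiny sketch, using the ennetech.me host from this thread as an example:

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = False) -> str:
    """Build an S3 object URL in virtual-hosted or path-style form (sketch)."""
    scheme, host = endpoint.split("://", 1)
    if path_style:
        # Path-style: bucket is in the path; one plain TLS cert covers everything.
        return f"{scheme}://{host}/{bucket}/{key}"
    # Virtual-hosted style: bucket is in the hostname;
    # needs wildcard DNS and a wildcard TLS certificate.
    return f"{scheme}://{bucket}.{host}/{key}"

print(object_url("https://storage.ennetech.me", "sharex", "img.png", path_style=True))
# → https://storage.ennetech.me/sharex/img.png
print(object_url("https://storage.ennetech.me", "sharex", "img.png"))
# → https://sharex.storage.ennetech.me/img.png
```

This is also why the SSL problem comes up below: virtual-hosted style puts the bucket into the hostname, so the certificate must cover `*.storage.ennetech.me`, while path-style works with a certificate for the bare host.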
But the SSL certificate wouldn't be valid. I've successfully used Minio with:
I will test it using the DNS wildcard. EDIT: indeed, an SSL validation problem.
I've hacked around the domain issue by using https://ennetech.me:9000/ as the hostname and a bucket named "storage", but I've realized that Minio uses v4 signatures.
Response: <?xml version="1.0" encoding="UTF-8"?>
SignatureDoesNotMatch
I tested now; @deansheather, your endpoint format works with v4 too. But I doubt it's going to solve @neurogas's problem, because he's getting an invalid signature, which means his server's signature implementation must differ from what the Amazon S3 API uses.
I was pretty sure minio follows the S3 signature specification...
This doesn't seem right; shouldn't BucketName and Key have values?
<Key></Key><BucketName></BucketName><Resource>/ShareX/2017/03/p49k0loaa7voyre1lh313baud927txp3.png</Resource>
That link shows how to use it with the AWS SDK library (I don't use the library) and doesn't say anything about their specification.
DreamObjects gave me an account to test now, and it worked on the first try, which shows that my Amazon S3 implementation has no problem working with custom storage hosts, as long as those hosts support v4 signatures.
Currently I'm talking with the developer of Minio to solve the issue.
I'm so glad to read this
I fixed the problem with the help of the Minio developer, but there is still one big problem. Minio doesn't support this header: headers["x-amz-acl"] = "public-read";
so the returned URL ends up private. There is a workaround to make the bucket public, but that requires a custom policy.
No problem at all; using the Minio UI, setting a policy is a matter of seconds.
If you mean the Minio Browser bucket policy setting: I asked whether that policy also makes your file listing public, and he said yes. So you need a policy that keeps the file listing private but makes reading a file by URL public.
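For reference, a bucket policy along these lines should give exactly that split, assuming the server honors standard S3 bucket policies (the bucket name `sharex` is a placeholder). Granting anonymous `s3:GetObject` on objects, without granting `s3:ListBucket` on the bucket itself, lets anyone fetch a file by its direct URL while keeping the listing private:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::sharex/*"]
    }
  ]
}
```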
Just tested with the latest ShareX artifact; working perfectly, thank you!
I will investigate to hopefully solve the privacy issue of leaving the whole bucket open
I'd like to add a custom S3 server, but although this pull request states "Added support for custom Amazon S3 endpoints[...]", I can't find the option to add a custom server.