Closed · brianhiss closed this 11 years ago
Hi Brian,
Thanks for that. Yes, I can see there might be a problem here. I'll look into your issue and the one you refer to soon. Crazy busy at the moment, but hope to get back to you with some info.
Cheers, Andy
Appreciate you taking a look. I know how busy we all can get.
FYI, as a hack for now, I was able to add "NODE_TLS_REJECT_UNAUTHORIZED": 0 to my environment, but that poses a lot of security risk. It's working for my dev environment for now...
Thanks again.
Experiencing a similar issue. Any time frame on a fix? I'd put in a pull request, but I have no idea where to start.
I'm having the same issue. It's not quite clear to me what's causing this.
OK, after reading https://github.com/LearnBoost/knox/issues/153, I understand the problem.
A program-specific workaround: execute your node script like:
NODE_TLS_REJECT_UNAUTHORIZED=0 node myscript.js
or put this before doing anything that requires the tls module (namely awssum):
process.env.NODE_TLS_REJECT_UNAUTHORIZED = 0
Either one will work, and won't compromise programs other than the one that needs to interact with your S3 bucket.
This is interesting. I'm sure it should only affect requests that have buckets which have '.'s in them.
However, I'm also sure that I don't use the new style domains (www.example.com.s3.amazonaws.com) I actually use the old style ones (s3.amazonaws.com/www.example.com).
So ... it would help if anyone here can paste me examples of what they've done, i.e. where you create the S3 client (minus your credentials of course) and the exact calls you're making.
1) Is it service operations (e.g. ListBuckets)? 2) Is it bucket operations (e.g. GetBucket*)? 3) Are you doing object operations (e.g. GetObject)?
Examples would be great. :)
Many thanks, Andy
Also, you must tell me if you're using AwsSum v0.12.x or if you've upgraded to awssum-amazon-s3.
Using the "new style" urls, I'm doing a PutObject (using awssum-amazon-s3).
var s3 = new amazonS3.S3({
    'accessKeyId'     : options.key,
    'secretAccessKey' : options.secret,
    'region'          : this.region
});

var options = {
    BucketName    : 'some.bucket.name.s3.amazonaws.com',
    ObjectName    : 'some/file.html',
    ContentLength : fileStats.size,
    Body          : fs.createReadStream(fileStats.filePath)
};

s3.PutObject(options, function(err, data) {
    // ...
});
I do believe it is an issue with buckets that have the "." in the name.
I'm having the same issue performing PutObject using awssum-amazon-s3 on bucket names with a "." in them.
(Should note that jprichardson's solution allows it to work)
Confirmed: this is only an issue with bucket names that have a "." in the name.
Hint: "it's fine to use dashes in bucket names" (using underscores doesn't seem to work in any region but US East).
:+1: any solution other than changing bucket name?
Yup. It would be great if this could get fixed. Might be a bug in nodes certificate validation?
From what I can tell, I can't fix it. There are a number of workarounds, and if any of them work for you, I'd suggest using them.
Firstly though, I'm going to write down some guidelines, then I'm going to suggest some workarounds in case you don't stick to the guidelines.
1) If making a new bucket, use lowercase letters, numbers and dashes only. The reason I suggest this is because that's what you can have in a domain name. (I'm not including unicode domain names here.) 2) Do the usual thing whereby you don't start a bucket name with a dash. 3) Don't put full stops in your bucket names.
Seriously, please do all of these things. In AwsSum, we use the subdomain endpoint rather than the path-style URL route.
Just to clarify some more: (1) is because we're using subdomains, and the domain name system only allows those characters. (2) Same. (3) Because of wildcard certificates, any bucket name with a full stop in it is not covered by the SSL certificate Amazon has in place.
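The guidelines above boil down to "keep the bucket name a valid single DNS label". A minimal sketch of a checker (a hypothetical helper for illustration, not part of awssum):

```javascript
// Returns true when a bucket name follows the guidelines above:
// lowercase letters, digits and dashes only, no leading or trailing
// dash, and no full stops - so it stays a single DNS label covered
// by Amazon's *.s3.amazonaws.com wildcard certificate.
function isSafeBucketName(name) {
  return /^[a-z0-9]([a-z0-9-]*[a-z0-9])?$/.test(name);
}

console.log(isSafeBucketName('my-safe-bucket'));   // true
console.log(isSafeBucketName('my.dotted.bucket')); // false (full stops)
console.log(isSafeBucketName('-leading-dash'));    // false (starts with dash)
```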
Also, please read this page here : http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html (the bit about bucketname.s3.amazonaws.com).
Also, please read this page here : http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#VirtualHostingLimitations - Note: I am not going to change AwsSum to use the non-SSL endpoints. I believe that we should always use the SSL endpoint.
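Those two pages contrast the two S3 addressing styles. A quick sketch of the difference, using a hypothetical dotted bucket name (only in the virtual-hosted style does the bucket name end up in the TLS hostname):

```javascript
// The two S3 addressing styles, for a hypothetical dotted bucket.
var bucket = 'my.dotted.bucket';
var key = 'some/file.html';

// Virtual-hosted style (what AwsSum uses): the bucket name becomes
// part of the hostname, so dots break the wildcard certificate match.
var virtualHosted = 'https://' + bucket + '.s3.amazonaws.com/' + key;

// Path style: the hostname is always s3.amazonaws.com and the bucket
// moves into the path, so the certificate always matches.
var pathStyle = 'https://s3.amazonaws.com/' + bucket + '/' + key;

console.log(virtualHosted); // https://my.dotted.bucket.s3.amazonaws.com/some/file.html
console.log(pathStyle);     // https://s3.amazonaws.com/my.dotted.bucket/some/file.html
```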
If you've not adhered to the guidelines that both Amazon and this comment suggest, then you could try doing what @jprichardson suggests above. Either this when you run the program:
NODE_TLS_REJECT_UNAUTHORIZED=0 node myscript.js
or this inside your program:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = 0
I don't personally suggest using either of these techniques, but if you just want your program to work, then that's cool. I haven't tried either of them myself, so YMMV; other people seem to have had success. Note: you are responsible for deciding to do this, and you must understand the consequences of doing it too.
I am closing this bug - it's not that I won't fix it, it's that I can't fix it. Follow the recommendations and guidelines above, and use the workaround if you choose.
This is happening to me with a bucket called "cornerstone-desktop", i.e. a name with only a dash.
@jraede - I just hit this too. In my case I'm using aws-sdk-js and was not supplying a region; when I did, the error went away, as described at https://github.com/aws/aws-sdk-js/issues/139#issuecomment-21552251.
I think that this issue should be reopened and made a higher priority. AFAIK, buckets must contain dots in order to be used as a CNAME; this isn't bad practice or uncommon at all. If you aren't relying on Amazon's subdomain SSL, you don't care about the wildcard limitation; in my case, I'm using my own SSL certificate for the CNAME.
The workarounds listed are far from ideal, for the same reason you gave ("I believe that we should always use the SSL endpoint."). You recommend users turn off SSL validation application-wide, rather than making it an option just for awssum, or fixing it by accessing S3 differently so that it's not necessary. Rather than compromising just the connection to AWS, the promoted workaround compromises an app's connections to everything.
This isn't an issue with node; Amazon's SSL certificate is for *.s3.amazonaws.com; if I'm using sub.domain.com and awssum is prefixing that to s3.amazonaws.com, then the SSL certificate would need to include *.*.*.s3.amazonaws.com for it to match.
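The label-count point above can be sketched with a toy matcher (an illustration of RFC 6125-style wildcard matching, where "*" covers exactly one DNS label; this is not Node's actual implementation):

```javascript
// A certificate wildcard matches exactly one DNS label, so
// *.s3.amazonaws.com covers "bucket.s3.amazonaws.com" but not
// "my.bucket.s3.amazonaws.com" (one label too many).
function matchesWildcard(pattern, hostname) {
  var patternLabels = pattern.split('.');
  var hostLabels = hostname.split('.');
  // Label counts must agree: "*" never spans more than one label.
  if (patternLabels.length !== hostLabels.length) return false;
  return patternLabels.every(function (label, i) {
    return label === '*' || label === hostLabels[i];
  });
}

console.log(matchesWildcard('*.s3.amazonaws.com', 'bucket.s3.amazonaws.com'));    // true
console.log(matchesWildcard('*.s3.amazonaws.com', 'my.bucket.s3.amazonaws.com')); // false
```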
Hi @michaelhart,
I'm afraid AwsSum is now deprecated (as a whole) and I'm recommending that people use aws-sdk (the official package) instead. I'm afraid I don't have the time to keep up with the might of Amazon. For what it's worth, I think AwsSum filled a hole when it was needed, and in general the APIs Amazon produces are backwards compatible, so AwsSum will keep working - I'm just not in a position to add new features to it any longer.
I know that doesn't help your issue but it does at least give you a data point as to whether to jump ship over to aws-sdk.
Cheers, Andy
I just upgraded to the newest version of Node v0.10.8 and this broke awssum v1.2.4. I am now getting a certificate error:
Error Connecting to AWS : { Code: 'AwsSum-Request', Message: 'Something went wrong during the request', OriginalError: [Error: Hostname/IP doesn't match certificate's altnames] }
Seems related to how the newest versions of Node check TLS; other libraries have had this issue as well: https://github.com/LearnBoost/knox/issues/153#issuecomment-15196979.
They noticed, along with me, that bucket names with a "." in them are causing issues (i.e. my.bucket.com).