Pradeeplogme / s3-bash

Automatically exported from code.google.com/p/s3-bash

Signature is never matched #1

Closed GoogleCodeExporter closed 8 years ago

GoogleCodeExporter commented 8 years ago
What steps will reproduce the problem?
1. Download the latest version of s3-bash
2. ./s3-put -k MY_ACCESS_KEY_HERE -s secret.txt -T the_file_to_upload.txt /bucketname/objectname

What is the expected output? What do you see instead?
Not sure about the expected output as I've always received an error.

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <RequestId>D03B56E041F0EFCB</RequestId>
  <SignatureProvided>YWIOmYqG8gavrx/DWAjynHdD2AU=</SignatureProvided>
  <StringToSignBytes>50 55 54 0a 63 56 47 59 64 4b 65 33 6e 6b 61 5a 62 65 66 5a 34 77 30 4a 71 77 3d 3d 0a 74 65 78 74 2f 70 6c 61 69 6e 0a 57 65 64 2c 20 30 33 20 4f 63 74 20 32 30 30 37 20 31 39 3a 34 32 3a 35 30 20 47 4d 54 0a 2f 61 6e 79 74 68 69 6e 67 2f 50 61 74 69 65 6e 74 5f 5f 68 69 73 74 6f 72 79 5f 66 6f 72 6d 2e 70 64 66</StringToSignBytes>
  <AWSAccessKeyId>MYACCESSKEY</AWSAccessKeyId>
  <HostId>qjisyJouaGJqCFxzQa5+cHHjAF/+xbKSOYTRU8ufl+nfNpiXacZtSe0sWP6jbPEf</HostId>
  <StringToSign>PUT
cVGYdKe3nkaZbefZ4w0Jqw==
text/plain
Wed, 03 Oct 2007 19:42:50 GMT
/anything/Patient__history_form.pdf</StringToSign>
</Error>

What version of the product are you using? On what operating system?
Using the version from Aug 23rd on Fedora Core 6

Please provide any additional information below.

I'm attaching the signature file that was generated by s3-common-functions. I've removed my actual secret key, but left the line of @'s and ~'s that was below it.

I believe this might be the cause of the issue, but am not sure how to fix it.

Original issue reported on code.google.com by kevin...@gmail.com on 3 Oct 2007 at 7:52

Attachments:

GoogleCodeExporter commented 8 years ago
Hello, I'm getting the same problem. It's the first time I've tried to use it, but the signature never matches. I checked it all again and verified on Amazon that everything appears correct. Would love to use this tool if I can get it working.

Original comment by chrissav...@gmail.com on 4 Oct 2007 at 3:47

GoogleCodeExporter commented 8 years ago
I've examined the uploaded file, and it doesn't look right. That may be because of an ASCII-to-binary conversion during upload, but I'm not convinced. Essentially, this file should be your secret key (normally 40 bytes of ASCII data: A-Za-z0-9=, etc.) padded to 64 bytes with zeros (these appear as ^@ if you use cat -v on the file). There should not be a carriage return or line feed in the file, which your file appears to have.

If you're pasting from the Amazon AWS Identifiers web page, some browser-clipboard combinations grab additional characters. This happens on my Windows XP machine, for example, with Firefox 2.

Make sure you've saved the file with 'Unix' line endings (LF) (shouldn't actually matter, but some text editors fidget with things otherwise) and with no Unicode Byte Order Marks (BOMs) or the like, i.e. plain ASCII.

Please check your file is 64 bytes long, with 24 final bytes of ^@ at the end. Please also check your original secret.txt (secret key file) is 40 bytes long.

Original comment by raphael....@gmail.com on 4 Oct 2007 at 2:14
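The trailing-byte problem described above can be reproduced directly from the shell. A minimal sketch (the file names key_ok and key_vi are illustrative, not part of s3-bash):

```shell
#!/bin/sh
# Simulate two secret key files: one saved correctly, one with a trailing LF
printf '%40s' '' | tr ' ' 'A' > key_ok             # exactly 40 bytes
{ printf '%40s' '' | tr ' ' 'A'; echo; } > key_vi  # 41 bytes: key plus LF

wc -c < key_ok   # should print 40
wc -c < key_vi   # should print 41

# cat -v makes invisible characters visible
cat -v key_vi
```

The one-byte difference is exactly the editor's trailing linefeed.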

GoogleCodeExporter commented 8 years ago
Raphael,

My secret.txt file is 41 bytes long. I use Linux and can assure you that there aren't any additional characters from the browser making it into the secret.txt file.

The file I uploaded appears exactly as it does in my Linux terminal, along with the ^@'s. The only change I made was to remove my secret, but I left everything else intact -- meaning that the carriage return and ^@'s were put there by the s3-common-functions code that generated s3-put.key.o15993.

I took a look at the script, but couldn't figure out how that file was getting created. It's definitely this tool, as I'm able to use Ruby, PHP and Perl scripts with my key and secret to get files in and out of S3 without any problems.

Hope this helps,
Kevin

Original comment by kevin...@gmail.com on 4 Oct 2007 at 2:23

GoogleCodeExporter commented 8 years ago
Also, check the MD5 value of Patient__history_form.pdf

To do this, type:-
openssl dgst -md5 -binary "Patient__history_form.pdf" | openssl enc -e -base64

Compare the value to cVGYdKe3nkaZbefZ4w0Jqw== (in the Amazon response above).

Also, although it is minor, I'd recommend setting the MIME type on upload to application/pdf rather than text/plain by using the -c option of s3-put.

Original comment by raphael....@gmail.com on 4 Oct 2007 at 2:33

GoogleCodeExporter commented 8 years ago
I've verified it is the same:

[kold@kold tmp]$ openssl dgst -md5 -binary "Patient_history_form.pdf" | openssl enc -e -base64
cVGYdKe3nkaZbefZ4w0Jqw==

Adding the -c 'application/pdf' didn't help either.

I think the problem is that the s3-put.key file generated by the script contains those extra characters. Why not just read the secret directly from the source to eliminate the issues we're seeing?

Original comment by kevin...@gmail.com on 4 Oct 2007 at 2:42

GoogleCodeExporter commented 8 years ago
Good, at least the MD5 signatures are the same, so we can discount any openssl discrepancies (that means the SHA1 part of the HMAC-SHA1 is very likely signing things the same way, too).

Now, it worries me that your secret key file is 41 bytes long - I understood they should be only 40.

The reason for the padded file is quite simple: the block size for an HMAC-SHA1 key is 64 bytes, and AWS only provides 40. HMAC-SHA1 requires padding with nulls (ASCII code 00, the ^@ symbol with cat -v) before using the key in binary XORs. Creating this second file means (a) I can be certain I don't stamp on your key, and (b) the programming job is much simpler, as I only need one function (readBytesAndXorAndWriteAsBytesTo) to do the repetitive hashing needed by SHA1. Also, if Amazon changes their key sizes, nothing should break.

Try this: od -A n -t uC <YOURSECRETKEYFILE>, where YOURSECRETKEYFILE is the original secret key file.

Check the final value isn't 13 or 10, or less than 32 or greater than 127.
Check that no values in the file are less than 32 or greater than 127 (I'd be surprised if they were much greater than 100).

Let me know - but please don't post the lot (as a Linux-savvy user I'm sure you won't).

Original comment by raphael....@gmail.com on 4 Oct 2007 at 3:25
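The final-byte check above can be scripted. A sketch under the same assumptions (secret_demo.txt is an illustrative stand-in for the real key file; here it is built with a trailing LF on purpose, as vi would save it):

```shell
#!/bin/sh
# Build a demo 41-byte key file: 40 characters plus a trailing LF
{ printf '%40s' '' | tr ' ' 'B'; echo; } > secret_demo.txt

# od -A n -t uC prints each byte as an unsigned decimal; take the last one
last=$(od -A n -t uC secret_demo.txt | tr -s ' ' '\n' | grep . | tail -n 1)

# Byte 10 is LF, byte 13 is CR - neither belongs in a secret key file
if [ "$last" -eq 10 ] || [ "$last" -eq 13 ]; then
  echo "trailing newline detected"
fi
```

On a correctly saved 40-byte key file, the final value would instead be an ASCII character from the key itself.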

GoogleCodeExporter commented 8 years ago
OK, I ran the secret key through od like you asked, and the final value is 10.

Yes, my key is 41 bytes. I opened a blank file in vi and typed my key in manually, then hit save, and that file was 41 bytes.

Original comment by kevin...@gmail.com on 4 Oct 2007 at 3:38

GoogleCodeExporter commented 8 years ago
The final byte, 10, is a linefeed - LF. A-ha.

I've just tried my vi and it appends an LF at the end, even to a blank file. And it doesn't hint that it's there.

I use TextMate on a Mac with 'Show invisibles' on for this sort of work... vi is only really the tool for me for line-delimited files where white space and line spacing don't matter...

If you don't have another text editor, try this:-
dd if=<SECRETKEYFILE_VI> of=<SECRETKEYFILE_S3_BASH> bs=1 count=40

Then do:
ls -la <SECRETKEYFILE_S3_BASH> and verify it is 40 bytes.
od -A n -t uC <SECRETKEYFILE_S3_BASH> and verify there is no final '10'.
diff <SECRETKEYFILE_VI> <SECRETKEYFILE_S3_BASH> and verify that they differ only by a newline.

Then use SECRETKEYFILE_S3_BASH instead.

Let me know!

Original comment by raphael....@gmail.com on 4 Oct 2007 at 3:55
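The dd fix and its verification can be put together in one runnable sketch (file names secret_vi.txt and secret_s3.txt are placeholders; the demo input is built with a trailing LF to mimic what vi saves):

```shell
#!/bin/sh
# Demo input: a 41-byte key file as vi saves it (40 characters + LF)
{ printf '%40s' '' | tr ' ' 'C'; echo; } > secret_vi.txt

# Copy only the first 40 bytes, dropping the trailing LF
dd if=secret_vi.txt of=secret_s3.txt bs=1 count=40 2>/dev/null

wc -c < secret_s3.txt   # should print 40
```

The output file then matches what s3-bash expects before it pads the key to 64 bytes.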

GoogleCodeExporter commented 8 years ago
That did it!  Thanks for your help in resolving this! You might want to add a note or write a script that does this for others who use it.

Original comment by kevin...@gmail.com on 4 Oct 2007 at 4:12

GoogleCodeExporter commented 8 years ago
kevinold,

Glad to hear it worked for you. Phew! At least it wasn't something worse.

I've made a new release - available in the downloads section - that checks that a secret key file is 40 bytes long. If there's a form of words useful for the home page or wiki, please post something to me and I'll put it up.

Let me know how you get on with the scripts and if there's anything useful we can add - start a new issue for that if you like.

Raph

Original comment by raphael....@gmail.com on 4 Oct 2007 at 9:34

GoogleCodeExporter commented 8 years ago
Awesome! Thanks for your help with this and all of the work you've done on s3-bash!

Original comment by kevin...@gmail.com on 4 Oct 2007 at 11:55

GoogleCodeExporter commented 8 years ago

Original comment by raphael....@gmail.com on 5 Oct 2007 at 8:43

GoogleCodeExporter commented 8 years ago
Hi there,

yes, this happened to me too: the problem was that a manually made (with nano) secret file is 41 bytes long instead of 40. Since nano does that by default, and (apparently) vi too, most Linux users who don't know about this will see failures.

Maybe you could have the s3-bash scripts check whether the last character in the secret file is a linefeed, and not process that linefeed as part of the secret access code.

What do you think about it?

Original comment by romai...@gmail.com on 26 Feb 2008 at 5:38

GoogleCodeExporter commented 8 years ago
It would be quite helpful to have a script or something to fix this 41-byte problem. Most Linux users will use vi to create the file after pasting the key, etc., so this just adds a level of complexity to the whole mix. Better if the program can just check it, but that is asking a lot from the already magnanimous developers! Thanks for this program!

Original comment by ankurset...@gmail.com on 22 Aug 2008 at 11:40

GoogleCodeExporter commented 8 years ago
There is an easy way (tested in bash):
echo -n my-40-bytes-string > s3-key.secret
That way the file should be 40 bytes long!

Original comment by azul...@gmail.com on 1 Oct 2008 at 4:15

GoogleCodeExporter commented 8 years ago
or you could use tr like this:
tr -d '\n' < my-41-byte-file-with-newline > my-40-byte-file

Original comment by richard....@gmail.com on 26 Jun 2009 at 5:09

GoogleCodeExporter commented 8 years ago
I tried using s3-bash. I am able to use the delete and get commands, but the put command says "SignatureDoesNotMatch. The request signature we calculated does not match the signature you provided. Check your key and signing method." The secret key file is 40 bytes long - otherwise get and delete would not work, but they did, and that also suggests that the secret key is correct.

Original comment by akhil.an...@gmail.com on 20 Jan 2010 at 12:06

GoogleCodeExporter commented 8 years ago
I'm also getting this issue. The secret key file is 40 bytes. I have tried everything on this page. Using GNU/Linux 2.6.9-78.0.13.ELsmp.

Original comment by booth.ge...@gmail.com on 10 Mar 2010 at 4:44

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
To commenters #17 & #18: I had the same problem. The fix for me was to add the filename given to the -T option to the destination path. For example:

./s3-put -vS -k [##KEY##] -s /[path to secret file]/secret -T ./test.txt /MY_GHETTO_BACKUPS/test.txt

Notice the 'test.txt' file named in two (2) places.

Cheers!

Original comment by p...@hakungala.com on 2 Mar 2011 at 2:25

GoogleCodeExporter commented 8 years ago
The echo is a simple and effective solution. Another way to do it would be with sed:

sed 's/.$//' s3secret.txt > news3secret.txt

Original comment by upbeat.l...@gmail.com on 17 May 2011 at 3:49