Hello, I'm getting the same problem. First time I tried to use it, but it never matches the signature.
I checked it all again and verified on Amazon that everything appears correct. Would love to use this tool if I can get it working.
Original comment by chrissav...@gmail.com
on 4 Oct 2007 at 3:47
I've examined the uploaded file, and it doesn't look right. That may be because of an ASCII-to-binary conversion during upload, but I'm not convinced. Essentially, this file should be your secret key (normally 40 bytes of ASCII data: A-Za-z0-9=, etc.) padded to 64 bytes with zeros (these appear as ^@ if you use cat -v on the file). There should not be a carriage return or line feed in the file - which your file appears to have.
If you're pasting from the Amazon AWS Identifiers web page, some browser-clipboard combinations grab additional characters. This happens on my Windows XP machine, for example, with Firefox 2.
Make sure you've saved the file with 'Unix' line endings (LF) (this shouldn't actually matter, but some text editors fidget with things otherwise) and with no Unicode Byte Order Marks (BOMs) or the like, i.e. plain ASCII.
Please check your file is 64 bytes long, with 24 final bytes of ^@ at the end. Please also check your original secret.txt (secret key file) is 40 bytes long.
Original comment by raphael....@gmail.com
on 4 Oct 2007 at 2:14
Raphael,
My secret.txt file is 41 bytes long. I use Linux and can assure you that there aren't any additional characters from the browser making it into the secret.txt file.
The file I uploaded appears exactly as it does in my Linux terminal, along with the ^@'s. The only change I made to it was to remove my secret, but I left everything else intact - meaning that the carriage return and ^@'s were put in there by the s3-common-functions that generated s3-put.key.o15993.
I took a look at the script, but couldn't figure out how that file was getting created. It's definitely this tool, as I'm able to use Ruby, PHP and Perl scripts with my key and secret to get files in and out of S3 without any problems.
Hope this helps,
Kevin
Original comment by kevin...@gmail.com
on 4 Oct 2007 at 2:23
Also, check the MD5 value of Patient__history_form.pdf. To do this, type:-
openssl dgst -md5 -binary "Patient__history_form.pdf" | openssl enc -e -base64
Compare the value to cVGYdKe3nkaZbefZ4w0Jqw== (in the Amazon response above).
Also, although it is minor, I'd recommend setting the MIME type on upload to application/pdf rather than text/plain by using the -c option of s3-put.
Original comment by raphael....@gmail.com
on 4 Oct 2007 at 2:33
I've verified it is the same:
[kold@kold tmp]$ openssl dgst -md5 -binary "Patient_history_form.pdf" | openssl enc -e -base64
cVGYdKe3nkaZbefZ4w0Jqw==
Adding -c 'application/pdf' didn't help either.
I think the problem is that the s3-put.key file generated by the script contains those extra characters. Why not just read the secret directly from the source to eliminate the issues we're seeing?
Original comment by kevin...@gmail.com
on 4 Oct 2007 at 2:42
Good, at least the MD5 signatures are the same, so we can discount any openssl discrepancies (that means the SHA1 part of the HMAC-SHA1 is very likely signing things the same way, too).
Now, it worries me that your secret key file is 41 bytes long - I understood they should be only 40.
The reason for the padded file is quite simple: the block size for an HMAC-SHA1 key is 64 bytes, and AWS only provides 40. HMAC-SHA1 requires padding with nulls (ASCII code 00, the ^@ symbol with cat -v file) before using the key in binary XORs. Creating this second file means (a) I can be certain I don't stamp on your key and (b) the programming job is much simpler, as I only need one function (readBytesAndXorAndWriteAsBytesTo) to do the repetitive hashing needed by SHA1. Also, if Amazon change their key sizes, nothing should break.
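The padding described above can be reproduced by hand. A minimal sketch, using a dummy 40-character string in place of a real AWS secret (the filenames here are illustrative, not the ones s3-bash generates):

```shell
# Dummy 40-byte "secret", written without a trailing newline.
# This is NOT a real AWS key, just a stand-in.
printf '%s' '0123456789012345678901234567890123456789' > secret.txt

# Pad to the 64-byte HMAC-SHA1 block size with NUL (^@) bytes:
# copy the key, then append 24 zero bytes from /dev/zero.
cp secret.txt padded.key
dd if=/dev/zero bs=1 count=24 >> padded.key 2>/dev/null

wc -c < padded.key    # 40 + 24 = 64 bytes total
```

Running cat -v padded.key on the result shows the key followed by 24 ^@ characters, matching the description of a healthy s3-put.key file.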
Try this: od -A n -t uC <YOURSECRETKEYFILE>, where YOURSECRETKEYFILE is the original secret key file.
Check the final value isn't 13 or 10. Check that no values in the file are less than 32 or greater than 127 (I'd be surprised if they were much greater than 100).
Let me know - but please don't post the lot (as a Linux-savvy user I'm sure you won't).
Original comment by raphael....@gmail.com
on 4 Oct 2007 at 3:25
Ok, I ran the secret key through od like you asked, and the final value is 10.
Yes, my key is 41 bytes. I opened a blank file in vi and typed my key in manually, then hit save, and that file was 41 bytes.
Original comment by kevin...@gmail.com
on 4 Oct 2007 at 3:38
The final byte, 10, is a linefeed - LF. A-ha.
I've just tried my vi and it appends an LF at the end, even to a blank file. And it doesn't hint that it's there.
I use TextMate on a Mac with 'Show invisibles' on for this sort of work... vi is only really the tool for me for line-delimited files where white space and line spacing don't matter...
If you don't have another text editor, try this:-
dd if=<SECRETKEYFILE_VI> of=<SECRETKEYFILE_S3_BASH> bs=1 count=40
Then do:
ls -la <SECRETKEYFILE_S3_BASH> and verify it is 40 bytes.
od -A n -t uC <SECRETKEYFILE_S3_BASH> and verify there is no final '10'.
diff <SECRETKEYFILE_VI> <SECRETKEYFILE_S3_BASH> and verify that they differ by only a newline.
Then use SECRETKEYFILE_S3_BASH instead.
Let me know!
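For concreteness, the whole repair sequence might look like this, with hypothetical filenames standing in for SECRETKEYFILE_VI and SECRETKEYFILE_S3_BASH, and a dummy string in place of the real secret:

```shell
# Simulate a vi-saved key file: 40 key characters plus the LF vi appends.
printf '%s\n' '0123456789012345678901234567890123456789' > secret.vi

# Copy only the first 40 bytes, dropping the trailing linefeed.
dd if=secret.vi of=secret.key bs=1 count=40 2>/dev/null

wc -c < secret.key            # should print 40
od -A n -t uC secret.key      # decimal byte values; no trailing '10'
```

diff secret.vi secret.key then reports only the missing newline, which is exactly the difference raphael describes.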
Original comment by raphael....@gmail.com
on 4 Oct 2007 at 3:55
That did it! Thanks for your help in resolving this! You might want to add a note or write a script that does this for others that use it.
Original comment by kevin...@gmail.com
on 4 Oct 2007 at 4:12
kevinold,
Glad to hear it worked for you. Phew! At least it wasn't something worse.
I've made a new release - available in the downloads section - that checks a secret key file is 40 bytes long. If there's a form of words useful for the home page or wiki, please post something to me and I'll put it up.
Let me know how you get on with the scripts and if there's anything useful we can add - start a new issue for that if you like.
Raph
Original comment by raphael....@gmail.com
on 4 Oct 2007 at 9:34
Awesome! Thanks for your help with this and all of the work you've done on s3-bash!
Original comment by kevin...@gmail.com
on 4 Oct 2007 at 11:55
Original comment by raphael....@gmail.com
on 5 Oct 2007 at 8:43
Hi there,
yes, this happened to me too - the problem where a manually made (with nano) secret file is 41 bytes long instead of 40. Since nano does that by default, and (apparently) vi too, the default behaviour means most Linux users who don't know about this will hit failures.
Maybe you could have the s3-bash scripts check whether the last char in the secret file is a linefeed, and not process that linefeed as part of the secret access key.
What do you think about it?
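One way such a guard might look, as a sketch of the suggestion above (this is not code from s3-bash; the filename and the dummy key are illustrative):

```shell
# Sketch: if the secret file's last byte is a linefeed (ASCII 10),
# work from a copy with that linefeed dropped.
secretFile='secret.txt'
printf '%s\n' '0123456789012345678901234567890123456789' > "$secretFile"  # 41 bytes, as nano/vi would save it

size=$(wc -c < "$secretFile")
lastByte=$(od -A n -t u1 -j $((size - 1)) -N 1 "$secretFile" | tr -d ' \n')

if [ "$lastByte" -eq 10 ]; then
    # Copy everything except the final linefeed to a fixed file.
    dd if="$secretFile" of="${secretFile}.fixed" bs=1 count=$((size - 1)) 2>/dev/null
    secretFile="${secretFile}.fixed"
fi

wc -c < "$secretFile"   # should print 40
```

The later release's 40-byte length check catches the same mistake, but rejects the file rather than silently fixing it; either behaviour would have saved the debugging above.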
Original comment by romai...@gmail.com
on 26 Feb 2008 at 5:38
It would be quite helpful to have a script or something to fix this 41-byte problem. Most Linux users will use vi to create the file after pasting the key, etc.; this just adds a level of complexity to the whole mix. Better if the program can just check it, but that is asking a lot from the already magnanimous developers! Thanks for this program!
Original comment by ankurset...@gmail.com
on 22 Aug 2008 at 11:40
There is an easy way (tested on bash):
echo -n my-40-bytes-string > s3-key.secret
That way the file should be 40 bytes long!
Original comment by azul...@gmail.com
on 1 Oct 2008 at 4:15
or you could use tr like this:
tr -d '\n' < my-41-byte-file-with-newline > my-40-byte-file
Original comment by richard....@gmail.com
on 26 Jun 2009 at 5:09
I tried using s3-bash. I am able to use the delete and get commands, but the put command says SignatureDoesNotMatch: "The request signature we calculated does not match the signature you provided. Check your key and signing method." The secret key file is 40 bytes long - otherwise get and delete would not work, but they did, and that also suggests that the secret key is correct.
Original comment by akhil.an...@gmail.com
on 20 Jan 2010 at 12:06
I'm also getting this issue. The secret key file is 40 bytes. I have tried everything on this page. Using GNU/Linux 2.6.9-78.0.13.ELsmp.
Original comment by booth.ge...@gmail.com
on 10 Mar 2010 at 4:44
[deleted comment]
To commenters #17 & 18: I had the same problem. The fix for me was to add the filename found in the -T variable to the path. For example:
./s3-put -vS -k [##KEY##] -s /[path to secret file]/secret -T ./test.txt /MY_GHETTO_BACKUPS/test.txt
Notice the 'test.txt' file named in two (2) places.
Cheers!
Original comment by p...@hakungala.com
on 2 Mar 2011 at 2:25
The echo is a simple and effective solution. Another way to do it is GNU sed, which can strip the trailing newline directly:
sed -z 's/\n$//' s3secret.txt > news3secret.txt
(Beware that plain sed 's/.$//' would delete the last character of the key itself - sed removes the newline before matching and adds it back on output.)
Original comment by upbeat.l...@gmail.com
on 17 May 2011 at 3:49
Original issue reported on code.google.com by kevin...@gmail.com
on 3 Oct 2007 at 7:52