Closed JonLoesch closed 12 years ago
Nevermind. Seems to be working again. Must have been some sort of blip.
Just a heads up for people experiencing a similar problem and using an older version of the script like we are.
We're also getting a "505 HTTP Version Not Supported", but only when we do a bucket listing, and only occasionally. It seems that when there are files with spaces in their names, the script's internal curl command to fetch the next batch of listings in the bucket fails. Since this only happens when the marker file used to delineate the batch has a space in its name, the probability of hitting the error depends on what percentage of filenames contain a space and how often the contents of the bucket change.
When we run in verbose mode, the offending line looks something like the following:
curl -q -g -S --retry 3 -s --request GET --dump-header - --location 'https://ourbucket/?AWSAccessKeyId=ourkey&Expires=123456&Signature=somehash&marker=dir/subdir/filewitha init'
When I manually run the line with the space replaced by a + or a %20, S3 returns the proper next batch of files.
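For anyone patching an older copy of the script, the workaround amounts to percent-encoding the marker value before interpolating it into the curl URL. A minimal sketch in bash; the helper name urlencode and its character whitelist are my own illustration, not code from the script:

```shell
#!/usr/bin/env bash
# Percent-encode a string for use in a URL query component.
# Hypothetical helper; the real script does its own encoding.
urlencode() {
  local s=$1 out='' c i
  for (( i = 0; i < ${#s}; i++ )); do
    c=${s:i:1}
    case $c in
      # Unreserved characters (plus / for S3 key paths) pass through.
      [A-Za-z0-9./_~-]) out+=$c ;;
      # Everything else, including spaces, becomes %XX.
      *) out+=$(printf '%%%02X' "'$c") ;;
    esac
  done
  printf '%s\n' "$out"
}

marker='dir/subdir/filewitha init'
urlencode "$marker"   # the space is emitted as %20
```

The encoded value can then be substituted into the marker= query parameter of the curl request above.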
From inspecting the latest code, this appears to be fixed. The commit note:
7075bff0 » timkay 2012-03-08 percent encode marker
shows that the marker term is now being URL-encoded.
We've been using aws to back up our files for quite a while now. Suddenly today, it started returning the error "505 HTTP Version Not Supported". I tried checking out the newest version of the code (we were on a REALLY old revision), but got the same error.
From a bit of googling around, it seems this error code can mean a bunch of different things. I'm not sure if there's anything I can do to fix the issue or even to provide more debug output. Is anybody else suddenly getting this error as well?
For what it's worth, it seems to only be broken on ls, not on put.