Read ticket #96 and tried --debug-http, which led me to:
HEAD /xyz/00000000 HTTP/1.1
Host: s3.wasabisys.com
User-Agent: s3backer/1.5.0
Accept: */*
x-amz-date: 20210410T032308Z

The requested URL returned error: 403 Forbidden
Closing connection 0
s3backer: can't read data store meta-data: Operation not permitted
403 Forbidden, ugh. I tried variations of the baseURL, including prepending my bucket name to s3.wasabisys.com. Always forbidden.
So I then copied and pasted the exact values from the ~/.wasabi file into the command-line --accessId and --accessKey options, and voila: now I am getting 404 Not Found, lol.
HEAD /xyz/00000000 HTTP/1.1
Host: s3.wasabisys.com
User-Agent: s3backer/1.5.0
Accept: */*
x-amz-date: 20210410T032914Z
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Authorization: AWS4-HMAC-SHA256 Credential=03SUQV57YGWGTZCHWJ2G/20210410/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=43b5e11fd24e0bbc3df45003e6a8151363f2929bae4931777572d837fb756905

The requested URL returned error: 404 Not Found
Closing connection 0
s3backer: error: auto-detection of filesystem size failed; please specify `--size'
Added --size=1T and it fired up, but with this final complaint:
s3backer: auto-detecting block size and total file size...
s3backer: auto-detection failed; using default block size 4k and file size 1t
I tried rsync and it went nuts with "function not implemented"; tried a simple cp and got the same ("cannot create regular file: function not implemented").
I'm happy to help you troubleshoot why it isn't working out-of-the-box with Wasabi.
Side note: my ~/.wasabi file is delimited with a colon:
id:key
Maybe you were expecting them delimited by a newline:

id
key
Just a note, you might try adding ':' to the list of your possible delimiters, since s3fs uses that by default.
It looks like s3backer was originally not getting properly configured with your credentials, but you solved that problem.
Then you got this:
Added --size=1T and it fired up, but with this final complaint:
s3backer: auto-detecting block size and total file size...
s3backer: auto-detection failed; using default block size 4k and file size 1t
This is totally normal and expected: you're creating a new disk, so s3backer needs to know the total size and the block size. You've specified the total size but not the block size, so s3backer is telling you it's falling back to the default block size.
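For what it's worth, a minimal sketch of making both values explicit on the first mount; this just combines the flags already shown in this thread with --blockSize, and the paths/sizes are illustrative:

s3backer --accessFile=~/.wasabi --baseURL="https://s3.wasabisys.com/" \
    --size=1T --blockSize=1M xyz /wasabi-s3backer

Once the store has been written to, later mounts should be able to auto-detect both values from the meta-data kept with the first block object (the HEAD /xyz/00000000 probe you saw above).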
I tried rsync and it went nuts with function not implemented, tried a simple cp and same (cannot create regular file, function not implemented)
Need more details to debug this. You didn't say how you created and mounted the upper layer filesystem, for example.
Did you read and follow the Creating-a-New-Filesystem wiki page?
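For reference, a rough sketch of the usual two-layer setup (not your exact commands; filesystem type, sizes, and paths are illustrative). s3backer exposes the backing store as a single file, which you format once and then loopback-mount as the "upper" filesystem:

# lower layer: mount the bucket as one big virtual file
s3backer --accessFile=~/.wasabi --baseURL="https://s3.wasabisys.com/" \
    --size=1T --blockSize=1M xyz /wasabi-s3backer

# upper layer: create a filesystem on that file (first time only), then loopback-mount it
mke2fs -F /wasabi-s3backer/file
mount -o loop /wasabi-s3backer/file /wasabi3

The exact mkfs invocation depends on which filesystem you want on top.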
Just a note, you might try adding ':' to the list of your possible delimiters, since s3fs uses that by default.
That's already how it works... quoting the man page:
--accessFile=FILE
Specify a file containing `accessID:accessKey' pairs, one per-line. Blank lines and lines beginning with a `#' are
ignored. If no --accessKey is specified, this file will be searched for the entry matching the access ID specified
via --accessId; if neither --accessKey nor --accessId is specified, the first entry in this file will be used.
Default value is $HOME/.s3backer_passwd.
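So, for example, a ~/.wasabi (or ~/.s3backer_passwd) file in that documented format would look like this, with placeholder credentials:

# Wasabi credentials for s3backer (comment lines are ignored)
MYACCESSKEYID:mySecretAccessKey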
LOL, I had missed that part with the filesystem (duh). All good now, mostly.
The stats file is showing a massive (?) number of forbidden errors. There is also a large MD5 cache write delay number; again, not sure if this is OK or not.
100% of my operations with s3backer are rsync. Same script over and over. 2T fs with 1M block size, using reiserfs defaults.
cat /wasabi-s3backer/stats
http_normal_blocks_read        339605
http_normal_blocks_written     372218
http_zero_blocks_read          1
http_zero_blocks_written       298
http_empty_blocks_read         161828
http_empty_blocks_written      2367
http_gets                      339605
http_puts                      372219
http_deletes                   298
http_avg_get_time              0.115 sec
http_avg_put_time              0.125 sec
http_avg_delete_time           0.034 sec
http_unauthorized              0
http_forbidden                 1066696
http_stale                     0
http_verified                  0
http_mismatch                  0
http_5xx_error                 3
http_4xx_error                 0
http_other_error               0
http_canceled_writes           7606
http_num_retries               3
http_total_retry_delay         0.800 sec
curl_handle_reuse_ratio        0.3986
curl_timeouts                  0
curl_connect_failed            0
curl_host_unknown              0
curl_out_of_memory             0
curl_other_error               0
block_cache_current_size       2000 blocks
block_cache_initial_size       0 blocks
block_cache_dirty_ratio        0.0020
block_cache_read_hits          27170313
block_cache_read_misses        135192
block_cache_read_hit_ratio     0.9950
block_cache_write_hits         59548884
block_cache_write_misses       0
block_cache_write_hit_ratio    1.0000
block_cache_verified           0
block_cache_mismatch           0
md5_cache_current_size         42 blocks
md5_cache_data_hits            0
md5_cache_full_delays          0.000 sec
md5_cache_write_delays         537887.122 sec
out_of_memory_errors           0
Is this something of concern or considered normal somehow? There are no policies applied to the bucket, my group, or user to restrict any kind of access on Wasabi's side.
The forbidden errors are coming from the S3 backend.
To debug, look in syslog (e.g., /var/log/messages) for errors, or else run s3backer in debug/foreground mode and watch to see what operations are failing (e.g., with the flags -f --debug --debug-http). Start with minimal I/O operations to limit the amount of HTTP traffic and debug logging.
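For example, something along these lines (just combining the flags above with the names used earlier in this thread; adjust the mount point to whatever you actually use) keeps s3backer in the foreground and prints each HTTP exchange:

s3backer -f --debug --debug-http --accessFile=~/.wasabi \
    --baseURL="https://s3.wasabisys.com/" xyz /wasabi-s3backer

Then touch a single small file on the upper filesystem and watch which request comes back 403.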
Note, this recent commit should help greatly when trying to decipher errors coming from S3:
commit 4825384fbfa6d8d8de0cf9a14f7b064650f5d1fc
Author: Archie L. Cobbs <archie.cobbs@gmail.com>
Date: Wed May 26 19:07:49 2021 -0500
Show HTTP error response payload content when `--debug-http' flag given.
s3backer --accessFile=~/.wasabi --baseURL="https://s3.wasabisys.com/" xyz /wasabi3
whoami
Not quite certain what the error is suggesting. The .wasabi file is chmod 600. Bucket "xyz" is working fine with S3QL and Goofys. The mount point /wasabi3 is empty and was created by root.