wamdam / backy2

backy2: Deduplicating block based backup software for ceph/rbd, image files and devices
http://backy2.com/

initdb fails with s3's IllegalLocationConstraintException #11

Closed farcaller closed 4 years ago

farcaller commented 6 years ago

version: 2.9.17

# backy2 initdb
    INFO: $ /usr/bin/backy2 initdb
Traceback (most recent call last):
  File "/usr/bin/backy2", line 11, in <module>
    load_entry_point('backy2==2.9.17', 'console_scripts', 'backy2')()
  File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 570, in main
    commands = Commands(args.machine_output, Config)
  File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 27, in __init__
    self.backy = backy_from_config(Config)
  File "/usr/lib/python3/dist-packages/backy2/utils.py", line 45, in backy_from_config
    data_backend = DataBackendLib.DataBackend(config_DataBackend)
  File "/usr/lib/python3/dist-packages/backy2/data_backends/s3.py", line 57, in __init__
    self.bucket = self.conn.create_bucket(bucket_name)
  File "/usr/lib/python3/dist-packages/boto/s3/connection.py", line 621, in create_bucket
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>IllegalLocationConstraintException</Code><Message>The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.</Message></Error>

This targets Amazon S3 in eu-central-1.
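For context: S3 region endpoints other than us-east-1 reject a CreateBucket request that carries no LocationConstraint, which is what boto sends by default, hence the 400 above. Below is a minimal sketch of a possible workaround in the spirit of backy2's s3.py, assuming boto 2's S3Connection/create_bucket API (credentials, host and bucket name are placeholders; eu-central-1 may additionally need boto's SigV4 support enabled):

    # Sketch only: pass the region as a LocationConstraint when the
    # bucket has to be created; reuse the bucket if it already exists.
    import boto.exception
    import boto.s3.connection

    conn = boto.s3.connection.S3Connection(
        aws_access_key_id='...',                    # placeholder
        aws_secret_access_key='...',                # placeholder
        host='s3.eu-central-1.amazonaws.com',
    )
    try:
        bucket = conn.get_bucket('backy2-data')     # placeholder bucket name
    except boto.exception.S3ResponseError:
        # Outside us-east-1, CreateBucket needs an explicit location.
        bucket = conn.create_bucket('backy2-data', location='eu-central-1')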

wamdam commented 6 years ago

Yeah, the S3 implementation is not tested against Amazon S3, but against what Ceph and Riak provide (Riak was tested years ago, but I think that might still work). I think there's not much missing.

ednt commented 4 years ago

We use it (for tests) against FreeNAS S3. It works without major problems (see the pull request), but it is not as fast as we hoped.

wamdam commented 4 years ago

How fast is it? What are the settings in your backy.cfg? What throughput can you reach with s3cmd or other tools when sending a number of 4 MB blocks?
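In case anyone wants to reproduce such a measurement outside of s3cmd, here is a rough sketch using the same boto 2 library that backy2's S3 backend is built on (endpoint, bucket and credentials are placeholders; the 4 MB payload matches the block size mentioned above):

    # Time the upload of a number of 4 MB objects and report MByte/s.
    import os
    import time
    import boto.s3.connection
    from boto.s3.key import Key

    conn = boto.s3.connection.S3Connection(
        aws_access_key_id='...',                    # placeholder
        aws_secret_access_key='...',                # placeholder
        host='10.10.1.200', port=9000, is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    bucket = conn.get_bucket('test')                # placeholder bucket
    payload = os.urandom(4 * 1024 * 1024)           # one 4 MB block
    n = 32
    start = time.time()
    for i in range(n):
        Key(bucket, 'speedtest/block-%04d' % i).set_contents_from_string(payload)
    print('%.1f MByte/s' % (n * 4 / (time.time() - start)))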

ednt commented 4 years ago

Via S3 with

    ./s3cmd --no-ssl --host=10.10.1.200:9000 --host-bucket=test --access_key=nas2_s3_ednt_de --secret_key blablabla put /root/speedtest/* s3://test/

I get an average of 27 MByte/s.

If I mount the drive via NFS or SMB and use rsync to copy, I get an average of 81 MByte/s.

But something is not right: the link is 10G, and with iperf I can see 9.8 Gbit/s, which is roughly 1.2 GByte/s raw and should allow something like 800 MByte/s of payload. So I have to look into what's wrong.

ednt commented 4 years ago

Since we use a ZFS RAID, a speed higher than 81 MByte/s should be possible, even with 'normal' SATA 6G drives. By the way, I used the same storage pool for S3 and for the SMB/NFS share, so the results should be comparable.

wamdam commented 4 years ago

Well, that was actually not what I was asking for. I'll close the ticket, as you seem to have internal performance issues not related to backy2.