Skunnyk opened this issue 2 years ago (status: Open)
Experiencing exactly the same behavior.
At the moment I'm working around it by raising `multi_part_upload_threshold` so multipart uploads are never triggered, but that only works as long as none of the files are too large.
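For reference, this is roughly what I set in the `[storage]` section of my medusa.ini; the exact value is specific to my setup and should be raised above the largest file you back up (in bytes):

```ini
[storage]
; Keep this larger than the biggest SSTable so awscli never switches
; to multipart uploads (value in bytes; adjust to your own data).
multi_part_upload_threshold = 5368709120
```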
Hi @uwburn! We've shipped a whole bunch of changes regarding the storage backends. Any chance you managed to try the latest (0.19.1) Medusa to see if it has helped you?
Hi @rzvoncek, thanks for pointing this out.
Unfortunately, I'm currently running a very outdated version of k8ssandra-operator on Kubernetes, and I basically need to upgrade that before I can use newer Medusa versions.
If I find the time to work out an upgrade strategy, I will try it.
We are testing medusa 0.13.x, and we can no longer back up to an s3_compatible endpoint. It works with 0.11.x. We are using OVHCloud S3 storage (https://www.ovhcloud.com/en/public-cloud/object-storage/).

It looks like the region is not set at all when invoking `awscli` for multipart uploads, which shows up in the awscli output (`/tmp/xxx.output`).

Since https://github.com/thelastpickle/cassandra-medusa/commit/3c01bef73bf266f84c6c7357b07c72a8a5386b86, medusa relies more on botocore and sets the region through `self._env['AWS_REGION']` instead of the `--region` CLI switch. By changing it to `self._env['AWS_DEFAULT_REGION']` in `medusa/storage/s3_compat_storage/awscli.py`, it works fine. From what I can tell, botocore (and Python?) only uses `AWS_DEFAULT_REGION` (read from various places on the Internet and https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).

Can this be fixed? I don't know whether changing this variable will break other s3_compatible storages.
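To illustrate the idea, here is only a rough sketch (the function name and structure are made up, not the actual code in `medusa/storage/s3_compat_storage/awscli.py`). Exporting both variables should stay compatible with awscli/botocore versions that read either one:

```python
import os

# Illustrative sketch only; the real wiring in Medusa differs in structure.
def build_awscli_env(region):
    env = dict(os.environ)
    # Some awscli/botocore releases only honour AWS_DEFAULT_REGION,
    # while others also read AWS_REGION. Setting both avoids depending
    # on the installed version.
    env['AWS_REGION'] = region
    env['AWS_DEFAULT_REGION'] = region
    return env
```

That way, s3_compatible backends that already work with `AWS_REGION` should hopefully not break.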
Issue is synchronized with this Jira Story by Unito. Issue Number: MED-41