-
It would be nice to be able to have it stream the backup to an S3 bucket instead of the local disk.
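A streaming upload can avoid staging the whole backup on local disk. As a rough sketch (Python; the `upload_part` callback is an illustrative stand-in for a real S3 multipart uploader, not any project's actual API):

```python
import io

# Illustrative sketch: read the backup stream in multipart-sized chunks and
# hand each chunk to an uploader callback. Names here are hypothetical.
CHUNK_SIZE = 8 * 1024 * 1024  # S3 multipart parts must be >= 5 MiB (except the last)

def stream_backup(stream, upload_part):
    """Read `stream` in CHUNK_SIZE pieces, passing each to `upload_part`.

    Returns the number of parts handed off.
    """
    part_number = 0
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        part_number += 1
        upload_part(part_number, chunk)
    return part_number

# Demo with an in-memory stand-in for the backup stream:
parts = []
n = stream_backup(io.BytesIO(b"x" * (CHUNK_SIZE + 1)),
                  lambda num, data: parts.append((num, len(data))))
```

The point is that only one chunk is ever held in memory, so the backup never has to exist as a complete local file.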
-
**Description of the feature**
With the addition of support for S3, it would be nice to add a parameter that makes the `@URL` automatically appear in all backup jobs.
Adding it to the initial config…
-
We have a table with a tuple partitioning like this:
`PARTITION BY (toStartOfInterval(timeStamp, toIntervalHour(1)), timePeriod)`
Partition will look like this: `1727434800-4` and file name gene…
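The partition ID in that example looks like the tuple values joined with a dash: the Unix timestamp of the hour start, then the `timePeriod` value. A small sketch of how such an ID comes about (Python, illustrative only):

```python
from datetime import datetime, timezone

def partition_id(ts: datetime, time_period: int) -> str:
    """Illustrative reconstruction of the partition ID for
    PARTITION BY (toStartOfInterval(timeStamp, toIntervalHour(1)), timePeriod):
    the hour-start Unix timestamp and timePeriod, joined by a dash."""
    hour_start = ts.replace(minute=0, second=0, microsecond=0)
    return f"{int(hour_start.timestamp())}-{time_period}"

# 2024-09-27 11:00 UTC is Unix time 1727434800, matching the example above.
pid = partition_id(datetime(2024, 9, 27, 11, 23, 45, tzinfo=timezone.utc), 4)
```

Since the dash inside the partition ID also appears in generated file names, any backup path scheme that splits names on `-` needs to account for it.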
-
### Summary
**As** a user
**I want** to specify subpaths for the S3 backend
**So that** I can have a nice structure in my S3
### Context
I want the backups to be grouped by namespace, helmR…
cwrau updated
4 weeks ago
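The requested grouping could be sketched as a key prefix assembled from subpath components; the field names below are illustrative, since the grouping criteria are only partly visible in the excerpt:

```python
def s3_key(subpath_parts, object_name):
    """Join optional subpath components (e.g. namespace, release name) into
    an S3 object key, skipping empty parts. Purely illustrative."""
    parts = [p.strip("/") for p in subpath_parts if p]
    return "/".join(parts + [object_name])

key = s3_key(["prod", "my-release"], "backup-2024-09-27.tar.gz")
```

With this kind of helper, an empty subpath degrades gracefully to a flat bucket layout.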
-
There are several issues when trying to use S3 backups.
## Environment
Using the docker-compose file provided
## Issues
### Setting Backups to S3
In order to set the backups to S3, the env …
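The exact variables are cut off in the excerpt; as a hedged sketch, S3 access in a docker-compose setup is commonly wired through the standard AWS SDK environment variables plus an application-specific bucket setting (`BACKUP_S3_BUCKET` below is a placeholder, not a documented option):

```yaml
services:
  app:
    image: example/app:latest        # placeholder image
    environment:
      AWS_ACCESS_KEY_ID: "…"         # standard AWS SDK variable
      AWS_SECRET_ACCESS_KEY: "…"     # standard AWS SDK variable
      AWS_REGION: "eu-central-1"
      BACKUP_S3_BUCKET: "my-backup-bucket"  # hypothetical app-specific setting
```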
-
We can't deploy https://github.com/nftstorage/nft.storage/pull/1664/files in nft.storage because we need to make these changes in web3 as well.
- [ ] Update s3 backups url schema in web3 https://github.c…
-
**AC:**
* Update the CloudNativePG cluster manifest with a backup configuration that sets up backups to an AWS S3 bucket as per the example below
* Add AWS `ACCESS_KEY_ID` and `ACCESS_SECRET_KEY` to k…
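The referenced example isn't included in this excerpt; a typical CloudNativePG backup stanza pointing at S3 looks roughly like the following, with the bucket and secret names as placeholders (the secret keys mirror the `ACCESS_KEY_ID` / `ACCESS_SECRET_KEY` names from the AC):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-cluster              # placeholder name
spec:
  instances: 3
  backup:
    barmanObjectStore:
      destinationPath: "s3://example-backup-bucket/"  # placeholder bucket
      s3Credentials:
        accessKeyId:
          name: aws-backup-creds     # placeholder secret name
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-backup-creds
          key: ACCESS_SECRET_KEY
```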
-
[Project board link](https://github.com/orgs/k8ssandra/projects/8/views/1?pane=issue&itemId=64019026)
Hi,
I have Medusa 0.21 running in gRPC mode alongside Cassandra in a pod and when I invoke backup…
pvb05 updated
3 weeks ago
-
We could leverage rclone for the low-level handling; however, our content-hashing system doesn't work with that directly. Maybe we need completely different backup-type implementations but could lever…
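The content-hashing concern can be illustrated: a digest has to be computed over the bytes as they flow, which handing the whole transfer to an opaque rclone invocation doesn't expose. A minimal sketch (Python, illustrative names):

```python
import hashlib
import io

def copy_with_hash(src, dst, chunk_size=1 << 20):
    """Copy src to dst in chunks while computing a SHA-256 over the content.

    This is the kind of access to the byte stream that delegating the whole
    transfer to rclone would hide.
    """
    digest = hashlib.sha256()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
        dst.write(chunk)
    return digest.hexdigest()

content_hash = copy_with_hash(io.BytesIO(b"backup bytes"), io.BytesIO())
```

One way to combine the two would be to hash while writing to a local pipe or staging object and let rclone handle only the remote transfer.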
-
It would be nice to be able to back up to AWS S3.
It should be possible, either in the config or in the settings panel, to add the bucket, credentials, and backup frequency.
Maybe keep a spe…