rwos opened 5 years ago
Definitely, we can test this against a Minio that is hosted "off-cluster" and some others like Spaces
Most systems I've encountered that have AWS and other S3-compatible implementations reuse the region endpoint from the AWS config section. This looks like a good design document.
Does anyone know how "Assignees" works? I was going to self-assign this but I don't have permission in the Team Hephy org.
Looks like Anton has already picked this up for review with #71
So, related to this are
teamhephy/builder#48,
teamhephy/workflow#71,
teamhephy/postgres#3,
teamhephy/registry#2
https://github.com/teamhephy/object-storage-cli/pull/1 – any others?
Having some difficulty incorporating the changes from teamhephy/object-storage-cli#1.
It seems I'm not sure how to build this...
@kingdonb `make bootstrap build-all`
worked for me. We also have our fork upload builds to CircleCI (here: https://circleci.com/gh/pngmbh/object-storage-cli/17#artifacts/containers/0), in case you just need a quick binary for testing.
@kingdonb @rwos, what is the testing status on this? Is it ready to be merged into master for all the components, and are there any outlined steps I can use to test overall?
Definitely needs testing. Maybe target for v2.22?
@kingdonb Is this released?
(this is a design document for https://github.com/teamhephy/workflow/issues/52 and the PRs linked to it)
Goal
Allow users to use hephy with any S3-compatible object storage (self-hosted or otherwise).
Do that by letting users specify the S3 endpoint URL via a helm chart value. That is the most flexible way to go, but users need to know that URL (except for AWS, where they can just leave it empty). It might be a good idea to pre-define some common S3-compatible storage providers later on (so that, say,
storage: "DigitalOcean"
would set the appropriate endpoint). This is out of scope for this design doc, though.

Code Changes

- add an `endpoint` option to the `s3` storage section of the main (workflow) helm chart
- support `endpoint` in all components (i.e. pass it through to the respective S3 client in use)
- registry: pass `endpoint` into the `regionendpoint` of the client used here (docker/distribution/storage/s3)
- set the `REGISTRY_STORAGE_S3_REGIONENDPOINT` env var (as per the docs)
- set `WALE_S3_ENDPOINT` appropriately for the backup

Tests
Not sure - I guess these would have to be integration/end-to-end tests mostly, but we'd need to start some sort of S3 server (minio?). Input welcome :)
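For illustration, the helm chart value proposed above might look something like this in the workflow chart's values (the exact key names are assumptions from the description, not the actual chart schema):

```yaml
s3:
  # Leave empty for AWS (the endpoint is then derived from the region);
  # set it to point at any other S3-compatible store, e.g. an
  # off-cluster Minio or DigitalOcean Spaces. Value is hypothetical.
  endpoint: "http://minio.example.local:9000"
  region: "us-east-1"
  accesskey: "..."
  secretkey: "..."
```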
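The pass-through behaviour each component would need can be sketched like this; the helper name and the empty-means-AWS convention are illustrative only, not the actual component code:

```python
# Sketch of the `endpoint` pass-through described under "Code Changes".
# Hypothetical helper -- names are assumptions, not real component code.

AWS_ENDPOINT_TEMPLATE = "https://s3.{region}.amazonaws.com"

def resolve_s3_endpoint(endpoint: str, region: str = "us-east-1") -> str:
    """Return the S3 endpoint a component should talk to.

    An empty `endpoint` keeps the stock AWS behaviour (the URL is
    derived from the region); anything else is used verbatim, which is
    what lets Minio, DigitalOcean Spaces, etc. work.
    """
    if endpoint:
        # Drop a trailing slash so clients can append object paths.
        return endpoint.rstrip("/")
    return AWS_ENDPOINT_TEMPLATE.format(region=region)
```

For example, `resolve_s3_endpoint("")` falls back to the regional AWS URL, while `resolve_s3_endpoint("http://minio.example:9000/")` returns the custom endpoint unchanged (minus the trailing slash).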