Open · ThorodanBrom opened 3 months ago
S3 bucket + S3 endpoint. If this is possible and MinIO virtual-hosted-style access works with it, then the server will be able to use the service name (in Docker Compose, and potentially K8s as well) to refer to the MinIO container on the internal network.
One major restriction that virtual-hosted-style access introduces is that network aliases need to be added per bucket: the Docker network does not do subdomain resolution automatically. https://github.com/datakaveri/ogc-resource-server/blob/0efa06c0e5fb30f6c0bc5fb6a4c21898337bfe14/docker-compose.test.yml#L92-L93 Here, we added an alias for the bucket `bucket1`.
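To make the scaling problem concrete, here is a small sketch that emits the Compose `aliases` stanza for a set of buckets (service and bucket names are illustrative). One alias line per bucket is unavoidable with virtual-hosted-style access, because Docker's embedded DNS only resolves exact names, not wildcard subdomains:

```python
# Sketch: generate the per-bucket alias list for a Compose service.
# Every new bucket means a new "<bucket>.<service>" alias entry.
def compose_aliases(service: str, buckets: list[str]) -> str:
    lines = ["    networks:", "      default:", "        aliases:"]
    lines += [f"          - {bucket}.{service}" for bucket in buckets]
    return "\n".join(lines)

print(compose_aliases("minio", ["bucket1", "bucket2"]))
```

The output grows linearly with the number of buckets, which is why hard-coding aliases in the compose file only works while the server is single-bucket.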
So if the server is to support access to more than one bucket, the compose file or deployment files would need to be updated for every new bucket. This is workable for the existing deployment, because the server is designed to work with only one bucket. But if the server later supports multiple buckets, the aliases would have to be updated for every new bucket that is used, which is not feasible.
Some options around this:
Heterogeneous sources for buckets would be a good feature to support; however, that complicates the deployment configuration.
The codebase also needs to support multiple buckets at runtime, but how to implement this is a big open question.
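One possible shape for the runtime side, sketched here purely as a hypothetical (none of these names exist in the codebase): a registry that maps bucket names to their endpoint and credentials, so the server is not hard-wired to a single bucket:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BucketConfig:
    # Illustrative fields; a real config would likely carry more
    # (region, addressing style, TLS settings, ...).
    endpoint: str
    access_key: str
    secret_key: str

class BucketRegistry:
    """Hypothetical runtime lookup from bucket name to its config."""

    def __init__(self) -> None:
        self._buckets: dict[str, BucketConfig] = {}

    def register(self, name: str, cfg: BucketConfig) -> None:
        self._buckets[name] = cfg

    def resolve(self, name: str) -> BucketConfig:
        try:
            return self._buckets[name]
        except KeyError:
            raise KeyError(f"no configuration for bucket {name!r}") from None

registry = BucketRegistry()
registry.register(
    "bucket1", BucketConfig("http://minio:9000", "minioadmin", "minioadmin")
)
print(registry.resolve("bucket1").endpoint)
```

This says nothing about where the configs come from (static config file vs. discovered at runtime), which is the actual open question; it only separates "which bucket" from "how to reach it".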
Open question: What happens to the auditing and metering flows?
MinIO integration is done. Once the configurations are updated, the PR can be merged.
Bare metal deployment requires inputs from Karun and Swarup.
Notable points
UGI self-hosting of the data plane
Ideal outcome: a simplified containerized deployment with minimal dependencies on exotic container tooling.