Open · faelau opened this issue 3 months ago
Hello @faelau, thanks for creating this issue.
From my understanding, the AWS SDK v2 supports overriding the endpoint through environment variables; would that be suitable for your usage? I think using something like AWS_ENDPOINT_URL_S3 pointing to your local setup should work.
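Something along these lines is what I have in mind, as a minimal sketch (not burrito's actual code; the bucket listing is only there to prove the endpoint override is picked up):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// The default loader reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
	// AWS_ENDPOINT_URL_S3, AWS_CA_BUNDLE, etc. from the environment.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)

	// List buckets just to check connectivity against the overridden endpoint.
	out, err := client.ListBuckets(ctx, &s3.ListBucketsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range out.Buckets {
		fmt.Println(*b.Name)
	}
}
```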
Hi!
I need to use a dedicated MinIO instance as the datastore backend.
@faelau did you manage to configure the datastore components, and do you have the set of environment variables you needed?
We haven't tested it on our side, but tell me if you'd really need minio to be a first-class citizen implementation.
Hello!
> From my understanding, the AWS SDK v2 supports overriding the endpoint through environment variables; would that be suitable for your usage? I think using something like AWS_ENDPOINT_URL_S3 pointing to your local setup should work.
That's right. But relying on the AWS SDK for usage with generic S3 would require using a custom STS too (e.g. the MinIO Security Token Service), which is pretty tricky to deploy.
Making minio a first-class citizen implementation would result in very broad support for S3-compatible storage, including AWS itself. It would allow using burrito with other S3 providers like Wasabi.
Using minio-go would be the best solution, I guess.
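For illustration, a connection through minio-go could look roughly like this (just a sketch; the endpoint, keys and bucket listing are placeholders, not a tested setup):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// The endpoint is host:port without a scheme; Secure toggles TLS.
	client, err := minio.New("minio.dev.mycompany.private:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// List buckets just to confirm the connection works.
	buckets, err := client.ListBuckets(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range buckets {
		fmt.Println(b.Name)
	}
}
```

A custom CA (like the one needed for the internal endpoint above) could be wired in through minio.Options.Transport.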
Ok, I'll add a minio provider for the datastore. I think I'll keep the AWS-oriented provider though, since I don't think minio-go handles seamless authentication through workload identity.
https://github.com/padok-team/burrito/issues/318 just for tracking on our end
@faelau Oh, it seems to be supported: https://github.com/minio/minio-go/pull/1944. I'll experiment with it, and if it works out it might replace the aws sdk entirely.
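If that PR does what I hope, the AWS side might be covered by minio-go's IAM credentials provider, roughly like this (untested sketch, to be confirmed while experimenting):

```go
package main

import (
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// credentials.NewIAM resolves credentials from the environment the pod runs
	// in (instance/container metadata or a mounted web-identity token); whether
	// that covers the workload identity setup is exactly what needs testing.
	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewIAM(""), // empty string: use the default endpoints
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = client // then use it as usual: ListBuckets, GetObject, PutObject, ...
}
```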
I tried several things, and can't figure out how to use a minio instance as the datastore backend with the AWS SDK.
Here are the environment variables used:
AWS_ENDPOINT_URL_S3=https://minio.dev.mycompany.private:9000
AWS_CA_BUNDLE=/path/to/root.crt
AWS_ACCESS_KEY_ID=xxxxxxxxxxxxxx
AWS_SECRET_ACCESS_KEY=yyyyyyyyyyyyyyyyy
The only log I see on the datastore is like:
{"time":"2024-08-28T14:11:14.842259507Z","id":"","remote_ip":"x.x.x.x","host":"burrito-datastore.burrito-system","method":"GET","uri":"/api/plans?attempt=&format=short&layer=mylayer&namespace=burrito-mytenant&run=mylayer-plan-tw525","user_agent":"Go-http-client/1.1","status":500,"error":"","latency":369036,"latency_human":"369.036µs","bytes_in":0,"bytes_out":61
But I don't get one explaining why the 500 is raised.
Any insight on how to get more logs?
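For reference, a standalone reproduction using the same AWS_* variables with the SDK's wire logging enabled can show the exact request/response exchanged with MinIO (a sketch; the bucket name is a placeholder):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Log the full HTTP request/response so endpoint, TLS and auth issues show up.
	cfg, err := config.LoadDefaultConfig(ctx,
		config.WithClientLogMode(aws.LogRequestWithBody|aws.LogResponseWithBody),
	)
	if err != nil {
		log.Fatal(err)
	}

	client := s3.NewFromConfig(cfg)
	if _, err := client.HeadBucket(ctx, &s3.HeadBucketInput{
		Bucket: aws.String("burrito"), // placeholder bucket name
	}); err != nil {
		log.Fatal(err)
	}
	log.Println("bucket reachable")
}
```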
@Alan-pad hi, any input about datastore logs? :arrow_up:
That's weird, you should have at least some logs explaining that 500.
Oh I see, there's no error logging on the datastore side, I will fix that.
Thanks for the logs, this helped identify the problems.
I created a PR for path-style support: https://github.com/padok-team/burrito/pull/322.
And here's my configuration: environment variables to define in a ConfigMap and a Secret:
AWS_CA_BUNDLE=/run/root_ca
AWS_REGION=minio
AWS_ACCESS_KEY_ID=username
AWS_SECRET_ACCESS_KEY=password
AWS_ENDPOINT_URL_S3=https://my-minio-instance:9000
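For context on the path-style PR above, enabling path-style addressing in the v2 SDK looks roughly like this (a sketch, not the actual PR code):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.Background())
	if err != nil {
		log.Fatal(err)
	}

	// Path-style: https://my-minio-instance:9000/<bucket>/<key>
	// instead of virtual-hosted <bucket>.my-minio-instance:9000.
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.UsePathStyle = true
	})
	_ = client
}
```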
Currently there are two backends which could potentially be used for storing data locally:
mock
s3

Currently the S3 SDK for aws is used, so you're unable to connect to another S3 API endpoint without modifying the code. As far as I can see, the AWS S3 SDK allows specifying custom endpoints (but I haven't tested it). An alternative would be to use something like minio-go, which would allow connecting to AWS and all other compatible backends. Another alternative would be to replace/complement mock with persisting data in redis or tikv. This would allow spinning up a fairly simple datastore backend for use with burrito.
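To make the idea concrete, the kind of abstraction this points at could look like the following; the interface and method names here are hypothetical, not burrito's actual code:

```go
package storage

import "context"

// Backend is the kind of interface the datastore could program against, so that
// mock, the current s3 implementation, a minio-go based one, or a redis/tikv one
// become interchangeable. Names are illustrative only.
type Backend interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Put(ctx context.Context, key string, data []byte) error
	Delete(ctx context.Context, key string) error
	List(ctx context.Context, prefix string) ([]string, error)
}
```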