Closed: ivansabik closed this 11 months ago
@Miserlou added test to keep coverage, ready for review
I would be glad to rebase this one if you think it will ever make it to a merge.
Interesting use case. I haven't used minio before, but it looks like an interesting tool.
I don't think I have permissions to do a proper code review, but I've been doing them all day at work today so here are some thoughts:
Perhaps generate the resource object once, store it as a private instance var, and have subsequent requests to the property return the stored value rather than making a new resource each time. I don't know how expensive instantiating the resource is, but it's got to be more expensive than caching it.
I assume that having an endpoint_url of None is the same as not passing one in. Should there be a test for this use case?
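One way to make that assumption explicit (a sketch, assuming the resource kwargs are collected in a dict before the boto3 call; build_resource_kwargs is a hypothetical helper, not part of the PR):

```python
def build_resource_kwargs(endpoint_url=None, **extra):
    """Collect kwargs for boto3.resource(), dropping None values.

    With this filter, endpoint_url=None produces exactly the same call
    as omitting endpoint_url entirely, so both paths fall through to
    boto3's default endpoint resolution.
    """
    kwargs = {'endpoint_url': endpoint_url, **extra}
    return {k: v for k, v in kwargs.items() if v is not None}


# Passing None and passing nothing yield identical kwargs:
assert build_resource_kwargs(endpoint_url=None) == build_resource_kwargs()

# An explicit endpoint (e.g. a local minio) is passed through untouched:
assert build_resource_kwargs(endpoint_url='http://127.0.0.1:9000') == {
    'endpoint_url': 'http://127.0.0.1:9000'
}
```

A test for the None case then only has to assert that both call paths produce the same kwargs.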
In the test, perhaps use isinstance() instead of str(type()).
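That is, something like this (a sketch using a plain str in place of the boto3 resource, since the point is just the assertion style):

```python
value = "hello"

# Comparing string representations of types is brittle: it breaks if
# the repr ever changes, and it never matches subclasses.
assert str(type(value)) == "<class 'str'>"

# isinstance() is the idiomatic check, and it accepts subclasses too:
assert isinstance(value, str)
```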
Should there be an update to the documentation demonstrating this usage?
It's not up to me, but just some thoughts.
We use minio for local dev testing of S3 without needing to hit AWS. We do not use it in production at all. I agree with your comments, and I could make those changes if this PR is going to be merged into this repo. Otherwise I'll keep using my fork :)
What about doing something like this to reduce the number of S3 resource inits and also allow more than just one type of S3 resource override?
class NoDB(object):
    ...
    _s3 = None
    _s3_used_resource_settings = None
    s3_resource_settings = {}
    ...
    @property
    def s3(self):
        if not (self._s3 and self._s3_used_resource_settings == self.s3_resource_settings):
            # if the s3 resource settings have changed or s3 is not initialised,
            # set up the internal s3 resource
            self._s3 = boto3.resource(
                's3',
                config=botocore.client.Config(signature_version=self.signature_version),
                **self.s3_resource_settings
            )
            # remember the settings used to build this resource
            self._s3_used_resource_settings = dict(**self.s3_resource_settings)
        # return the cached internal s3 resource
        return self._s3
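The caching behaviour can be sketched in isolation (a stand-in counter replaces boto3.resource() so the snippet runs without AWS; FakeNoDB and init_count are illustrative names, not part of the PR):

```python
class FakeNoDB(object):
    """Minimal stand-in for NoDB showing the settings-aware caching."""
    _s3 = None
    _s3_used_resource_settings = None

    def __init__(self):
        self.s3_resource_settings = {}
        self.init_count = 0  # counts how many times the "resource" was built

    @property
    def s3(self):
        if not (self._s3 and self._s3_used_resource_settings == self.s3_resource_settings):
            # stand-in for boto3.resource('s3', **self.s3_resource_settings)
            self.init_count += 1
            self._s3 = object()
            self._s3_used_resource_settings = dict(**self.s3_resource_settings)
        return self._s3


nodb = FakeNoDB()
first = nodb.s3
second = nodb.s3  # same settings: cached object is reused, no re-init
assert first is second and nodb.init_count == 1

nodb.s3_resource_settings = {'endpoint_url': 'http://127.0.0.1:9000'}
third = nodb.s3  # settings changed: resource is rebuilt once
assert third is not first and nodb.init_count == 2
```

This covers both goals from the comment: repeated property access with unchanged settings does not re-init, while changing s3_resource_settings (for instance to point at a local minio endpoint) triggers exactly one rebuild.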
Problem
Need to be able to run NoDB in local environments, where AWS S3 can be some random container and not actually AWS S3 (for example using minio)
Solution
Adds an aws_s3_host attribute and changes self.s3 to be a @property.
Drive Capacity: 31 GiB Free, 60 GiB Total
Endpoint: http://172.17.0.2:9000 http://127.0.0.1:9000
AccessKey: XCLV2JO1CXZV1U02DN55
SecretKey: cC6z0P3WhoPIZCSG7eK48/S12o93B2MTsc97Cse2