Closed pierreloicq closed 5 months ago
Hi, it should work with any S3-compatible storage. S3_NAME doesn't matter; it's just a string you choose. Do you know how many objects are in this storage? I've never tested it with a really large number of objects in a bucket, so the problem might be there.
Alright, thank you. I do indeed have a lot of objects: more than 50 million.
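For scale, here is a hedged back-of-the-envelope sketch of why a bucket that size could explain the hang. It assumes the exporter enumerates every object through the standard paginated ListObjectsV2 call (1,000 keys per page); I have not verified that against the exporter's source.

```python
import math

TOTAL_OBJECTS = 50_000_000   # rough object count mentioned above
KEYS_PER_PAGE = 1_000        # ListObjectsV2 maximum page size

# Number of sequential list requests needed for a single scrape
pages = math.ceil(TOTAL_OBJECTS / KEYS_PER_PAGE)
print(pages)  # 50000

# Even at an optimistic 50 ms per request, one full listing takes
seconds = pages * 0.050
print(f"{seconds / 60:.0f} minutes")  # ~42 minutes
```

That would be consistent with the symptoms in the original report: a long-running scrape that never returns to curl while CPU and network input keep climbing.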
In the end, since my goal was just to check whether my S3 is reachable, I created a bucket containing a single file, targeted it via the S3_ENDPOINT URL, and it works. Thank you.
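If the goal is only "is my S3 endpoint reachable?", that can also be checked without the exporter at all. A minimal sketch using only the Python standard library follows; the endpoint URL is the one from this issue, and an HTTP error response (e.g. 403 AccessDenied for an anonymous request) still counts as reachable, since it proves the service answered.

```python
import urllib.error
import urllib.request

def s3_reachable(endpoint: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP at all, False otherwise."""
    try:
        urllib.request.urlopen(endpoint, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # Got an HTTP response (e.g. 403): the service is up.
        return True
    except (urllib.error.URLError, OSError):
        # DNS failure, connection refused, timeout, TLS error, ...
        return False

# Example with the endpoint from this issue:
# print(s3_reachable("https://s3.waw2-1.cloudferro.com"))
```

This only tests connectivity, not credentials; a failed S3_ACCESS_KEY/S3_SECRET_KEY pair would still show as reachable.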
Hi, I can't get it to work with an S3-compatible storage that is not AWS. On my local machine (Windows 10, Docker Desktop, Git Bash) I run:
docker run -p 9655:9655 -d -e LISTEN_PORT=:9655 -e S3_DISABLE_SSL=False -e S3_ENDPOINT=https://s3.waw2-1.cloudferro.com -e S3_ACCESS_KEY=xxxxxxxxx -e S3_SECRET_KEY=xxxxxxxxxxxxxxxx -e S3_NAME=s3_cloudferro -e S3_DISABLE_ENDPOINT_HOST_PREFIX=True -e LOG_LEVEL=Debug -e S3_FORCE_PATH_STYLE=True docker.io/molu8bits/s3bucket_exporter:1.0.2
Then I do:
curl -v http://127.0.0.1:9655/metrics
I get:
In the logs I have:
In the Docker statistics, after a minute or so the CPU usage climbs and the network input grows by roughly 3 MB/s, reaching 500 MB after 2-3 minutes.
S3_NAME is not supposed to match anything, right? It's just a string that I choose. Do you see any mistake?
Thank you