awslabs / mountpoint-s3

A simple, high-throughput file client for mounting an Amazon S3 bucket as a local file system.
Apache License 2.0

Createrepo fails in mountpoint #1155

Closed · JamesUoM closed this issue 20 hours ago

JamesUoM commented 4 days ago

Mountpoint for Amazon S3 version

mount-s3 1.10.0

AWS Region

eu-west-2

Describe the running environment

An EC2 instance running AlmaLinux 8, with an instance profile that allows access to the S3 bucket.

Mountpoint options

mount-s3 --allow-other --allow-overwrite --allow-delete --uid ### --gid ## bucket /var/www/bucket

What happened?

Using S3 to host an RPM repo. I was previously using s3fs without any problems.

# sudo createrepo .
Directory walk started
Directory walk done - 3 packages
Temporary output repo path: ./.repodata/
Preparing sqlite DBs
Critical: Cannot open ./.repodata//primary.sqlite: Can not open SQL database: disk I/O error
C_CREATEREPOLIB: Warning: cr_remove_dir_cb: Cannot remove: ./.repodata/filelists.xml.gz: Operation not permitted

Relevant log output

No errors other than these:

Nov 21 14:56:16 ip-10-231-0-5 mount-s3[387334]: [INFO] awscrt::http-connection: id=0x7f385c06c290: Shutting down connection with error code 14347 (AWS_ERROR_S3_CANCELED).
Nov 21 14:56:16 ip-10-231-0-5 mount-s3[387334]: [ERROR] awscrt::S3MetaRequest: id=0x7f383801b160 Meta request cannot recover from error 14347 (Request successfully cancelled). (request=0x7f385c06d530, response status=0)
Nov 21 14:56:16 ip-10-231-0-5 mount-s3[387334]: [INFO] awscrt::http-connection: id=0x7f385c06c290: Shutting down connection with error code 0 (AWS_ERROR_SUCCESS).
Nov 21 14:56:16 ip-10-231-0-5 mount-s3[387334]: [INFO] awscrt::http-connection: 0x7f385c06c290: Client shutdown completed with error 14347 (AWS_ERROR_S3_CANCELED).
passaro commented 3 days ago

Hi @JamesUoM, I haven't looked at createrepo in detail, but I suspect it is trying to perform some operation that is not supported by Mountpoint (see semantics doc), most likely writing out of order. If that is the case, you should be able to verify it from the logs. You can see some examples in the troubleshooting doc.
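
For reference, a quick way to confirm the sequential-write restriction and to capture more detailed logs (a sketch only: the file names are placeholders, and the --debug and --log-directory flags are assumed from mount-s3's CLI help):

# A new file written sequentially from offset 0 should succeed
dd if=/dev/zero of=/var/www/bucket/seq.bin bs=1M count=1

# A first write at a non-zero offset is out of order and should fail
dd if=/dev/zero of=/var/www/bucket/ooo.bin bs=1M count=1 seek=1

# Re-mount with verbose logging to see which operation Mountpoint rejects
mount-s3 --debug --log-directory /tmp/mountpoint-logs bucket /var/www/bucket

SQLite writes its database pages in place and out of order, which would explain why createrepo's "Preparing sqlite DBs" step is the one that hits the disk I/O error.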

JamesUoM commented 20 hours ago

> Hi @JamesUoM, I haven't looked at createrepo in detail, but I suspect it is trying to perform some operation that is not supported by Mountpoint (see semantics doc), most likely writing out of order. If that is the case, you should be able to verify it from the logs. You can see some examples in the troubleshooting doc.

Yes, I suspected as much; it's just that I was hoping to use mountpoint-s3 as a replacement for s3fs. I've found a workaround: the poorly documented -o, --outputdir option sets the temporary directory to use, so files are first written to local disk and then moved to the mount.

createrepo -o /tmp .
Directory walk started
Directory walk done - 3 packages
Temporary output repo path: /tmp/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished
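
One caveat (an assumption about createrepo's --outputdir behavior, not something shown in the output above): if the finished metadata ends up under the output directory, e.g. /tmp/repodata, a final whole-file copy back to the mount completes the workaround:

cp -r /tmp/repodata /var/www/bucket/

Whole-file sequential copies like this fall within Mountpoint's supported write semantics, unlike SQLite's in-place random writes.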