noobaa / noobaa-core

High-performance S3 application gateway to any backend - file / s3-compatible / multi-clouds / caching / replication ...
https://www.noobaa.io
Apache License 2.0

s3 cp: copying to "s3://$BUCKET/." should be handled #7130

Closed. vh05 closed this issue 4 months ago.

vh05 commented 1 year ago

Environment info

Actual behavior

When we copy an object to an NSFS-backed bucket with a key ending in /. (e.g. s3 cp $FILE s3://$BUCKET/.), the backend behavior should be well defined. In filesystem terms, dot and dotdot translate to the current and parent directory, so the FS worker should either treat the copy as targeting the current (bucket) directory, or the endpoint should return a clear error stating that such object keys are not allowed, in line with S3 behavior.
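As an illustration only (this is not the actual NooBaa code, and the error name is a placeholder), a key-segment check of the kind described above could look roughly like this in Node.js:

    // Hypothetical sketch: reject object keys containing "." or ".." path
    // segments before the NSFS namespace translates them into filesystem paths.
    'use strict';

    function validate_nsfs_object_key(key) {
        for (const segment of key.split('/')) {
            if (segment === '.' || segment === '..') {
                // Surface a clear client-facing error instead of InternalError.
                // "InvalidObjectKey" is a placeholder name, not a confirmed S3 error code.
                const err = new Error(`Invalid path segment "${segment}" in object key: ${key}`);
                err.code = 'InvalidObjectKey';
                throw err;
            }
        }
    }

    validate_nsfs_object_key('dir/nsfs-local-pvc.yaml'); // passes
    validate_nsfs_object_key('.');                       // throws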

Right now we are getting an internal error on the front end:

1. aws --endpoint-url https://192.168.49.2:31560 s3 cp nsfs-local-pvc.yaml s3://copy-bucket/. --no-verify-ssl

upload failed: ./nsfs-local-pvc.yaml to s3://copy-bucket/. An error occurred (InternalError) when calling the PutObject operation (reached max retries: 2): We encountered an internal error. Please try again.

and the endpoint shows:

2022-12-13 08:31:12.119877 [PID-14/TID-14] [L1] FS::FSWorker::OnError: Rename _old_path=/nsfs/noofs/copy-bucket/.noobaa-nsfs_63981fb56031770029c35b2e/uploads/76cfe862-c182-4d80-ab4c-4ee8dd79a8ea _new_path=/nsfs/noofs/copy-bucket error.Message()=Directory not empty
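For context, a minimal illustration (not taken from the NooBaa sources) of why the rename fails: a key of "." collapses onto the bucket directory itself when joined with the bucket path, so the final rename of the upload temp file targets a non-empty directory, which matches the "Directory not empty" error above.

    // Node.js demo of the path collapse; the bucket path is copied from the log above.
    const path = require('path');

    const bucket_dir = '/nsfs/noofs/copy-bucket';
    const object_key = '.';

    // path.join normalizes "." away, so the destination is the bucket directory
    // itself, not a file inside it. Renaming the uploaded temp file onto a
    // non-empty directory then fails with ENOTEMPTY ("Directory not empty").
    console.log(path.join(bucket_dir, object_key)); // -> /nsfs/noofs/copy-bucket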

Expected behavior

The behavior of s3 cp to s3://$BUCKET/. should be clearly defined: either handle the key cleanly or return a clear S3 error rather than InternalError.

Steps to reproduce

  1. Set up an NSFS-backed bucket
  2. Configure AWS credentials
  3. Create a $BUCKET
  4. Run # aws --endpoint-url https://192.168.49.2:31560 s3 cp $FILE s3://$BUCKET/. --no-verify-ssl (an SDK-based equivalent is sketched below)
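The same request can also be reproduced without the CLI. Below is a sketch using the AWS SDK for JavaScript v3; the endpoint, bucket name, and file are taken from the steps above, credentials are assumed to come from the environment, and TLS verification handling (the equivalent of --no-verify-ssl) is omitted.

    // Sketch of the reproduction with @aws-sdk/client-s3 (v3).
    const fs = require('fs');
    const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

    const s3 = new S3Client({
        endpoint: 'https://192.168.49.2:31560',
        region: 'us-east-1',
        forcePathStyle: true, // path-style addressing assumed for the custom endpoint
    });

    s3.send(new PutObjectCommand({
        Bucket: 'copy-bucket',
        Key: '.', // the problematic key produced by "s3://$BUCKET/."
        Body: fs.readFileSync('nsfs-local-pvc.yaml'),
    })).catch(err => console.error(err.name, err.message)); // currently: InternalError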

More information - Screenshots / Logs / Other output

2022-12-13 08:31:12.119837 [PID-14/TID-22] [L1] FS::FSWorker::Execute: Rename _old_path=/nsfs/noofs/copy-bucket/.noobaa-nsfs_63981fb56031770029c35b2e/uploads/76cfe862-c182-4d80-ab4c-4ee8dd79a8ea _new_path=/nsfs/noofs/copy-bucket took: 0.012067 ms
2022-12-13 08:31:12.119877 [PID-14/TID-14] [L1] FS::FSWorker::OnError: Rename _old_path=/nsfs/noofs/copy-bucket/.noobaa-nsfs_63981fb56031770029c35b2e/uploads/76cfe862-c182-4d80-ab4c-4ee8dd79a8ea _new_path=/nsfs/noofs/copy-bucket error.Message()=Directory not empty
Dec-13 8:31:12.120 [Endpoint/14] [L3] core.rpc.rpc:: RPC ROUTER default => wss://noobaa-mgmt.default.svc:443
Dec-13 8:31:12.120 [Endpoint/14] [L2] core.rpc.rpc:: RPC _get_connection: existing address wss://noobaa-mgmt.default.svc:443 srv pool_api.update_issues_report connid wss://noobaa-mgmt.default.svc:443(1qiv44y6.za1)
Dec-13 8:31:12.120 [Endpoint/14] [L1] core.rpc.rpc:: RPC._request: START srv pool_api.update_issues_report reqid 1311@wss://noobaa-mgmt.default.svc:443(1qiv44y6.za1) connid wss://noobaa-mgmt.default.svc:443(1qiv44y6.za1)
Dec-13 8:31:12.120 [Endpoint/14] [L1] core.rpc.rpc:: RPC._request: SEND srv pool_api.update_issues_report reqid 1311@wss://noobaa-mgmt.default.svc:443(1qiv44y6.za1) connid wss://noobaa-mgmt.default.svc:443(1qiv44y6.za1)
Dec-13 8:31:12.120 [Endpoint/14] [ERROR] core.endpoint.s3.s3_rest:: S3 ERROR <?xml version="1.0" encoding="UTF-8"?><Error><Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message><Resource>/copy-bucket/.</Resource><RequestId>lblysd9z-5s6e0-100e</RequestId></Error> PUT /copy-bucket/. {"host":"192.168.49.2:31560","accept-encoding":"identity","user-agent":"aws-cli/2.9.1 Python/3.9.11 Linux/5.18.16-200.fc36.x86_64 exe/x86_64.fedora.36 prompt/off command/s3.cp","content-md5":"JEw7MvX4BlFLT2Qc7g9iCw==","expect":"100-continue","x-amz-date":"20221213T083111Z","x-amz-content-sha256":"UNSIGNED-PAYLOAD","authorization":"AWS4-HMAC-SHA256 Credential=dQXm4z4fs0cASvA86EV5/20221213/us-east-1/s3/aws4_request, SignedHeaders=content-md5;host;x-amz-content-sha256;x-amz-date, Signature=6714d30148e4e66bc25c20cdd44b2ca43c7feaf7252f858b5bc018c192972b3c","content-length":"188"} Error: Directory not empty

github-actions[bot] commented 5 months ago

This issue had no activity for too long - it will now be labeled stale. Update it to prevent it from getting closed.

github-actions[bot] commented 4 months ago

This issue is stale and had no activity for too long - it will now be closed.