gaul / s3proxy

Access other storage backends via the S3 API
Apache License 2.0

Failed to upload files to Azure blob using s3cmd through s3proxy #534

Open · HotDog215 opened 1 year ago

HotDog215 commented 1 year ago

When I use s3cmd to upload files to Azure Blob through s3proxy, the upload fails with: HttpResponseException without HttpResponse: org.jclouds.http.HttpResponseException: Server rejected operation connecting to PUT https://sascenariodb.blob.core.chinacloudapi.cn/scenariodb/ads_list.sql HTTP/1.1. Listing files with s3cmd works fine, however.

Error message:

[s3proxy] D 07-24 12:12:23.369 S3Proxy-Jetty-18 o.j.h.i.JavaUrlHttpCommandExecutorService:56 |::] Caught a protocol exception on a 100-continue PUT request. Attempting to retry.
[s3proxy] W 07-24 12:12:23.370 S3Proxy-Jetty-18 o.j.a.s.h.AzureStorageClientErrorRetryHandler:74 |::] Cannot retry after server error, command is not replayable: [method=org.jclouds.azureblob.AzureBlobClient.public abstract java.lang.String org.jclouds.azureblob.AzureBlobClient.putBlob(java.lang.String,org.jclouds.azureblob.domain.AzureBlob)[scenariodb, [properties=[name=ads_list.sql, container=null, url=null, contentMetadata=[cacheControl=null, contentDisposition=null, contentEncoding=null, contentLanguage=null, contentLength=13, contentMD5=null, contentType=text/plain, expires=null], eTag=null, lastModified=null, leaseStatus=Unlocked, metadata={s3cmd-attrs=atime:1690198605/ctime:1690198596/gid:0/gname:root/md5:01bcb1fe182a23a65c5efe8326250da8/mode:33188/mtime:1690198596/uid:0/uname:root}, type=BlockBlob]]], request=PUT https://sascenariodb.blob.core.chinacloudapi.cn/scenariodb/ads_list.sql HTTP/1.1]
[s3proxy] D 07-24 12:12:23.372 S3Proxy-Jetty-18 o.g.s.S3ProxyHandlerJetty:88 |::] HttpResponseException without HttpResponse: org.jclouds.http.HttpResponseException: Server rejected operation connecting to PUT https://sascenariodb.blob.core.chinacloudapi.cn/scenariodb/ads_list.sql HTTP/1.1
    at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:120)
    at org.jclouds.rest.internal.InvokeHttpMethod.invoke(InvokeHttpMethod.java:91)
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:74)
    at org.jclouds.rest.internal.InvokeHttpMethod.apply(InvokeHttpMethod.java:45)
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156)
    at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123)
    at com.sun.proxy.$Proxy58.putBlob(Unknown Source)
    at org.jclouds.azureblob.blobstore.AzureBlobStore.putBlob(AzureBlobStore.java:240)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.base/java.lang.reflect.Method.invoke(Unknown Source)
    at com.google.inject.internal.DelegatingInvocationHandler.invoke(DelegatingInvocationHandler.java:50)
    at com.sun.proxy.$Proxy59.putBlob(Unknown Source)
    at org.gaul.s3proxy.S3ProxyHandler.handlePutBlob(S3ProxyHandler.java:1983)
    at org.gaul.s3proxy.S3ProxyHandler.doHandle(S3ProxyHandler.java:759)
    at org.gaul.s3proxy.S3ProxyHandlerJetty.handle(S3ProxyHandlerJetty.java:77)
    at org.gaul.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
    at org.gaul.shaded.org.eclipse.jetty.server.Server.handle(Server.java:516)
    at org.gaul.shaded.org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:487)
    at org.gaul.shaded.org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:732)
    at org.gaul.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:479)
    at org.gaul.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:277)
    at org.gaul.shaded.org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
    at org.gaul.shaded.org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:105)
    at org.gaul.shaded.org.eclipse.jetty.io.ChannelEndPoint$1.run(ChannelEndPoint.java:104)
    at org.gaul.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:883)
    at org.gaul.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1034)
    at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.net.ProtocolException: Server rejected operation

alifirat commented 1 year ago

Hey @HotDog215, can you try setting S3PROXY_IGNORE_UNKNOWN_HEADERS to true?
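If s3proxy runs from the Docker image, that setting can be passed as an environment variable; with the standalone jar it corresponds to a line in s3proxy.conf. A minimal sketch, assuming the andrewgaul/s3proxy image and that the property name follows the usual env-var mapping:

```
# Docker: add the variable next to the existing backend settings, e.g.
#   docker run ... -e S3PROXY_IGNORE_UNKNOWN_HEADERS=true andrewgaul/s3proxy
# Standalone jar: the equivalent s3proxy.conf line (property name assumed):
s3proxy.ignore-unknown-headers=true
```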

HotDog215 commented 1 year ago

@alifirat Yes, I set it to true, but I still get the same error.

alifirat commented 1 year ago

Is it working with the aws-cli? Also @HotDog215, did you check that:

HotDog215 commented 1 year ago

Yes, when I use the aws-cli it works normally, but my configuration is slightly different:

$ aws configure
AWS Access Key ID []: name-of-your-azure-storage-account
AWS Secret Access Key []: access-key-of-your-storage-account
Default region name [None]:
Default output format [None]:

Is it related to my s3cmd configuration? Could that be why uploads fail when using s3cmd? @alifirat
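For reference, the working aws-cli upload presumably looks something like the command below; the endpoint is assumed from the s3cmd host_base setting in the next comment, and the bucket and file names are taken from the error log above:

```
# Assumed reproduction of the working aws-cli upload through s3proxy
# (endpoint, bucket, and file names taken from this thread, not verified).
aws --endpoint-url http://s3proxy.s3proxy:80 s3 cp ads_list.sql s3://scenariodb/
```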

HotDog215 commented 1 year ago

[default]
access_key = name-of-your-azure-storage-account
add_encoding_exts =
add_headers =
bucket_location = s3proxy.s3proxy # access address of s3proxy, port 80
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
connection_max_age = 5
connection_pooling = True
content_disposition =
content_type =
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = None
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = s3proxy.s3proxy # access address of s3proxy, port 80
host_bucket = %(bucket)s.s3proxy.s3proxy
host_bucket = s3proxy.s3proxy # access address of s3proxy, port 80
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_copy_chunk_size_mb = 1024
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
public_url_use_https = False
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = access-key-of-your-storage-account
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
ssl_client_cert_file =
ssl_client_key_file =
stats = False
stop_on_error = False
storage_class =
throttle_max = 100
upload_id =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html
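Most of the above are s3cmd defaults; the handful of settings that actually affect how s3cmd talks to s3proxy boil down to roughly the subset below (values copied from the config above, not a verified working configuration):

```
[default]
access_key = name-of-your-azure-storage-account
secret_key = access-key-of-your-storage-account
host_base = s3proxy.s3proxy
host_bucket = s3proxy.s3proxy
use_https = False
signature_v2 = False
```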

alifirat commented 1 year ago

If it's working with another client, that's already good news, and I guess something is wrong on the s3cmd side (I don't know that tool personally).

Maybe @timuralp or @gaul has a clue about this?
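A possible next step is to compare the exact HTTP exchange the two clients produce; s3cmd can dump it with its --debug flag. The bucket and file names below are the ones from the error log above:

```
# Capture s3cmd's full request/response for the failing upload.
s3cmd --debug put ads_list.sql s3://scenariodb/
```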

bdeluca-igenius commented 6 months ago

I was having the same issue, and I realised that s3proxy was trying to store the file in a storage container that did not exist.
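If that is what is happening here as well, creating the container through s3proxy before uploading should confirm it; for example, using the bucket name from the error log above:

```
# Create the target container via s3proxy, then retry the upload.
s3cmd mb s3://scenariodb
s3cmd put ads_list.sql s3://scenariodb/
```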

gaul commented 1 month ago

Could you try testing with the new azureblob-sdk provider from #606?
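For anyone trying that: switching backends should just be a provider change in s3proxy.conf. The sketch below is an assumption based on the existing azureblob configuration keys; check #606 and the README for the authoritative property names.

```
# Assumed s3proxy.conf for the azureblob-sdk backend from #606; property
# names mirror the existing azureblob setup and may differ in the PR.
s3proxy.endpoint=http://0.0.0.0:80
s3proxy.authorization=aws-v2-or-v4
s3proxy.identity=name-of-your-azure-storage-account
s3proxy.credential=access-key-of-your-storage-account
jclouds.provider=azureblob-sdk
jclouds.identity=name-of-your-azure-storage-account
jclouds.credential=access-key-of-your-storage-account
jclouds.endpoint=https://sascenariodb.blob.core.chinacloudapi.cn
```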