Hi there, I am coming across this almost three years later and having the same issue. I had it set up and working just fine with v2 signatures on Wasabi, but due to the pricing differences between Wasabi and Backblaze B2, I decided to migrate to Backblaze B2. Unfortunately, Backblaze B2 only supports v4 signatures, and I am running into the exact same issue where the calculated signature does not match, even though everything appears to be configured correctly.
There is not much else I can meaningfully add, since everything on my side is the same as what has already been described in the original post, except that I'm on Backblaze B2 and my region is us-east-005.
I will also add that if I instead set `use_https = false`, it outright fails to connect to Backblaze (`s3 libcurl Couldn't connect to server`). Presumably this is because Backblaze requires HTTPS.
Seafile is otherwise perfect for my use case, but unfortunately this is a deal-breaker for me. I'm disappointed that an issue reported almost three years ago has gotten so little attention and still hasn't been fixed, despite v4 signatures having been around for quite a while now. I am bumping this issue in the hope that it can be revisited and fixed. Thank you!
In the 11.0.2-pro edition, we fixed some potential incompatibilities with S3 providers. You can test it again.
Hello, thank you for your response. I pulled the latest Docker container version (`docker.seadrive.org/seafileltd/seafile-pro-mc:latest`), ran `docker-compose up -d --build`, and tried again, but I am still getting the same error.
2024-01-05 02:07:14 ../common/s3-client.c(949): S3 error status for HEAD: 403.
2024-01-05 02:07:14 ../common/s3-client.c(725): S3 error status for PUT: 403.
2024-01-05 02:07:14 ../common/s3-client.c(726): Request URL: https://s3.us-east-005.backblazeb2.com/[redacted]
2024-01-05 02:07:14 ../common/s3-client.c(727): Request headers:
2024-01-05 02:07:14 ../common/s3-client.c(624): Date: Fri, 05 Jan 2024 07:07:14 +0000
2024-01-05 02:07:14 ../common/s3-client.c(624): Authorization: AWS4-HMAC-SHA256 Credential=[redacted]/20240105/us-east-005/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256,Signature=b7a9080ba5cee2036fed6cc3bba01be6>
2024-01-05 02:07:14 ../common/s3-client.c(624): x-amz-content-sha256: f6e1fea018c3edb42d1f9af6cc7f01eb769c8b841b1f584bf9c2893c472f022e
2024-01-05 02:07:14 ../common/obj-backend-s3.c(348): Put object 0fa2883ace0cf8002b16ad9803474bad87c5a0b9 error 403. Response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>Signature validation failed</Message>
</Error>
�NV3�^U��^]^[ڧ~�
2024-01-05 02:07:14 repo-mgr.c(7184): Failed to add commit.
You need to check the version of the "latest" tag. Version 11 may not be production-ready yet, so it's not tagged as latest.
What is the tag I should be using instead of `latest`? `11.0.2-pro` and `11.0.2` both do not work, and I cannot seem to find this information in the documentation.
Edit: never mind, I found `11.0.2-testing` by logging into docker.seadrive.org with the credentials from the documentation. Trying it now.
Well, now it just gives a 502 Bad Gateway error. I will need to troubleshoot this separately before I can test whether 11.0.2 fixes the S3 signature issue. I will report back on whether it works once I am able to do so. Thanks very much for your help so far! 😃
Edit to ask as well: does 11.0 use a different `docker-compose.yml`? If so, I might need that as well. I got the 10.0 one from https://manual.seafile.com/docker/pro-edition/10.0/docker-compose.yml but https://manual.seafile.com/docker/pro-edition/11.0/docker-compose.yml does not exist.
I did a fresh install to make sure the 502 Bad Gateway wasn't being caused by something else that was misconfigured. It worked, and I was able to access the web UI and log in normally. I then added the S3 configuration to `seafile.conf` according to this documentation page, and it started giving 502 Bad Gateway again, even after waiting a while for the server to initialize.
The logs seem to suggest that the server started fine:
seafile | Starting seafile server, please wait ...
seafile | ** Message: 18:49:38.868: seafile-controller.c(861): loading seafdav config from /opt/seafile/conf/seafdav.conf
seafile |
seafile | Seafile server started
seafile |
seafile | Done.
seafile |
seafile | Starting seahub at port 8000 ...
seafile |
seafile | Seahub is started
seafile |
seafile | Done.
seafile |
There are a bunch of Elasticsearch errors, but these are present even when the server is working fine, so I don't think they are relevant. I will include them anyway, just in case:
seafile-elasticsearch | {"@timestamp":"2024-01-05T23:50:43.638Z", "log.level":"ERROR", "message":"fatal exception while booting Elasticsearch", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"main","log.logger":"org.elasticsearch.bootstrap.Elasticsearch","elasticsearch.node.name":"b1c1a9da84a3","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalStateException","error.message":"failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path?","error.stack_trace":"java.lang.IllegalStateException: failed to obtain node locks, tried [/usr/share/elasticsearch/data]; maybe these locations are not writable or multiple nodes were started on the same data path?\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:285)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:478)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.node.Node.<init>(Node.java:322)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch$2.<init>(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.initPhase3(Elasticsearch.java:214)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:67)\nCaused by: java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:230)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:198)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.env.NodeEnvironment.<init>(NodeEnvironment.java:277)\n\t... 5 more\nCaused by: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/data/node.lock\n\tat java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:92)\n\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)\n\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\n\tat java.base/sun.nio.fs.UnixPath.toRealPath(UnixPath.java:825)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:94)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:43)\n\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:44)\n\tat org.elasticsearch.server@8.6.2/org.elasticsearch.env.NodeEnvironment$NodeLock.<init>(NodeEnvironment.java:223)\n\t... 7 more\n\tSuppressed: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/node.lock\n\t\tat java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)\n\t\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:106)\n\t\tat java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)\n\t\tat java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:218)\n\t\tat java.base/java.nio.file.Files.newByteChannel(Files.java:380)\n\t\tat java.base/java.nio.file.Files.createFile(Files.java:658)\n\t\tat org.apache.lucene.core@9.4.2/org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:84)\n\t\t... 10 more\n"}
seafile-elasticsearch | ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
seafile-elasticsearch |
seafile-elasticsearch | ERROR: Elasticsearch exited unexpectedly
seafile-elasticsearch exited with code 1
I otherwise cannot see anything else in the logs that explains why the S3 configuration causes the entire server to stop working.
Here is my `seafile.conf`:
[fileserver]
port = 8082
[database]
type = mysql
host = db
port = 3306
user = [redacted]
password = [redacted]
db_name = seafile_db
connection_charset = utf8
[notification]
enabled = false
host = 127.0.0.1
port = 8083
log_level = info
jwt_private_key = [redacted]
[commit_object_backend]
name = s3
# bucket name can only use lowercase characters, numbers, periods and dashes. Period cannot be used in Frankfurt region.
bucket = [redacted]
key_id = [redacted]
key = [redacted]
host = s3.us-east-005.backblazeb2.com
path_style_request = true
use_v4_signature = true
aws_region = us-east-005
use_https = true
[fs_object_backend]
name = s3
# bucket name can only use lowercase characters, numbers, periods and dashes. Period cannot be used in Frankfurt region.
bucket = [redacted]
key_id = [redacted]
key = [redacted]
host = s3.us-east-005.backblazeb2.com
path_style_request = true
use_v4_signature = true
aws_region = us-east-005
use_https = true
[block_backend]
name = s3
# bucket name can only use lowercase characters, numbers, periods and dashes. Period cannot be used in Frankfurt region.
bucket = [redacted]
key_id = [redacted]
key = [redacted]
host = s3.us-east-005.backblazeb2.com
path_style_request = true
use_v4_signature = true
aws_region = us-east-005
use_https = true
[memcached]
memcached_options = --SERVER=memcached --POOL-MIN=10 --POOL-MAX=100
My S3 bucket names consist only of lowercase letters and hyphens, and they do not begin or end with a hyphen.
Just to be extra sure that this is the cause, I commented out the three sections `[commit_object_backend]`, `[fs_object_backend]`, and `[block_backend]`, and restarted. Sure enough, it works when the S3 configuration is commented out.
Hello @CursedBlackCat, can you show the error in seafile.log?
Oh, sorry, I guess between all the different logs I forgot to check that one 😅
It's the same error as before. I ran `docker compose restart` just now, and this is the entirety of the log from the moment I ran the restart command to the end of the file.
2024-01-05 21:25:07 ../common/seaf-utils.c(540): Use database Mysql
2024-01-05 21:25:07 http-server.c(249): fileserver: worker_threads = 10
2024-01-05 21:25:07 http-server.c(262): fileserver: backlog = 32
2024-01-05 21:25:07 http-server.c(277): fileserver: fixed_block_size = 8388608
2024-01-05 21:25:07 http-server.c(289): fileserver: skip_block_hash = 0
2024-01-05 21:25:07 http-server.c(301): fileserver: verify_client_blocks = 0
2024-01-05 21:25:07 http-server.c(316): fileserver: web_token_expire_time = 3600
2024-01-05 21:25:07 http-server.c(331): fileserver: max_indexing_threads = 1
2024-01-05 21:25:07 http-server.c(346): fileserver: max_index_processing_threads= 3
2024-01-05 21:25:07 http-server.c(368): fileserver: cluster_shared_temp_file_mode = 600
2024-01-05 21:25:07 http-server.c(435): fileserver: enable_async_indexing = 0
2024-01-05 21:25:07 http-server.c(447): fileserver: async_indexing_threshold = 700
2024-01-05 21:25:07 http-server.c(459): fileserver: fs_id_list_request_timeout = 300
2024-01-05 21:25:07 http-server.c(472): fileserver: max_sync_file_count = 100000
2024-01-05 21:25:07 http-server.c(487): fileserver: put_head_commit_request_timeout = 10
2024-01-05 21:25:07 ../common/license.c(709): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users
2024-01-05 21:25:07 filelock-mgr.c(1389): Cleaning expired file locks.
2024-01-05 21:25:07 socket file exists, delete it anyway
2024-01-05 21:25:07 ../common/s3-client.c(984): S3 error status for HEAD: 403.
2024-01-05 21:25:07 ../common/s3-client.c(752): S3 error status for PUT: 403.
2024-01-05 21:25:07 ../common/s3-client.c(753): Request URL: https://s3.us-east-005.backblazeb2.com/[redacted]
2024-01-05 21:25:07 ../common/s3-client.c(754): Request headers:
2024-01-05 21:25:07 ../common/s3-client.c(648): Date: Sat, 06 Jan 2024 02:25:07 GMT
2024-01-05 21:25:07 ../common/s3-client.c(648): Authorization: AWS4-HMAC-SHA256 Credential=[redacted]/20240106/us-east-005/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=a2023297243057a5fc52a>
2024-01-05 21:25:07 ../common/s3-client.c(648): x-amz-content-sha256: 9c5f4d9e01dd5584348c49e17c31a869d740e7b2eb36b2edcc15d4612c490289
2024-01-05 21:25:07 ../common/s3-client.c(648): x-amz-date: 20240106T022507Z
2024-01-05 21:25:07 ../common/obj-backend-s3.c(348): Put object ae62718cc5f71fae3f1d14c524b93d7dc940950d error 403. Response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>Signature validation failed</Message>
</Error>
pJ^?
2024-01-05 21:25:07 repo-mgr.c(7626): Failed to add commit.
2024-01-05 21:25:07 seafile-session.c(660): Failed to create system default repo.
2024-01-05 21:25:11 start to serve on pipe client
Hello @CursedBlackCat, this should be a B2 compatibility issue. Can you try removing the `path_style_request` configuration?
Commenting out the `path_style_request` line did indeed fix the signature mismatch.
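For reference, this is roughly what my `[commit_object_backend]` section looks like with that change (the other two backend sections are edited the same way; values are redacted as above):

[commit_object_backend]
name = s3
bucket = [redacted]
key_id = [redacted]
key = [redacted]
host = s3.us-east-005.backblazeb2.com
# path_style_request is left out entirely for Backblaze B2
use_v4_signature = true
aws_region = us-east-005
use_https = true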
`seafile.log` now looks like this:
2024-01-05 23:12:11 ../common/seaf-utils.c(540): Use database Mysql
2024-01-05 23:12:11 http-server.c(249): fileserver: worker_threads = 10
2024-01-05 23:12:11 http-server.c(262): fileserver: backlog = 32
2024-01-05 23:12:11 http-server.c(277): fileserver: fixed_block_size = 8388608
2024-01-05 23:12:11 http-server.c(289): fileserver: skip_block_hash = 0
2024-01-05 23:12:11 http-server.c(301): fileserver: verify_client_blocks = 0
2024-01-05 23:12:11 http-server.c(316): fileserver: web_token_expire_time = 3600
2024-01-05 23:12:11 http-server.c(331): fileserver: max_indexing_threads = 1
2024-01-05 23:12:11 http-server.c(346): fileserver: max_index_processing_threads= 3
2024-01-05 23:12:11 http-server.c(368): fileserver: cluster_shared_temp_file_mode = 600
2024-01-05 23:12:11 http-server.c(435): fileserver: enable_async_indexing = 0
2024-01-05 23:12:11 http-server.c(447): fileserver: async_indexing_threshold = 700
2024-01-05 23:12:11 http-server.c(459): fileserver: fs_id_list_request_timeout = 300
2024-01-05 23:12:11 http-server.c(472): fileserver: max_sync_file_count = 100000
2024-01-05 23:12:11 http-server.c(487): fileserver: put_head_commit_request_timeout = 10
2024-01-05 23:12:11 ../common/license.c(709): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users
2024-01-05 23:12:11 filelock-mgr.c(1389): Cleaning expired file locks.
2024-01-05 23:12:11 socket file exists, delete it anyway
2024-01-05 23:12:15 start to serve on pipe client
On Backblaze B2's web console, I can see that the buckets have had data written to them, which is a good sign. The server still returns 502 Bad Gateway errors, but I would imagine that this is for a different reason, since Seafile now seems able to communicate with the S3 backend without issue. As far as this issue is concerned, using an S3-compatible backend with v4 signatures on 11.0.2 works now, so this seems to be resolved. Thanks very much to you both for your help.
I don't want to take this issue post off topic, so I will attempt to continue troubleshooting the 502 on my own and will report back here if I determine the cause to be related to this S3 issue. Thanks again!
Hi. I reported this issue on the Seafile forum in February; unfortunately, back then you said you didn't have the resources to focus on it. I'm adding it here again as an issue because it is a bug rather than a usage problem, it still doesn't work, and it would add a lot of value to Seafile to have it fixed. I hope you can look into it soon.
My original message follows:
I've been troubleshooting this for hours now and can't figure out why it isn't working with the Wasabi S3 backend. The same config works just fine with AWS S3.
I have three buckets in Wasabi in the region eu-central-1, one each for commit, fs, and block objects (note: at Wasabi this region is in Amsterdam, not Frankfurt).
I want to use AWS signature version 4, so I set `use_v4_signature = true` in seafile.conf and `use-sigv4 = True` under `[s3]` in ~/.boto. With that, the log shows:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>JAHDSALGFLJAREDACTED</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256

20200212/eu-central-1/s3/aws4_request
45d0ec0b950c9ac55a8b42369a44dcef392ba18ea56da56923ece407a5a96c76</StringToSign><SignatureProvided>b8a516cb34937033ce224e7a32e51c12ab7f823baba9f52f2174940dd5ef53fd</SignatureProvided><StringToSignBytes>41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 a a 32 30 32 30 30 32 31 32 2f 65 75 2d 63 65 6e 74 72 61 6c 2d 31 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 a 34 35 64 30 65 63 30 62 39 35 30 63 39 61 63 35 35 61 38 62 34 32 33 36 39 61 34 34 64 63 65 66 33 39 32 62 61 31 38 65 61 35 36 64 61 35 36 39 32 33 65 63 65 34 30 37 61 35 61 39 36 63 37 36</StringToSignBytes><CanonicalRequest>PUT
/034b94fb-2fb6-4bbc-b86d-590d8b9e0786/1160a2cfa2bfad637db0e26a9ed718828e016fdc

host:xie8do-commit-objects.s3.eu-central-1.wasabisys.com
x-amz-content-sha256:1d5e9fe4dad7f0d131574e979eca1472e215aa43795f0ece39ab72edab57128f

host;x-amz-content-sha256
1d5e9fe4dad7f0d131574e979eca1472e215aa43795f0ece39ab72edab57128f</CanonicalRequest><CanonicalRequestBytes>50 55 54 a 2f 30 33 34 62 39 34 66 62 2d 32 66 62 36 2d 34 62 62 63 2d 62 38 36 64 2d 35 39 30 64 38 62 39 65 30 37 38 36 2f 31 31 36 30 61 32 63 66 61 32 62 66 61 64 36 33 37 64 62 30 65 32 36 61 39 65 64 37 31 38 38 32 38 65 30 31 36 66 64 63 a a 68 6f 73 74 3a 63 68 6f 6f 34 69 78 6f 6f 39 6f 6f 2d 63 6f 6d 6d 69 74 2d 6f 62 6a 65 63 74 73 2e 73 33 2e 65 75 2d 63 65 6e 74 72 61 6c 2d 31 2e 77 61 73 61 62 69 73 79 73 2e 63 6f 6d a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 31 64 35 65 39 66 65 34 64 61 64 37 66 30 64 31 33 31 35 37 34 65 39 37 39 65 63 61 31 34 37 32 65 32 31 35 61 61 34 33 37 39 35 66 30 65 63 65 33 39 61 62 37 32 65 64 61 62 35 37 31 32 38 66 a a 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 a 31 64 35 65 39 66 65 34 64 61 64 37 66 30 64 31 33 31 35 37 34 65 39 37 39 65 63 61 31 34 37 32 65 32 31 35 61 61 34 33 37 39 35 66 30 65 63 65 33 39 61 62 37 32 65 64 61 62 35 37 31 32 38 66</CanonicalRequestBytes><RequestId>5D92B42A52124773</RequestId><HostId>nmG9t6bD75KoJSb2Ddkum01+O/F0v/TXznfkXaQwU28L64roIfLHCgKNdLsuUiXhK9zVAJEIqVXY</HostId></Error>
If I change `use_v4_signature = false` in seafile.conf and set `use-sigv4 = False` under `[s3]` in ~/.boto, it works fine.
Taking out `path_style_request = true` did not make a difference. I could successfully put files into the buckets using v4 signatures by following instructions from the Wasabi knowledge base, so from Wasabi's side this works.
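For completeness, this kind of v4-signed upload can be reproduced outside Seafile with, for example, boto3; the bucket name, object key, and credentials below are placeholders:

import boto3
from botocore.config import Config

# Force Signature Version 4 and point the client at the Wasabi eu-central-1 endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.eu-central-1.wasabisys.com",
    region_name="eu-central-1",
    aws_access_key_id="REDACTED",
    aws_secret_access_key="REDACTED",
    config=Config(signature_version="s3v4"),
)

# If this succeeds, v4 signing is accepted on the provider's side and the
# problem lies in how Seafile builds and signs its own requests.
s3.put_object(Bucket="example-commit-objects", Key="v4-signature-test", Body=b"hello")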
I'm wondering where and how the v4 signature gets calculated in Seafile Pro. Is it possible that Seafile is getting confused because the Wasabi region eu-central-1 is not the same as eu-central-1 in AWS? Is it possible that Seafile is calculating the signature based on the wrong region?
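For context on where the region enters the calculation: in Signature Version 4 the region is part of both the credential scope and the signing-key derivation, and the canonical request (HTTP method, URL path, signed headers, payload hash) is hashed into the string to sign. Below is a rough sketch of the generic key-derivation step as defined by the SigV4 specification, not Seafile's actual implementation (that lives in common/s3-client.c):

import hashlib
import hmac

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def sigv4_signing_key(secret_key: str, date_stamp: str, region: str, service: str = "s3") -> bytes:
    # The signing key is derived by chaining HMAC-SHA256 over the date,
    # the region string, the service name, and the literal "aws4_request".
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# The final signature is HMAC-SHA256(signing_key, string_to_sign), where
# string_to_sign contains the same date/region/service scope plus the
# SHA-256 hash of the canonical request.

Because both sides derive the key from the region string and hash the exact canonical request, a mismatch in either the region or in how the request path is canonicalized (path-style vs. virtual-hosted style) produces exactly this kind of SignatureDoesNotMatch error.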
I tried taking `aws_region` out of seafile.conf, but then the Seafile server can't be started.
Any help would be appreciated!