Closed Lisio closed 2 years ago
I have been experiencing the same issue. Files have been transferring at 1 byte per second.
My questions are: 1) did you install SDFS directly onto Backblaze, or is it via network? 2) did you enable Backblaze encryption?
mkfs.sdfs --volume-name=storage --volume-capacity=4TB --backup-volume --chunk-store-encrypt=true --chunk-store-encryption-key=*** --backblaze-enabled --cloud-access-key=*** --cloud-secret-key=*** --cloud-bucket-name=***
mkfs.sdfs --volume-name=blaze4 --volume-capacity=10GB --backblaze-enabled --cloud-access-key= --cloud-secret-key= --cloud-bucket-name= --enable-replication-master --sdfscli-password=admin
I have tested the same settings with AWS and the issue is not present.
I believe this issue has been addressed in SDFS 3.7. What version are you running?
On Wed, Jul 25, 2018 at 10:08 AM, elanou12 notifications@github.com wrote:
I'm currently running 3.7; I believe it's the version right after you fixed the sdfscli --list-cloud-volumes bug that was present for Backblaze.
I'm seeing this same issue with sdfs-3.7.8 on CentOS 7. Backblaze is astonishingly slow, and it used up 2500 class C transactions in less than an hour.
My mkfs.sdfs command was:
mkfs.sdfs --volume-name=pool2 --volume-capacity=9GB --backblaze-enabled --cloud-access-key='[REMOVED]' --cloud-secret-key='[REMOVED]' --cloud-bucket-name='opendedup-pool1' --backup-volume --chunk-store-encrypt false --chunk-store-compress false
Update: I just tested the same mkfs.sdfs command against Azure, and it was roughly 15x faster copying the exact same data.
I'm seeing the same issue on CentOS 7 with sdfs 3.7.8.
B2 seems to use an inordinate number of transactions when operating and it takes ten seconds to touch a file.
There are no errors being produced, nothing in logs to indicate what the issue might be. More than happy to do some more digging if I get the time or someone wants to suggest a good starting place.
For some reason this appears to be somewhat mitigated by using Ubuntu Server 18.04.2 LTS. Anyone who can't be arsed troubleshooting this on CentOS, I'd advise trying Ubuntu first before smashing up the server.
The issue is still present and it is still considerably slower than Azure, but single-file read/write time is significantly improved. I guess interactions with some filesystem-related package might be contributing to the issue.
I decided to run a Veeam backup against the dedup server to generate some load. There seems to be a ridiculously large number of transactions being generated, and "b2_list_file_names" seems to be the worst.
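To put the transaction burn in perspective, here's a back-of-the-envelope sketch. The ~2500 calls/hour figure is the rate reported earlier in this thread, and the 2500/day free allowance for class C calls (which include b2_list_file_names) matches B2's published pricing at the time; treat both numbers as assumptions and adjust for current pricing.

```python
# Rough estimate: how long B2's free class C allowance lasts under this
# list-heavy SDFS workload. Both figures below are assumptions taken from
# this thread and B2's pricing page at the time, not measured here.

FREE_CLASS_C_PER_DAY = 2500      # b2_list_file_names is a class C transaction
observed_calls_per_hour = 2500   # reported above: ~2500 transactions in under an hour

hours_until_cap = FREE_CLASS_C_PER_DAY / observed_calls_per_hour
print(f"Free class C cap exhausted after ~{hours_until_cap:.1f} h of backup load")
```

So roughly one hour of backup traffic eats a full day's free quota, after which every extra listing call is billed.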
Also seeing lots of these in the logs:
SEVERE: Cannot retry after server error, command is not replayable: [method=org.jclouds.b2.features.ObjectApi.public abstract org.jclouds.b2.domain.UploadFileResponse org.jclouds.b2.features.ObjectApi.uploadFile(org.jclouds.b2.domain.UploadUrlResponse,java.lang.String,java.lang.String,java.util.Map,org.jclouds.io.Payload)[UploadUrlResponse{bucketId=[redacted], uploadUrl=https://pod-000-1121-07.backblaze.com/b2api/v1/b2_upload_file/[redacted], null, {owner=sdfscluster, lz4compress=true, DSHLNGmtime=1553613716000, DSHINTmode=33188, encrypt=false, lastmodified=1553613716000, md5sum=NT25aGiGxY7MPvNTz3i+Ww==, DSHINTgid=0, DSHINTuid=0}, [content=true, contentMetadata=[cacheControl=null, contentDisposition=null, contentEncoding=null, contentLanguage=null, contentLength=18631427, contentMD5=null, contentType=application/octet-stream, expires=null], written=false, isSensitive=false]], request=POST https://pod-000-1121-07.backblaze.com/b2api/v1/b2_upload_file
Ubuntu: 16.04
For an absolutely new volume with an empty bucket it spends up to 10 seconds on each new empty directory. What could be the problem? The network connection is not an issue; I have a stable 100 Mbit/s link.
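That 10 s per directory points at per-operation round-trip latency rather than bandwidth, which is why a 100 Mbit/s link doesn't help. A quick illustration of how badly this scales (the directory count is a hypothetical figure for the arithmetic):

```python
# Latency, not bandwidth, dominates small metadata operations.
# 10 s per empty directory is the figure reported above; the directory
# count is hypothetical, chosen only to show the scaling.

seconds_per_dir = 10
dirs = 1000  # a modest backup tree

total_hours = seconds_per_dir * dirs / 3600
print(f"Creating {dirs} empty directories would take ~{total_hours:.1f} h,")
print("regardless of link bandwidth, since each create waits on a round trip.")
```

If each directory create triggers a b2_list_file_names round trip, this would also explain the class C transaction counts reported earlier in the thread.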