rclone / rclone

"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Azure Blob, Azure Files, Yandex Files
https://rclone.org
MIT License

Backblaze directory markers PBS datastore #8071

Closed: lemeshovich closed this issue 2 months ago

lemeshovich commented 2 months ago

What is your current rclone version (output from rclone version)?

rclone v1.67.0

What problem are you trying to solve?

I'm trying to use rclone with Backblaze B2, mounting it as a datastore for Proxmox Backup Server. When PBS creates a datastore, it makes thousands of empty folders inside the .chunk folder, but rclone does not create them on the external B2 datastore, seemingly because empty directories can't be stored on S3 (B2 is S3-compatible).
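One way to confirm this is to list directories under the chunk folder directly on the remote; a minimal sketch, assuming the remote path from the mount command below and the chunk folder name as written above:

rclone lsf --dirs-only b2:pbs/datastore/backups/.chunk
# empty output (or a "directory not found" error) means the empty
# sub-directories were never actually created on B2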

How do you think rclone should be changed to solve that?

Please add --b2-directory-markers

The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

rclone mount \
  --allow-other \
  --allow-non-empty \
  --vfs-cache-mode writes \
  --vfs-cache-min-free-space 4G \
  --vfs-cache-max-age=5m \
  --vfs-disk-space-total-size 2T \
  --vfs-used-is-size=true \
  b2:pbs/datastore/backups /mnt/datastore/backups \
  --s3-directory-markers \
  --log-level INFO \
  --log-file=/tmp/rclone-mount.log
kapitainsky commented 2 months ago

What you are trying to do won't work. PBS is designed to operate on a local SSD drive. It is painfully slow even when you put the storage on local-network NFS/SMB. If you put the storage in a remote cloud, expect PBS operations to take days, weeks or even months to complete :).

EDIT - even more importantly, PBS storage has to have file access times available, not only modification times. Access times are not supported by B2 cloud storage.
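A rough way to probe this (not from the thread; the path is taken from the mount command above) is to read a file on the mounted datastore and check whether its access time ever advances past its modification time:

f=/mnt/datastore/backups/.atime-probe   # hypothetical test file
echo test > "$f"
sleep 2
cat "$f" > /dev/null
stat -c 'atime=%x  mtime=%y' "$f"
# if atime never moves past mtime after reads, the backend is not
# preserving access times (note that local mount options such as
# noatime can also suppress atime updates)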

ncw commented 2 months ago

Note that you can view b2 buckets using the s3 protocol, which does support directory markers, so you could try that and see if it fits your needs @lemeshovich
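A minimal sketch of what that could look like, assuming a hypothetical remote named b2s3, placeholder keys, and an endpoint region that depends on your bucket:

# ~/.config/rclone/rclone.conf
[b2s3]
type = s3
provider = Other
access_key_id = <B2 keyID>
secret_access_key = <B2 applicationKey>
endpoint = https://s3.us-west-004.backblazeb2.com

rclone mount \
  --s3-directory-markers \
  --vfs-cache-mode writes \
  b2s3:pbs/datastore/backups /mnt/datastore/backups

With --s3-directory-markers enabled, empty directories are represented as zero-length objects whose names end with /, so the empty chunk sub-directories should survive on the remote.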

According to the b2 docs, file names cannot end with /, which means we can't use the same scheme for this as is used in s3.

However, you can get a view on a b2 bucket using the s3 protocol, so maybe that isn't true any more?

lemeshovich commented 2 months ago

However you can get a view on a b2 bucket using the s3 protocol so maybe that isn't true any more?

How can I check that?

lemeshovich commented 2 months ago

I changed the external storage to Azure Blob and fixed the current problem with --azureblob-directory-markers.
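For reference, a minimal sketch of that kind of setup (remote name, account, key and paths are placeholders, not taken from this thread):

# ~/.config/rclone/rclone.conf
[az]
type = azureblob
account = <storage account name>
key = <storage account key>

rclone mount \
  --azureblob-directory-markers \
  --vfs-cache-mode writes \
  az:pbs/datastore/backups /mnt/datastore/backups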