ossarchitect opened 6 months ago
This should work, and it does work on other filesystems. Please report this to the CephFS developers.
Will do, thanks!
I opened https://tracker.ceph.com/issues/63939 for this.
In connection with the testing I frequently get the following error:
```
$ fscrypt encrypt
```
This is particularly evident when trying to manipulate the formerly encrypted directory (lock/unlock/encrypt) that is marked as not encrypted after the remount.
Interestingly, the 'is not encrypted' issue does not seem to be limited to umount and remount:
```
root@ubuntu:/mnt# mkdir test4
root@ubuntu:/mnt# fscrypt encrypt test4
Should we create a new protector? [y/N]
Enter custom passphrase for protector "thecrypta":
"test4" is now encrypted, unlocked, and ready for use.
root@ubuntu:/mnt# vi test4/testfile
root@ubuntu:/mnt# ls test4
testfile
root@ubuntu:/mnt# fscrypt lock test4
"test4" is now locked.
root@ubuntu:/mnt# ls test4
I1eTGrt5j2K08BlIdpUs++w4tFnJtes5JuY7n1,gja4
root@ubuntu:/mnt# fscrypt unlock test4
[ERROR] fscrypt unlock: file or directory "test4" is not encrypted
root@ubuntu:/mnt#
```
For some reason the metadata read does not seem to work correctly. The 'is not encrypted' error comes from `metadata/policy.go`, in the function `GetPolicy`; it is triggered when `getPolicyIoctl` returns the error `unix.ENODATA`. So I infer the policy is not written properly on CephFS.
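For reference, the kernel reports "no policy set on this inode" as ENODATA from the `FS_IOC_GET_ENCRYPTION_POLICY` ioctl, which is what fscrypt's `GetPolicy` turns into the 'is not encrypted' error. A minimal Python sketch (independent of fscrypt; the helper names are mine, not fscrypt's) that queries the policy directly and classifies the errno, which can help confirm whether CephFS is actually returning the policy:

```python
import errno
import fcntl
import os
import sys

# FS_IOC_GET_ENCRYPTION_POLICY = _IOW('f', 21, struct fscrypt_policy),
# where the v1 struct fscrypt_policy is 12 bytes (see linux/fscrypt.h).
_IOC_WRITE = 1
FS_IOC_GET_ENCRYPTION_POLICY = (_IOC_WRITE << 30) | (12 << 16) | (ord('f') << 8) | 21


def classify_policy_errno(err: int) -> str:
    """Map the ioctl errno to a diagnosis, mirroring what GetPolicy does."""
    if err == errno.ENODATA:
        # No encryption policy stored on the inode -- the case fscrypt
        # reports as: file or directory "..." is not encrypted
        return "not encrypted"
    if err in (errno.ENOTTY, errno.EOPNOTSUPP):
        return "filesystem does not support encryption"
    return os.strerror(err)


def get_policy_status(path: str) -> str:
    """Query the encryption policy of path via the raw ioctl."""
    fd = os.open(path, os.O_RDONLY)
    try:
        buf = bytearray(12)  # room for the v1 policy struct
        fcntl.ioctl(fd, FS_IOC_GET_ENCRYPTION_POLICY, buf)
        return "encrypted (v%d policy)" % buf[0]
    except OSError as e:
        return classify_policy_errno(e.errno)
    finally:
        os.close(fd)


if __name__ == "__main__":
    print(get_policy_status(sys.argv[1] if len(sys.argv) > 1 else "."))
```

If this reports "not encrypted" (ENODATA) on a directory that fscrypt previously encrypted successfully, the policy is not being persisted or returned by the filesystem, which would point at the CephFS side rather than at fscrypt.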
Update: the problem appears to be writing into the encrypted (unlocked) CephFS directory:

- Locking and unlocking an encrypted empty directory works.
- Locking an encrypted directory with a file written in it works (or at least does not produce an error).
- Unlocking an encrypted directory with a file written in it does not; I get the error message `[ERROR] fscrypt unlock: file or directory "crypt" is not encrypted`.
Upgrade of the Ceph cluster to 18.2.1 (Reef) did not change the behavior.
This issue is present in Ubuntu 23.10 with kernel 6.6.7 from mainline (6.6+ is required for fscrypt support on CephFS). fscrypt is installed via apt. What I am doing:
Am I missing something, or is this a bug? And if it is the latter, do I need to file it with the kernel team for the CephFS driver, too?