shirady opened this issue 3 months ago
@shirady This only happens because you use `/tmp/` and not `/private/tmp/` as the bucket path on your Mac. On Mac, `/tmp/` is a symlink to `/private/tmp/`, and when checking the boundaries we check that the real path of the object, `/private/tmp/shira-1001-bucket-1/hello_world.txt`, is within the bucket path, which is `/tmp/shira-1001-bucket-1`. If we do want to support a bucket path that is a symlink, it's a very small fix. @guymguym, do you see a reason for not allowing the bucket path to be a symlink?
@romayalon @shirady Same reason we protect against it for things inside a bucket: to avoid exposing sensitive data from the system. For example, what if someone managed to set the bucket path with `ln -s /etc /fs/bucketpath` and then tried to download `passwd`, or worse, upload it... This config option is meant to prevent that by default. If you want to allow it in your dev env, override the config.
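The risk above can be demonstrated with a couple of shell commands (the paths here are hypothetical stand-ins for a configured buckets root):

```shell
# Hypothetical demonstration of the risk: if a symlink is accepted as a
# bucket path, reads against the "bucket" actually reach into /etc.
d=$(mktemp -d)                      # stand-in for the configured buckets root
ln -s /etc "$d/bucketpath"          # attacker points the bucket path at /etc
head -n 1 "$d/bucketpath/passwd"    # reads /etc/passwd through the "bucket"
```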
@guymguym, I don't understand what the exact difference is between a symlink and an absolute path, given that we protect against the latter.
Currently, as I understand it, what blocks a `new_buckets_path` is the CLI check `is_dir_rw_accessible`, and such a dir will not be accessible to a user whose uid and gid are not 0 (root). This means your concern of exposing sensitive data can still happen if someone passes to the CLI, for example: `sudo node src/cmd/manage_nsfs account add --name shira-path --new_buckets_path /etc/noobaa.conf.d --uid 0 --gid 0` (I took our config path as an example of a directory in `/etc`).
Output (omitting details of `access_key`, `secret_key`, and `master_key_id`):
```json
{
  "response": {
    "code": "AccountCreated",
    "reply": {
      "_id": "6683f6f16a973327fcf08dfc",
      "name": "shira-path",
      "email": "shira-path",
      "creation_date": "2024-07-02T12:47:45.608Z",
      "access_keys": [
        {
          "access_key": "",
          "secret_key": ""
        }
      ],
      "nsfs_account_config": {
        "uid": 0,
        "gid": 0,
        "new_buckets_path": "/etc/noobaa.conf.d"
      },
      "allow_bucket_creation": true,
      "master_key_id": ""
    }
  }
}
```
I changed the title of the issue to match what I wrote in the comment above, so that we can define the problem and solve it.
@naveenpaul1 suggested on #8225 to also add a check for absolute paths when restricting the `path` and `new_buckets_path` values.
Environment info
Actual behavior
(Originally it was with the title "Check Bucket Boundaries Fails Upload an Object")
Expected behavior
Steps to reproduce
1. Create an account: `sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>`. Note: before creating the account, you need to grant permissions on the `new_buckets_path`: `chmod 777 /tmp/nsfs_root1`.
2. Start the server: `sudo node src/cmd/nsfs --debug 5`.
3. Create an alias: `alias s3-nc-user-1='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'`.
4. Create a bucket: `s3-nc-user-1 s3 mb s3://shira-1001-bucket-1`.
5. Run `touch hello_world.txt` and then `s3-nc-user-1 s3 cp hello_world.txt s3://shira-1001-bucket-1`, and see the error.

Note: after changing this line in the config to `config.NSFS_CHECK_BUCKET_BOUNDARIES = false;` and restarting the server (Ctrl+C and rerunning `sudo node src/cmd/nsfs --debug 5`), we do not get an error:

```
s3-nc-user-1 s3 ls s3://shira-1001-bucket-1/
2024-07-02 13:32:07          0 hello_world.txt
```
More information - Screenshots / Logs / Other output
Logs from the server: