Closed: asomers closed this pull request 10 months ago.
The test failure is not caused by this PR.
I've rebased it on the latest master to fix the merge conflicts and fix the tests.
Rebased.
OK, so there is quite a bit to go through here...
Is this intended to work on FreeBSD while resulting in a no-op on other OSes?
> Is this intended to work on FreeBSD while resulting in a no-op on other OSes?
Yes, exactly.
When I run `cargo doc --workspace --no-deps` I get some warnings. We provide a makefile target `pr-prep` that you can run to check if all is ready for a pull request.

> TLS does not work yet; see https://github.com/bolcom/libunftp/discussions/480.

I think you got TLS to work right?
> When I run `cargo doc --workspace --no-deps` I get some warnings. We provide a makefile target `pr-prep` that you can run to check if all is ready for a pull request.
I get a bunch of warnings on the master branch, too. Do you?
> I think you got TLS to work right?
Yes.
>> When I run `cargo doc --workspace --no-deps` I get some warnings. We provide a makefile target `pr-prep` that you can run to check if all is ready for a pull request.
>
> I get a bunch of warnings on the master branch, too. Do you?
No, it works fine on my side. I'm on Rust 1.72.0 and macOS.
> No, it works fine on my side. I'm on Rust 1.72.0 and macOS.
Ahh, it's a nightly thing. I'll fix the warnings that are branch-specific.
Does this solution actually work on FreeBSD? Were you able to connect to the server and perform some uploads / downloads etc.?
Curious about your use case, by the way, if you feel free to share.
> Does this solution actually work on FreeBSD? Were you able to connect to the server and perform some uploads / downloads etc.?
Yes, I can. I invoke it like `cargo run --example cap-ftpd -- ../../../cap-ftpd/doc/cap-ftpd-users.json`, where that JSON file looks like:
```json
[
  {
    "username": "alice",
    "password": "Wonderland",
    "home": "/tmp/alice"
  },
  {
    "username": "bob",
    "password": "dylan",
    "home": "/usr/home/bob"
  }
]
```
And regarding my use-case, I'm working on a backup and recovery product. The recovery path can work over FTP/SSL. And multiple customers' data may be stored within the same server. So a vulnerability in the FTP server could theoretically allow one customer to read another's data. Using Capsicum to isolate each FTP session gives us an extra layer of security. And since recovery usually involves a small number of connections each transferring a large amount of data, we don't mind the small performance penalty entailed by the per-connection fork/exec.
> I'm working on a backup and recovery product
Cool usecase!
I think I'm happy with these changes. @robklg let me know if you're happy for me to get this into master.
> I think I'm happy with these changes. @robklg let me know if you're happy for me to get this into master.
I've looked into it this weekend, will continue this week.
@robklg with the latest commit, listen() won't get called until the PASV command is received. However, the worker process will still use the same port for every PASV. There's no easy way around that because capability mode does not allow bind(). The options are:
1) Accept that each worker will use the same passive port over and over.
2) Preallocate multiple ports per worker, use one for each PASV, and start failing connections if we run out.
3) Preallocate infinite ports per worker.
4) Skip bind altogether and use any port of the kernel's choosing.
5) Use the `cap_bind` function. The downsides are that it would make the code more complicated and slightly slower. But it would work. It would also require either:
   a) Adding a FreeBSD-specific dependency to libunftp and including all of the `cap_bind` code, or
   b) Allowing the user to set a callback function on `ServerBuilder` which will be called in place of `try_port_range`. That way the `cap_bind` stuff could live in the user's code (a rough sketch of this follows below).
What do you think?
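To make option 5(b) a bit more concrete, here is a rough, hypothetical sketch of what such a user-supplied bind callback could look like. The names `PassiveBindFn`, `default_passive_bind`, `passive_bind_fn`, and `cap_bind_in_range` are made up for illustration and are not existing libunftp API; the default shown below simply walks the range, whereas the real port selection is meant to be unpredictable.

```rust
// Hypothetical sketch only: the callback type, its signature, and the builder
// method mentioned below do not exist in libunftp today.
use std::net::TcpListener;
use std::ops::RangeInclusive;

/// A user-supplied callback that turns the configured passive port range
/// into a bound listener.
pub type PassiveBindFn =
    Box<dyn Fn(RangeInclusive<u16>) -> std::io::Result<TcpListener> + Send + Sync>;

/// Rough stand-in for what try_port_range does today: find a free port in the
/// range. (The real implementation should avoid handing out predictable ports;
/// a sequential walk is shown only for brevity.)
pub fn default_passive_bind(range: RangeInclusive<u16>) -> std::io::Result<TcpListener> {
    for port in range {
        if let Ok(listener) = TcpListener::bind(("0.0.0.0", port)) {
            return Ok(listener);
        }
    }
    Err(std::io::Error::new(
        std::io::ErrorKind::AddrInUse,
        "no free passive port in the configured range",
    ))
}

// A Capsicum user could then install a closure that calls cap_bind instead,
// keeping all FreeBSD-specific code out of libunftp, e.g.:
//     builder.passive_bind_fn(Box::new(move |range| cap_bind_in_range(&agent, range)));
```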
> @robklg with the latest commit, listen() won't get called until the PASV command is received. However, the worker process will still use the same port for every PASV. There's no easy way around that because capability mode does not allow bind(). The options are:
I'd rather not make concessions on the efficiency, resources, and security of the port allocation. At least not for the common use case (non-FreeBSD-capabilities-mode)
> - Accept that each worker will use the same passive port over and over.
As mentioned, the next port should not be predictable. So this is not an option.
> - Preallocate multiple ports per worker, use one for each PASV, and start failing connections if we run out.
This would indeed significantly reduce the concurrent session limit especially with a small pool of ports.
> - Preallocate infinite ports per worker.
This one I don't understand. Can you elaborate?
> - Skip bind altogether and use any port of the kernel's choosing.
In most cases, I think users would like to configure the passive port range. We could make this the behaviour for capability mode, assuming that capability mode users would accept this tradeoff?
> - Use the cap_bind function. The downsides are that it would make the code more complicated and slightly slower. But it would work.
Would it be slower only for capability mode? I think specific tradeoffs can be acceptable for users who want added security at the expense of slightly less performance?
> It would also require either: a) Adding a FreeBSD-specific dependency to libunftp and including all of the cap_bind code, or b) Allowing the user to set a callback function on ServerBuilder which will be called in place of try_port_range. That way the cap_bind stuff could live in the user's code.
Sounds to me like a choice between added complexity in either that FreeBSD-specific component, or the libunftp user who wants to make use of capabilities? I think that might be acceptable. I'm specifically concerned about adding a lot of complexity to the general code, or changes to how it works now, if it's for a lesser-used feature, like in this case, since it's currently FreeBSD-only.
What do you think @hannesdejager ?
>> @robklg with the latest commit, listen() won't get called until the PASV command is received. However, the worker process will still use the same port for every PASV. There's no easy way around that because capability mode does not allow bind(). The options are:
>
> I'd rather not make concessions on the efficiency, resources, and security of the port allocation. At least not for the common use case (non-FreeBSD-capabilities-mode)
>
>> - Accept that each worker will use the same passive port over and over.
>
> As mentioned, the next port should not be predictable. So this is not an option.
>
>> - Preallocate multiple ports per worker, use one for each PASV, and start failing connections if we run out.
>
> This would indeed significantly reduce the concurrent session limit especially with a small pool of ports.
>
>> - Preallocate infinite ports per worker.
>
> This one I don't understand. Can you elaborate?
That was sort-of a joke. It's the solution to the drawback of the previous option, but obviously impossible.
>> - Skip bind altogether and use any port of the kernel's choosing.
>
> In most cases, I think users would like to configure the passive port range. We could make this the behaviour for capability mode, assuming that capability mode users would accept this tradeoff?
I agree. Any user with a firewall would want to configure the passive port range.
>> - Use the cap_bind function. The downsides are that it would make the code more complicated and slightly slower. But it would work.
>
> Would it be slower only for capability mode? I think specific tradeoffs can be acceptable for users who want added security at the expense of slightly less performance?
Yes, slower for capability mode only.
>> It would also require either: a) Adding a FreeBSD-specific dependency to libunftp and including all of the cap_bind code, or b) Allowing the user to set a callback function on ServerBuilder which will be called in place of try_port_range. That way the cap_bind stuff could live in the user's code.
>
> Sounds to me like a choice between added complexity in either that FreeBSD-specific component, or the libunftp user who wants to make use of capabilities? I think that might be acceptable. I'm specifically concerned about adding a lot of complexity to the general code, or changes to how it works now, if it's for a lesser-used feature, like in this case, since it's currently FreeBSD-only.
>
> What do you think @hannesdejager ?
>> This one I don't understand. Can you elaborate?
>
> That was sort-of a joke. It's the solution to the drawback of the previous option, but obviously impossible.
Haha :-)
>>> - Use the cap_bind function. The downsides are that it would make the code more complicated and slightly slower. But it would work.
>>
>> Would it be slower only for capability mode? I think specific tradeoffs can be acceptable for users who want added security at the expense of slightly less performance?
>
> Yes, slower for capability mode only.
Sounds acceptable then from the libunftp point of view at least.
>>> It would also require either: a) Adding a FreeBSD-specific dependency to libunftp and including all of the cap_bind code, or b) Allowing the user to set a callback function on ServerBuilder which will be called in place of try_port_range. That way the cap_bind stuff could live in the user's code.
>>
>> Sounds to me like a choice between added complexity in either that FreeBSD-specific component, or the libunftp user who wants to make use of capabilities? I think that might be acceptable. I'm specifically concerned about adding a lot of complexity to the general code, or changes to how it works now, if it's for a lesser-used feature, like in this case, since it's currently FreeBSD-only. What do you think @hannesdejager ?
There's also another slight concern I have here. I mentioned allocating the port in a similar way as in proxy protocol mode, by requesting that port through a channel. I have actually been wanting to explore that idea for a while, as I believe it has various benefits. So I'm wondering a bit what this might mean for the future compatibility of the proposed solutions for cap_bind. But I guess it should not matter.

Which of these two options has your preference, @asomers?
> There's also another slight concern I have here. I mentioned allocating the port in a similar way as in proxy protocol mode, by requesting that port through a channel. I have actually been wanting to explore that idea for a while, as I believe it has various benefits. So I'm wondering a bit what this might mean for the future compatibility of the proposed solutions for cap_bind. But I guess it should not matter.
>
> Which of these two options has your preference, @asomers?
Actually, allocating through a channel is what `cap_bind` does. It forks a separate process before entering the Capsicum sandbox, so the separate process has the authority to call `bind`. It does that, and passes the bound socket back via a Unix-domain socket.
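As a minimal illustration of that idea (not code from this PR or from libunftp): a single "binder" task keeps the authority to call bind() and hands bound listeners to workers over a channel. `cap_bind` achieves the same separation across a process boundary, with the helper forked before entering capability mode and the bound socket passed back over a Unix-domain socket rather than an in-process channel. The port numbers and allocation strategy below are placeholders.

```rust
// Sketch only: workers never call bind() themselves; they ask the binder.
use std::net::TcpListener;
use std::sync::mpsc;
use std::thread;

fn main() -> std::io::Result<()> {
    // Workers send a reply channel; the binder answers with a bound listener.
    let (request_tx, request_rx) =
        mpsc::channel::<mpsc::Sender<std::io::Result<TcpListener>>>();

    // The only place bind() is ever called.
    thread::spawn(move || {
        let mut next_port: u16 = 50_000;
        for reply in request_rx {
            let result = TcpListener::bind(("0.0.0.0", next_port));
            // Naive allocation for the sketch; real code would track a
            // configured range and reuse freed ports.
            next_port += 1;
            let _ = reply.send(result);
        }
    });

    // A worker handling PASV asks for a listener instead of binding itself.
    let (reply_tx, reply_rx) = mpsc::channel();
    request_tx.send(reply_tx).expect("binder is gone");
    let listener = reply_rx.recv().expect("binder is gone")?;
    println!("passive listener bound on {}", listener.local_addr()?);
    Ok(())
}
```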
@robklg the latest commit removes the preallocated passive listener port in favor of using `cap_bind`.
@robklg all rebased!
@asomers this now depends on a specific rev in capsicum-net and capsicum-rs. Any chance we can make it depend on a released crate?
Yes, @robklg I'll ask the maintainer of Capsicum to make a release.
That would be great. Thanks!
Capsicum is a security technology that restricts processes from accessing global namespaces. After entering capability mode, a process may no longer use a syscall like open(); instead it's restricted to variants like openat() that operate relative to already-open file descriptors.
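A rough illustration of that restriction on FreeBSD, assuming the `capsicum` (capsicum-rs) and `cap-std` crates rather than any code from this PR; the paths are examples only:

```rust
use cap_std::ambient_authority;
use cap_std::fs::Dir;

fn main() -> std::io::Result<()> {
    // Acquire a directory handle while we still have ambient authority.
    let home = Dir::open_ambient_dir("/var/ftp", ambient_authority())?;

    // Enter capability mode. From here on, syscalls that consult global
    // namespaces (open(2), bind(2), and friends) fail with ECAPMODE.
    capsicum::enter().expect("failed to enter capability mode");

    // This would now fail, because it needs the global filesystem namespace:
    // let _denied = std::fs::File::open("/etc/passwd");

    // This still works: it resolves relative to `home` via openat(2).
    let _file = home.open("welcome.msg")?;
    Ok(())
}
```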
This PR:
Fixes #475