fatedier / frp

A fast reverse proxy to help you expose a local server behind a NAT or firewall to the internet.
Apache License 2.0

[Feature Request] Add documentation to deploy in K8s, scaling parameters and resource consumption #3138

Closed · isurulucky closed this issue 2 years ago

isurulucky commented 2 years ago

Describe the feature request

Hello there,

Since there are not many resources related to using frp in K8s, I went ahead and tried a POC for the use case of exposing services privately. I would like to get your feedback and see whether we can include this information in the FRP documentation for the benefit of others. In particular, I think we need:

This is the basic sample architecture I came up with:

[Architecture diagram: frp-k8s.drawio]

Here, the external secret client visitor (or any external client in the general case) should use TLS to communicate with the ingress controller. In addition, token-based authentication can be used between the FRP client and the server, while internal communication between FRPS and FRPC within the K8s cluster need not use TLS. The external client talks to FRPS over the websocket protocol, since the community ingress controller supports websockets out of the box, whereas TCP is not properly supported (TCP through the ingress controller requires a dedicated port per ingress, if I am not mistaken).

As a next step we would need to figure out autoscaling and the resource (CPU and memory) requests and limits. Any idea what the resource requirements would be at a general level? Any other improvements to the suggested approach are also welcome!
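To make the setup above concrete, here is a minimal sketch of how the in-cluster frpc side could be configured, wrapped in a K8s ConfigMap. All names, addresses, ports, and secrets are placeholders, not values from the actual POC. The external visitor would run its own frpc with protocol = websocket (and TLS towards the ingress) plus a role = visitor section referencing the same sk.

```yaml
# Hypothetical ConfigMap holding frpc.ini for the in-cluster client that
# exposes the private service over stcp; names and secrets are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: frpc-config
  namespace: frp
data:
  frpc.ini: |
    [common]
    server_addr = frps.frp.svc.cluster.local   # in-cluster frps Service
    server_port = 7000
    token = <shared-token>                      # token auth between frpc and frps
    # plain TCP is fine here; TLS is only needed on the external leg

    [secret-service]
    type = stcp
    sk = <secret-key>                           # shared with the external visitor
    local_ip = private-app.default.svc.cluster.local
    local_port = 8080
```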

Describe alternatives you've considered

No response

Affected area

fatedier commented 2 years ago
  1. From development-status, we are developing v2 to replace the current version and won't put a lot of effort into adding features to it.
  2. frps is not scalable now based on the current architecture. It's not cloud native. You may encounter a lot of problems.
  3. You're welcome to ask me questions about your POC issues.
isurulucky commented 2 years ago

Thanks very much for the reply. I tried the POC out in a minimal K8s cluster, and it did seem to work. But my idea was to scale this based on need, for example when memory pressure increases, using the K8s autoscaling mechanisms. If the current frp implementation does not support scaling, that would not work. Would you be able to explain a bit why it won't scale as of now? And I assume the next version would address these shortcomings?

fatedier commented 2 years ago

It's not stateless. All route configurations and frpc meta info are stored in memory.

When a request comes in, frps parses it and decides which frpc to forward it to based on the route configurations. If one frpc connects to frps A and you send requests to frps B, frps B will not know how to route them.

isurulucky commented 2 years ago

> It's not stateless. All route configurations and frpc meta info are stored in memory.
>
> When a request comes in, frps parses it and decides which frpc to forward it to based on the route configurations. If one frpc connects to frps A and you send requests to frps B, frps B will not know how to route them.

I see, I think I get the idea. If we take exposing a private service as an example, both the secret client and the secret client visitor connect to frps separately. Since it's a point-to-point connection, even if another frps comes up, it will not know about the routing configuration. So the only way to make it work is to make sure that there is only one frps running. As I see it, scaling frpc is not an issue, as the configurations are read from the frpc.ini file. I assume that if we can guarantee there is only one frps running, this setup would work?
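For what it's worth, one way to enforce the single-frps constraint in K8s could look like the sketch below: a Deployment pinned to one replica with a Recreate strategy so two frps pods never overlap during a rollout. The image, namespace, and ports are placeholders, not values from the POC.

```yaml
# Sketch of running exactly one frps instance; image name and tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frps
  namespace: frp
spec:
  replicas: 1            # routing state lives in frps memory, so keep a single instance
  strategy:
    type: Recreate       # avoid two frps pods running side by side during updates
  selector:
    matchLabels:
      app: frps
  template:
    metadata:
      labels:
        app: frps
    spec:
      containers:
        - name: frps
          image: <frps-image>        # placeholder
          ports:
            - containerPort: 7000    # frpc/visitor connections (bind_port)
```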

fatedier commented 2 years ago

Yes.

isurulucky commented 2 years ago

Thanks @fatedier. Will close this.

isurulucky commented 2 years ago

Re-opening to discuss a point we missed: for FRPS and FRPC, how should we set resources (memory and CPU) in a K8s environment? This is not for scaling but for the initial scheduling of the FRPS and FRPC pods.

fatedier commented 2 years ago

It depends on your usage scenarios. A small resource allocation is fine for your demo.

isurulucky commented 2 years ago

Thanks @fatedier. In a test that ran for a few hours, I noted that all frp pods were consuming around 50 MB of memory and a couple of millicores of CPU. I think something around this should be a good starting point.
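As a rough illustration of turning those observations into pod settings, a starting point might look like the sketch below. The request values come from the ~50 MB / few-millicore figures above, while the limits are only a guess and should be tuned under real traffic; the image name is a placeholder.

```yaml
# Hypothetical resource settings for an frpc (or frps) container, based on the
# usage observed above; adjust after load-testing with real traffic.
apiVersion: v1
kind: Pod
metadata:
  name: frpc
spec:
  containers:
    - name: frpc
      image: <frpc-image>      # placeholder
      resources:
        requests:
          cpu: 10m             # a few millicores observed at light load
          memory: 64Mi         # ~50 MB observed, rounded up
        limits:
          cpu: 100m            # guess, leaves headroom for traffic spikes
          memory: 128Mi        # guess
```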