An HTTP/1.1 reverse proxy written in Rust using hyper (the tokio-based version).
Popular proxies are configured with a static file and are not easily made highly available. It is now common to treat servers like cattle, use blue/green deployments, and scale up or down based on traffic. Having the list of servers specified in the proxy configuration file makes these practices more difficult to implement. Creating active/passive proxy clusters requires something like keepalived, which adds a lot of unnecessary complexity for each proxy instance. Worse still, that setup is significantly harder to automate with tools like Puppet or Chef.
The goal is to build an AWS ELB-like reverse proxy that works well in the dynamic VM/container environments that are becoming more common. A particular focus is the ability to add and remove origins in the pool via an API.
An eventual goal is to have the pool managed by Raft. This would allow a cluster of redundant weldr servers, providing an active/passive setup out of the box. Note: the raft-rs crate does not currently support dynamic membership.
The production versions of weldr are deployed as static binaries. There are two general methods of installation:
Installing requirements on Ubuntu:
$ apt-get update && apt-get install gcc libssl-dev pkg-config capnproto
See DOCKER.md for details.
RUST_LOG=weldr cargo run --bin weldr
or RUST_LOG=weldr /path/to/weldr
curl localhost:8687/servers -d '{"url":"http://127.0.0.1:12345"}'
cargo run --bin test-server
- start the test origin server. This is not provided by packages or the container. The test server exposes two routes, / and /large:
curl -vvv localhost:8080/
curl -vvv localhost:8080/large
RUST_LOG=test_proxy,weldr cargo test
will execute the tests and provide log output for both the proxy and the integration tests.
rustup run nightly cargo bench
will execute some basic benchmarking. See benchmark/ for details on setting up real-world benchmarks.
Weldr does not use any threads. The process that is started is the manager process, which spawns worker processes to handle requests. The manager process listens for API requests and performs periodic health checks on the backend servers in the pool. Changes to the pool, caused by API requests or health checks, are sent to all the workers.
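To illustrate that flow, here is a minimal, single-process sketch of a manager broadcasting pool changes to its workers. Real weldr spawns separate worker processes and communicates with them over IPC; the PoolChange type and the in-process channels below are illustrative assumptions, not weldr's actual internals.

```rust
use std::sync::mpsc;

// Hypothetical message type representing a change to the server pool.
#[derive(Clone, Debug)]
enum PoolChange {
    Added(String), // origin URL added via the management API
    Down(String),  // origin failed its health checks
    Up(String),    // origin recovered
}

fn main() {
    // One channel per worker; the manager broadcasts every pool change to all of them.
    let (senders, receivers): (Vec<_>, Vec<_>) =
        (0..2).map(|_| mpsc::channel::<PoolChange>()).unzip();

    // Manager side: an API request adds an origin, health checks later mark it down and back up.
    let changes = vec![
        PoolChange::Added("http://127.0.0.1:12345".into()),
        PoolChange::Down("http://127.0.0.1:12345".into()),
        PoolChange::Up("http://127.0.0.1:12345".into()),
    ];
    for change in changes {
        for tx in &senders {
            tx.send(change.clone()).unwrap();
        }
    }
    drop(senders); // closing the channels lets the workers finish

    // Worker side: each worker applies the changes to its local copy of the pool.
    for (i, rx) in receivers.into_iter().enumerate() {
        for change in rx {
            println!("worker {} applying {:?}", i, change);
        }
    }
}
```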
Weldr uses active health checks. As long as the health check passes, the pool keeps the server active and sends it requests. A health check is run every 30 seconds (by default) using tokio-timer. The health check makes a request to / (by default) and expects a 2xx HTTP response code. Each server is assumed active when added to the pool. If a server fails the check 3 consecutive times (by default), the manager marks that server as down and then sends a message to the workers to mark that same server as down. If a server marked as down later returns a 2xx HTTP response code 2 consecutive times (by default), it is marked as active again.
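The mark-down/mark-up bookkeeping can be sketched as a small state machine that counts consecutive failures and passes. The HealthState type and its record method below are illustrative assumptions, not weldr's actual code; the thresholds match the defaults described above.

```rust
// Illustrative sketch of the consecutive pass/fail bookkeeping; the names here
// are assumptions, not weldr's internal types.
struct HealthState {
    active: bool,
    consecutive_failures: u32,
    consecutive_passes: u32,
}

impl HealthState {
    fn new() -> Self {
        // Each server is assumed active when first added to the pool.
        HealthState { active: true, consecutive_failures: 0, consecutive_passes: 0 }
    }

    /// Record one health check result (passed = the check returned a 2xx) and
    /// return true if the server's active/down status changed.
    fn record(&mut self, passed: bool, fail_threshold: u32, pass_threshold: u32) -> bool {
        if passed {
            self.consecutive_passes += 1;
            self.consecutive_failures = 0;
            if !self.active && self.consecutive_passes >= pass_threshold {
                self.active = true; // marked active again
                return true;
            }
        } else {
            self.consecutive_failures += 1;
            self.consecutive_passes = 0;
            if self.active && self.consecutive_failures >= fail_threshold {
                self.active = false; // marked down
                return true;
            }
        }
        false
    }
}

fn main() {
    let mut state = HealthState::new();
    // Three consecutive failures (the default) mark the server down...
    assert!(!state.record(false, 3, 2));
    assert!(!state.record(false, 3, 2));
    assert!(state.record(false, 3, 2));
    assert!(!state.active);
    // ...and two consecutive passes (the default) mark it active again.
    assert!(!state.record(true, 3, 2));
    assert!(state.record(true, 3, 2));
    assert!(state.active);
}
```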
The management API will allow the addition and removal of origins from the pool. It will also allow for the dynamic configuration of other options, such as the health check.
POST /servers
{
  "url": "http://127.0.0.1"
}
Example: curl -vvv localhost:8687/servers -d '{"url":"http://127.0.0.1"}'
Note: It is more common for a server to fall out of the pool after n health checks fail.
DELETE /servers/:ip/:port
Example: curl -vvv -X DELETE localhost:8687/servers/127.0.0.1/12345
Work in progress.
GET /stats
{
  "client": {
    "success": 34534,
    "failed": 33
  },
  "server": {
    "success": 33770,
    "failed": 15
  }
}
Work in progress.
GET /stats/detail
[{
  "id": "...",
  "ip": "127.0.0.1",
  "port": "8080",
  "success": 33770,
  "failed": 15
},{
  ...
}]
Licensed under either of the Apache License, Version 2.0 or the MIT license, at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.