Closed andodeki closed 8 months ago
Can you show a minimal example where you trigger this error?
With #884 following code can be used for scoped router configuration:
use xitca_web::{
    handler::{handler_service, state::StateRef},
    route::get,
    App, NestApp,
};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let state = build_state().await;
    let mut app = App::new();
    app = mod1::route(app);
    app.with_state(state)
        .serve()
        .bind("127.0.0.1:8080")?
        .run()
        .await
}

async fn build_state() -> String {
    String::from("hello,world!")
}

mod mod1 {
    use super::*;
    pub(super) fn route(app: NestApp<String>) -> NestApp<String> {
        app.at("/", get(handler_service(handler)))
    }
    async fn handler(StateRef(state): StateRef<'_, String>) -> String {
        state.clone()
    }
}
error[E0277]: the trait bound `for<'r2> (dyn for<'r> ServiceObject<WebContext<'r, AppState>, for<'r> Error = RouterError<xitca_web::error::Error<AppState>>, for<'r> Response = xitca_http::Response<xitca_http::ResponseBody>> + 'static): xitca_service::Service<WebContext<'r2, Arc<AppState>>>` is not satisfied
   --> src/http/xitcav_http/http_server.rs:205:10
    |
205 |         .serve()
    |          ^^^^^ the trait `for<'r2> xitca_service::Service<WebContext<'r2, Arc<AppState>>>` is not implemented for `(dyn for<'r> ServiceObject<WebContext<'r, AppState>, for<'r> Error = RouterError<xitca_web::error::Error<AppState>>, for<'r> Response = xitca_http::Response<xitca_http::ResponseBody>> + 'static)`

error[E0277]: the size for values of type `(dyn for<'r> ServiceObject<WebContext<'r, AppState>, for<'r> Error = RouterError<xitca_web::error::Error<AppState>>, for<'r> Response = xitca_http::Response<xitca_http::ResponseBody>> + 'static)` cannot be known at compilation time
   --> src/http/xitcav_http/http_server.rs:205:10
    |
205 |         .serve()
    |          ^^^^^ doesn't have a size known at compile-time
Say, when it's as below with Arc, from the provided example:
async fn build_state() -> Arc<String> {
Arc::new(String::from("hello,world!"))
}
we get the same behaviour.
The code you reference compiles for me; I don't know where you are getting the error from. And if you are using Arc<String>
as state, simply replace all the types like this:
use std::sync::Arc;
use xitca_web::{
handler::{handler_service, state::StateRef},
route::get,
App, NestApp,
};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let state = build_state().await;
let mut app = App::new();
app = mod1::route(app);
app.with_state(state)
.serve()
.bind("127.0.0.1:8080")?
.run()
.await
}
async fn build_state() -> Arc<String> {
Arc::new(String::from("hello,world!"))
}
mod mod1 {
use super::*;
pub(super) fn route(app: NestApp<Arc<String>>) -> NestApp<Arc<String>> {
app.at("/", get(handler_service(handler)))
}
async fn handler(StateRef(state): StateRef<'_, Arc<String>>) -> String {
state.to_string()
}
}
Ok, I didn't change all instances of AppState.
How can I get the client's IP from this definition?
pub fn request_diagnostics<E, C>(
next: &mut Next<E>,
mut ctx: WebContext<'_, C>,
) -> Result<WebResponse<()>, E>
where
// S: for<'r> Service<WebContext<'r, C>, Response = WebResponse, Error = Error<C>>,
C: Borrow<AppState>, // annotate we want to borrow &String from generic C state type.
{
let ip_address = ctx.req().uri().host().unwrap().parse().unwrap();
}
Let's take the previous example and expand on it:
use std::sync::Arc;
use xitca_web::{
handler::{handler_service, state::StateRef},
route::get,
App, NestApp,
};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let state = build_state().await;
let mut app = App::new();
app = mod1::route(app);
app.with_state(state)
.serve()
.bind("127.0.0.1:8080")?
.run()
.await
}
async fn build_state() -> Arc<String> {
Arc::new(String::from("hello,world!"))
}
mod mod1 {
use super::*;
pub(super) fn route(app: NestApp<Arc<String>>) -> NestApp<Arc<String>> {
app.at("/", get(handler_service(handler)))
}
use xitca_web::http::RequestExt;
async fn handler(
ext: &RequestExt<()>,
StateRef(state): StateRef<'_, String>
) -> String {
println!("{}", ext.socket_addr());
state.to_string()
}
}
In xitca-web the client address is exposed to you as the std::net::SocketAddr
type. But in general you should not trust it to be the real client IP address. It's only useful as a hint when your application is not behind a reverse proxy.
And in a middleware you can acquire it with this API:
fn get_addr<C>(ctx: &WebContext<'_, C>) -> std::net::SocketAddr {
*ctx.req().body().socket_addr()
}
It compiles now. How can I handle CORS?
xitca-web is compatible with tower-http,
which is also what axum uses, so if you have a tower-http
compat middleware you can use it in xitca-web like this:
Cargo.toml
xitca-web = { version = "0.2", features = ["codegen", "cookie", "tower-http-compat", "json"] }
tower-http = { version = "0.5", features = ["full"] }
main.rs
use std::sync::Arc;
use tower_http::cors::CorsLayer;
use xitca_web::{
handler::{handler_service, state::StateRef},
middleware::{eraser::TypeEraser, tower_http_compat::TowerHttpCompat, Group},
route::get,
service::ServiceExt,
App, NestApp,
};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let state = build_state().await;
let mut app = App::new();
app = mod1::route(app);
app
// start a group of middlewares for low cost type compat
.enclosed(
Group::new()
// add tower-http middlewares.
.enclosed(TowerHttpCompat::new(CorsLayer::very_permissive()))
// other tower-http middlewares
// erase tower-http compat types
.enclosed(TypeEraser::response_body()),
)
.with_state(state)
.serve()
.bind("127.0.0.1:8080")?
.run()
.await
}
async fn build_state() -> Arc<String> {
Arc::new(String::from("hello,world!"))
}
mod mod1 {
use super::*;
pub(super) fn route(app: NestApp<Arc<String>>) -> NestApp<Arc<String>> {
app.at("/", get(handler_service(handler)))
}
async fn handler(StateRef(state): StateRef<'_, String>) -> String {
state.to_string()
}
}
That said, there is a bug preventing the above code from compiling, and thanks to your question I'm getting a fix in for it.
Yeah, I think so; I had an almost similar setup that wasn't compiling, and I followed the tower-http example.
If you update your patch with the latest main branch hash, the above CORS example will work.
[patch.crates-io]
xitca-http = { git = "https://github.com/HFQR/xitca-web.git", rev = "912c707" }
xitca-router = { git = "https://github.com/HFQR/xitca-web.git", rev = "912c707" }
xitca-web = { git = "https://github.com/HFQR/xitca-web.git", rev = "912c707" }
Ok, let me set it up.
As for multiple tower-http
middlewares, for now you can use tower's ServiceBuilder
type as a workaround. Example:
app
.enclosed(TowerHttpCompat::new(
tower::ServiceBuilder::new()
.layer(CorsLayer::very_permissive())
.layer(..other tower-http layers),
))
The fix on xitca-web's part will be looked into later, hopefully before the 0.2 release.
I have a configure_cors()
function that returns a CorsLayer
from tower_http:
CorsLayer::new()
.allow_methods(allowed_methods)
.allow_origin(allowed_origins)
.allow_headers(allowed_headers)
.expose_headers(exposed_headers)
.allow_credentials(config.allow_credentials)
.allow_private_network(config.allow_private_network)
This now compiles. I also tested it; it works:
Vary: origin, access-control-request-method, access-control-request-headers
While we're at it, how can one implement rate limiting and concurrent-request limiting?
There is no built-in feature for them. In general, if you add a Connection: Close
header to an HTTP response, xitca-web will try to gracefully shut down the connection when possible, so a middleware that conditionally closes the connection would suffice for normal rate limiting.
As for concurrency limiting, I don't get what you mean exactly. For HTTP/1 there are no concurrent requests: you handle requests serially, so the only concurrency is how many TCP connections you keep open. For HTTP/2 there are indeed concurrent requests, and for now xitca-web
hard-codes the concurrency to 200, which is the default value of the h2
crate.
I meant in the context of limiting concurrent requests from a single IP.
Then, like my previous reply: there are no concurrent requests in HTTP/1 for a single TCP connection; everything is serialized. For HTTP/2 it's hard-coded to 200 concurrent multiplexed streams per TCP connection. Of course a client can possibly make multiple TCP connections to your server, and in most cases that number is limited by your firewall/router/etc., which is not in the control of xitca-web.
In general, rate limiting should be about guarding your expensive business logic. How many TCP/UDP connections a client can keep open is usually a task for your reverse proxy and firewall. That said, you can configure them if you want with xitca-http.
xitca-web
by itself does not expose any interfaces for layer 4 connection types (TCP and UDP).
Just for illustration, how would it happen with this?
"That said you can config them if you want with xitca-http."
I have tried using
.enclosed(TowerHttpCompat::new(
tower::ServiceBuilder::new().layer(CorsLayer::very_permissive()),
))
and I get this error:
error[E0282]: type annotations needed
--> src/http/xitcav_http/http_server.rs:162:19
|
162 | .enclosed(TowerHttpCompat::new(
| ^^^^^^^^^^^^^^^^^^^^ cannot infer type of the type parameter `C` declared on the struct `TowerHttpCompat`
|
help: consider specifying the generic arguments
|
162 | .enclosed(TowerHttpCompat::<ServiceBuilder<tower::layer::util::Stack<CorsLayer, tower::layer::util::Identity>>, C, xitca_http::RequestBody, CompatResBody<CompatResBody<xitca_http::ResponseBody>>, xitca_web::error::Error<Arc<AppState>>>::new(
| ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Just for illustration, how would it happen with this?
"That said you can config them if you want with xitca-http."
use xitca_http::{http::StatusCode, HttpServiceBuilder};
use xitca_io::net::Stream;
use xitca_web::{
error::Error,
handler::{handler_service, Responder},
http::WebResponse,
route::get,
service::{Service, ServiceExt},
App, WebContext,
};
#[tokio::main]
async fn main() -> std::io::Result<()> {
let service = App::new()
.at("/", get(handler_service(|| async { "hello,world!" })))
// middleware before App::finish has access to http request types.
.enclosed_fn(request_limit)
.finish()
.enclosed(HttpServiceBuilder::new())
// middleware after the http service has access to raw connection types.
.enclosed_fn(connection_limit);
xitca_server::Builder::new()
.bind("service_name", "127.0.0.1:8080", service)?
.build()
.await
}
async fn request_limit<S, C>(service: &S, ctx: WebContext<'_, C>) -> Result<WebResponse, Error<C>>
where
S: for<'r> Service<WebContext<'r, C>, Response = WebResponse, Error = Error<C>>,
{
let addr = ctx.req().body().socket_addr();
// rate limit based on client addr
if check_addr(addr) {
return StatusCode::TOO_MANY_REQUESTS.respond(ctx).await;
}
service.call(ctx).await
}
async fn connection_limit<S>(service: &S, conn: Stream) -> Result<S::Response, S::Error>
where
S: Service<Stream, Response = ()>,
{
match &conn {
Stream::Tcp(.., addr) => {
// drop connection on condition.
if check_addr(addr) {
return Ok(());
}
// delay handling on condition.
if check_addr(addr) {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
}
_ => {}
}
service.call(conn).await
}
// arbitrary function for checking client address
fn check_addr(_: &std::net::SocketAddr) -> bool {
false
}
This is a simple example of how to tap into both the high level WebContext
and the low level Stream
types for different layers of rate limiting. The high level one is what you want to do in most cases. The latter goes into layer 4 type handling directly and can be error-prone if you are not familiar with it.
And also keep these notes in mind:
- SocketAddr is not a reliable source of identity. You should factor in request headers like X-Forwarded-For provided by proxies.
- enclosed_fn is stateless, so the example is purely laying out the position and possible logic of limiting; in a practical sense you have to use the ServiceExt::enclosed API and a stateful middleware. You can reference it here.
- If request-level limiting like what request_limit is doing in the above example is enough, then it's not necessary to interact with xitca-xxx crates other than xitca-web.
@andodeki This is an example of rate limiting with xitca-web
Cargo.toml
[dependencies]
xitca-web = { version = "0.2.2", features = ["rate-limit"] }
main.rs
use xitca_web::{handler::handler_service, middleware::rate_limit::RateLimit, route::get, App};
fn main() -> std::io::Result<()> {
App::new()
.at("/", get(handler_service(|| async { "hello,world!" })))
// limit a client to 60 requests per minute based on its ip address.
.enclosed(RateLimit::per_minute(60))
.serve()
.bind("127.0.0.1:8080")?
.run()
.wait()
}
Alright, let me test this out.
xitca-web 0.2 has been released addressing everything in this issue.
I am trying to implement a Xitca router with state but I am having trouble. I have an axum implementation as below in system.rs
then in main.rs I have
So the question is: can I return a router from another file and merge the router in main where the Xitca server is started?