aya-rs / aya

Aya is an eBPF library for the Rust programming language, built with a focus on developer experience and operability.
https://aya-rs.dev/book/

Forwarding more packets than sent #1071

Open Enneking666 opened 1 month ago

Enneking666 commented 1 month ago

Hello, I have a problem with my in-kernel forwarding using aya eBPF and tokio, and I thought maybe someone here knows why this happens. For the eBPF program I use a simple stream_verdict program that redirects to another entry in the sock hash map:

#![no_std]
#![no_main]

use aya_ebpf::{
    macros::{map, stream_verdict},
    maps::SockHash,
    programs::SkBuffContext,
};
use aya_forwarder_common::SocketKey;

// Keyed by the remote (ip, port); user space inserts the fd of the socket
// that traffic matching that key should be redirected to.
#[map(name = "INTERCEPT_EGRESS")]
static mut INTERCEPT_EGRESS: SockHash<SocketKey> = SockHash::with_max_entries(8, 0);

#[stream_verdict]
pub fn aya_forwarder(ctx: SkBuffContext) -> u32 {
    // Build the lookup key from the remote endpoint of the current skb.
    let mut key = SocketKey {
        ip: u32::from_be(ctx.skb.remote_ipv4()),
        port: u32::from_be(ctx.skb.remote_port()),
    };
    // Redirect the skb to the socket stored under `key` in the SockHash.
    unsafe { INTERCEPT_EGRESS.redirect_skb(&ctx, &mut key, 0) as u32 }
}

#[cfg(not(test))]
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
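
For completeness, SocketKey in aya_forwarder_common is essentially just the remote (ip, port) pair used as the map key, i.e. something like this (simplified sketch, the real definition may differ in details):

// Sketch of the shared key type used by both the eBPF program and user space.
#[repr(C)]
#[derive(Clone, Copy)]
pub struct SocketKey {
    pub ip: u32,
    pub port: u32,
}

// On the user-space side the type additionally has to be usable as a map key,
// e.g. via an `unsafe impl aya::Pod for SocketKey {}` behind a "user" feature flag.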

In user space I then create the map entries like this:


// Key derived from the egress stream, value is the ingress socket's fd.
self.sock_hash
    .insert(
        Self::create_key(&egress_stream),
        ingress_stream.as_raw_fd(),
        0,
    )
    .unwrap();

println!("set egress hash entry");

// And the reverse entry: key derived from the ingress stream, value is the
// egress socket's fd.
self.sock_hash
    .insert(
        Self::create_key(&ingress_stream),
        egress_stream.as_raw_fd(),
        0,
    )
    .unwrap();

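create_key just builds the same SocketKey from the peer address of a stream. A minimal sketch of that idea (illustrative only, not my exact helper, and glossing over the byte-order handling that has to line up with the u32::from_be calls in the eBPF program):

use std::net::IpAddr;

use aya_forwarder_common::SocketKey;
use tokio::net::TcpStream;

// Illustrative sketch: derive the map key from the stream's peer address.
fn create_key(stream: &TcpStream) -> SocketKey {
    let peer = stream.peer_addr().expect("peer address");
    let ip = match peer.ip() {
        IpAddr::V4(v4) => u32::from(v4),
        IpAddr::V6(_) => unreachable!("the test setup is IPv4 only"),
    };
    SocketKey {
        ip,
        port: peer.port() as u32,
    }
}
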
I can show more of the connecting and accepting code that creates the streams, but it is very basic, so for brevity I'll share it only on request. For writing I use a simple write_all call on a Vec, and for reading I use the TcpStream read function with a [u8; 65535] buffer.
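
To make the data path concrete, the sending and receiving ends are essentially this (simplified sketch, not the exact code; the function names are just for illustration, and addresses, setup and error handling are omitted):

use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::{TcpListener, TcpStream};

// Client side: connect through the proxy and push one large Vec with write_all.
async fn send_payload(addr: &str, payload: Vec<u8>) -> std::io::Result<()> {
    let mut stream = TcpStream::connect(addr).await?;
    stream.write_all(&payload).await?;
    Ok(())
}

// Server side: accept one connection, read with a fixed 65535-byte buffer and
// count how many bytes arrive in total.
async fn count_received(listener: TcpListener) -> std::io::Result<usize> {
    let (mut stream, _) = listener.accept().await?;
    let mut buf = [0u8; 65535];
    let mut total = 0usize;
    loop {
        let n = stream.read(&mut buf).await?;
        if n == 0 {
            break; // peer closed the connection
        }
        total += n;
    }
    // For payloads around 50 MB this total comes out a few thousand bytes
    // larger than what the client sent.
    Ok(total)
}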

This is just a test program and all addresses are on localhost, with different ports for the client, the server and the proxy. It works completely fine and as expected, but for big payloads (roughly 50 MB and up) I read more bytes than were sent to the server stream. E.g. when sending 50 MB I will see 50 MB arrive in the kernel program, but the listener on the egress side usually receives a few thousand bytes more than that, e.g. 50,001,245 bytes. As described, the kernel sees the correct amount, so the extra bytes must appear somewhere between the SockHash redirect_skb and the TcpStream read(). If needed I can certainly provide more of the code.