surban / aggligator

Aggregates multiple links (TCP, Bluetooth, USB or similar) into one connection having their combined bandwidth and provides resiliency against failure of individual links.
https://crates.io/crates/aggligator

Doesn't build on FreeBSD #6

Closed cmspam closed 4 months ago

cmspam commented 6 months ago

See here:

https://github.com/ikatson/rqbit/pull/53/files

There's an issue with network-interface that causes it to not build on BSD. Unfortunately, the fork they use runs into further issues, because we end up with this:


   --> aggligator-util/src/transport/tcp.rs:485:21
    |
485 |                 let Some(addr) = ifn.addr else { continue };
    |                     ^^^^^^^^^^   -------- this expression has type `Vec<Addr>`
    |                     |
    |                     expected `Vec<Addr>`, found `Option<_>`
    |
    = note: expected struct `Vec<Addr>`
                 found enum `Option<_>`
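For anyone hitting the same error: the fork evidently changed the interface's `addr` field from `Option<Addr>` to `Vec<Addr>`, so the `let Some(addr) = ifn.addr else { continue }` pattern no longer type-checks. A minimal sketch of one way to adapt such a call site, using stand-in types rather than the real crate's:

```rust
// Stand-ins for the crate's types (hypothetical, for illustration only).
struct Addr(u32);
struct Ifn {
    addr: Vec<Addr>, // was Option<Addr> before the fork's change
}

// `.first()` restores the old Option semantics over the Vec field;
// iterating over `ifn.addr` instead would use every address.
fn first_addr(ifn: &Ifn) -> Option<&Addr> {
    ifn.addr.first()
}

fn main() {
    let ifn = Ifn { addr: vec![Addr(1), Addr(2)] };
    println!("{}", first_addr(&ifn).map(|a| a.0).unwrap_or(0)); // prints 1
}
```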
PaperStrike commented 6 months ago

The build error on Windows can be fixed with #3; it may also help on BSD.

You may meet #4 later on.

surban commented 6 months ago

Please try with aggligator-util 0.12.1.

cmspam commented 5 months ago

Thank you. I have tried. Initially we get this error (same as before):

error[E0433]: failed to resolve: use of undeclared crate or module `libc`
  --> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/network-interface-1.1.1/src/target/getifaddrs.rs:25:18
   |
25 |         unsafe { libc::freeifaddrs(self.base) }
   |                  ^^^^ use of undeclared crate or module `libc`
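The undeclared `libc` suggests network-interface only pulls in `libc` for the target OSes it already knows about. A hypothetical fix in that crate's Cargo.toml (the crate's actual target gates may differ) would add FreeBSD to the targets that get the dependency:

```toml
# Hypothetical: enable the libc dependency on FreeBSD as well.
# The real cfg expression in network-interface's Cargo.toml may differ.
[target.'cfg(target_os = "freebsd")'.dependencies]
libc = "0.2"
```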

Of course this is not a problem with aggligator itself but with network-interface. Still, since aggligator is REALLY useful for any OS that doesn't support MPTCP, like FreeBSD, I think it would be really nice to get it working, perhaps without using network-interface at all, to ensure compatibility.

We then try the branch of network-interface mentioned in my first post that compiles on BSD, and we get a successful build. But running the agg-speed client, we get: Error: No connection transports.

Running as a server, we get a blank screen with no interfaces shown.

Honestly, I think network-interface is just incompatible with FreeBSD.

I'm not familiar enough with Rust or with what is available to use instead of network-interface, but if there is some cross-platform alternative with BSD support, it would be really lovely if you could look into implementing that instead.

surban commented 5 months ago

Why not contribute FreeBSD support to network-interface? They aim to be cross-platform and are surely open to contributions.

Since you are familiar with FreeBSD, it should be pretty straightforward to implement support for it, given that other UNIXes (MacOS, Linux) are already supported.

cmspam commented 5 months ago

> Why not contribute FreeBSD support to network-interface? They aim to be cross-platform and are surely open to contributions.
>
> Since you are familiar with FreeBSD, it should be pretty straightforward to implement support for it, given that other UNIXes (MacOS, Linux) are already supported.

It's something I would do, but I'm sadly not all that familiar with FreeBSD (also I'm not so familiar with rust). I've been trying to get into FreeBSD, but have a need for multipath networking, hence my report here. If I get to the point where I can, though, I will consider it for sure!

cmspam commented 4 months ago

It's been a while. I just want to give you a HUGE thank you. Aggligator is life-changing for my use case. I've had to depend on MPTCP, which meant always running Linux or Linux VMs, always making sure that MPTCP was supported, and doing various workarounds...

I've just done a bit of testing (albeit Linux-based) for all of the workloads that I use MPTCP for, and aggligator is actually on par with, or even outperforms, the previous MPTCP implementations in many cases!

I also want to thank you for your PR to get FreeBSD working with network-interface.

I will continue to experiment with aggligator over the upcoming days, particularly with BSD.

Again, a huge thanks.

cmspam commented 4 months ago

And an update, I can confirm that with your patch to network-interface it works on FreeBSD. I was able to build it, and run the features from the utility.

For whatever reason, unfortunately, performance is not good on FreeBSD, neither natively nor with the Linux binary compatibility layer. I guess something is just too different architecturally: 600-800 Mbps with iperf3 but only 50 Mbps with the aggligator speed test, while Linux shows 1200-1500 Mbps with dual connections to the same server in the aggligator speed test.

Even running a local connection, it caps out around 50 Mbps.

This is on a VM. Bare metal may be different.

I don't expect to troubleshoot the speed issues much further, and will just give up on multiplexed connections on native FreeBSD for the time being. Still, a big thanks.

surban commented 4 months ago

Could you check with the raw-speed tool for aggligator-util on FreeBSD for comparison?

surban commented 4 months ago

> For whatever reason, unfortunately, performance is not good in FreeBSD. Neither natively, nor with the Linux binary compatibility layer. I guess something is just too different architecturally. 600-800mbps with iperf3 but only 50mbps with aggligator speed test, while Linux shows 1200-1500mbps with dual connections to the same server with aggligator speed test.

Are you sure you are using a release build?

I have 300 MB/s upstream and 900 MB/s downstream between Linux and a FreeBSD 14 VM using agg-speed.

cmspam commented 4 months ago

> > For whatever reason, unfortunately, performance is not good in FreeBSD. Neither natively, nor with the Linux binary compatibility layer. I guess something is just too different architecturally. 600-800mbps with iperf3 but only 50mbps with aggligator speed test, while Linux shows 1200-1500mbps with dual connections to the same server with aggligator speed test.
>
> Are you sure you are using a release build?
>
> I have 300 MB/s upstream and 900 MB/s downstream between Linux and a FreeBSD 14 VM using agg-speed.

I will try testing under some other conditions. It seems to be CPU-bottlenecked in the VM. The best I can do is about 1/5 of Linux's performance in VMs, but I need to try bare metal to be really confident about it, which isn't something I can do immediately, especially server-side.

I built with cargo build --release, so unless there is another way to do it, I was using a release build.

I'm happy to see that it is resolved upstream now! I will continue testing and looking into it. Even if there is a performance issue, I don't think your software is to blame; it's probably just architecture differences or VM issues. Out of curiosity, is 300 MB/s upstream and 900 MB/s downstream similar to what you get between two Linux machines?

cmspam commented 4 months ago

I hadn't tried it since your last version release, so I gave it another try in both Linux and FreeBSD VMs running on the same computer with the same specs, speed testing against the same remote server.

The speeds vary a lot with the speed test, but this is more or less the highest speed I see after a few seconds.

Linux: TX 80 MB/s, RX 160 MB/s
FreeBSD: TX 20 MB/s, RX 80 MB/s

I made some configuration changes on the FreeBSD VM in sysctl.conf but I didn't find any difference.

I might note that 80 MB/s is basically the max I can get over one of the connections, but I can tell from the speed test UI that both connections are being used.

VM -> VM on the same machine was slower, due to CPU bottlenecks.

I suspect that this has to do with bugs with FreeBSD and virtio network drivers, however. So I will make some time soon to try on bare metal.

surban commented 4 months ago

Strange, my rates are much higher. What VM hypervisor are you using? I use KVM with virt-manager GUI.

cmspam commented 4 months ago

> Strange, my rates are much higher. What VM hypervisor are you using? I use KVM with virt-manager GUI.

I am also using KVM/QEMU, but managed with Incus. The hardware is not powerful, though; it's just a Celeron N5100. Still, it is enough for a Linux VM to show good performance, and for other software on FreeBSD (curl, iperf, other network utilities) to show higher network performance, so it's kind of a mystery to me.

Still, I have seen cases where FreeBSD network performance is horrible until it runs on bare metal, due to issues with hardware offloading on virtio, and I wouldn't rule that out as the cause. So I really need to get around to trying it on other machines and/or bare metal soon to see if there is a big difference.

cmspam commented 4 months ago

I put FreeBSD on a cloud server (aarch64) to do some more testing.

Unfortunately, I got a compile error:

error[E0308]: mismatched types
   --> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/network-interface-1.1.2/src/target/unix/mod.rs:75:31
    |
75  |     let len = unsafe { strlen(data as *const i8) };
    |                        ------ ^^^^^^^^^^^^^^^^^ expected `*const u8`, found `*const i8`
    |                        |
    |                        arguments to this function are incorrect
    |
    = note: expected raw pointer `*const u8`
               found raw pointer `*const i8`
note: function defined here
   --> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/libc-0.2.153/src/unix/mod.rs:541:12
    |
541 |     pub fn strlen(cs: *const c_char) -> size_t;
    |            ^^^^^^

For more information about this error, try `rustc --explain E0308`.
error: could not compile `network-interface` (lib) due to 1 previous error
warning: build failed, waiting for other jobs to finish...
error: failed to compile `aggligator-util v0.14.0`, intermediate artifacts can be found at `/tmp/cargo-installCHLPdm`.
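For reference, this looks like the classic `c_char` portability issue: `c_char` is `i8` on x86_64 but `u8` on aarch64, so network-interface's hard-coded `data as *const i8` cast fails on ARM. A hedged sketch of the portable pattern (not the crate's actual code) is to cast through `std::ffi::c_char` instead:

```rust
// `c_char` is `i8` on x86_64 but `u8` on aarch64; casting through
// `std::ffi::c_char` instead of hard-coding `*const i8` compiles on both.
use std::ffi::{c_char, CStr};

/// Length of a NUL-terminated C string, equivalent to libc::strlen
/// but portable across char signedness.
fn c_str_len(data: *const u8) -> usize {
    unsafe { CStr::from_ptr(data as *const c_char) }.to_bytes().len()
}

fn main() {
    let name = b"em0\0";
    println!("{}", c_str_len(name.as_ptr())); // prints 3
}
```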
cmspam commented 4 months ago

Using: network-interface = { git = 'https://github.com/ikatson/network-interface', branch = "compile-on-freebsd" }

I was able to build it on aarch64 FreeBSD, and the speed test results are promising; they seem to be equal to Linux in this case. So I believe my earlier problems probably stemmed from VM-related issues.

cmspam commented 4 months ago

Regarding the speed issues, I can confirm that they are not aggligator issues: using bare metal hardware, speed is as good as on Linux.