Open · dtaht opened 1 year ago
Not sure if this is IPv6-specific. "Unable to measure latency to server" means there were no pong
replies from the server. That could be due to packet loss or a timeout (500 ms).
The same test, on the same platform, over IPv4 does work, but yes, it could be something else.
Perhaps a firewall preventing IPv6 UDP connectivity?
no firewall in play here.
@richb-hanover - you using ipv6?
My configuration works fine for IPv4. I can make a crash happen when I use IPv6...
Test connectivity using the address found from ifconfig:
?134 release % ping6 fd7e:5a1e:b4b3::cd8
PING6(56=40+8+8 bytes) fd7e:5a1e:b4b3:0:462:4d7f:4274:9df0 --> fd7e:5a1e:b4b3::cd8
16 bytes from fd7e:5a1e:b4b3::cd8, icmp_seq=0 hlim=64 time=3.024 ms
16 bytes from fd7e:5a1e:b4b3::cd8, icmp_seq=1 hlim=64 time=9.159 ms
16 bytes from fd7e:5a1e:b4b3::cd8, icmp_seq=2 hlim=64 time=8.820 ms
...
Server log
llladmin@odroid:~/src/crusader/src/target/release$ ./crusader serve
Server running...
Error from client [fd7e:5a1e:b4b3:0:462:4d7f:4274:9df0]:58779: Expected object
Client log
?134 release % RUST_BACKTRACE=full ./crusader test fd7e:5a1e:b4b3::cd8
Connected to server [fd7e:5a1e:b4b3::cd8]:35481
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 57, kind: NotConnected, message: "Socket is not connected" }', crusader-lib/src/test.rs:1318:10
stack backtrace:
0: 0x106c76ef4 - __mh_execute_header
1: 0x106c929cb - __mh_execute_header
2: 0x106c719f8 - __mh_execute_header
3: 0x106c7842d - __mh_execute_header
4: 0x106c7817e - __mh_execute_header
5: 0x106c78968 - __mh_execute_header
6: 0x106c788a3 - __mh_execute_header
7: 0x106c77377 - __mh_execute_header
8: 0x106c7857a - __mh_execute_header
9: 0x106ca9c93 - __mh_execute_header
10: 0x106ca9df5 - __mh_execute_header
11: 0x106b7d9dc - __mh_execute_header
12: 0x106a91279 - __mh_execute_header
13: 0x106a95936 - __mh_execute_header
zsh: abort RUST_BACKTRACE=full ./crusader test fd7e:5a1e:b4b3::cd8
Tested with v0.0.10-testing and it seems to work fine with IPv6 (public IPv6 or local IPv6)
Server: Ubuntu 22.04 Proxmox PVE 8.0 LXC container (Intel N100 mini PC connected to RT-AX86U 2.5G LAN port)
Client: Windows 11 x64 on wireless (Acer Windows 11 laptop with Intel AX201 WiFi 6 adapter, Asus RT-AX86U router)
PS C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test 2400:d802:xxxx:xxxx:xxxx:xxxx:feda:fbd0
Connected to server [2400:d802:xxxx:xxxx:xxxx:xxxx:feda:fbd0]:35481
Latency to server 2.06 ms
Testing download...
Testing upload...
Testing both download and upload...
Writing data...
Saved raw data as data 2024.03.24 10-26-03.crr
Saved plot as plot 2024.03.24 10-26-03.png
PS C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test fe80::xxxx:xxxx:feda:fbd0
Connected to server [fe80::xxxx:xxxx:feda:fbd0]:35481
Latency to server 1.73 ms
Testing download...
Testing upload...
Testing both download and upload...
Warning: Load termination timed out. There may be residual untracked traffic in the background.
Writing data...
Saved raw data as data 2024.03.24 10-26-51.crr
Saved plot as plot 2024.03.24 10-26-51.png
I re-ran my experiment at https://github.com/Zoxc/crusader/issues/9#issuecomment-1890951312 ensuring that both server and client were the current commit - a89abd26a. It ran successfully without an error/crash. I think this issue can be closed.
Hmm, my previous test was done on the local network.
Today I tried testing over the internet, and the same problem is back: IPv4 passed, IPv6 failed.
ping and iperf3 over IPv6 are okay.
crusader v0.0.10-testing
Server: OpenWRT 23.05 Proxmox PVE 8.0 virtual router (Intel N100 mini PC) -- this is on network 1 (one public IPv4 address and /56 IPv6)
Client: Windows 11 x64 laptop (Acer with Intel AX201 WiFi 6 adapter), connected wirelessly through an Asus RT-AX86U -- this is on network 2 (another public IPv4 address, another /56 IPv6)
C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test 219.75.xx.xxx --download
Connected to server 219.75.xx.xxx:35481
Latency to server 2.43 ms
Testing download...
Writing data...
Saved raw data as data 2024.03.26 11-45-43.crr
Saved plot as plot 2024.03.26 11-45-43.png
C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test 219.75.xx.xxx --upload
Connected to server 219.75.xx.xxx:35481
Latency to server 2.64 ms
Testing upload...
Writing data...
Saved raw data as data 2024.03.26 11-46-06.crr
Saved plot as plot 2024.03.26 11-46-06.png
C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test 2400:d802:xxx::1:xxxx --download
Connected to server [2400:d802:xxx::1:xxxx]:35481
thread 'main' panicked at crusader-lib\src\test.rs:1318:10:
called `Result::unwrap()` on an `Err` value: "Unable to measure latency to server"
stack backtrace:
0: 0x7ff632d4ec83 - <unknown>
1: 0x7ff632d69eed - <unknown>
2: 0x7ff632d4b7d1 - <unknown>
3: 0x7ff632d4ea8a - <unknown>
4: 0x7ff632d50b79 - <unknown>
5: 0x7ff632d5083b - <unknown>
6: 0x7ff632d51064 - <unknown>
7: 0x7ff632d50f35 - <unknown>
8: 0x7ff632d4f319 - <unknown>
9: 0x7ff632d50c44 - <unknown>
10: 0x7ff632d92277 - <unknown>
11: 0x7ff632d92733 - <unknown>
12: 0x7ff632c0f1ea - <unknown>
13: 0x7ff632b996f6 - <unknown>
14: 0x7ff632b9da46 - <unknown>
15: 0x7ff632b9da8c - <unknown>
16: 0x7ff632d46fe8 - <unknown>
17: 0x7ff632b9da78 - <unknown>
18: 0x7ff632d70ee0 - <unknown>
19: 0x7ff973547344 - BaseThreadInitThunk
20: 0x7ff9740426b1 - RtlUserThreadStart
C:\work\speedtest\crusader-x86_64-pc-windows-msvc> .\crusader.exe test 2400:d802:xxx::1:xxxx --upload
Connected to server [2400:d802:xxx::1:xxxx]:35481
thread 'main' panicked at crusader-lib\src\test.rs:1318:10:
called `Result::unwrap()` on an `Err` value: "Unable to measure latency to server"
stack backtrace:
0: 0x7ff632d4ec83 - <unknown>
1: 0x7ff632d69eed - <unknown>
2: 0x7ff632d4b7d1 - <unknown>
3: 0x7ff632d4ea8a - <unknown>
4: 0x7ff632d50b79 - <unknown>
5: 0x7ff632d5083b - <unknown>
6: 0x7ff632d51064 - <unknown>
7: 0x7ff632d50f35 - <unknown>
8: 0x7ff632d4f319 - <unknown>
9: 0x7ff632d50c44 - <unknown>
10: 0x7ff632d92277 - <unknown>
11: 0x7ff632d92733 - <unknown>
12: 0x7ff632c0f1ea - <unknown>
13: 0x7ff632b996f6 - <unknown>
14: 0x7ff632b9da46 - <unknown>
15: 0x7ff632b9da8c - <unknown>
16: 0x7ff632d46fe8 - <unknown>
17: 0x7ff632b9da78 - <unknown>
18: 0x7ff632d70ee0 - <unknown>
19: 0x7ff973547344 - BaseThreadInitThunk
20: 0x7ff9740426b1 - RtlUserThreadStart
C:\work\speedtest\crusader-x86_64-pc-windows-msvc> ping 2400:d802:xxx::1:xxxx
Pinging 2400:d802:xxx::1:xxxx with 32 bytes of data:
Reply from 2400:d802:xxx::1:xxxx : time=3ms
Reply from 2400:d802:xxx::1:xxxx : time=3ms
Reply from 2400:d802:xxx::1:xxxx : time=3ms
Reply from 2400:d802:xxx::1:xxxx : time=3ms
Ping statistics for 2400:d802:xxx::1:xxxx :
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 3ms, Maximum = 3ms, Average = 3ms
C:\work\speedtest\iperf-3.16-win64> .\iperf3.exe -c 2400:d802:xxx::1:xxxx -R
Connecting to host 2400:d802:xxx::1:xxxx, port 5201
Reverse mode, remote host 2400:d802:xxx::1:xxxx is sending
[ 5] local 2400:d802:xxxx:xxxx:xxx:xxxx:xxxx:xxxx port 63833 connected to 2400:d802:xxx::1:xxxx port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.01 sec 36.6 MBytes 304 Mbits/sec
[ 5] 1.01-2.00 sec 72.6 MBytes 617 Mbits/sec
[ 5] 2.00-3.01 sec 72.4 MBytes 602 Mbits/sec
[ 5] 3.01-4.01 sec 71.6 MBytes 602 Mbits/sec
[ 5] 4.01-5.02 sec 85.4 MBytes 710 Mbits/sec
[ 5] 5.02-6.01 sec 77.1 MBytes 650 Mbits/sec
[ 5] 6.01-7.01 sec 75.4 MBytes 636 Mbits/sec
[ 5] 7.01-8.01 sec 93.9 MBytes 781 Mbits/sec
[ 5] 8.01-9.01 sec 62.9 MBytes 531 Mbits/sec
[ 5] 9.01-10.01 sec 59.5 MBytes 496 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.03 sec 710 MBytes 594 Mbits/sec 137 sender
[ 5] 0.00-10.01 sec 707 MBytes 593 Mbits/sec receiver
iperf Done.
C:\work\speedtest\iperf-3.16-win64> .\iperf3.exe -c 2400:d802:xxx::1:xxxx
Connecting to host 2400:d802:xxx::1:xxxx, port 5201
[ 5] local 2400:d802:xxxx:xxxx:xxx:xxxx:xxxx:xxxx port 63840 connected to 2400:d802:xxx::1:xxxx port 5201
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 35.4 MBytes 296 Mbits/sec
[ 5] 1.00-2.00 sec 51.9 MBytes 435 Mbits/sec
[ 5] 2.00-3.01 sec 69.2 MBytes 579 Mbits/sec
[ 5] 3.01-4.01 sec 74.5 MBytes 622 Mbits/sec
[ 5] 4.01-5.00 sec 75.4 MBytes 637 Mbits/sec
[ 5] 5.00-6.01 sec 72.5 MBytes 601 Mbits/sec
[ 5] 6.01-7.00 sec 77.0 MBytes 654 Mbits/sec
[ 5] 7.00-8.01 sec 66.8 MBytes 555 Mbits/sec
[ 5] 8.01-9.01 sec 54.2 MBytes 457 Mbits/sec
[ 5] 9.01-10.01 sec 73.9 MBytes 616 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 5] 0.00-10.01 sec 651 MBytes 545 Mbits/sec sender
[ 5] 0.00-10.03 sec 651 MBytes 544 Mbits/sec receiver
iperf Done.
The OpenWRT firewall rule should be correct.
@richb-hanover
Just wondering if you can test over the internet to see whether you can reproduce the issue. Thanks.
Two thoughts:
I can't easily test IPv6 across the public internet. My ISP doesn't provide native IPv6, and I don't have an IPv6 tunnel set up right now. I won't have time to set that up soon, but if I get it going, I'll report back.
I see the log shows "Connected to server..." then the "Main thread panicked..." message. Has anyone looked at the code to see if you can winkle out what exactly is happening? Thanks.
@mcuee Were both UDP and TCP forwarded? There seems to be some problem with IPv6 UDP here.
In this particular case, I am running crusader directly on the OpenWRT router, so I am not using port forwarding, but rather Firewall -- Traffic Rules.
And yes both TCP and UDP are allowed.
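For reference, a traffic rule that accepts both TCP and UDP to the crusader port on the router itself looks roughly like this in OpenWRT's `/etc/config/firewall` (a sketch only; the rule name is an assumption, and the same rule is normally created through Firewall -- Traffic Rules in LuCI):

```
config rule
	option name 'Allow-Crusader'
	option src 'wan'
	list proto 'tcp'
	list proto 'udp'
	option dest_port '35481'
	option family 'ipv6'
	option target 'ACCEPT'
```

Note that leaving `dest` unset makes this an input rule (traffic to the router itself), which matches running the crusader server directly on the router.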
If you feel like investigating, you could try to run Wireguard and see which directions send or receive UDP traffic.
You mean to say Wireshark, right?
I will need to learn Wireshark first. Haha. I will see what I can do.
Yeah I meant Wireshark =P
Simple things first -- use the debug build to get a symbolized trace.
PS C:\work\speedtest\crusader\src\target\debug> .\crusader test 2400:d802:xxxx::1:xxxx --download
Connected to server [2400:d802:xxxx::1:xxxx]:35481
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Unable to measure latency to server"', crusader-lib\src\test.rs:1318:10
stack backtrace:
0: 0x7ff7fe84fa22 - std::backtrace_rs::backtrace::dbghelp::trace
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\..\..\backtrace\src\backtrace\dbghelp.rs:98
1: 0x7ff7fe84fa22 - std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: 0x7ff7fe84fa22 - std::sys_common::backtrace::_print_fmt
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\sys_common\backtrace.rs:65
3: 0x7ff7fe84fa22 - std::sys_common::backtrace::_print::impl$0::fmt
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\sys_common\backtrace.rs:44
4: 0x7ff7fe86bddb - core::fmt::write
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\core\src\fmt\mod.rs:1232
5: 0x7ff7fe84b85a - std::io::Write::write_fmt<std::sys::windows::stdio::Stderr>
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\io\mod.rs:1684
6: 0x7ff7fe84f76b - std::sys_common::backtrace::_print
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\sys_common\backtrace.rs:47
7: 0x7ff7fe84f76b - std::sys_common::backtrace::print
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\sys_common\backtrace.rs:34
8: 0x7ff7fe851d79 - std::panicking::default_hook::closure$1
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:271
9: 0x7ff7fe8519fb - std::panicking::default_hook
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:290
10: 0x7ff7fe8524a8 - std::panicking::rust_panic_with_hook
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:692
11: 0x7ff7fe85239e - std::panicking::begin_panic_handler::closure$0
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:583
12: 0x7ff7fe850409 - std::sys_common::backtrace::__rust_end_short_backtrace<std::panicking::begin_panic_handler::closure_env$0,never$>
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\sys_common\backtrace.rs:150
13: 0x7ff7fe852050 - std::panicking::begin_panic_handler
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:579
14: 0x7ff7fe877705 - core::panicking::panic_fmt
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\core\src\panicking.rs:64
15: 0x7ff7fe877c26 - core::result::unwrap_failed
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\core\src\result.rs:1750
16: 0x7ff7fe31d2af - enum2$<core::result::Result<crusader_lib::file_format::RawResult,alloc::boxed::Box<dyn$<core::error::Error>,alloc::alloc::Global> > >::unwrap<crusader_lib::file_format::RawResult,alloc::boxed::Box<dyn$<core::error::Error>,alloc::alloc::Global> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\core\src\result.rs:1090
17: 0x7ff7fe31595a - crusader_lib::test::test
at C:\work\speedtest\crusader\src\crusader-lib\src\test.rs:1316
18: 0x7ff7fe1a3fbd - crusader::main
at C:\work\speedtest\crusader\src\crusader\src\main.rs:120
19: 0x7ff7fe1af62b - core::ops::function::FnOnce::call_once<void (*)(),tuple$<> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\core\src\ops\function.rs:250
20: 0x7ff7fe1ab53e - std::sys_common::backtrace::__rust_begin_short_backtrace<void (*)(),tuple$<> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\std\src\sys_common\backtrace.rs:134
21: 0x7ff7fe1ab53e - std::sys_common::backtrace::__rust_begin_short_backtrace<void (*)(),tuple$<> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\std\src\sys_common\backtrace.rs:134
22: 0x7ff7fe1a71c1 - std::rt::lang_start::closure$0<tuple$<> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\std\src\rt.rs:166
23: 0x7ff7fe84635e - core::ops::function::impls::impl$2::call_once
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\core\src\ops\function.rs:287
24: 0x7ff7fe84635e - std::panicking::try::do_call
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:487
25: 0x7ff7fe84635e - std::panicking::try
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:451
26: 0x7ff7fe84635e - std::panic::catch_unwind
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panic.rs:140
27: 0x7ff7fe84635e - std::rt::lang_start_internal::closure$2
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\rt.rs:148
28: 0x7ff7fe84635e - std::panicking::try::do_call
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:487
29: 0x7ff7fe84635e - std::panicking::try
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panicking.rs:451
30: 0x7ff7fe84635e - std::panic::catch_unwind
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\panic.rs:140
31: 0x7ff7fe84635e - std::rt::lang_start_internal
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc/library\std\src\rt.rs:148
32: 0x7ff7fe1a719a - std::rt::lang_start<tuple$<> >
at /rustc/84c898d65adf2f39a5a98507f1fe0ce10a2b8dbc\library\std\src\rt.rs:165
33: 0x7ff7fe1a6fb9 - main
34: 0x7ff7fe875650 - invoke_main
at D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:78
35: 0x7ff7fe875650 - __scrt_common_main_seh
at D:\a\_work\1\s\src\vctools\crt\vcstartup\src\startup\exe_common.inl:288
36: 0x7ffa67cb257d - BaseThreadInitThunk
37: 0x7ffa698aaa58 - RtlUserThreadStart
I am using a Proxmox PVE 8.0 LXC container as the client this time.
crusader v0.0.10-testing
Server: OpenWRT 23.05 Proxmox PVE 8.0 virtual router (Intel N100 mini PC) -- this is on network 1 (one public IPv4 address and /56 IPv6 addresses)
Client: Debian 12 LXC container under PVE 8.0 (Intel N100 mini PC) -- this is on network 2 (another public IPv4 address and /56 IPv6 addresses)
Here is the output from tcpdump. Hopefully it will give some hints about the potential issue. Note: I did not edit the public IPv6 addresses of the server and client this time as they are anyway dynamic. tcpdump2.txt
More detailed tcpdump output using -v
tcpdump3.txt
@dtaht
Just wondering if you can test over the internet again to see whether the issue exists on your side. Thanks.
My setup is a bit strange in that the two home networks actually share the same upstream ONT and a smart switch. This is why I can only test either download or upload, but not both.
ISP ONT -- TP-Link TL-SG105E smart switch --> two networks
Network 1 -- Asus RT-AX86U router -- PVE 8.0 host (Intel N100 mini PC 1 with dual 2.5G NICs)
Network 2 -- OpenWRT virtual router on PVE 8.0 host (Intel N100 mini PC 2 with quad 2.5G NICs) -- Asus RT-AX82U AP
root@lqos:~# crusader test --stream-stagger 5 --streams 2 --load-duration 10 fd77::2
Connected to server [fd77::2]:35481
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Unable to measure latency to server"', crusader-lib/src/test.rs:1304:10
stack backtrace:
0: 0x7fdf8f501ddd -
1: 0x7fdf8f53bc0c -
2: 0x7fdf8f4fc771 -
3: 0x7fdf8f503335 -
4: 0x7fdf8f503056 -
5: 0x7fdf8f5038c6 -
6: 0x7fdf8f5037b7 -
7: 0x7fdf8f502294 -
8: 0x7fdf8f5034e9 -
9: 0x7fdf8f388103 -
10: 0x7fdf8f3881f3 -
11: 0x7fdf8f3e6c8a -
12: 0x7fdf8f38e850 -
13: 0x7fdf8f392c63 -
Aborted (core dumped)