Closed neetdai closed 1 year ago
Could not reproduce. Could you provide more information? Which operation caused this error: `bind`, `connect`, or `accept`?
I guess it's `connect` that caused this problem. A client needs to bind to a localhost port to connect to a remote address. Your firewall may forbid binding so many ports in a short time, or many services on your machine may already be holding many ports. I think it's not a bug.
After setting `RUST_BACKTRACE=1`:
Gnuplot not found, using plotters backend
Benchmarking tcp/compio: Warming up for 3.0000 s
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 52, kind: Uncategorized, message: "You were not connected because a duplicate name exists on the network. If joining a domain, go to System in Control Panel to change the computer name and try again. If joining a workgroup, choose another workgroup name." }', benches\net.rs:49:65
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library\std\src\panicking.rs:593
1: core::panicking::panic_fmt
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library\core\src\panicking.rs:67
2: core::result::unwrap_failed
at /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be/library\core\src\result.rs:1651
3: async_task::raw::RawTask<F,T,S,M>::run
4: compio::task::runtime::Runtime::block_on
5: compio::task::block_on
6: criterion::bencher::AsyncBencher<A,M>::iter
7: <criterion::routine::Function<M,F,T> as criterion::routine::Routine<M,T>>::warm_up
8: criterion::routine::Routine::sample
9: criterion::analysis::common
10: criterion::benchmark_group::BenchmarkGroup<M>::bench_function
11: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::take_box
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'main' panicked at 'cannot access a Thread Local Storage value during or after destruction: AccessError', /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be\library\std\src\thread\local.rs:246:26
stack backtrace:
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
thread 'main' panicked at 'aborting the process', C:\Users\neet\.cargo\registry\src\mirrors.ustc.edu.cn-61ef6e0cd06fb9b8\async-task-4.4.0\src\utils.rs:17:5
stack backtrace:
error: bench failed, to rerun pass `--bench net`
Caused by:
process didn't exit successfully: `E:\rust\compio\target\release\deps\net-bf4587aad27d5461.exe --bench` (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
Just running the `tcp compio` bench twice reproduces it.
This doesn't look like a problem caused by too many clients, because running the `tcp tokio` bench twice in the same way appends:
Benchmarking tcp/tokio: Warming up for 3.0000 s
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { code: 10048, kind: AddrInUse, message: "Only one usage of each socket address (protocol/network address/port) is normally permitted." }', benches\net.rs:33:66
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: bench failed, to rerun pass `--bench net`
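For what it's worth, OS error 10048 maps to `ErrorKind::AddrInUse`: the tokio bench binds the fixed port 9999 anew on every iteration, so any socket still holding that address makes the next bind fail. A minimal sketch (using std networking, not the tokio types from the bench) of the same error kind:

```rust
use std::io::ErrorKind;
use std::net::TcpListener;

// Binding a second listener to an address that is still in use fails
// with AddrInUse (OS error 10048 on Windows, EADDRINUSE elsewhere).
fn second_bind_errkind() -> ErrorKind {
    let first = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = first.local_addr().unwrap();
    TcpListener::bind(addr).unwrap_err().kind()
}

fn main() {
    println!("second bind failed with: {:?}", second_bind_errkind());
    // prints "second bind failed with: AddrInUse"
}
```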
> Just running the `tcp compio` bench twice reproduces it.

What do you mean by twice? Did you change the code, or run `cargo bench` twice?
> Because running the `tcp tokio` bench twice in the same way appends:

Do you mean that tokio also causes such a panic?
I can reproduce this, but not reliably. I still guess it is because the client or server is created too many times.
> What do you mean by twice? Did you change the code, or run `cargo bench` twice?
use criterion::{async_executor::AsyncExecutor, criterion_group, criterion_main, Criterion};

// criterion_group!(net, tcp, udp);
criterion_group!(net, tcp);
criterion_main!(net);

struct CompioRuntime;

impl AsyncExecutor for CompioRuntime {
    fn block_on<T>(&self, future: impl std::future::Future<Output = T>) -> T {
        compio::task::block_on(future)
    }
}

fn tcp(c: &mut Criterion) {
    const PACKET_LEN: usize = 1048576;
    static PACKET: &[u8] = &[1u8; PACKET_LEN];

    let mut group = c.benchmark_group("tcp");
    // group.bench_function("tokio", |b| {
    //     let runtime = tokio::runtime::Builder::new_current_thread()
    //         .enable_all()
    //         .build()
    //         .unwrap();
    //     b.to_async(&runtime).iter(|| async {
    //         use tokio::io::{AsyncReadExt, AsyncWriteExt};
    //         let listener = tokio::net::TcpListener::bind("127.0.0.1:9999").await.unwrap();
    //         let addr = listener.local_addr().unwrap();
    //         let tx = tokio::net::TcpStream::connect(addr);
    //         let rx = listener.accept();
    //         let (mut tx, (mut rx, _)) = tokio::try_join!(tx, rx).unwrap();
    //         tx.write_all(PACKET).await.unwrap();
    //         let mut buffer = Vec::with_capacity(PACKET_LEN);
    //         while buffer.len() < PACKET_LEN {
    //             rx.read_buf(&mut buffer).await.unwrap();
    //         }
    //         buffer
    //     })
    // });
    group.bench_function("compio", |b| {
        b.to_async(CompioRuntime).iter(|| async {
            let listener = compio::net::TcpListener::bind("127.0.0.1:9998").unwrap();
            let addr = listener.local_addr().unwrap();
            let tx = compio::net::TcpStream::connect(addr);
            let rx = listener.accept();
            let (tx, (rx, _)) = futures_util::try_join!(tx, rx).unwrap();
            tx.send_all(PACKET).await.0.unwrap();
            let buffer = Vec::with_capacity(PACKET_LEN);
            let (recv, buffer) = rx.recv_exact(buffer).await;
            recv.unwrap();
            buffer
        })
    });
    group.finish();
}

// fn udp(c: &mut Criterion) {
//     const PACKET_LEN: usize = 1024;
//     static PACKET: &[u8] = &[1u8; PACKET_LEN];
//     let mut group = c.benchmark_group("udp");
//     // The socket may be dropped by firewall when the number is too large.
//     #[cfg(target_os = "linux")]
//     group
//         .sample_size(16)
//         .measurement_time(std::time::Duration::from_millis(2))
//         .warm_up_time(std::time::Duration::from_millis(2));
//     group.bench_function("tokio", |b| {
//         let runtime = tokio::runtime::Builder::new_current_thread()
//             .enable_all()
//             .build()
//             .unwrap();
//         b.to_async(&runtime).iter(|| async {
//             let rx = tokio::net::UdpSocket::bind("127.0.0.1:0").await.unwrap();
//             let addr_rx = rx.local_addr().unwrap();
//             let tx = tokio::net::UdpSocket::bind("127.0.0.1:0").await.unwrap();
//             let addr_tx = tx.local_addr().unwrap();
//             rx.connect(addr_tx).await.unwrap();
//             tx.connect(addr_rx).await.unwrap();
//             {
//                 let mut pos = 0;
//                 while pos < PACKET_LEN {
//                     let res = tx.send(&PACKET[pos..]).await;
//                     pos += res.unwrap();
//                 }
//             }
//             {
//                 let mut buffer = vec![0; PACKET_LEN];
//                 let mut pos = 0;
//                 while pos < PACKET_LEN {
//                     let res = rx.recv(&mut buffer[pos..]).await;
//                     pos += res.unwrap();
//                 }
//                 buffer
//             }
//         })
//     });
//     group.bench_function("compio", |b| {
//         b.to_async(CompioRuntime).iter(|| async {
//             let rx = compio::net::UdpSocket::bind("127.0.0.1:0").unwrap();
//             let addr_rx = rx.local_addr().unwrap();
//             let tx = compio::net::UdpSocket::bind("127.0.0.1:0").unwrap();
//             let addr_tx = tx.local_addr().unwrap();
//             rx.connect(addr_tx).unwrap();
//             tx.connect(addr_rx).unwrap();
//             {
//                 let mut pos = 0;
//                 while pos < PACKET_LEN {
//                     let (res, _) = tx.send(&PACKET[pos..]).await;
//                     pos += res.unwrap();
//                 }
//             }
//             {
//                 let mut buffer = Vec::with_capacity(PACKET_LEN);
//                 let mut res;
//                 while buffer.len() < PACKET_LEN {
//                     (res, buffer) = rx.recv(buffer).await;
//                     res.unwrap();
//                 }
//                 buffer
//             }
//         })
//     });
//     group.finish();
// }
cargo bench --bench net
Run the bench; after it completes, run it again.
rustc 1.72.0 (MSVC), Windows 10
Benched your code on Windows 11 but still cannot reproduce the problem.
By the way, I've fixed some bugs on master. Please ensure you are using the latest code.
The problem still continues after upgrading to the latest code. I guess `connect` isn't the cause.
3: async_task::raw::RawTask<F,T,S,M>::run
4: compio::task::runtime::Runtime::block_on
5: compio::task::block_on
thread 'main' panicked at 'cannot access a Thread Local Storage value during or after destruction: AccessError', /rustc/5680fa18feaa87f3ff04063800aec256c3d4b4be\library\std\src\thread\local.rs:246:26
The log you pasted is not the root cause, but just a chain reaction after the first panic.
Besides the benchmark, did you find any other panics? If not, I think it's not a big problem... maybe :)
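To illustrate the chain-reaction point: while the first panic unwinds, destructors (including TLS destructors) still run, and a second panic raised there aborts the process, which is what the `STATUS_STACK_BUFFER_OVERRUN` exit code reflects. A minimal sketch (hypothetical names) showing that cleanup code executes during unwinding of the first panic:

```rust
use std::panic::catch_unwind;
use std::sync::atomic::{AtomicBool, Ordering};

static DROPPED_WHILE_UNWINDING: AtomicBool = AtomicBool::new(false);

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {
        // Destructors run while the first panic unwinds; if one of them
        // panics again (e.g. by touching a destroyed TLS value), the
        // process aborts instead of reporting the root cause.
        if std::thread::panicking() {
            DROPPED_WHILE_UNWINDING.store(true, Ordering::SeqCst);
        }
    }
}

fn drops_run_during_unwind() -> bool {
    let result = catch_unwind(|| {
        let _n = Noisy;
        panic!("root cause");
    });
    result.is_err() && DROPPED_WHILE_UNWINDING.load(Ordering::SeqCst)
}

fn main() {
    println!(
        "cleanup ran during unwinding: {}",
        drops_run_during_unwind()
    );
}
```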
Any updates?
Feel free to open new issues if you find any other panics.