Closed llvm-x86 closed 1 month ago
that's fine. (side note async-std is now built w/ smol so it's nice overlap)
I was interested in your RNG change to `Xorshift64::from(simple_seed())`. Is this expected to be faster? Impressive library.
`wtx` is going through a high-development phase and unfortunately the RNG area had to be modified to resemble something more logically accurate. Specifically, `Xorshift64` isn't more performant than the old `NoStdRng`, as both use the same algorithm (https://en.wikipedia.org/wiki/Xorshift).

If desirable, it is possible to use other random number generators like https://github.com/smol-rs/fastrand or https://crates.io/crates/rand_chacha.
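For reference, the xorshift64 step that both generators share is tiny. Below is a minimal stdlib-only sketch using the classic shift constants (13/7/17) from the Wikipedia article linked above; the exact constants and seeding in `wtx` may differ:

```rust
/// One step of the classic xorshift64 generator (Marsaglia, 2003).
/// The state must be non-zero, otherwise the sequence is stuck at 0.
fn xorshift64(state: &mut u64) -> u64 {
    let mut x = *state;
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    *state = x;
    x
}

fn main() {
    // Arbitrary non-zero seed; `simple_seed()` in wtx presumably derives
    // something similar from the environment.
    let mut state = 0x9E37_79B9_7F4A_7C15_u64;
    let a = xorshift64(&mut state);
    let b = xorshift64(&mut state);
    println!("{a} {b}");
}
```

Three shifts and three xors per output, no multiplication, which is why swapping one xorshift implementation for another yields no measurable speedup.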
In my evenly distributed benchmarks, the last performance measurement of `monoio` wasn't much different from `tokio`. Maybe this has changed? I will be back with more data.
Oh, I forgot you tested it. I was trying to bring something new. The library is awesome, thanks for open-sourcing it.
```rust
use monoio::net::TcpListener;
use monoio_compat::TcpStreamCompat;
use wtx::{
    misc::{simple_seed, Vector, Xorshift64},
    web_socket::{FrameBufferVec, OpCode, WebSocket, WebSocketBuffer},
};

#[monoio::main]
async fn main() {
    let listener = TcpListener::bind("0.0.0.0:9000").unwrap();
    loop {
        let (stream, _) = listener.accept().await.unwrap();
        let _jh = monoio::spawn(async move {
            let mut ws = WebSocket::accept(
                (),
                Xorshift64::from(simple_seed()),
                TcpStreamCompat::new(stream),
                WebSocketBuffer::with_capacity(0, 1024 * 16).unwrap(),
                |_| wtx::Result::Ok(()),
            )
            .await
            .unwrap();
            let mut fb = FrameBufferVec::new(Vector::with_capacity(1024 * 16).unwrap());
            loop {
                let mut frame = ws.read_frame(&mut fb).await.unwrap();
                match frame.op_code() {
                    OpCode::Binary | OpCode::Text => {
                        ws.write_frame(&mut frame).await.unwrap();
                    }
                    OpCode::Close => break,
                    _ => {}
                }
            }
        });
    }
}
```
https://github.com/c410-f3r/wtx/tree/monoio
In a WebSocket benchmark `monoio` scored ~3565 milliseconds while `tokio` scored ~1780 milliseconds, i.e. `monoio` was roughly 2x worse than `tokio`. It's not clear to me if `TcpStreamCompat` creates overhead or if something is missing.
How should I properly use a standard TLS stream for HTTPS endpoints?
```
Connected to ("ws.bitskins.com", 443)
connecting to "ws.bitskins.com"
thread 'main' panicked at src/ws_trader.rs:33:10:
called `Result::unwrap()` on an `Err` value: IoError(Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) })
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
```rust
use crate::{http_client::HttpClient, structs::Listing, telegram::TelegramBot};
use std::sync::OnceLock;
use tokio::net::TcpStream;
use wtx::{
    misc::{simple_seed, TokioRustlsConnector, Uri, Xorshift64},
    web_socket::{FrameBufferVec, FrameMutVec, OpCode, WebSocketBuffer, WebSocketClient},
};

pub const LISTED: &str = "listed";
const URL: &str = "wss://ws.bitskins.com";
const WS_AUTH_APIKEY: &str = "WS_AUTH_APIKEY";
const WS_SUB: &str = "WS_SUB";

pub async fn ws_loop(api_key: String) -> wtx::Result<()> {
    let uri = Uri::new(URL);
    let fb = &mut FrameBufferVec::default();
    let tls = TokioRustlsConnector::default();
    let stream = TcpStream::connect(uri.hostname_with_implied_port())
        .await
        .unwrap();
    println!("Connected to {:?}", uri.hostname_with_implied_port());
    println!("connecting to {:?}", uri.hostname());
    let stream = tls
        .connect_without_client_auth(uri.hostname(), stream)
        .await
        .unwrap();
    let mut ws = WebSocketClient::connect(
        (),
        [],
        Xorshift64::from(simple_seed()),
        stream,
        &uri.to_ref(),
        WebSocketBuffer::default(),
        |_| wtx::Result::Ok(()),
    )
    .await?;
    // Snippet truncated here; return Ok so the signature type-checks.
    Ok(())
}
```
`wtx` used to support 6+ runtime executors, including `async-std`, as a proof of concept, but most were removed because no one cared. If you want, `async-std` can be put back, but keep in mind that `wtx` is still a relatively new project that may contain unknown bugs.