minghuaw / toy-rpc

An async RPC crate in Rust that mimics golang's net/rpc

Error on server compile #27

Open · DevasiaThomas opened this issue 3 years ago

DevasiaThomas commented 3 years ago

I'm not sure what I'm doing wrong, but I'm getting an error (shared below) while compiling the server, and I'm at a loss here. Below is the code. My message was an enum initially; I thought the error was related to that, but that doesn't seem to be the case. I'm running Rust v1.54.0. I do a `cargo publish --dry-run` and point to the packaged version of the service definition project.

Service Definition project

```rust
// lib.rs
pub use async_trait::async_trait;
use toy_rpc::macros::export_trait;
use anyhow::Result;

#[async_trait]
#[export_trait(impl_for_client)]
pub trait Consumer {
    #[export_method]
    async fn consume(&self, message: String) -> Result<u8>;
}
```

```toml
# Cargo.toml
[package]
name = "sf-proto-server"
version = "0.1.0"
edition = "2018"

[dependencies]
anyhow = "1.0"
toy-rpc = { version = "0.8", features = [] }
async-trait = "0.1"
serde = { version = "1.0", features = ["derive"] }
```

Server Implementation project

```rust
// main.rs
mod recv;

use anyhow::Result;

#[tokio::main(flavor = "multi_thread", worker_threads = 4)]
async fn main() -> Result<()> {
    println!("Starting");
    recv::start_listener().await?;
    Ok(())
}
```

```rust
// recv/mod.rs
pub mod rpc_impl;

use std::sync::Arc;

use anyhow::Result;
use tokio::task;
use toy_rpc::Server;

pub async fn start_listener() -> Result<()> {
    let listener = tokio::net::TcpListener::bind("0.0.0.0:6969")
        .await
        .map_err(|err| println!("{}", err))
        .unwrap();
    println!("Listening");
    let consumer = Arc::new(rpc_impl::Consumer {});
    let server = Server::builder().register(consumer).build();
    task::spawn(async move {
        server.accept(listener).await.unwrap();
    })
    .await?;

    Ok(())
}
```

```rust
// recv/rpc_impl.rs
use sf_proto_server::*;
use toy_rpc::macros::export_trait_impl;
use anyhow::Result;

pub struct Consumer {}

#[async_trait]
#[export_trait_impl]
impl sf_proto_server::Consumer for Consumer {
    async fn consume(&self, message: String) -> Result<u8> {
        println!("Message {}", message);
        Ok(0)
    }
}
```

```toml
# Cargo.toml
[package]
name = "sf-edge-server"
version = "0.1.0"
edition = "2018"

[dependencies]
anyhow = "1"
sf-proto-server = { path = "../SF-Proto/target/package/sf-proto-server-0.1.0" }
tokio = { version = "1", features = ["full"] }
toy-rpc = { version = "0.8", features = ["tokio_runtime", "server", "serde_bincode", "anyhow"] }
```

The error:

```text
error: custom attribute panicked
 --> src/recv/rpc_impl.rs:8:1
  |
8 | #[export_trait_impl]
  | ^^^^^^^^^^^^^^^^^^^^
  |
  = help: message: called `Option::unwrap()` on a `None` value
```
minghuaw commented 3 years ago

I'm looking into it now.

minghuaw commented 3 years ago

@DevasiaThomas I have found the problem. The macro did not consider the case where a trait path is specified instead of the trait ident, and it also extracted the ident from the path incorrectly. This should be fixed in version 0.8.2 (macro version 0.6.2), and I have updated example-service, example-server, and example-client to showcase that this works.

Please let me know if this fixes your problem.

DevasiaThomas commented 3 years ago

Hi @minghuaw, first off, thanks for pushing out a fix at breakneck speed; I appreciate it :). Unfortunately, the new crate wants my project to switch to Rust nightly. It fails to download the macro crate:

```text
Caused by:
  feature `edition2021` is required

  this Cargo does not support nightly features, but if you
  switch to nightly channel you can add
  `cargo-features = ["edition2021"]` to enable this feature
```

Is it required for me to switch? I'm on stable right now.

DevasiaThomas commented 3 years ago

I just noticed the 2021 edition was released to stable a few days back. Let me update and check again :)

minghuaw commented 3 years ago

Sorry, I forgot to mention that the edition was also updated.

minghuaw commented 3 years ago

@DevasiaThomas I have reverted to the 2018 edition in version 0.8.3. Version 0.8.2 has been yanked on crates.io for now.

DevasiaThomas commented 3 years ago

Oh, you didn't have to. I already updated my code to move to 2021; I was just writing the client side right now before I responded again.

minghuaw commented 3 years ago

> Oh, you didn't have to. I already updated my code to move to 2021; I was just writing the client side right now before I responded again.

I was actually only testing whether the new edition compiles, but I forgot to change it back before publishing. I think edition 2018 is friendlier to people who haven't upgraded their Rust version yet.

DevasiaThomas commented 3 years ago

@minghuaw Good news :) It's working as intended. A couple of questions:

* Is there a retry mechanism for client connects within the library?
* I would like to know the client's network info within my rpc call; any way that can be passed down to the server impl? e.g. ip/node name
* Any mechanism to deal with disconnects on both sides, or which errors should I watch for so that I can retry on the client side, or on the server side make a note that a particular client disconnected?
* Is there a way to disable the Nagle buffer and TCP delayed ack on the sockets?
* Lastly, can the clients also have rpc calls that the server can call once the client connects in?

I have to say, @minghuaw, your framework had me up and running in a few hours; I spent days on Cap'n Proto and a few others and just gave up. For now I'll ditch cross-language support for speed of development.

minghuaw commented 3 years ago

> Is there a retry mechanism for client connects within the library?

The library doesn't have a built-in mechanism right now. #5 mentioned a crate called stubborn-io, which can be used with `Client::with_stream(stubborn_tcp_stream)`.
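
For example, a minimal sketch (assuming the client's `tokio_runtime` feature and that `StubbornTcpStream` satisfies the stream bounds of `Client::with_stream`; the address is illustrative):

```rust
use stubborn_io::StubbornTcpStream;
use toy_rpc::Client;

// StubbornTcpStream transparently re-establishes the underlying TcpStream
// on connect failures and drops, so the RPC client survives reconnects.
let stream = StubbornTcpStream::connect("127.0.0.1:6969").await?;
let client = Client::with_stream(stream);
```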

> I would like to know the client's network info within my rpc call; any way that can be passed down to the server impl? e.g. ip/node name

This has not been implemented yet. I am considering adding a context that can be accessed within the server impl, probably in version 0.9, and I guess the client network info can be included in that context.

> Any mechanism to deal with disconnects on both sides, or which errors should I watch for so that I can retry on the client side, or on the server side make a note that a particular client disconnected?

Good point. I haven't thought much about this before in terms of error handling, so right now a dropped connection is simply treated as if the remote peer decided to disconnect. This is clearly a bad design. I will fix this in the next minor update (likely in 0.8.4).

> Is there a way to disable the Nagle buffer and TCP delayed ack on the sockets?

tokio::net::TcpStream provides the method `set_nodelay`, which can be used to disable the Nagle buffer. However, I haven't found a low-level API in tokio or the standard library to disable TCP delayed ack. What you can do is something like below:

```rust
use tokio::net::TcpStream;
use toy_rpc::Client;

let stream = TcpStream::connect("127.0.0.1:8080").await?;
stream.set_nodelay(true)?;
let client = Client::with_stream(stream);
```

However, it looks like stubborn-io doesn't provide such an option yet.

> Lastly, can the clients also have rpc calls that the server can call once the client connects in?

I didn't implement this because the library was originally simply trying to mimic the APIs of Go's net/rpc. However, there is a workaround right now, which is to use the pubsub feature. You can create a publisher and/or a subscriber on the server side to communicate with the client side. There is a chapter in the book and an example here.
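
Roughly, a sketch of the idea, given a built `server` and a connected `client` (the `Topic` trait shape and the `publisher`/`subscriber` signatures here are from memory and may differ in your version; `Config` is a made-up topic type):

```rust
use futures::{SinkExt, StreamExt};
use serde::{Deserialize, Serialize};

// Made-up topic carrying configuration pushed from the server to clients.
#[derive(Serialize, Deserialize)]
pub struct Config(pub String);

impl toy_rpc::pubsub::Topic for Config {
    type Item = Config;
    fn topic() -> String {
        "config".into()
    }
}

// Server side: publish to every subscribed client.
let mut publisher = server.publisher::<Config>();
publisher.send(Config("new settings".into())).await?;

// Client side: consume pushed items as a Stream.
let mut subscriber = client.subscriber::<Config>(10)?;
if let Some(update) = subscriber.next().await {
    // handle the pushed config here
}
```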

> For now I'll ditch cross-language support for speed of development.

Do you have a specific (or some) cross-language protocol that you were considering? I was thinking about adding cross-language protocol support but wasn't sure which one people need the most. I guess what you need could be a good starting point :)

DevasiaThomas commented 3 years ago

> The library doesn't have a built-in mechanism right now. #5 mentioned a crate called stubborn-io, which can be used with `Client::with_stream(stubborn_tcp_stream)`.

Nice! I'll definitely give it a shot. In this case you just won't dial, I'm assuming.

> This has not been implemented yet. I am considering adding a context that can be accessed within the server impl, probably in version 0.9, and I guess the client network info can be included in that context.

Awesome! I hope the otel part of the context isn't mandatory :)

> Good point. I haven't thought much about this before in terms of error handling, so right now a dropped connection is simply treated as if the remote peer decided to disconnect. This is clearly a bad design. I will fix this in the next minor update (likely in 0.8.4).

I just saw stubborn-io; it allows a bunch of callbacks to be registered in case of failure to connect vs. drops. Maybe you can mimic that? stubborn-io only cares about the client side, though.

> tokio::net::TcpStream provides the method `set_nodelay`, which can be used to disable the Nagle buffer. However, I haven't found a low-level API in tokio or the standard library to disable TCP delayed ack. What you can do is something like below:
>
> ```rust
> let stream = TcpStream::connect("127.0.0.1:8080").await?;
> stream.set_nodelay(true)?;
> let client = Client::with_stream(stream);
> ```
>
> However, it looks like stubborn-io doesn't provide such an option yet.

I'll raise an issue there to see if they are open to addressing this. stubborn-io feels like something I should use client-side now :)

> I didn't implement this because the library was originally simply trying to mimic the APIs of Go's net/rpc. However, there is a workaround right now, which is to use the pubsub feature. You can create a publisher and/or a subscriber on the server side to communicate with the client side. There is a chapter in the book and an example here.

I thought you would come back with this :). I might transition to it when I finally have the need to push configuration data from upstream to clients.

> Do you have a specific (or some) cross-language protocol that you were considering? I was thinking about adding cross-language protocol support but wasn't sure which one people need the most. I guess what you need could be a good starting point :)

How would you achieve this? Generate code for those languages? Or have them copy the schema instead and support the serde in that language? As of now I don't know which language I'll have to interact with; I might have to get back to you on this one within a few weeks :)

minghuaw commented 3 years ago

> Awesome! I hope the otel part of the context isn't mandatory :)

I haven't decided on the design of the Context yet. I will definitely open an issue to get some feedback before finalizing it.

> I just saw stubborn-io; it allows a bunch of callbacks to be registered in case of failure to connect vs. drops. Maybe you can mimic that? stubborn-io only cares about the client side, though.

I am indeed thinking about that, but it will probably be low on the priority list for 0.9.

> I thought you would come back with this :). I might transition to it when I finally have the need to push configuration data from upstream to clients.

I am open to such a design in the future. Right now I am just not sure about the server-side API, like how it would know which client it's calling (for which I guess client connection info will be necessary), etc.

> How would you achieve this? Generate code for those languages? Or have them copy the schema instead and support the serde in that language? As of now I don't know which language I'll have to interact with; I might have to get back to you on this one within a few weeks :)

I was indeed considering generating code, lol, but since I haven't chosen a protocol, I am open to any approach. This is also partly why I started working on an AMQP 1.0 project (which isn't complete and thus not public yet).

minghuaw commented 3 years ago

> Any mechanism to deal with disconnects on both sides, or which errors should I watch for so that I can retry on the client side, or on the server side make a note that a particular client disconnected?

All errors related to the connection (read/write) have been unified into Error::IoError(_) in version 0.8.4. However, I haven't done much testing to emulate an unstable connection. This will likely be the last version in 0.8, and I will start working on 0.9.
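
On the client side, a sketch of what watching for that variant could look like, using the string-based call API (the `Consumer.consume` service/method name and the exact `call` signature are my assumptions here and may differ from what the macros generate):

```rust
use toy_rpc::Error;

// "Service.method" follows toy-rpc's naming convention for raw calls.
let reply: Result<u8, Error> = client.call("Consumer.consume", String::from("hello")).await;
match reply {
    Ok(ack) => println!("acknowledged: {}", ack),
    // As of 0.8.4, connection-level (read/write) failures surface as
    // Error::IoError, so this is the variant to watch for reconnect logic.
    Err(Error::IoError(err)) => eprintln!("connection dropped, consider retrying: {}", err),
    Err(err) => eprintln!("rpc error: {}", err),
}
```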