theStack / bip324-proxy

BIP324 Proxy: Easy v2 transport protocol integration for light clients

Add `bitcoin.conf` or description of how to redirect to proxy. #1

Open rustaceanrob opened 6 months ago

rustaceanrob commented 6 months ago

Naive question, but I have what should be the starter code of a proxy in Rust, and I am scratching my head as to how to redirect outbound connections to the proxy. For people who would like to run your project on their Signet node, I think a short setup description or an example conf would be beneficial. Thanks!

Update: I am trying to get things going with the proxy flag in bitcoin.conf, but I am receiving a strange sequence of 4 bytes from my node: 05 02 00 02. That obviously doesn't match any network magic, so I'm a bit confused how you were able to get the version messages sent straight to your proxy. Thanks again.

theStack commented 6 months ago

Hi Rob, great first issue! I agree that there should be documentation about the redirection part. Note that the redirection has to be done by the light client, by initiating all outbound connections to localhost:1324 (only on the TCP socket level, not from the bitcoin p2p perspective). As there is usually no option available for this, clients that want to make use of bip324-proxy have to be patched and recompiled. That sounds tedious, but the good news is that such a patch is usually quite small, in many cases a one-liner. There are some examples of redirection patches in the presentation slides; it might make sense to show some of those directly in README.md for better illustration.

You mention bitcoin.conf in the issue title, which surprised me, as it suggests that you want to use bip324-proxy for outbound connections of Bitcoin Core. That shouldn't be needed in practice, as Bitcoin Core has supported the v2 transport protocol natively since the last release, v26.0 (by passing -v2transport=1). If you still want to do that (it could make sense for testing purposes), you'll find a patch for Bitcoin Core on the slides as well.
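For completeness, enabling v2 transport natively in Bitcoin Core needs no proxy at all; a minimal bitcoin.conf sketch:

```ini
# bitcoin.conf -- use BIP324 v2 transport directly (Bitcoin Core >= v26.0)
v2transport=1
```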

> I overthought this way too much. A mention of the -proxy flag would still be cool (:

Assuming that you are again talking about Bitcoin Core here: -proxy doesn't help, as it tries to connect using the SOCKS5 protocol, which we don't use for this project. bip324-proxy works as a transparent proxy, meaning that there is no dedicated proxy protocol in place. We expect p2p v1 messages right from the start without any additional proxy negotiation in front.
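For illustration, the four bytes you saw decode cleanly as a SOCKS5 client greeting (per RFC 1928), which is exactly what Bitcoin Core sends first when -proxy is configured; a short sketch:

```python
# Decode the 05 02 00 02 bytes as a SOCKS5 client greeting (RFC 1928):
# VER | NMETHODS | METHODS...
greeting = bytes([0x05, 0x02, 0x00, 0x02])

version = greeting[0]              # 0x05: SOCKS protocol version 5
nmethods = greeting[1]             # 0x02: two auth methods offered
methods = list(greeting[2:2 + nmethods])

assert version == 5
assert methods == [0x00, 0x02]     # NO AUTH and USERNAME/PASSWORD
print(f"SOCKS{version} greeting, offered auth methods: {methods}")
```

Since bip324-proxy expects a raw p2p v1 VERSION message instead, those bytes look like garbage to it (and to the network-magic check).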

> I have what should be the starter code of a proxy in Rust

That's great! Do you mind sharing it? I'm currently stuck at porting the main loop, as there is apparently no select equivalent available in Rust:

from select import select

def main_loop(local_socket, remote_socket, send_l, send_p, recv_l, recv_p):
    while True:
        r, _, _ = select([local_socket, remote_socket], [], [])
        if local_socket in r:   # [local] v1 ---> v2 [remote]
            msgtype, payload = recv_v1_message(local_socket)
            send_v2_message(remote_socket, send_l, send_p, msgtype, payload)
            log_recv('-->', msgtype, payload)
        if remote_socket in r:  # [local] v1 <--- v2 [remote]
            msgtype, payload = recv_v2_message(remote_socket, recv_l, recv_p)
            send_v1_message(local_socket, msgtype, payload)
            log_recv('<--', msgtype, payload)

Would love to have your input on this, especially if you think an async networking framework like tokio is needed. My plan was to do only blocking network I/O, as I thought that would be good enough for a proxy.
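For what it's worth, the same bidirectional forwarding can also be done without select by spawning one blocking thread per direction. A minimal Python sketch, using socketpair() as a stand-in for the real client and remote sockets (the plain byte-copy loop is a placeholder for the v1<->v2 message translation):

```python
import socket
import threading

def forward(src, dst):
    """Blocking copy loop for one direction; a real proxy would
    translate v1 <-> v2 messages here instead of copying raw bytes."""
    while True:
        data = src.recv(4096)
        if not data:
            dst.close()
            break
        dst.sendall(data)

# socketpairs stand in for the client and remote connections.
client_a, client_b = socket.socketpair()
remote_a, remote_b = socket.socketpair()

# One blocking thread per direction replaces the select() loop.
threading.Thread(target=forward, args=(client_b, remote_b), daemon=True).start()
threading.Thread(target=forward, args=(remote_b, client_b), daemon=True).start()

client_a.sendall(b"version")
print(remote_a.recv(4096))   # b'version'
```

The trade-off is that the shared send/receive cipher state would then need synchronization between the two threads, which the single-threaded select loop avoids.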

rustaceanrob commented 6 months ago

Ah okay, that makes so much sense that Core was trying to speak a different protocol to my code. I was hoping to create a half-made proxy that performs the handshake and disconnects; my current work is here. Of course, that doesn't work with Core as I described, but it should technically be a good start for the light client messages, assuming they start by sending a version message to my port. May I ask what light client you are using?

On using std versus tokio or async-std: while I don't think tokio is needed, I feel that it makes the developer experience far easier. As you described, a select between two tasks, reading from the remote and writing to the client, should not be too difficult to get functioning using tokio. Spawning async tasks is also incredibly cheap, so the proxy can theoretically serve many clients without the performance bottlenecks that come with OS threads. Unless you are strongly opposed to using a framework, I would use tokio.

I encourage you to look around our repository, as I think the scaffolding is there to complete the proxy in a relatively compact amount of code. Feel free to modify and test the code in the example. I misinterpreted how to test this proxy, so I think I am going to put a pause on this and focus on integration with the current Rust clients, Floresta and Nakamoto.

theStack commented 6 months ago

> Ah okay that makes so much sense that Core was trying to speak a different protocol to my code. I was hoping to create a half-made proxy that performs the handshake and disconnects, my current work is here. Of course, that doesn't work as I described with Core, but should technically be a good start for the light client messages, assuming they start by sending a version message to my port. May I ask what light client you are using?

I've tested with nakamoto, neutrino, bcoin and Bitcoin Core as clients. The last two are arguably not light clients, but the proxy works with any piece of software that connects to the bitcoin P2P network, if properly patched. For example, the patch I use for nakamoto looks like this:

diff --git a/net/poll/src/reactor.rs b/net/poll/src/reactor.rs
index 436f606..c1f1b25 100644
--- a/net/poll/src/reactor.rs
+++ b/net/poll/src/reactor.rs
@@ -468,7 +468,7 @@ fn dial(addr: &net::SocketAddr) -> Result<net::TcpStream, io::Error> {
     sock.set_write_timeout(Some(WRITE_TIMEOUT))?;
     sock.set_nonblocking(true)?;

-    match sock.connect(&(*addr).into()) {
+    match sock.connect(&(net::SocketAddr::from(([127,0,0,1], 1324))).into()) {
         Ok(()) => {}
         Err(e) if e.raw_os_error() == Some(libc::EINPROGRESS) => {}
         Err(e) if e.raw_os_error() == Some(libc::EALREADY) => {

If you fire up your proxy and start the nakamoto node with the patch above via `$ cargo run -p nakamoto-node -- --signet --connect 5.6.7.8:8333`, you should see the initial VERSION message with 5.6.7.8:8333 set in the addr_recv field. For the patches to the other clients mentioned above, please see the presentation slides in the repository.

> On using std versus tokio or async-std, while I don't think tokio is needed, I feel that it makes the developer experience far easier. As you described, a select between two threads that are reading from remote and writing to the client should not be too difficult to get functioning using tokio. Spawning the async threads is also incredibly cheap, so the proxy can theoretically serve many clients without performance bottlenecks that come with OS threads. Unless you are strongly opposed to using a framework, I would use tokio.

Okay, good to know, I'll give it a try. Note that the select I used in the Python implementation (being a wrapper around the select from the UNIX world) is not using or working with threads, but merely has the task of answering the simple question "has any of the passed sockets new data for me?". As long as it doesn't, it blocks. If it does, it returns a list of the sockets that have new data, which we then read, transform (v1<->v2) and forward in the other direction. I was hoping to keep the main loop that simple in the Rust implementation as well, but if spawning async tasks is needed (and tokio can help there), that's also not the end of the world, I guess.
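To make those semantics concrete, a tiny standalone demonstration with socketpair() standing in for the two peer connections:

```python
import socket
from select import select

local_sock, local_peer = socket.socketpair()
remote_sock, remote_peer = socket.socketpair()

local_peer.sendall(b"ping")  # only the local side has pending data

# select() blocks until at least one socket is readable, then returns
# exactly the ready ones -- no threads involved.
readable, _, _ = select([local_sock, remote_sock], [], [])
assert local_sock in readable and remote_sock not in readable
print(local_sock.recv(4))    # b'ping'
```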

> I encourage you to look around our repository, as I think the scaffolding is there to complete the proxy in a relatively compact amount of code. Feel free to modify and test the code in the example. I misinterpreted how to test this proxy, so I think I am going to put a pause on this and focus on integration with the current Rust clients, Floresta and Nakamoto

Cool, will take a look :+1: Feel free to continue with the proxy if you want. I think the only "secret sauce" you missed for further testing was that a client wanting to use the BIP324 Proxy needs to be patched.

rustaceanrob commented 6 months ago

Oh my, I did not see the "Load more" button on the slides... I was wondering what I was missing there. I'll use that patch and continue on!

> Okay, good to know, I'll give it a try. Note that the select I used in the Python implementation (being a wrapper around the select from the UNIX world) is not using or working with threads, but merely has the task of answering the simple question "has any of the passed sockets new data for me?"

From my interpretation, it sounds like using a loop and tokio::select! should have a similar effect. From the docs:

> The tokio::select! macro allows waiting on multiple async computations and returns when a single computation completes.

tokio::select! may wait for a message from the remote, or a message from the client, resolve whichever one comes first, and then loop to the next message? I think that is the approach I will take. Will keep in touch.
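Sketching that idea in Python for illustration (since the current proxy is Python anyway): asyncio.wait with return_when=FIRST_COMPLETED plays roughly the role of tokio::select!, and the queues here are hypothetical stand-ins for the two message streams:

```python
import asyncio

async def handle_one(from_client, from_remote):
    """Resolve whichever side has a message first, like tokio::select!."""
    t_client = asyncio.ensure_future(from_client.get())
    t_remote = asyncio.ensure_future(from_remote.get())
    done, pending = await asyncio.wait(
        [t_client, t_remote], return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()   # a real loop would re-arm the read next iteration
    if t_client in done:
        return ("client", t_client.result())
    return ("remote", t_remote.result())

async def main():
    from_client, from_remote = asyncio.Queue(), asyncio.Queue()
    await from_remote.put(b"version")   # the remote side speaks first
    side, msg = await handle_one(from_client, from_remote)
    print(side, msg)   # remote b'version'

asyncio.run(main())
```

The per-message flow then matches the Python select loop: whichever branch fires, the message is translated and forwarded to the other side before looping.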