buckyos / CYFS

CYFS, short for CYberFileSystem, is a next-generation technology for building the real Web3 by upgrading the Web's basic protocols (TCP/IP + DNS + HTTP). https://www.cyfs.com/, cyfs://cyfs/index_en.html
BSD 2-Clause "Simplified" License

Need a tool that can simply test BDT protocol connectivity #141

Open weiqiushi opened 1 year ago

weiqiushi commented 1 year ago

We need a tool that accepts a DeviceId and tests the connectivity between the local machine and that DeviceId. In this way:

In addition, a simple Ping/Pong protocol can be used to test whether data can be transferred between BDT protocol stacks, and by extending the contents of the Pong return packet, it is also possible to
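To illustrate the idea, here is a minimal sketch of such a Ping/Pong exchange where the Pong return packet carries extensible contents. This is not the actual BDT wire format; `Ping`, `Pong`, `answer`, and `check` are names invented for the example:

```rust
// Hypothetical sketch of a Ping/Pong probe protocol; not the real BDT format.
#[derive(Debug, Clone, PartialEq)]
struct Ping {
    seq: u32, // sequence number the peer must echo back
}

#[derive(Debug, Clone, PartialEq)]
struct Pong {
    seq: u32,          // must match the Ping's seq
    info: Vec<String>, // extensible: the peer can attach extra diagnostics here
}

// Remote side: echo the seq and attach optional extra information.
fn answer(ping: &Ping, info: Vec<String>) -> Pong {
    Pong { seq: ping.seq, info }
}

// Local side: accept the Pong only if the seq matches the Ping we sent.
fn check(ping: &Ping, pong: &Pong) -> bool {
    ping.seq == pong.seq
}

fn main() {
    let ping = Ping { seq: 7 };
    let pong = answer(&ping, vec!["version=1.0".into()]);
    assert!(check(&ping, &pong));
    println!("seq {} acknowledged, info: {:?}", pong.seq, pong.info);
}
```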

jing-git commented 1 year ago

Hi, the code for this feature has been submitted in commit 853d4c88fe552c06661b56f4f64a8a37e528f7c1.

Assume that the remote host is u1 and the local host is u2, with IP addresses 192.168.10.100 and 192.168.20.100, respectively.

On u2, with the certificate of u1 named u1.desc available, the following steps are taken to determine the BDT connectivity to u1:

1. Compile the debug programs:

```shell
cargo build --release -p bdt-debuger-deamon
cargo build --release -p bdt-debuger
```

Copy bdt-debuger-deamon and bdt-debuger to u2, and put sn-miner.desc, sn-miner.sec, u1.desc, and u1.sec into the same directory.

2. Start the local debug program on u2:

```shell
./bdt-debuger-deamon --listen 2021 --port 12345 --ep L4udp192.168.20.100:8060 --quiet 0
```

3. Perform ping debugging:

```shell
./bdt-debuger -h 127.0.0.1 -p 12345 ping u1.desc 3 3
```

4. The results

If the peer is online, ping succeeds:

```shell
send ping. respose. time: 10.3 ms
send ping. respose. time: 1.9 ms
send ping. respose. time: 2.0 ms
```

If the peer is offline, ping fails:

```shell
send ping. timeout
send ping. timeout
send ping. timeout
```

For more error information, please check the log file.
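To script this check (for example from a monitoring job), the daemon's ping output shown above can be parsed mechanically. A minimal sketch, assuming the exact line format above; `parse_rtt_ms` is a helper invented here, not part of the BDT tools:

```rust
// Parse one line of bdt-debuger ping output (format assumed from the example):
// "send ping. respose. time: 10.3 ms" -> Some(10.3)
// "send ping. timeout"                -> None
fn parse_rtt_ms(line: &str) -> Option<f64> {
    let rest = line.strip_prefix("send ping. respose. time:")?;
    rest.trim().strip_suffix("ms")?.trim().parse().ok()
}

fn main() {
    let output = "send ping. respose. time: 10.3 ms\n\
                  send ping. timeout\n\
                  send ping. respose. time: 1.9 ms";
    let rtts: Vec<f64> = output.lines().filter_map(parse_rtt_ms).collect();
    // Treat the peer as online if at least one ping got a response.
    let online = !rtts.is_empty();
    println!("online={} rtts={:?}", online, rtts);
}
```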

weiqiushi commented 1 year ago

Great job! But why are 2 programs needed to test the BDT protocol stack? Is it possible to use a single program to test it? And if I want to initialize the local BDT stack with an SN list, how do I do it? @jing-git

jing-git commented 1 year ago

Hello. It is inconvenient to debug the BDT protocol stack using only one program. bdt-debuger-deamon integrates all the functions of the BDT protocol stack and outputs logs at runtime. bdt-debuger serves as a bridge between bdt-debuger-deamon and the user, transmitting commands to bdt-debuger-deamon for execution and displaying the results. Currently, if there were only one program (bdt-debuger-deamon), the runtime logs and user input would be mixed together, which would hamper interaction.

And the feature to load multiple SNs will be added later.

jing-git commented 1 year ago

Hi, you can initialize the local BDT stack with a SN list now. For bdt-debuger-deamon:

```shell
./bdt-debuger-deamon --sn sn1.desc --sn sn2.desc ......
```

lurenpluto commented 1 year ago

> Hi, you can initialize the local BDT stack with a SN list now. For bdt-debuger-deamon: `./bdt-debuger-deamon --sn sn1.desc --sn sn2.desc ......`

Is this tool available for use now? Are there any detailed usage processes and instructions?

jing-git commented 1 year ago

> Hi, you can initialize the local BDT stack with a SN list now. For bdt-debuger-deamon: `./bdt-debuger-deamon --sn sn1.desc --sn sn2.desc ......`
>
> Is this tool available for use now? Are there any detailed usage processes and instructions?

Yes, both bdt-debuger-deamon and bdt-debuger are available. For specific usage, please refer to the document at https://github.com/buckyos/documents/blob/main/bdt/en/9.Debug.md. Additional features will be added in the future, and any changes in usage will be updated in this document.

lurenpluto commented 1 year ago

> Hi, you can initialize the local BDT stack with a SN list now. For bdt-debuger-deamon: `./bdt-debuger-deamon --sn sn1.desc --sn sn2.desc ......`
>
> Is this tool available for use now? Are there any detailed usage processes and instructions?
>
> Yes, both bdt-debuger-deamon and bdt-debuger are available. For specific usage, please refer to the document at https://github.com/buckyos/documents/blob/main/bdt/en/9.Debug.md. Additional features will be added in the future, and any changes in usage will be updated in this document.

A nice tool, thank you for providing it! I will give it a try with my OOD.

weiqiushi commented 1 year ago

@jing-git I want to integrate this function into the cyfs-monitor program to detect whether a given OOD is connectable or not. What would be the best way for me to do this?

lurenpluto commented 1 year ago

There is a key question: the tool itself is similar to the nc/ping tools on *nix, so why does it need both a daemon and a client tool? That makes the mental cost of using it too high for the user, and coordinating the startup of multiple processes is itself very unfriendly. As a tool, all the logic should be integrated into a single executable program, which is convenient both to distribute to users and for users to use.

weiqiushi commented 1 year ago

I think you can refer to the design of cyfs-meta-client and cyfs-meta-lib.

cyfs-meta-lib is a Rust lib that defines the MetaClient structure. By constructing a MetaClient instance, third-party programs can interact with the Meta chain through the interface of this instance.

Further, cyfs-meta-client is a command line tool through which it is possible to interact with the MetaChain in a shell.

This tool has several functions, but the logic of implementing each function is similar:

  1. create a MetaClient instance with part of the arguments
  2. call the corresponding interface of MetaClient with a subcommand and the remaining arguments
  3. print the result and exit the program
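This lib + CLI split can be sketched as follows. The `MetaClient` here and its `query` method are illustrative stand-ins, not the real cyfs-meta-lib API:

```rust
use std::env;

// Library side (cyfs-meta-lib style): a client struct with typed methods.
// This MetaClient and its query method are invented for the sketch.
struct MetaClient {
    endpoint: String,
}

impl MetaClient {
    fn new(endpoint: &str) -> Self {
        MetaClient { endpoint: endpoint.to_string() }
    }

    // Pretend query; a real client would call the Meta chain here.
    fn query(&self, account: &str) -> String {
        format!("{} @ {}: ok", account, self.endpoint)
    }
}

// CLI side (cyfs-meta-client style): build a client from part of the
// arguments, dispatch a subcommand with the rest, print, exit.
fn run(args: &[&str]) -> Result<String, String> {
    match args {
        [endpoint, "query", account] => {
            let client = MetaClient::new(endpoint);
            Ok(client.query(account))
        }
        _ => Err("usage: <endpoint> query <account>".to_string()),
    }
}

fn main() {
    let owned: Vec<String> = env::args().skip(1).collect();
    let args: Vec<&str> = owned.iter().map(|s| s.as_str()).collect();
    match run(&args) {
        Ok(out) => println!("{}", out),
        Err(e) => eprintln!("{}", e),
    }
}
```

The point of the split is that third-party programs (like cyfs-monitor) link the lib and call `MetaClient` directly, while humans use the thin CLI wrapper.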
lizhihongTest commented 1 year ago

What is the purpose of this BDT tool?

As a network protocol testing tool, if it is only used to diagnose the running status of the server, simple BDT client tools like ping/nc are enough to check whether the server's BDT protocol can be connected correctly.

But if this tool is used to diagnose the connectivity and data transmission performance of BDT between two nodes, we need a more professional testing tool, such as iperf, which specifies whether to run as a client or server using -c and -s.

It seems that the current implementation leans towards providing a server + client tool that is similar to iperf for professional users. However, the daemon and debug programs can be combined into one program.

For example nc:

```shell
root@filecoin2:/# nc -help
OpenBSD netcat (Debian patchlevel 1.218-4ubuntu1)
usage: nc [-46CDdFhklNnrStUuvZz] [-I length] [-i interval] [-M ttl] [-m minttl]
          [-O length] [-P proxy_username] [-p source_port] [-q seconds]
          [-s sourceaddr] [-T keyword] [-V rtable] [-W recvlimit] [-w timeout]
          [-X proxy_protocol] [-x proxy_address[:port]] [destination] [port]
Command Summary:
        -4              Use IPv4
        -6              Use IPv6
        -b              Allow broadcast
        -C              Send CRLF as line-ending
        -D              Enable the debug socket option
        -d              Detach from stdin
        -F              Pass socket fd
        -h              This help text
        -I length       TCP receive buffer length
        -i interval     Delay interval for lines sent, ports scanned
        -k              Keep inbound sockets open for multiple connects
        -l              Listen mode, for inbound connects
        -M ttl          Outgoing TTL / Hop Limit
        -m minttl       Minimum incoming TTL / Hop Limit
        -N              Shutdown the network socket after EOF on stdin
        -n              Suppress name/port resolutions
        -O length       TCP send buffer length
        -P proxyuser    Username for proxy authentication
        -p port         Specify local port for remote connects
        -q secs         quit after EOF on stdin and delay of secs
        -r              Randomize remote ports
        -S              Enable the TCP MD5 signature option
        -s sourceaddr   Local source address
        -T keyword      TOS value
        -t              Answer TELNET negotiation
        -U              Use UNIX domain socket
        -u              UDP mode
        -V rtable       Specify alternate routing table
        -v              Verbose
        -W recvlimit    Terminate after receiving a number of packets
        -w timeout      Timeout for connects and final net reads
        -X proto        Proxy protocol: "4", "5" (SOCKS) or "connect"
        -x addr[:port]  Specify proxy address and port
        -Z              DCCP mode
        -z              Zero-I/O mode [used for scanning]
Port numbers can be individual or ranges: lo-hi [inclusive]
```

For example iperf:

```shell
root@filecoin2:/# iperf --help
Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]
Client/Server:
  -b, --bandwidth #[kmgKMG | pps]  bandwidth to read/send at in bits/sec or packets/sec
  -e, --enhanced            use enhanced reporting giving more tcp/udp and traffic information
  -f, --format [kmgKMG]     format to report: Kbits, Mbits, KBytes, MBytes
      --hide-ips            hide ip addresses and host names within outputs
  -i, --interval #          seconds between periodic bandwidth reports
  -l, --len #[kmKM]         length of buffer in bytes to read or write (Defaults: TCP=128K, v4 UDP=1470, v6 UDP=1450)
  -m, --print_mss           print TCP maximum segment size (MTU - TCP/IP header)
  -o, --output              output the report or error message to this specified file
  -p, --port #              client/server port to listen/send on and to connect
      --permit-key          permit key to be used to verify client and server (TCP only)
      --sum-only            output sum only reports
  -u, --udp                 use UDP rather than TCP
  -w, --window #[KM]        TCP window size (socket buffer size)
  -z, --realtime            request realtime scheduler
  -B, --bind [:][%]         bind to , ip addr (including multicast address) and optional port and device
  -C, --compatibility       for use with older versions does not sent extra msgs
  -M, --mss #               set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay             set TCP no delay, disabling Nagle's Algorithm
  -S, --tos #               set the socket's IP_TOS (byte) field
  -Z, --tcp-congestion      set TCP congestion control algorithm (Linux only)
Server specific:
  -p, --port #[-#]          server port(s) to listen on/connect to
  -s, --server              run in server mode
  -1, --singleclient        run one server at a time
      --histograms          enable latency histograms
      --permit-key-timeout  set the timeout for a permit key in seconds
      --tcp-rx-window-clamp set the TCP receive window clamp size in bytes
      --tap-dev #[]         use TAP device to receive at L2 layer
  -t, --time #              time in seconds to listen for new connections as well as to receive traffic (default not set)
      --udp-histogram #,#   enable UDP latency histogram(s) with bin width and count, e.g. 1,1000=1(ms),1000(bins)
  -B, --bind [%]            bind to multicast address and optional device
  -U, --single_udp          run in single threaded UDP mode
      --sum-dstip           sum traffic threads based upon destination ip address (default is src ip)
  -D, --daemon              run the server as a daemon
  -V, --ipv6_domain         Enable IPv6 reception by setting the domain and socket to AF_INET6 (Can receive on both IPv4 and IPv6)
Client specific:
  -c, --client              run in client mode, connecting to
      --connect-only        run a connect only test
      --connect-retries #   number of times to retry tcp connect
  -d, --dualtest            Do a bidirectional test simultaneously (multiple sockets)
      --fq-rate #[kmgKMG]   bandwidth to socket pacing
      --full-duplex         run full duplex test using same socket
      --ipg                 set the the interpacket gap (milliseconds) for packets within an isochronous frame
      --isochronous :,      send traffic in bursts (frames - emulate video traffic)
      --incr-dstip          Increment the destination ip with parallel (-P) traffic threads
      --incr-dstport        Increment the destination port with parallel (-P) traffic threads
      --incr-srcip          Increment the source ip with parallel (-P) traffic threads
      --incr-srcport        Increment the source port with parallel (-P) traffic threads
      --local-only          Set don't route on socket
      --near-congestion=[w] Use a weighted write delay per the sampled TCP RTT (experimental)
      --no-connect-sync     No sychronization after connect when -P or parallel traffic threads
      --no-udp-fin          No final server to client stats at end of UDP test
  -n, --num #[kmgKMG]       number of bytes to transmit (instead of -t)
  -r, --tradeoff            Do a fullduplexectional test individually
      --tcp-write-prefetch  set the socket's TCP_NOTSENT_LOWAT value in bytes and use event based writes
  -t, --time #              time in seconds to transmit for (default 10 secs)
      --trip-times          enable end to end measurements (requires client and server clock sync)
      --txdelay-time        time in seconds to hold back after connect and before first write
      --txstart-time        unix epoch time to schedule first write and start traffic
  -B, --bind [ | ]          bind ip (and optional port) from which to source traffic
  -F, --fileinput           input the data to be transmitted from a file
  -H, --ssm-host            set the SSM source, use with -B for (S,G)
  -I, --stdin               input the data to be transmitted from stdin
  -L, --listenport #        port to receive fullduplexectional tests back on
  -P, --parallel #          number of parallel client threads to run
  -R, --reverse             reverse the test (client receives, server sends)
  -S, --tos                 IP DSCP or tos settings
  -T, --ttl #               time-to-live, for multicast (default 1)
  -V, --ipv6_domain         Set the domain to IPv6 (send packets over IPv6)
  -X, --peer-detect         perform server version detection and version exchange
Miscellaneous:
  -x, --reportexclude [CDMSV]  exclude C(connection) D(data) M(multicast) S(settings) V(server) reports
  -y, --reportstyle C       report as a Comma-Separated Values
  -h, --help                print this message and quit
  -v, --version             print version information and quit
```
weiqiushi commented 1 year ago

> What is the purpose of this BDT tool?
>
> As a network protocol testing tool, if it is only used to diagnose the running status of the server, we can use simple client tools such as ping/nc to achieve it, in order to diagnose whether the server BDT protocol can be connected correctly.
>
> But if this tool is used to diagnose the connectivity and data transmission performance of BDT between two nodes, we need more professional testing tools, such as iperf, and specify whether to run as a client or server using -c and -s.
>
> It seems that the current implementation leans towards providing a server + client tool that is similar to iperf for professional users. However, the daemon and debug programs can be combined into one program.
>
> For example nc:
>
> For example iperf:

How do you test the connectivity between the local machine and an already started BDT stack via nc? I think only a BDT-specific tool can test that.

lurenpluto commented 1 year ago

> What is the purpose of this BDT tool?
>
> As a network protocol testing tool, if it is only used to diagnose the running status of the server, we can use simple bdt client tools like ping/nc to achieve it, in order to diagnose whether the server BDT protocol can be connected correctly.
>
> But if this tool is used to diagnose the connectivity and data transmission performance of BDT between two nodes, we need more professional testing tools, such as iperf, and specify whether to run as a client or server using -c and -s.
>
> It seems that the current implementation leans towards providing a server + client tool that is similar to iperf for professional users. However, the daemon and debug programs can be combined into one program.
>
> For example nc:
>
> For example iperf:

It mainly depends on the different levels of requirements

jing-git commented 1 year ago

> @jing-git I want to integrate this function into the cyfs-monitor program to detect whether a given OOD is connectable or not. What would be the best way for me to do this?

Hello, you can directly use the ping interface of Pinger: https://github.com/buckyos/CYFS/blob/main/src/component/cyfs-bdt/src/debug/ping.rs. Example:

```rust
/*
stack: WeakStack
remote: Device
timeout: Duration
*/
let pinger = Pinger::open(stack.clone()).unwrap();
match pinger.ping(remote.clone(), timeout, "debug".as_ref()).await {
    Ok(rtt) => {
        match rtt {
            Some(rtt) => {
                println!("ping success, rtt is {:.2} ms", rtt as f64 / 1000.0);
            },
            None => {
                println!("connected, but ping's seq mismatch");
            }
        }
    },
    Err(e) => {
        println!("ping err={}", e);
    }
}
```

jing-git commented 1 year ago

> There is a key question The tool itself is similar to nc/ping tool in nix, why do you need a daemon and a tool, so that the mental cost of using it is too high for the user, and the multi-process coordinated start and use itself is very unfriendly, as a tool, all logic should be integrated into an executable program, convenient for distribution to users and users to use

The previous design focused more on debugging and observing the running process. For ease of use, I will combine the daemon and debugger into one program later, which can directly receive commands, process them, and return results.

jing-git commented 1 year ago

Hello, you can use an easier-to-use tool right now. It provides ping and nc functions to quickly check the connectivity of the peer device, and it is a single program.

Compile:

```shell
cargo build --release -p bdt-tool
```

Command line:

```shell
./bdt-tool --ep BDTStackAddress [--sn SN's Desc, default is sn-miner.desc] [--log_level none/info/debug/error, default is none] --cmd [ping/nc parameters]
```

Examples. Running environment:

- Local BDT stack address: 192.168.20.100:8060, tcp & udp
- Remote device's desc: u1.desc
- Remote device opens a stream on VPort 2023
- SNs' desc: sn1.desc & sn2.desc

1. ping

```shell
./bdt-tool --ep L4udp192.168.20.100:8060 --ep L4tcp192.168.20.100:8060 --sn sn1.desc --sn sn2.desc --cmd ping u1.desc 3 3
```

Results:

```shell
ping success, rtt is 7.46 ms
ping success, rtt is 1.54 ms
ping success, rtt is 1.58 ms
```

2. nc

```shell
./bdt-tool --ep L4udp192.168.20.100:8060 --ep L4tcp192.168.20.100:8060 --sn sn1.desc --sn sn2.desc --cmd nc u1.desc 2023
```

Results:

```shell
connect vport=2021 success!
```

If you want to see more information about the running process, you can add the log_level parameter, like "--log_level info".
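For the cyfs-monitor use case discussed earlier, one lightweight approach is to shell out to bdt-tool and match its success lines, rather than linking cyfs-bdt directly. A sketch under those assumptions; the endpoint/SN arguments mirror the example above, and `count_successes`/`ood_reachable` are names invented here:

```rust
use std::process::Command;

// Decide connectivity from bdt-tool's ping output as shown above:
// "ping success, rtt is 7.46 ms" counts as one success line.
fn count_successes(stdout: &str) -> usize {
    stdout.lines().filter(|l| l.starts_with("ping success")).count()
}

// Shell out to bdt-tool; true if at least one ping succeeded.
// Paths and arguments here follow the example above and are assumptions.
fn ood_reachable(desc: &str) -> std::io::Result<bool> {
    let out = Command::new("./bdt-tool")
        .args(["--ep", "L4udp192.168.20.100:8060", "--sn", "sn1.desc",
               "--cmd", "ping", desc, "3", "3"])
        .output()?;
    Ok(count_successes(&String::from_utf8_lossy(&out.stdout)) > 0)
}

fn main() {
    match ood_reachable("u1.desc") {
        Ok(true) => println!("OOD online"),
        Ok(false) => println!("OOD offline"),
        Err(e) => eprintln!("failed to run bdt-tool: {}", e),
    }
}
```

Using the Pinger interface from cyfs-bdt (as suggested above) avoids the subprocess entirely, at the cost of linking the stack into the monitor.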

lizhihongTest commented 1 year ago

@jing-git I think you should put our beta and nightly SNs into the tool; other people may not know our complete SN list. You can set the beta SN list as the default, and use a channel param to configure it. beta.zip nightly.zip

jing-git commented 1 year ago

So should we publish the beta & nightly desc files?

> @jing-git I think you should put our beta and nighlty SN into the tool, other people may not know our complete SN List,You can set beta sn list default ,and use channel param to config it . beta.zip nightly.zip

Of course they can be integrated into the tool. @weiqiushi How does the CYFS protocol stack determine the beta and nightly environment and then select the corresponding SN server? I will refer to that implementation :)

lizhihongTest commented 1 year ago

@jing-git @lurenpluto I don't think the current bdt-tool is friendly for novice users diagnosing the BDT network.

Just like the traditional ping tool, there is no need for users to input their own IP and DNS host, as this is handled internally by the tool.

Similarly, endpoints and SN lists can be embedded in bdt-tool, so they need not be input.


lurenpluto commented 1 year ago

> Hello, you can use easier-to-use tool right now. It provides ping and nc functions to quickly check the connectivity of the peer device and has only one program.
>
> compile: cargo build --release -p bdt-tool
>
> Command line: ./bdt-tool --ep BDTStackAddress [--sn SN's Desc, default is sn-miner.desc] [--log_level none/info/debug/error, default is none] --cmd [ping/nc parameters]
>
> Examples: Running Environment: Local BDT Stack address: 192.168.200.100:8060 tcp&udp Remote device's desc: u1.desc Remote device open a stream on VPort 2023 SNs' desc: sn1.desc & sn2.desc
>
> 1. ping ./bdt-tool --ep L4udp192.168.20.100:8060 --ep L4tcp192.168.20.100:8060 --sn sn1.desc --sn sn2.desc --cmd ping u1.desc 3 3 Results: ping success, rtt is 7.46 ms ping success, rtt is 1.54 ms ping success, rtt is 1.58 ms
> 2. nc ./bdt-tool --ep L4udp192.168.20.100:8060 --ep L4tcp192.168.20.100:8060 --sn sn1.desc --sn sn2.desc --cmd nc u1.desc 2023 Results: connect vport=2021 success!
>
> If you want to see more information about the running process, you can add the parameter log_level, like "--log_level info".

One of the key principles of our tool design is to put ourselves in the user's shoes: make the tool simple enough to use, and not prone to ambiguity and confusion.

The command line of the tool should be improved from the following perspectives:

1. The --ep parameter is too complex

It uses BDT's internal endpoint encoding format. Of course, this is the most accurate thing a user can provide, but general users do not know what it means. For supplying protocol + IP + port as parameters, you can refer to how many mature tools such as nc accept them. --ep can be kept as an advanced parameter for advanced and skilled users, but general users need friendlier parameters; otherwise this internal encoding format is hard to spell correctly.

2. Need to support device

Our BDT protocol connection is based on device.desc; the endpoint list is only part of it. So the most intuitive way for a BDT tool to test a connection is to specify device.desc or device_id directly, which is the command line most consistent with the BDT protocol. PS: If you specify only device_id, you need a process for finding device.desc; you can use the meta chain and SN to find the target device.desc.

3. sn_list selection

If SN is not explicitly specified in the parameters, then use the SN list on the corresponding meta chain; we also have a fixed logic to pull the corresponding SN list.
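The fallback described in point 3 can be sketched as a simple precedence rule: explicit --sn arguments win, otherwise pull the list from the meta chain. `resolve_sn_list` and `fetch_from_meta_chain` are illustrative names, not the real cyfs-bdt API:

```rust
// Illustrative stand-in for pulling the SN list from the meta chain.
fn fetch_from_meta_chain() -> Vec<String> {
    vec!["sn-chain-1.desc".to_string(), "sn-chain-2.desc".to_string()]
}

// Explicit --sn arguments take precedence; otherwise fall back to the chain.
fn resolve_sn_list(cli_sns: Vec<String>) -> Vec<String> {
    if cli_sns.is_empty() {
        fetch_from_meta_chain()
    } else {
        cli_sns
    }
}

fn main() {
    // No --sn given: the tool would pull the list from the meta chain.
    println!("{:?}", resolve_sn_list(vec![]));
    // Explicit --sn given: use exactly what the user specified.
    println!("{:?}", resolve_sn_list(vec!["sn1.desc".to_string()]));
}
```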

jing-git commented 1 year ago

New features have been added to bdt-tool; you can update the code and use it after compiling.

Usage examples:

1. ping

Try to ping device_id 5aSixgLro2HUD6djQRT82dLdXNBjF7trBUczRu7piH9A 3 times, with a timeout of 5 sec:

```shell
./bdt-tool --cmd ping 5aSixgLro2HUD6djQRT82dLdXNBjF7trBUczRu7piH9A 3 5
```

2. nc

Try to connect device_id 5aSixgLro2HUD6djQRT82dLdXNBjF7trBUczRu7piH9A on vport 2021:

```shell
./bdt-tool --cmd nc 5aSixgLro2HUD6djQRT82dLdXNBjF7trBUczRu7piH9A 2021
```

User manual handbook