Closed — jimaek closed this 2 months ago
`continuous` is the word generally used for this, so that might be the best name, but it's a little hard to type - though with autocomplete, that might be fine? Alternatively, I would suggest `--infinite`, as you used in the issue title, instead of non-stop.
As for `-t`, it might be familiar to Windows users but not to Linux/Mac users, since continuous mode is the default there and `-t` is used for TTL instead. In this case, I'd suggest just not adding the alias.
Once it gets close to the last packet - maybe just before the last one - it will start a new measurement in the background and then stitch the results together to emulate a single continuous test. Something like this makes sense but will need testing; maybe it will need to be started a bit sooner.
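The "start a bit sooner" question above boils down to a single tunable: how many packets before the end of the current measurement the follow-up should be kicked off. A minimal sketch of that decision, with an assumed `leadPackets` knob (the name and default are illustrative, not a decided spec):

```go
package main

import "fmt"

// shouldStartNext reports whether the CLI should kick off the follow-up
// measurement, given how many packets of the current one have arrived.
// Starting leadPackets early hides the API round trip between measurements,
// which is what makes the stitched output look like one continuous ping.
func shouldStartNext(received, total, leadPackets int) bool {
	return received >= total-leadPackets
}

func main() {
	total := 16 // packet count per measurement; an assumed example value
	for received := 14; received <= total; received++ {
		fmt.Printf("received=%d startNext=%v\n",
			received, shouldStartNext(received, total, 1))
	}
}
```

If testing shows visible gaps, only `leadPackets` needs to change, not the stitching logic itself.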
Also, note that this should error out when combined with `--json` or `--latency`.
> `continuous` is the word generally used for this, so that might be the best name, but it's a little hard to type - but with autocomplete, that might be fine? Alternatively I would suggest `--infinite` as you used in the issue title instead of non-stop.
Yes, it's too hard to type; that's why I used non-stop, but infinite works too.
Then with that change, this task seems ready.
Ready but the API will accept these requests only after https://github.com/jsdelivr/globalping/pull/453 is merged.
@jimaek
1/ If I understand this correctly, the `--infinite` option runs a first probe and prints measurements as usual, but instead of exiting afterwards, it runs another probe using the ID from the first one and then prints the new results? Do you think it's compatible with the "live view" we currently have? As a reminder, the live view works by continuously replacing the on-screen results with the temporary results, then at the end it deletes the temporary results and prints everything. So if we run a second probe, it might look quite confusing to the user.
2/ What does "max packet limit" mean?
3/ I'm not sure I understand what "stitch the results together to emulate a single continuous test" means here.
Maybe we can go with `--repeat`/`-R` and combine it with `--interval`/`-I`?

`--repeat 0` (infinite), `--repeat 1`, `--repeat 2` ... `--repeat N`
and `--interval 5s`, `--interval 1m` ...
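The proposed semantics above could be sketched as a single loop; the `runRepeated` name and the `repeat == 0` meaning "forever" are assumptions taken from the examples, not final flag behavior:

```go
package main

import (
	"fmt"
	"time"
)

// runRepeated sketches the proposed --repeat/--interval semantics:
// repeat == 0 means run forever, otherwise run exactly repeat times,
// sleeping interval between consecutive runs (but not before the first).
func runRepeated(repeat int, interval time.Duration, run func(i int)) {
	for i := 0; repeat == 0 || i < repeat; i++ {
		if i > 0 {
			time.Sleep(interval)
		}
		run(i)
	}
}

func main() {
	// Example: --repeat 3 with a zero interval for demonstration.
	runRepeated(3, 0, func(i int) {
		fmt.Println("measurement", i+1)
	})
}
```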
I am not sure people will get the `--repeat` option, since non-stop ping is the default behavior on Linux and Mac. But I do like the `--interval` idea. It should be optional though: the default would be instant (whatever ms we decide works best), and the user can override it with some other value.
We could add both `--infinite` and `--interval`, but note that the implementation must be different for each of them.
For `--infinite`, the goal is to emulate the default Linux behavior and make it look like one continuous ping. To do that, we'll likely want to start a second measurement shortly before the first one finishes, as @jimaek mentioned, to avoid any visible delays in between. `--interval`, if added, should really just be "wait for the measurement to finish, wait N seconds, run it again".
^ Sounds good to me. We just need to properly document the behavior in the CLI and the readme.
Should we use the same view to display the results continuously or do you have something else in mind?
Great question. @jimaek, how do you expect this to work? For one probe, we could do the same as native ping, but for more than one it doesn't quite work, and we'd need to keep updating the output in the visible part of the terminal.
One after the other is indeed not very useful. If the user requests more than one probe, can we collapse the UI to something else? Like only showing summaries per probe, one per row, instead of the raw output - basically a CLI version of https://ping.pe/google.com
If we could get users of sites like the one above to switch to our CLI and not feel a need to use the website, it would be a huge win for us.
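The "one summary row per probe" idea could look something like the sketch below. The `probeStats` fields and column layout are illustrative assumptions; a real live view would additionally move the cursor up and reprint these fixed-width rows in place on each update:

```go
package main

import "fmt"

// probeStats is a hypothetical per-probe summary, one row per probe,
// similar to the per-location rows on ping.pe.
type probeStats struct {
	location      string
	sent, lost    int
	min, avg, max float64 // round-trip times in ms
}

// lossPercent computes packet loss as a percentage of packets sent.
func lossPercent(lost, sent int) float64 {
	return float64(lost) / float64(sent) * 100
}

// formatRow renders one probe's summary as a fixed-width table row, so a
// live view can overwrite the same lines on every refresh.
func formatRow(r probeStats) string {
	return fmt.Sprintf("%-20s %5d %5.0f%% %8.1f %8.1f %8.1f",
		r.location, r.sent, lossPercent(r.lost, r.sent), r.min, r.avg, r.max)
}

func main() {
	fmt.Printf("%-20s %5s %6s %8s %8s %8s\n",
		"Location", "Sent", "Loss", "Min", "Avg", "Max")
	rows := []probeStats{
		{"Amsterdam, NL", 10, 0, 8.1, 9.2, 11.0},
		{"Tokyo, JP", 10, 1, 210.4, 215.9, 230.2},
	}
	for _, r := range rows {
		fmt.Println(formatRow(r))
	}
}
```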
With the probe-reusing functionality on the API level, we can now emulate a continuous non-stop ping. Maybe a new ping-specific parameter? On Windows that would be `-t`.
How about we add `--nonstop`/`-t`? If enabled, the CLI will:
Additionally: