shenwei356 / rush

A cross-platform command-line tool for executing jobs in parallel
https://github.com/shenwei356/rush
MIT License

Support reading records via stdin #22

Open fungs opened 5 years ago

fungs commented 5 years ago

Hi, great tool! I like all of your *kit programs.

One thing I use a lot in GNU parallel is the --pipe option, where the records are divided into blocks and provided to the commands via stdin. This is very useful when single commands work on a large number of records and stdin is better than command line arguments, which have size restrictions. rush can use an explicit number of records, which I sometimes prefer and which GNU parallel cannot do, because there the block size is defined by (approximate) data size for performance reasons.
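For illustration, a typical GNU parallel --pipe call looks something like this (the block size is an approximate data size, not a record count; command_reading_stdin is just a placeholder for any tool that consumes records on stdin):

cat records.txt | parallel --pipe --block 10M 'command_reading_stdin' > output.txt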

Is there any chance this feature makes it into rush (I couldn't find it)?

I'm aware that this kind of circumvents the whole custom field and parameter assignment part, but maybe you can fit it in smoothly by using a Bash-like named pipe syntax to turn records and fields into virtual files using FIFOs. For instance

rush -n 1000 'command < <{2}' < records.txt

could provide the second field of records.txt as a file. The syntax should, of course, not clash with common Shell syntax. This example was just for illustration purposes.

Best, Johannes

shenwei356 commented 5 years ago

Sorry Johannes, I'm a little confused about what you'd like to do :crying_cat_face: , could you please give a more specific example?

fungs commented 5 years ago

I would like to feed groups of records to the commands via standard input, not via command line parameters.

shenwei356 commented 5 years ago

Here's a simple example. But there's a length limit when passing records to stdin via echo.

$ seq 5 | rush -n 2 -k 'echo "{}" | cat ; echo'
1
2

3
4

5
fungs commented 5 years ago

The difference in your example is that echo itself does not read via standard input, so the records still go through the command line.

Specific example, yea :) ...

Consider downloading 100 million gene sequences by accession: you want to spawn, say, 6 downloaders, give each of them blocks of 10k accessions to download, and have them spit the results out on standard output. Here, one command gets 10k records; trying to provide that as a command line parameter will likely not work (if it does, add zeroes until it doesn't). Smaller blocks would hammer the server.
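With GNU parallel this would look roughly like the following (download_tool is only a placeholder for whatever fetches sequences by accession from stdin; with --pipe, -N sets the number of records per block and -j the number of parallel jobs):

cat accessions.txt | parallel --pipe -N 10000 -j 6 'download_tool' > sequences.fa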

shenwei356 commented 5 years ago

I see. rush can't do that.

But using echo {} to feed one record to the command via stdin each time seems OK to me; the only drawback is that you have to spawn n commands in total. This may reduce performance if it's costly to start up the command.

Anyway, you can split the records into multiple blocks and feed them to commands as you said.
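So something along these lines should already work, assuming {} expands to the whole block of records joined by newlines as in the example above (download_tool is again just a placeholder, and the shell's command-line length limit still applies):

cat accessions.txt | rush -n 1000 -j 6 'echo "{}" | download_tool' > sequences.fa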

fungs commented 5 years ago

Workarounds are possible; I guess this is a convenience feature request. It is just very convenient (and very useful with large data) to feed information via a pipe rather than via command line options. There are many examples of using --pipe in GNU parallel.

kenneth-Q commented 5 years ago

Oh, I have this problem too. You may try this: cat random.img | parallel --pipe --recend '' -k bzip2 --best > randomcompressed.img.bz2. But I cannot find a function like --pipe in rush. Such a feature would be useful for me. How about you?

kenorb commented 2 years ago

I would expect this command to send 100 lines to each instance, but it doesn't:

seq 1 10000 | rush -n 100 'wc -l'

The GNU parallel equivalent:

seq 1 10000 | parallel -l 100 --lb --pipe 'wc -l'
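The closest I can get with rush, going by the earlier comments, is to push each block back through stdin with echo, which is still limited by the shell's command-line length:

seq 1 10000 | rush -n 100 'echo "{}" | wc -l'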