Closed: ai4739 closed this issue 2 years ago
The default Git branch here is master, which is being used as the development branch, so the info you're seeing in the help file refers to functionality of the next version.
(I should probably have some development branch instead, to avoid confusion, but this project wasn't started with all that much consideration)
The help file for v0.3.2 can be found here.
`--auto-slice-size` isn't implemented in v0.3.2, and I recall fixing a bug with `--max-input-slices`, which is probably the issue you're seeing.
So for a fix, you could use the current development version, or try some workaround on your side. Note that development versions aren't as well tested as releases, so you may need to do some extra testing to verify it works correctly.
Okay, I didn't realize that the master here was development as well ... I just cloned the master branch here, re-built, and `--auto-slice-size` works just fine. Thanks so much for your quick response!
I'm struggling with scripting `parpar` to use `-S` ... no matter what options I pass, `parpar` seems to balk at either of these options. I have a friend who built his install manually (Debian), and he claims this option works; however, my installation (Arch) fails to recognize it.
Configuration:
According to the GitHub help file, `-S` is an alias for `--max-input-slices=32768`. However, when I give an initial `-s 400k` and `--max-input-slices=32768` as options (expecting parpar to adjust the slice size for me), `parpar` still fails with "too many input slices".

My input sizes vary, so I'd really like to get `--auto-slice-size` working so I don't have to do the math in bash ... and also, I don't have enough experience to know what a good number of slices is to start with. I've read all of your closed issues and there are some good discussions; however, I wasn't convinced that, say, dividing my input size (in bytes) by 25000 to come up with an initial slice size (in bytes) would cover all foreseeable cases where I would call the script. I guess anything over 25000 bytes would be okay, but then you get into performance issues again: smaller (~100MiB) files don't really need that many slices, and larger files incur too much of a performance hit.
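For what it's worth, the "math in bash" mentioned above can be sketched roughly like this. The 32768 ceiling comes from the `--max-input-slices=32768` alias discussed earlier, and the multiple-of-4 rounding reflects the PAR2 format's slice-size requirement; the input size here is a placeholder, and the exact constraints `parpar` enforces aren't verified in this sketch:

```shell
# Hypothetical bash sketch: derive a slice size that keeps the slice
# count at or under a target (here 32768, the limit referenced by -S).
target_slices=32768
input_bytes=1073741824   # placeholder: e.g. a 1 GiB input; substitute your total
# Divide, rounding up, so that input_bytes / slice_size <= target_slices
slice_size=$(( (input_bytes + target_slices - 1) / target_slices ))
# The PAR2 format requires slice sizes to be a multiple of 4 bytes; round up
slice_size=$(( (slice_size + 3) / 4 * 4 ))
echo "slice size: $slice_size"   # 32768 for this example input
```

The computed value could then be fed back into an invocation along the lines of `parpar -s "$slice_size" -o out.par2 ...` (output name and recovery options here are placeholders, not taken from the thread).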