Open deajan opened 5 years ago
This change is more a codestyle issue. No real speedup there.
If you want to fork less, you should really use shell builtins, e.g.:
user@box:~$ echo TEXT >FILE
user@box:~$ VALUE=
user@box:~$ VALUE=$(<FILE)
user@box:~$ echo $VALUE
TEXT
user@box:~$ VALUE=
user@box:~$ read -r VALUE <FILE
user@box:~$ echo $VALUE
TEXT
In the 1st example we still fork a subshell (though at least we do not fork a separate 'cat' process). In the 2nd example we really use builtins only. Multiline reads need the 1st approach or another one, e.g. a while read -r LINE ... loop.
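A minimal sketch of the while-read approach for multiline files (the file name FILE and the variable names are just illustrative placeholders, not from the codebase being discussed):

```shell
# Create a sample multiline file to read back.
printf 'line one\nline two\n' > FILE

# Accumulate all lines using only builtins; IFS= preserves leading
# whitespace and -r keeps backslashes literal.
LINES=
while IFS= read -r LINE; do
    LINES="${LINES}${LINE}
"
done < FILE

printf '%s' "$LINES"
rm -f FILE
```

Whether this beats a single $(<FILE) in practice depends on file size and shell, since the loop executes one builtin read per line.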
The problem with read -r is that it reads line by line, and I'm not sure that wrapping it in a while loop will actually make things faster.
After thinking about this, I'm not happy with your idea. It is bash-only, hence not portable, and not worth it just to save some CPU cycles.
# git grep '$(cat '|grep -c Logger
93
Saving cycles in a log message is not critical.
# git grep '$(cat '|grep -cv Logger
28
Those other 28 occurrences are also not time-critical (lockfile checking, PID file, keys), IMHO. So better to stay portable... (and in the end, everything should be converted to a POSIX shell script 8-))
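For the single-line cases mentioned above (lockfile, PID file), the forkless approach does not have to be bash-only: read is itself a POSIX builtin, so this pattern stays portable across dash, ash, ksh, and bash. A minimal sketch, with FILE as a placeholder name:

```shell
# Create a sample one-line file, as a PID file or lockfile would be.
printf 'TEXT\n' > FILE

# Read its first line without forking any subprocess; this is pure
# POSIX shell, unlike the bash-only VALUE=$(<FILE).
IFS= read -r VALUE < FILE

echo "$VALUE"    # prints TEXT
rm -f FILE
```

Only the multiline case genuinely needs $(cat FILE) or a while loop in portable shell.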
Apply https://github.com/zfsonlinux/zfs/commit/58aeb87a8f69019487457460a5d7c823910fc8ff