henrycatalinismith closed this issue 10 years ago
Hey,
Has anyone else out there found that ppl generally takes a few hundred milliseconds longer to execute than they'd like? I primarily use ppl on two different computers: one with an SSD, and one with a regular spinning-platter hard drive. Paradoxically, I find that ppl runs like a sick old horse on the machine with the SSD, and comparatively quickly on the one with the regular hard drive.
A fairly typical running time on the SSD machine comes in at almost two seconds, and that's just the plain ppl command which displays the help text! If I run ppl ls, it's over two seconds! I'll amend this issue with some figures from the machine with the regular hard drive as soon as possible, but in the meantime, is anybody else suffering with performance as poor as this?
I haven't noticed any speed issues in most cases. This is on a generally slow machine with a regular drive:
$ time ppl > /dev/null
real 0m0.358s
user 0m0.283s
sys 0m0.043s
$ time ppl ls > /dev/null
real 0m1.073s
user 0m0.640s
sys 0m0.047s
Where I have noticed a little slowness is when using it in Mutt (when compared to abook).
$ time ppl mutt -i a > /dev/null
real 0m0.711s
user 0m0.637s
sys 0m0.037s
$ time abook --mutt-query a > /dev/null
real 0m0.003s
user 0m0.000s
sys 0m0.000s
I checked the timing again a few minutes later, and the results were different.
$ time ppl
real 0m1.186s
user 0m1.140s
sys 0m0.040s
I've spent a little time poring over ruby-prof output, and by and large the culprit seems to be the IniFile gem, which was somewhat unexpected. Check out the top few methods by time consumed.
%self total self wait child calls name
42.39 1.009 0.509 0.000 0.500 38 IniFile#parse!
10.45 0.125 0.125 0.000 0.000 4864 StringScanner#skip
10.34 0.124 0.124 0.000 0.000 15542 StringScanner#scan
4.13 0.050 0.050 0.000 0.000 1900 StringScanner#scan_until
2.46 0.088 0.030 0.000 0.058 1520 IniFile#process_property
2.13 0.026 0.026 0.000 0.000 574 Hash#initialize
1.73 0.021 0.021 0.000 0.000 1610 Hash#[]=
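For reference, a profile like the one above can be gathered with something along these lines. This is only a sketch: Ppl::Application is a hypothetical stand-in for whatever code path you actually want to measure, not ppl's real entry point.
require "ruby-prof"

result = RubyProf.profile do
  # Exercise the code under measurement here; Ppl::Application is a
  # made-up placeholder for the real work being profiled.
  Ppl::Application.new.run
end

# Print the flat report, i.e. the %self/total/self/wait/child/calls/name
# columns shown above.
RubyProf::FlatPrinter.new(result).print($stdout)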
In fact, if I replace the IniFile gem with IniParse, performance improves quite noticeably.
$ time ppl
real 0m0.576s
user 0m0.524s
sys 0m0.048s
%self total self wait child calls name
5.43 0.023 0.023 0.000 0.000 5320 Regexp#match
3.55 0.115 0.015 0.000 0.100 1120 <Class::IniParse::Parser>#parse_line
3.50 0.022 0.015 0.000 0.007 1120 IniParse::Lines::Line#initialize
2.50 0.249 0.011 0.000 0.238 1702 *Array#each
2.49 0.036 0.011 0.000 0.026 840 IniParse::OptionCollection#<<
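For illustration, the swap amounts to roughly the following. This is a sketch rather than ppl's actual code, and the config path is only an example:
require "inifile"
require "iniparse"

path = File.expand_path("~/.pplconfig")

# Old: the IniFile gem, whose IniFile#parse! dominated the profile above.
old_config = IniFile.load(path)

# New: IniParse, which parses the same data noticeably faster.
new_config = IniParse.parse(File.read(path))

# Both libraries offer hash-like access, so reads keep the same shape,
# e.g. config["section"]["key"].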
The performance boost is noticeable enough that I may have to look into the feasibility of making this change permanent in a future version.
Another good reason to move to IniParse is that it claims to be able to write changes to an INI file without disrupting the ordering of the contents or removing any comments. This could be instrumental in enabling the creation of a ppl config command analogous to Git's own git config. It'd be very nice to be able to replace that ugly echo path = "`pwd`" >> ~/.pplconfig hack in the quick start guide with built-in functionality.
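The shape of such a command might be something like the sketch below. The plumbing and the section/key names are assumptions rather than real ppl code; the important part is that IniParse's document API makes a comment-preserving write look feasible:
require "iniparse"

# Hypothetical implementation of a "ppl config <section> <key> <value>"
# command. Assumes the section already exists in the file.
def ppl_config_set(path, section, key, value)
  document = IniParse.parse(File.read(path))
  document[section][key] = value
  # Serialise back to disk; IniParse keeps comments and ordering intact.
  File.write(path, document.to_ini)
end

# The built-in equivalent of the quick start guide's echo hack:
ppl_config_set(File.expand_path("~/.pplconfig"), "adb", "path", Dir.pwd)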
The change in 1.22.0 is probably a sufficient performance boost for now. I'm still not too satisfied with the speed in general, but it's definitely an improvement.
I'm not sure if this is due to changes in ppl, or because I have more data in ppl, or because I'm using ppl more, but this is definitely more noticeable now. I'm at the point where I can say that I'm dissatisfied with the performance.
$ time ppl > /dev/null
real 0m0.993s
user 0m0.880s
sys 0m0.060s
$ time ppl ls > /dev/null
real 0m1.652s
user 0m1.547s
sys 0m0.063s
$ time ppl mutt -i a > /dev/null
real 0m1.615s
user 0m1.510s
sys 0m0.053s
The above timings were taken on the same machine as the previous statistics.
I apologise for ignoring all your input for the last 23 days! I took on some new responsibilities at work and while I acclimatised to the new workload I didn't really feel like working on software in my free time.
Anyway, yes, I agree: performance is nowhere near where it should be. I now think it's because of rubygems. I've been looking into other projects that use Ruby to power a CLI application, and they largely avoid rubygems for performance reasons.
For example, Heroku:
Going forward we will be sunsetting support for the heroku gem in favor of the Toolbelt. The Toolbelt is much faster, shaving several seconds off the startup of each heroku command.
Also, hub:
Though not recommended, hub can also be installed as a RubyGem. It's not recommended for casual use because of the RubyGems startup time.
This is going to be my next avenue of enquiry in the quest for satisfactory performance.
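Concretely, the fix would probably be a launcher in the spirit of hub's standalone script: start Ruby with --disable-gems so rubygems never loads, and put any vendored dependencies on the load path by hand. The directory layout and entry point below are assumptions, not ppl's real structure:
# bin/ppl, invoked as: ruby --disable-gems bin/ppl
# Build the load path manually instead of letting rubygems resolve it.
$LOAD_PATH.unshift File.expand_path("../lib", File.dirname(__FILE__))
$LOAD_PATH.unshift File.expand_path("../vendor/iniparse/lib", File.dirname(__FILE__))

require "ppl"
Ppl::Application.new.run(ARGV) # hypothetical entry point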
bump
I have 350 addresses by now and am working on a four-year-old MacBook Pro.
$ time ppl > /dev/null
real 0m0.451s
user 0m0.404s
sys 0m0.044s
$ time ppl ls > /dev/null
real 0m3.184s
user 0m3.132s
sys 0m0.036s
$ time ppl mutt -i a > /dev/null
real 0m3.204s
user 0m3.144s
sys 0m0.036s
I tried cutting my address book down to fewer contacts to see what would happen (in the end it was 179):
$ time ppl > /dev/null
real 0m0.453s
user 0m0.432s
sys 0m0.016s
$ time ppl ls > /dev/null
real 0m1.846s
user 0m1.780s
sys 0m0.060s
$ time ppl mutt -i a > /dev/null
real 0m1.851s
user 0m1.804s
sys 0m0.036s
I'm afraid that's simply too slow :( And it looks like it scales with the number of contacts and is not just a startup issue: subtracting the ~0.45s baseline from the ppl ls times leaves roughly 8ms per contact in both runs (2.73s for 350 contacts, 1.39s for 179).
Man, I'm so sorry about this problem. I wish ppl's performance scaled better with address book size. But as I'm unlikely to undertake the significant reworking that would be necessary to fix this problem any time soon, I'm going to close this issue so that my issue queue only reflects new or ongoing issues.
Oh, don't worry.
Some time ago I started a project called pplqq ( https://github.com/axelGschaider/pplqq ) that is supposed to be a drop-in replacement for ppl mutt. I started off in Haskell and achieved impressive times, then suffered some data loss, tried something in Scala (with rather bad results due to the JVM's startup time), and since then haven't found the time (things move slowly with a newborn daughter . . . ).
So: at some point in the future I might have a little helper program.