Wow, I was going to start explaining why swiper--candidates can't go any faster unless I substantially change the ivy-read interface, but I'm getting to the minibuffer in less than 0.2 seconds for the same file. What are your Emacs version and computer specs (SSD?)?
OK, I just did an emacs -q, loaded only swiper, then brought up the file, and it works fine like you described. Apparently some other package is interfering with candidate generation. I'll poke around and see what else it could be. There are a LOT of potential bottlenecks because I'm a bit package crazy. :|
And I'm on Emacs 24.4.1 on a quad-core machine with an SSD and 16GB of RAM, so hardware shouldn't be an issue.
OK, well, I just opened it with everything loaded and now it's running quickly too. So now I'm suspecting a leak somewhere. I'm in this file a LOT lately, so I'll see if it happens again today, and if not I'll close this at the end of the day.
Closing as invalid for now; there's definitely a leak somewhere. I managed to hit the long wait multiple times again today, but still can't pinpoint why, and it makes EVERYTHING slow, so there's no reason to lay the blame here. Sorry about that. :)
FWIW, I encounter the same issue with an 11,000-line TeX file.
Here's what the profiler says:
- command-execute                       5145  98%
 - call-interactively                   5145  98%
  - funcall-interactively               4951  95%
   - swiper                             4941  94%
    - swiper--ivy                       4941  94%
       swiper--candidates               1708  32%
     - ivy-read                          650  12%
      - read-from-minibuffer             640  12%
       - #<compiled 0xb39ad3>            317   6%
        - ivy--minibuffer-setup          317   6%
         - ivy--exhibit                  317   6%
          + ivy-completions                4   0%
            swiper--update-input-ivy       3   0%
       + ivy--exhibit                    314   6%
       + minibuffer-inactive-mode          3   0%
      + ivy--preselect-index              10   0%
     - swiper--init                      406   7%
        line-number-at-pos               406   7%
       line-number-at-pos                307   5%
   + execute-extended-command             10   0%
  + byte-code                            194   3%
 + ...                                    58   1%
+ preview-move-point                       1   0%
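(For reference, a CPU profile like the one above can be collected with Emacs' built-in profiler; the commands below are part of Emacs 24.3 and later.)

;; Collect a CPU profile of a slow swiper call.
(profiler-start 'cpu)
;; ... invoke M-x swiper in the slow buffer and wait out the delay ...
(profiler-report)   ; shows a call tree like the one above
(profiler-stop)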
But of course, I can't reproduce that with emacs -Q either. I've tried toggle-debug-on-quit and repeatedly hit C-g during the freeze (and then continued with c). The backtrace always looks like this:
Debugger entered--Lisp error: (quit)
  swiper--candidates()
  swiper--ivy(nil)
  swiper()
  funcall-interactively(swiper)
  call-interactively(swiper nil nil)
  command-execute(swiper)
(benchmark-run (swiper--candidates)) says three seconds.
OTOH, with the same Emacs that is so slow with that TeX file, invoking swiper in Emacs' vhdl-mode.el with its 17,000 LOC is really fast.
@tsdh, can you look at swiper-font-lock-ensure and exclude latex-mode? It could be that font locking eats up the time.
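(A minimal sketch of what such an exclusion could look like; the function name and the mode check are assumptions made here for illustration, and the real swiper-font-lock-ensure may differ.)

;; Hypothetical sketch: skip eager font-locking in modes where it is expensive.
(defun my-swiper-font-lock-ensure ()
  "Ensure the whole buffer is font-locked, except in excluded modes."
  (unless (derived-mode-p 'latex-mode)  ; assumption: exclude (La)TeX buffers
    (if (fboundp 'font-lock-ensure)     ; Emacs 25+
        (font-lock-ensure)
      (with-no-warnings
        (font-lock-fontify-buffer)))))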
@abo-abo No, the buffer is way too big, so s-f-l-e won't do anything.
Now I think I've found the problem. I use flyspell-mode in TeX buffers, which adds overlays marking words with spelling errors (or words which aspell simply doesn't know). When I open the buffer, swiper is there immediately at first. Only after flyspell has kicked in via some idle timer does it get slow.
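(A quick way to test that hypothesis, assuming flyspell is the main source of overlays in the buffer: evaluate the following before and after flyspell's idle timer has run.)

;; Count the overlays currently in the buffer; many thousands of them
;; can make line-by-line buffer traversal noticeably slower.
(length (overlays-in (point-min) (point-max)))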
Yes, that seems very possible. Here's the routine to collect the swiper candidates:
(while (< (point) (point-max))
  (push (format swiper--format-spec
                (cl-incf line-number)
                (buffer-substring
                 (line-beginning-position)
                 (line-end-position)))
        candidates)
  (forward-line 1))
It could be that traversing the buffer with forward-line causes some packages to do some work.
That's at least part of it. (benchmark-run (while (= 0 (forward-line 1)))) reveals that 1.27 seconds of the 3 seconds used by swiper--candidates are caused by the iteration. I guess the rest is caused by building the big list of lines and reversing it.
I did try (split-string (buffer-string)) as an alternative at one point, but found it to be less efficient than the forward-line approach.
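(For anyone who wants to compare the two approaches in their own slow buffer, something like the following can be evaluated there. The "\n" separator is an assumption, since a bare split-string splits on whitespace, and the numbers will vary with buffer size, overlays, and Emacs version.)

;; Approach 1: split the whole buffer string at newlines in one go.
(benchmark-run 10
  (split-string (buffer-string) "\n"))

;; Approach 2: walk the buffer line by line, as swiper--candidates does.
(benchmark-run 10
  (save-excursion
    (goto-char (point-min))
    (let (lines)
      (while (< (point) (point-max))
        (push (buffer-substring (line-beginning-position)
                                (line-end-position))
              lines)
        (forward-line 1))
      (nreverse lines))))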
One other alternative is to never build the list at all, and make the completion candidates into a function that calls re-search-forward over and over again. I haven't tried that yet.
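(A rough sketch of that idea; my-swiper--matching-lines is a hypothetical helper, not part of swiper, and ivy-read would additionally need to be taught to call it with the current input.)

;; Hypothetical: only materialize the lines that match the current input,
;; instead of building the full list of lines up front.
(defun my-swiper--matching-lines (regexp)
  "Return the lines of the current buffer matching REGEXP, with line numbers."
  (let (candidates)
    (save-excursion
      (goto-char (point-min))
      (while (and (not (eobp))
                  (re-search-forward regexp nil t))
        (push (format "%d %s"
                      (line-number-at-pos)
                      (buffer-substring (line-beginning-position)
                                        (line-end-position)))
              candidates)
        ;; Move to the next line so a line with several matches is
        ;; collected only once.
        (forward-line 1)))
    (nreverse candidates)))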
Oleh Krehel notifications@github.com writes:
> One other alternative is to never build the list at all, and make the completion candidates into a function that calls re-search-forward over and over again. I haven't tried that yet.
Yeah, some kind of dynamic and lazy candidate computation would be nice. Currently, having to re-compute all candidates with each swiper call is quite a bit of overhead. But caching the candidates as long as the buffer is not modified is also bad, because it's likely to require lots of memory.
Bye, Tassilo
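(A hypothetical sketch of the caching idea being discussed, keyed on buffer-chars-modified-tick so that any edit to the buffer invalidates the cache; the names here are made up and this is not part of swiper.)

;; Cache the candidate list per buffer until the buffer is modified.
(defvar-local my-swiper--candidate-cache nil
  "Cons of (MODIFIED-TICK . CANDIDATES) for the current buffer.")

(defun my-swiper--cached-candidates (compute-fn)
  "Return cached candidates, calling COMPUTE-FN only when the buffer changed."
  (let ((tick (buffer-chars-modified-tick)))
    (if (and my-swiper--candidate-cache
             (eql (car my-swiper--candidate-cache) tick))
        (cdr my-swiper--candidate-cache)
      (let ((candidates (funcall compute-fn)))
        (setq my-swiper--candidate-cache (cons tick candidates))
        candidates))))

As Tassilo notes, the trade-off is memory: for very large buffers the cached list of lines can itself be substantial.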
FYI, I am trying to work on a file with 2.3M lines (XML), so lazy evaluation of candidates would help a lot!
In this case, excessively large means a 16k LOC Python file (yeah, I'm not happy about it either, but this is why I'm paid to work on these things :) )
File in question:
http://hg.mozilla.org/mozilla-central/file/840cfd5bc971/dom/bindings/Codegen.py
Just running swiper in this file can take ~5 seconds to hit the minibuffer. Profiler output (taken only calling swiper) looks like:
isearch-forward/backward tend to come up instantly, though they still lag while doing back/forward movement, because, well, it's 16k lines of Python. :|