Closed by yyoncho 3 years ago
It might be worth looking into. I've had speed issues with lsp-mode, particularly in JS/TS files when company is gathering completions (second column is in milliseconds):
- ... 6406 99%
Automatic GC 3461 53%
- company-calculate-candidates 2943 45%
- company--fetch-candidates 2943 45%
- company-call-backend-raw 2943 45%
- apply 2943 45%
- company-capf 2943 45%
- company-capf--candidates 2943 45%
- completion-all-completions 2943 45%
- completion--nth-completion 2943 45%
- completion--some 2943 45%
- #<compiled 0xa798ce5> 2943 45%
- completion-basic-all-completions 2943 45%
- completion-pcm--all-completions 2942 45%
- all-completions 2942 45%
- #<lambda 0x77a20ef15b5a50d> 2942 45%
- cond 2942 45%
- funcall 2942 45%
- #<lambda 0x10e9fced85104b1b> 2942 45%
- cond 2942 45%
- let* 2942 45%
- if 2852 44%
- progn 2852 44%
- lsp--log-entry-new 2852 44%
- let* 2852 44%
- ewoc-enter-last 2852 44%
- ewoc-enter-before 2852 44%
- ewoc--insert-new-node 2852 44%
- ewoc--refresh-node 2852 44%
- lsp--log-entry-pp 2852 44%
- let 2852 44%
- let* 2852 44%
- let 2852 44%
- progn 2852 44%
- setq 2852 44%
- concat 2851 44%
- json-encode 2851 44%
- json-encode-array 2845 44%
- mapcar 2843 44%
- json-encode 2843 44%
- json-encode-hash-table 2842 44%
- maphash 2818 43%
- #<compiled 0xc0b4441> 2816 43%
- json-encode 1597 24%
- json-encode-hash-table 1550 24%
- maphash 1540 23%
- #<compiled 0x65152d9> 1538 23%
- json-encode-key 1228 19%
- json-read-from-string 1166 18%
- #<compiled 0xb9a9191> 609 9%
- kill-buffer 592 9%
+ run-hooks 438 6%
+ replace-buffer-in-windows 47 0%
+ tramp-flush-file-function 10 0%
+ helm-kill-buffer-hook 7 0%
+ magit-preserve-section-visibility-cache 5 0%
uniquify-kill-buffer-function 3 0%
+ exwm-layout--other-buffer-predicate 3 0%
process-kill-buffer-query-function 2 0%
+ save-place-to-alist 1 0%
+ generate-new-buffer 527 8%
json-read 12 0%
+ json-encode 12 0%
+ json-encode 286 4%
json-join 2 0%
json-encode-number 18 0%
+ json-encode-array 15 0%
+ json-encode-string 7 0%
+ json-encode-key 1197 18%
+ json-join 3 0%
+ json-encode-list 5 0%
+ let 45 0%
+ lsp-request-while-no-input 39 0%
+ progn 6 0%
+ completion-hilit-commonality 1 0%
+ xcb:-process-events 1 0%
+ #<compiled 0x12261c5> 1 0%
+ timer-event-handler 21 0%
I tried using eglot with the same server and files and found that it took much less time to get all the completions:
*** eglot
- ... 1417 98%
Automatic GC 1169 81%
- company-call-backend-raw 245 17%
- apply 245 17%
- company-capf 245 17%
- company-capf--candidates 244 16%
- completion-all-completions 244 16%
- completion--nth-completion 244 16%
- completion--some 244 16%
- #<compiled 0x580a2f9> 244 16%
- completion-basic-all-completions 244 16%
- completion-pcm--all-completions 244 16%
- all-completions 244 16%
- #<lambda 0xeb204a846a01e1b> 244 16%
- cond 244 16%
- cl-remove-if-not 244 16%
- funcall 242 16%
- #<lambda -0x1ab0b620c7fb462> 242 16%
- if 242 16%
- setq 242 16%
- jsonrpc-request 237 16%
- apply 237 16%
- #<compiled 0xb1a52d1> 236 16%
- sit-for 235 16%
- apply 235 16%
- explain--wrap-sit-for 235 16%
- apply 235 16%
- #<compiled 0x1fd691a41aab> 235 16%
- read-event 229 15%
- jsonrpc--process-filter 222 15%
- jsonrpc-connection-receive 199 13%
- jsonrpc--log-event 198 13%
- pp-to-string 168 11%
- pp-buffer 140 9%
- indent-sexp 113 7%
- lisp-indent-calc-next 59 4%
- calculate-lisp-indent 19 1%
lisp-indent-function 6 0%
forward-sexp 3 0%
forward-sexp 10 0%
indent-line-to 4 0%
down-list 17 1%
up-list 3 0%
forward-sexp 13 0%
- (setf jsonrpc-last-error) 1 0%
- apply 1 0%
- #<compiled 0x8e0fe85> 1 0%
- eieio-oset 1 0%
apply 1 0%
jsonrpc--json-read 22 1%
- generate-new-buffer 1 0%
- get-buffer-create 1 0%
run-hooks 1 0%
- timer-event-handler 6 0%
- apply 6 0%
#<compiled 0x1fd691a9b81d> 3 0%
- explain--measure-timer-callback 2 0%
- explain--measure-function 2 0%
- apply 2 0%
- auto-revert-buffers 2 0%
- apply 2 0%
- auto-revert-buffers--buffer-list-filter 2 0%
- #<compiled 0x313b599> 2 0%
mapcar 1 0%
- auto-revert-handler 1 0%
- dired-buffer-stale-p 1 0%
file-remote-p 1 0%
- explain--measure-idle-timer-callback 1 0%
- explain--measure-function 1 0%
- apply 1 0%
- #<compiled 0x92fd311> 1 0%
flymake--log-1 1 0%
cl-gensym 1 0%
+ #<lambda 0x1deb519307de335a> 1 0%
+ mapcar 5 0%
+ apply 2 0%
+ #<lambda 0xacbf07822beda46> 1 0%
+ #<compiled 0x1fd691ad4ca7> 2 0%
+ save-current-buffer 1 0%
+ timer-event-handler 19 1%
@leungbk Unrelated, but how did you get those times in the profiler?
@leungbk you should not benchmark with logging enabled.
@leungbk And also, lsp-mode is doing flex matching, while in this benchmark eglot is not. If you want to benchmark jsonrpc.el against lsp-mode.el, you should use both APIs to make direct calls.
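A direct comparison along those lines might look like the following sketch. Everything here is an assumption, not a ready-made harness: it presumes a live lsp-mode workspace for the first form and an Eglot-managed buffer for the second, and `eglot--current-server-or-lose` and `eglot--TextDocumentPositionParams` are internal Eglot helpers whose names may differ between versions.

```elisp
;; Time 50 direct completion requests through each client's request API,
;; bypassing company/capf entirely.  Run each form in a buffer managed
;; by the respective client.
(benchmark-run 50
  (lsp-request "textDocument/completion"
               (lsp--text-document-position-params)))

(benchmark-run 50
  (jsonrpc-request (eglot--current-server-or-lose)
                   :textDocument/completion
                   (eglot--TextDocumentPositionParams)))
```

`benchmark-run` returns elapsed time plus GC counts, which also makes the garbage-collection overhead visible in the comparison.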
My mistake, I didn't realize that logging was that expensive. When I have it off, it's noticeably faster.
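For reference, the I/O logging that dominates the first profile (the `lsp--log-entry-new` / `json-encode` subtree) is controlled by a user option; a minimal sketch, assuming a recent lsp-mode where `lsp-log-io` is the relevant variable:

```elisp
;; Don't record client<->server JSON traffic in the *lsp-log* machinery;
;; the profile above shows most of the completion time being spent
;; re-encoding and pretty-printing it.
(setq lsp-log-io nil)
```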
I'll just point out some trivial stuff:
Eglot, contrary to certain parts of Emacs, doesn't ever do any flex matching. The servers do.
Eglot is also running with logging enabled here (it seems to take about 13% of the total time); you can turn it off by setting eglot-events-buffer-size to 0.
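Concretely, the setting mentioned above (a sketch; `eglot-events-buffer-size` is Eglot's user option controlling how much the events buffer records):

```elisp
;; Stop Eglot from recording JSONRPC events, to which the profile
;; attributes roughly 13% of total time (the `jsonrpc--log-event' /
;; `pp-to-string' subtree).
(setq eglot-events-buffer-size 0)
```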
Other than these facts, I can't comment on whether these benchmarks mean anything, but if they do, they are encouraging. Thanks @leungbk.
> Eglot, contrary to certain parts of Emacs, doesn't ever do any flex matching. The servers do.

I take that back. There's a little bit of flex highlighting involved, I think, and it is performed by Emacs, but no actual candidate selection is performed by Emacs. It seems to be irrelevant in the profile anyway.
Given the current backlog, it doesn't seem realistic to address this issue, considering the amount of work that would have to be invested for little to no end-user benefit.
This library will be part of Emacs 27(?) and it is available in ELPA. We should check:
1. How easy it is to integrate it in lsp-mode. lsp-mode supports calling multiple servers when using lsp-request related methods, so we should plug it in without changing the interface.
2. We should test the performance of the library compared to the lsp-mode implementation. I think that the jsonrpc library uses buffers for parsing, while lsp-mode allocates a string of a particular size and then uses the string as a buffer. I am not sure which one performs better, or whether there is a performance difference at all, but AFAIK Emacs's buffer usage is very optimized.
3. We should also consider rewriting the lsp-mode parsing code to use the approach described in 2) if it is faster and if a full replacement of the code turns out to be harder to implement.
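As a rough illustration of point 1, a hypothetical shim that keeps the lsp-request calling convention while delegating the wire protocol to jsonrpc.el. The function name and the `conn` argument are assumptions for illustration, not actual lsp-mode code; `conn` stands for an already-established `jsonrpc-process-connection` to the language server.

```elisp
(require 'jsonrpc)

;; Hypothetical adapter: accept a string method name as lsp-request
;; does, convert it to the keyword symbol jsonrpc.el expects, and
;; issue a synchronous request over CONN.
(defun my/lsp-request-via-jsonrpc (conn method params)
  "Send METHOD with PARAMS over CONN, mirroring `lsp-request'."
  (jsonrpc-request conn (intern (concat ":" method)) params))

;; Usage sketch (in an lsp-mode-managed buffer):
;; (my/lsp-request-via-jsonrpc conn "textDocument/hover"
;;                             (lsp--text-document-position-params))
```

Keeping the string-based method names at the call sites is what would let this be plugged in without changing lsp-mode's existing interface.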
CC @joaotavora