Closed: ANtlord closed this issue 2 years ago
I don't have "proof" (nor have I done a systematic investigation), but I also subjectively noticed a slowdown of either pyls or jedi recently. I found that
python-language-server==0.31.10 jedi==0.15.2
doesn't have those slowdowns. Maybe you can check whether you see the same and compare profiles against those versions?
It doesn't help for a script consisting of `import os; os.` when I switch to those versions.
I also noticed that when using jedi for completion directly, it is only slow on the very first completion, whereas pyls just stays slow even after the first completion.
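The cold-first-call / warm-later-calls pattern described above can be reproduced with a minimal stdlib sketch (this mimics the shape of jedi's in-process caching, not its actual internals): a toy completer whose results are memoized, so only the first call pays the full cost.

```python
import functools
import importlib

@functools.lru_cache(maxsize=None)
def complete_attribute(module_name, prefix):
    """Toy completer: list attributes of a module matching a prefix.

    The lru_cache makes repeat calls for the same (module, prefix)
    near-instant, mimicking a warm in-process completion cache.
    """
    module = importlib.import_module(module_name)
    return tuple(name for name in dir(module) if name.startswith(prefix))

# First call imports and scans the module; later identical calls hit the cache
# and even return the very same tuple object.
print("path" in complete_attribute("os", "pa"))  # → True
```

If pyls spawned work per request instead of reusing such a warm cache, every completion would look like the "first" one, which matches the symptom reported above.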
True, true. As I wrote in the PR that shows the bottlenecks, the issue is related to a lot of "unnecessary" information: pyls fetches documentation and references for symbols (methods, fields, etc.) from Jedi, which could be reasonable if, for example, you show them within a completion popup as VSCode does. Anyway, fetching all of that is much slower than fetching the symbols only. Maybe there is a better way to use the Jedi API, but I don't know it.
Is it possible to utilize this setting of jedi: `jedi.settings.call_signatures_validity`? It looks like a good way to keep the cache in memory for a longer period of time, therefore allowing faster autocompletion. I have asked a related question at the jedi repo as well; here is the link: https://github.com/davidhalter/jedi/issues/1679#issue-719378332
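For context, a validity window like this is essentially a time-to-live (TTL) cache: a cached result is reused as long as it is younger than the validity period. A minimal stdlib sketch of that idea (the class and names here are hypothetical, not jedi's actual implementation):

```python
import time

class TTLCache:
    """Cache whose entries stay valid for `validity` seconds."""

    def __init__(self, validity=3.0, clock=time.monotonic):
        self.validity = validity
        self.clock = clock   # injectable clock, handy for testing
        self._store = {}     # key -> (timestamp, value)

    def get_or_compute(self, key, compute):
        now = self.clock()
        entry = self._store.get(key)
        if entry is not None and now - entry[0] < self.validity:
            return entry[1]          # still fresh: skip the expensive call
        value = compute()
        self._store[key] = (now, value)
        return value

# Usage: an expensive "signature" lookup is recomputed only after expiry.
calls = []
cache = TTLCache(validity=3.0)
cache.get_or_compute("os.open", lambda: calls.append(1))
cache.get_or_compute("os.open", lambda: calls.append(1))
print(len(calls))  # → 1 (second lookup served from cache)
```

A longer validity trades freshness (stale signatures after an edit) for speed, which is exactly the trade-off the question raises.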
Here is some additional profiling using a modified `test_numpy_completions` test case on Python 3.6:
The very first (zeroth) run after installation can take as long as 25 seconds (!) on a machine with 12 cores, huge RAM and a fast SSD. Then it comes down to a quite predictable ~12 seconds on the first run and ~6 seconds on consecutive runs (thanks to the Jedi cache). I also ran it with jedi 0.18 (currently pyls is not compatible with it, but it is possible to run this test case after changing the version requirements), and there might be an improvement, but not a huge one:
| | jedi 0.17.2 | jedi 0.18.0 |
|---|---|---|
| first run | 12.5s | 9.58s |
| second run | 6.81s | 6.54s |
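For anyone who wants to reproduce this kind of measurement, a minimal `cProfile` harness looks like this (the profiled function is a cheap stand-in, not the actual numpy completion test):

```python
import cProfile
import io
import pstats

def expensive_completion():
    # Stand-in for the real completion call being profiled.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
expensive_completion()
profiler.disable()

# Sort by cumulative time and show the top entries, as done when hunting
# for hot spots like get_signatures().
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("expensive_completion" in report)  # → True
```

Sorting by cumulative time is what surfaces calls like `_label()` that are cheap themselves but trigger expensive work underneath.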
Without going into the details, the conclusion is in agreement with what @ANtlord described in https://github.com/palantir/python-language-server/pull/826: the `get_signatures()` call made in `_label()` is expensive. While my pull request (https://github.com/palantir/python-language-server/pull/905) eliminates the need to call `Completion.docstring()` for all the suggestions at once, the `_label()` slowness is not yet addressed. I believe it is in users' interest to be able to turn the enhanced label off, as it slows completion enormously. The new LSP version 3.16 allows resolving the label for a single item only, using `completionItem/resolve`; it would be optimal to defer this expensive operation that way.
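The deferral described above can be sketched as a two-phase API. This is a hedged illustration of the shape of `textDocument/completion` plus `completionItem/resolve`, with hypothetical helper names, not pyls's real code:

```python
def list_completions(names):
    """Phase 1 (textDocument/completion): return cheap items only --
    no signatures, no docstrings."""
    return [{"label": name} for name in names]

def resolve_completion(item, docstrings):
    """Phase 2 (completionItem/resolve): fill in expensive fields for
    the single item the user actually highlighted."""
    resolved = dict(item)
    resolved["documentation"] = docstrings.get(item["label"], "")
    return resolved

# Only the selected item pays for the expensive documentation lookup;
# the other N-1 suggestions never do.
docs = {"open": "Open a file and return a file object."}
items = list_completions(["open", "close", "read"])
picked = resolve_completion(items[0], docs)
print(picked["documentation"])  # → Open a file and return a file object.
```

The win is proportional to the number of suggestions: one expensive lookup instead of one per candidate.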
The upstream slowness is being tracked in https://github.com/davidhalter/jedi/issues/1059, I believe.
Hello!
I have a weird issue with completion. In a simple script
import os; os.
completion takes about a second or a second and a half. It happens only if I use this language server; language servers for other languages work fine. The first thing I did was check the speed of Jedi itself. It shows quite fine results: the first completion takes 0.69s, the second one takes 0.12s. When I try to get completion in my editor (Neovim), it takes about that long every time. It looks like the Jedi cache is ignored somehow, or the language server does something else.
The second thing I tried is tracing system calls with `strace`. I get the following statistics of system calls when the cursor stands after the dot at the end of the string `import os; os.`:
The total time is 0.000725s, which is quite good too. The only thing I care about is the number of `openat` calls: some process tries to open a lot of `.pyi` files which don't exist, but I'm not sure if that is the cause of the problem. The third thing I tried is using VSCode, but unfortunately I can't figure out how to install the language server for that editor.
Unfortunately, I don't know the protocol, so I can't measure the response time of the language server, and I don't know what else I can check to find the bottleneck.
Tech info: Python 3.8.3, Linux kernel 5.6.19-300.fc32.x86_64, OS: Fedora 32, Editor: Neovim 0.4.3, Language client: https://github.com/autozimu/LanguageClient-neovim
Jedi benchmark