Closed — babakou9 closed this issue 2 weeks ago
One common source of slowness is a key binding conflict. For example, if you have a Normal mode key binding `ab` to do something and a key binding `abc` to do something else, then when you press `ab`, Vim/Neovim will wait about half a second for the letter `c` before executing the command bound to `ab`. But this behavior doesn't seem to match your bug description.
Hello, thank you for your answer. I don't think it is related to key bindings, indeed. And it doesn't happen right at the start, but after loading a few libraries and some data. It's like the environment is overwhelmed, but it doesn't happen with the same script (and library/data loading) in RStudio. So I'm wondering what Rnvimserver is trying to do ...
Could you load the libraries one at a time to find what specific library is causing the delay? Is there a public dataset that we can download to replicate the issue?
I’ll try this weekend to reproduce the error in a sharable format
Hi! Sorry, I wasn't available to do this until now. I tried to reduce the code to a minimum, with an associated dataset (5 MB): https://github.com/babakou9/slowRnvim
It seems that the slowness appears after loading the dataframe with `sf`.
I also added screenshots of the console outputs on my computer for both R.nvim and RStudio in the repo.
EDIT: I noticed that, when I execute the whole file, all the tests are quick in R.nvim, but then, if I try to execute another simple list declaration, it is slow again...
Thank you! I can now replicate the issue. The problem is building the list of objects for unnamed lists. When `nvimcom` can't get the names from a list, it uses the slow `sprintf` function to build bracket "names" (`[[1]]`, `[[2]]`, `[[3]]`, etc.).
We can replicate the issue with this simpler example:

```r
ul1 <- as.list(rnorm(150000))
ul2 <- as.list(rnorm(150000))
ul3 <- as.list(rnorm(150000))
ul4 <- as.list(rnorm(150000))
```
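As a rough illustration of where the time goes (a hedged R sketch — nvimcom actually builds these labels element by element in C, which is what makes it slow for large lists), the bracket "names" for such an unnamed list look like this:

```r
# Illustration only: for an unnamed list, a "[[i]]" label is built for
# every element; with 150,000 elements that is 150,000 label constructions.
ul <- as.list(rnorm(150000))
labels <- sprintf("[[%d]]", seq_along(ul))
head(labels, 3)  # "[[1]]" "[[2]]" "[[3]]"
```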
Neovim will become blocked for even longer if the Object Browser is open (`<LocalLeader>ro`) and we try to open the list.
Unfortunately, I won't have time to try to fix this until the end of next week.
Thanks for investigating this :)
Thanks much! I've also been using sf and noticing this slowdown. Thanks again for all of the work on R.nvim.
As a workaround (removing the feature that causes the bug, not fixing it), I created the branch `unnamed_list` in my fork of R.nvim:
https://github.com/jalvesaq/R.nvim/branches
The bug has two causes:

1. The time `nvimcom` takes to build the list of `.GlobalEnv` objects when there is a huge number of lists with unnamed elements.
2. The time to send the list of `.GlobalEnv` objects through TCP.

As reported by @shaman-yellow, Nvim-R_0.9.17 is faster. However, it is faster because it has a bug that makes it go only 17 levels deep in the `plots` object of the example, while the object has at least 25 levels. So, to fix the bug, I will try to:

1. Make `nvimcom` reduce the number of list levels it inspects if the time to build the list of objects is too long. This was already being done, but the code had a bug making the number of levels go up again.
2. Stop sending the data if the list of `.GlobalEnv` objects is bigger than 1 MB. I will hardcode 1 MB as the limit but, perhaps, we should create an option for it because the size that is too big might depend on the hardware where R.nvim is running. I will create the option if someone prefers a different limit.

The bug should be adequately addressed in the last commit in the branch `unnamed_list`. Now, nvimcom inspects at most 12 levels of lists, tolerating up to 100 ms of completion-data build time and up to 1,000,000 bytes of completion-data size. These parameters are configurable, as described in the documentation.
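To make the depth-limiting idea concrete, here is a hedged R sketch (the real implementation lives in nvimcom's C code; the function name `list_labels` is invented for illustration) of collecting completion labels while respecting a `max_depth` cutoff:

```r
# Hypothetical sketch: gather labels for list elements, stopping at
# max_depth and falling back to "[[i]]" labels when names are missing.
list_labels <- function(x, max_depth, depth = 1) {
    if (!is.list(x) || depth > max_depth) return(character(0))
    nms <- names(x)
    if (is.null(nms)) nms <- sprintf("[[%d]]", seq_along(x))  # slow path
    unlist(lapply(seq_along(x), function(i)
        c(nms[i], list_labels(x[[i]], max_depth, depth + 1))))
}

x <- list(a = 1, b = list(c = 2))
list_labels(x, max_depth = 1)  # "a" "b"
list_labels(x, max_depth = 2)  # "a" "b" "c"
```

With a cutoff like this, a deeply nested object such as the `plots` example stops contributing labels past the limit instead of being walked in full.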
Tested and works just fine. You nailed it!
I added a new feature to the `unnamed_list` branch: automatically increase `max_depth` if necessary when the user tries to open a list in the Object Browser. Since there is now a way of automatically increasing `max_depth`, I decreased its new default value to 3.
How to try it:

1. Remove the installed `nvimcom`: `echo 'remove.packages("nvimcom")' | R`
2. Put this in your `~/.Rprofile`: `options(nvimcom.verbose = 4)`
3. Use this R.nvim config:

```lua
compl_data = {
    max_depth = 2,
    max_size = 1000000,
    max_time = 100,
},
```
Then create this list in R:

```r
xxx <- list(l1e1 = 1,
            l1e2 = "a",
            l1e3 = list(l2e1 = 1,
                        l2e2 = list(l3e1 = 1,
                                    l3e2 = list(l4e1 = 1))),
            l1e4 = "a")
```
Start the Object Browser (`\ro`) and open the list three times (`\r=`). You will notice that the list levels open in response to the first two commands, but not the third one. The reason is `max_depth = 2`.
In the Object Browser, put the cursor over `l2e2` and press `<Enter>` twice. You will see on the R Console that `max_depth` increased to 3.
The value of `max_depth` does not increase automatically when the user tries to complete list elements. Does anyone need to complete list elements more than three levels deep? If this is a frequent situation, we can increase the default value of `max_depth`.
> In the Object Browser, put the cursor over `l2e2` and press `<Enter>` twice. You will see on the R Console that `max_depth` increased to 3.
This is very nice. I rarely have to go very deep when exploring data, so I think this is a great feature to have. All good on my side (I also tried with some complicated `sf` objects, and it works great).
Thank you very much!
Has anyone else noticed this? Since I switched from Nvim-R, it sometimes takes many seconds to execute simple commands (like `bidule <- list()`, for example). Not right after R starts, but after loading a few libraries and a few CSV files. I don't understand: it's not doing any heavy computing, and while it's taking its time there is no CPU or RAM overload... The same script in RStudio runs normally (but I don't want to use RStudio ;))
Edit: Rnvimserver is running at the time and taking its time...
Any clue? Running on Linux / kitty.