nvim-telescope / telescope.nvim

Find, Filter, Preview, Pick. All lua, all the time.
MIT License

Out of memory error in previewer #647

Open seocamo opened 3 years ago

seocamo commented 3 years ago

Description

I get an out-of-memory error when I move over the files in the file window with the preview on. If I add {previewer = false} to the function (e.g. oldfiles), it just works, so it must be the preview window that does not unload files. It happens with files of any size, but with big files it only takes 2-3 files.

Expected Behavior: Nvim does not die

Actual Behavior: Nvim dies with an out-of-memory error

Details

Reproduce

1. `nvim -nu test.vim`
2. `:lua require('telescope.builtin').oldfiles()`
3. Move up over the (big) files
4. Nvim dies
Environment

- `nvim --version` output:

  ```
  NVIM v0.5.0-dev+1115-gc1fbc2ddf
  Build type: RelWithDebInfo
  LuaJIT 2.0.5
  Compilation: /usr/bin/cc -D_FORTIFY_SOURCE=2 -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2 -g -Og -g -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wshadow -Wconversion -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fno-common -fdiagnostics-color=always -DINCLUDE_GENERATED_DECLARATIONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -DMIN_LOG_LEVEL=3 -I/home/name/.cache/yay/neovim-nightly-git/src/build/config -I/home/name/.cache/yay/neovim-nightly-git/src/neovim-nightly-git/src -I/usr/include -I/home/name/.cache/yay/neovim-nightly-git/src/build/src/nvim/auto -I/home/name/.cache/yay/neovim-nightly-git/src/build/include
  Compiled by name@manjaro
  Features: +acl +iconv +tui
  See ":help feature-compile"
  system vimrc file: "$VIM/sysinit.vim"
  fall-back for $VIM: "/usr/share/nvim"
  Run :checkhealth for more info
  ```
- Operating system: Manjaro Linux x86_64, Kernel: 5.11.2-1-MANJARO
- Telescope commit: c8cc024
Configuration

```viml
Plug 'nvim-lua/plenary.nvim'
Plug 'nvim-lua/popup.nvim'
Plug 'nvim-lua/telescope.nvim'

let mapleader = " "
:lua require('telescope').setup {}
nnoremap po :lua require('telescope.builtin').oldfiles()
```

Conni2461 commented 3 years ago

This seems super weird. What does "big" file mean? 10k lines, 100k, 1m? What does "nvim dies" mean? Freeze, crash? And can you share the error message, if there is one?

seocamo commented 3 years ago
  1. I have files of 1.4 MB, 27 MB and 6.3 GB, and Nvim opens them fine and moves around fast, but I also get this problem after 10-20 mins of use in a Moodle (PHP) project (just by using oldfiles many times).
  2. Nvim dies as in it drops to the terminal with the error "PANIC: unprotected error in call to Lua API (not enough memory)".
  3. I get the error message sometimes, not always.
Conni2461 commented 3 years ago

I ran into another issue, where neovim just freezes. It happens when hovering over a huge file: https://github.com/tjdevries/tree-sitter-lua/blob/master/src/parser.c. Neovim isn't even able to open this file because treesitter takes too long to parse it.

That is annoying, and I am currently thinking about some sort of chunked loading (still async) that would read in the next chunk on scrolling or something like that. It might be hard to do for the vimgrep things (which use the same loading interface). This might also help with this issue: the loading of a 6.3 GB file probably continues in the background (even after telescope is already closed) until it crashes with not enough memory. Maybe plenary.async could even help.

I am currently in the middle of exams and need to prepare for a project next semester, so this could take a month. I am sorry.

seocamo commented 3 years ago

Yes, chunked loading is a fix for the big files, but it will not help with the core dump / out-of-memory crash that happens every 2 hours; I still think there is a memory leak. I have tried to find it, but I am not used to this codebase. And, off topic: please don't be sorry, I (and everybody) am thankful for all the time you put into Neovim. Thx

Conni2461 commented 3 years ago

Thanks for the kind words :)

Yes, we leak one buffer on some occasions. But the out-of-memory issue is not coming from neovim (because we forgot to clean up some buffers) but from within the lua async (vim.loop.fs_open, etc.). With chunked reading we can just check whether the state is still valid after reading a chunk and, if not, stop reading and eventually free the memory.
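The chunked-reading idea can be sketched roughly like this (a minimal illustration using the libuv bindings mentioned above; the `state.valid` flag is a hypothetical stand-in for "the previewer still exists", not the actual telescope implementation):

```lua
local uv = vim.loop
local CHUNK_SIZE = 64 * 1024 -- read 64 KB at a time instead of the whole file

local function read_chunked(path, state, on_chunk)
  uv.fs_open(path, "r", 438, function(err, fd)
    assert(not err, err)
    local offset = 0
    local function read_next()
      -- stop reading (and release the fd) as soon as the previewer is gone
      if not state.valid then
        uv.fs_close(fd)
        return
      end
      uv.fs_read(fd, CHUNK_SIZE, offset, function(rerr, data)
        assert(not rerr, rerr)
        if not data or #data == 0 then
          uv.fs_close(fd) -- EOF
          return
        end
        offset = offset + #data
        on_chunk(data)
        read_next()
      end)
    end
    read_next()
  end)
end
```

The key point is the `state.valid` check between chunks: a 6.3 GB read can be abandoned early instead of running to completion after the picker closes.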

Leaking is addressed here: #664 but will not close this issue :)

seocamo commented 3 years ago

OK, that makes sense, because I found that if you type text to filter the list, then delete some of the text and type it again, memory usage builds up by 10-20 MB each time until it reaches around 1 GB, and then nvim crashes. That sounds like the same thing.

monkoose commented 3 years ago

Trying to switch from fzf to telescope and ran into the same problem. As I understand it, bat is deprecated for previewing files, and the current previewer tries to preview binary files too or something, because my neovim hangs or crashes with a panic (out of memory) if I search files in a directory that has some huge binary files, like some .mp4 movies. bat, for me, just shows that it is a binary file and skips it.

Conni2461 commented 3 years ago

I didn't have time to fix it, but you can either switch to the bat previewers (they are still valid) or try one of the configurations I posted here:
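One commonly suggested workaround is to replace the default buffer previewer maker with one that skips large files entirely. A sketch, assuming telescope's `buffer_previewer_maker` setup option; the 100 KB cutoff is an arbitrary choice:

```lua
local previewers = require("telescope.previewers")

local new_maker = function(filepath, bufnr, opts)
  opts = opts or {}
  -- stat the file asynchronously and bail out on big files
  vim.loop.fs_stat(filepath, function(_, stat)
    if not stat then
      return
    end
    if stat.size > 100 * 1024 then -- arbitrary 100 KB cutoff
      return -- leave the preview empty instead of loading the whole file
    end
    previewers.buffer_previewer_maker(filepath, bufnr, opts)
  end)
end

require("telescope").setup({
  defaults = {
    buffer_previewer_maker = new_maker,
  },
})
```

This does not fix the underlying leak, but it avoids feeding huge (or binary) files into the previewer in the first place.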

monkoose commented 3 years ago

@Conni2461 Thank you. I will try the suggested configs. As for the bat previewer: I tried `file_previewer = require'telescope.previewers'.cat.new` and got an error that the current option can't be nil.

matu3ba commented 1 year ago

I can reproduce this with a huge repo and `map('n', '<leader>gf', [[<cmd>lua require('telescope.builtin').git_files()<CR>]], opts) -- git files` by inserting `mai` (eventual OOM), but it is more obvious with grepping.

Lazy plugins:

```lua
---- telescope ----
{
  'nvim-telescope/telescope.nvim',
  dependencies = { { 'nvim-lua/popup.nvim', lazy = false }, { 'nvim-lua/plenary.nvim', lazy = false } },
}, --<l>tb/ff/gf/rg/th/pr/(deactivated)z
{ 'natecraddock/telescope-zf-native.nvim', lazy = false }, -- simpler algorithm for matching
-- { "nvim-telescope/telescope-fzf-native.nvim", build = "make", lazy = false }, -- 1.65x speed of fzf
-- Telescope gh issues author=windwp label=bug search=miscompilation
{ 'nvim-telescope/telescope-github.nvim' }, --Telescope gh issues|pull_request|gist|run
```

and extension:

`telescope.load_extension 'zf-native'`

If I look at the memory consumption with htop, I can see that resident memory stays around 538 MB and virtual around 1295 MB (with clangd). If I start grepping from the empty string, it goes up to 2 GB resident and 2087 MB virtual. Typing a stupid sequence (mamamamamamamamamamamamamamama) makes this increase to 5111 MB resident and 6251 MB virtual (after initially waiting a bit). Closing the window drops it to 3430 MB resident and 3479 MB virtual.

A second search increases it to 4212 MB resident and 4407 MB virtual. After quitting the window, it settles at 3462 MB resident and 3511 MB virtual.

A third search with abababababab, then delete. Now it is up to 3681 MB resident and 3729 MB virtual. To me it looks like memory is leaked on every search, and each search takes longer to process.

Only typing the sequence abababa leads to 4987 MB resident and 5046 MB virtual (waiting a while to let the allocator kick in). It drops on closing to 4102 MB resident and 4151 MB virtual.

Without telescope I see no memory change.

Does luajit have a leak detector, or what can I run to track this down? The memory is shown to me for `nvim --embed -S ses` across 5 pids with the same SHR, and one process has up to 100% CPU utilization (I guess that's the lua one allocating and never properly freeing).

ADDENDUM: This is a tool to analyze luajit leaks: http://luajit.io/posts/analyze-lua-memory-leak-with-systemtap/, but it needs the problem to be minimized beforehand.

Some things to poke:

matu3ba commented 1 year ago

I can reproduce this with git_files, live_grep and any other picker with meaningful buffer sizes by merely typing some words in the LLVM repo, with and without any external picker.

Related are #2370, #2482, #2489, #1379, #1049, #943.

Possibly related are also performance issues like #1981.

This code should show the offending memory usage, ideally in the llvm repo with a picker of sufficient size, if the inserted CLI words can be scripted (taken from https://stackoverflow.com/a/44748634):

```lua
local time_start = os.time()

--------------------------------------
-- Your script is here
-- For example, just spend 5 seconds in CPU busy loop
repeat until os.clock() > 5
-- or call some external file containing your original script:
-- dofile("path/to/your_original_script.lua")
--------------------------------------

local mem_KBytes = collectgarbage("count")      -- memory currently occupied by Lua
local CPU_seconds = os.clock()                  -- CPU time consumed
local runtime_seconds = os.time() - time_start  -- "wall clock" time elapsed
print(mem_KBytes, CPU_seconds, runtime_seconds)
-- Output to stdout: 24.0205078125  5.000009  5
```
meronogbai commented 1 year ago

I have a similar issue too. According to htop, it seems like rg is causing the issue for me. Killing rg manually frees up the memory so I can continue working again.

voidus commented 1 year ago

I think this is this problem https://github.com/BurntSushi/ripgrep/issues/2505

voidus commented 1 year ago

This seems to fix it for me:

```viml
nnoremap <leader>fg <cmd>lua require("telescope.builtin").live_grep({ additional_args = { "-j1" }})<CR>
```
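The `-j1` flag limits ripgrep to a single worker thread, which sidesteps the runaway-memory behavior discussed in the linked ripgrep issue. The same flag can also be applied once in the setup call instead of in every mapping; a sketch, assuming the picker-level `additional_args` option accepts a plain table (on some telescope versions it must be a function returning a table):

```lua
require("telescope").setup({
  pickers = {
    live_grep = {
      -- pass -j1 to every live_grep invocation so rg uses a single thread
      additional_args = { "-j1" },
    },
  },
})
```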

otavioschwanck commented 1 year ago
> `lua require("telescope.builtin").live_grep({ additional_args = { "-j1" }})`

it not only fixed, but made it 1000% faster

matu3ba commented 1 year ago

> it not only fixed, but made it 1000% faster

Try live_grep in the llvm or linux repo on the empty string, wait 15-20 s, then delete the word and continue searching with another word, or press Esc.

jamestrew commented 1 year ago

Closing this issue as it's pretty stale plus there's been other related issues and fixes since.

matu3ba commented 1 year ago

> Closing this issue as it's pretty stale

Please do not tell me that you think closing stale issues is good practice. Either the bug did not reproduce, or one is unable to reproduce it due to insufficient instructions.

As I understand it, you are closing it because the explicitly mentioned reproduction is solved. If that is the case, consider rephrasing the issue to make it explicit.

> there's been other related issues and fixes since

I listed them in https://github.com/nvim-telescope/telescope.nvim/issues/647#issuecomment-1532244593 and good practice is something like "This use case is solved. Related issues persist in XYZ."

Conni2461 commented 1 year ago

> Please do not tell me that you think closing stale issues is a good practice

No, this issue is not stale; that is not our practice here (it seems like I have to do a little bit more training).

Issues are only stale if they are open, someone (or I) goes "hey, I can't reproduce this, can you give me more information like ......", and then after 3 months there is still no response; then there is a good chance that I close it as stale. But that is not the case here.

This issue is stale from our perspective because I haven't made any progress. I don't know which "fixes since" are referenced here.

jamestrew commented 1 year ago

Sorry, was a little too eager.