davidhalter / jedi

Awesome autocompletion, static analysis and refactoring library for python
http://jedi.readthedocs.io

Normal memory usage? #335

Closed: spillz closed this issue 8 years ago

spillz commented 11 years ago

I have a long-running Python jedi process controlled by the Code::Blocks IDE. I have noticed that when working with big libraries like numpy or pandas, each time I retrieve a completion hint or call tip on one of those modules, anywhere between 0 and 2MB of memory is swallowed and apparently never freed. I can easily end up with a process running jedi on only those libs that consumes a couple of hundred MB.

  1. Is that expected memory usage?
  2. Is there any way to free some of the memory being used short of killing the process without also hurting performance?

PS: One caveat: it's possible that jedi isn't the offender here but rather something in my implementation; I thought I would check what's considered normal.

davidhalter commented 11 years ago
  1. More or less. Jedi has quite a large memory overhead for numpy and other big libraries. What is not expected is that the memory increases on every API call. It should increase once and then stay flat (or at least not grow by 2MB each time).
  2. No, not yet. What would a good API look like? Or should Jedi automatically free memory after, say, 2 hours?

spillz commented 11 years ago

Re 1, I will try to put together a pure Python sample that illustrates the problem, but I suspect it could just be an issue with the pipe interface I am using, especially since you think this is not normal.

Re 2, time-based could be useful, but something count-based would be good too: e.g. an in-memory cache with a limited capacity (not necessarily measured in bytes, but something finer-grained than a simple module count) that drops objects in least-recently-referenced order once the capacity is exceeded. (Sorry if that's not very clear.)

asmeurer commented 11 years ago

Yes, LRU is the best way to go.

davidhalter commented 11 years ago

@spillz A module count just has the issue that files are extremely different in size. A typical numpy module is maybe 100 lines; wx.core is 14'000 lines. That's why I'm not sure LRU is the best solution: a few huge modules (qt, wx) can easily use 200 MB, whereas 1000 little ones barely make a difference.
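
For illustration, a minimal sketch of a weight-aware LRU cache along these lines (all names and the weight function are hypothetical, not jedi's API; the weight could be a line or token count rather than a module count):

import collections

class WeightedLRUCache(object):
    # Evicts least-recently-used entries once the summed weight
    # (e.g. line count) exceeds the capacity, so one huge module
    # counts for more than a thousand tiny ones.
    def __init__(self, capacity):
        self.capacity = capacity
        self.total = 0
        self._data = collections.OrderedDict()  # key -> (value, weight)

    def get(self, key):
        value, weight = self._data.pop(key)  # KeyError if absent
        self._data[key] = (value, weight)    # re-insert as most recent
        return value

    def put(self, key, value, weight):
        if key in self._data:
            self.total -= self._data.pop(key)[1]
        self._data[key] = (value, weight)
        self.total += weight
        while self.total > self.capacity and len(self._data) > 1:
            _, (_, dropped) = self._data.popitem(last=False)  # oldest out
            self.total -= dropped

With a capacity of, say, 50'000 lines, caching wx.core (14'000 lines) would push out many small modules at once - which is exactly the trade-off being discussed here.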

asmeurer commented 11 years ago

Is there a way to measure the memory size being used, even if inaccurately?

spillz commented 11 years ago

What about a token count? That might be roughly proportional to module size.
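
For illustration, a token count is cheap to compute with the stdlib tokenize module (a sketch; this is not what jedi uses internally, it just shows the proxy):

import io
import tokenize

def token_count(source):
    # Number of tokens in a module's source - a rough proxy for how
    # large the parsed representation will be.
    readline = io.StringIO(source).readline
    return sum(1 for _ in tokenize.generate_tokens(readline))

print(token_count('import numpy\nnumpy.arange(3)\n'))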

spillz commented 11 years ago

A related but different question about memory usage: numpy uses about 8MB of memory when imported, so why does jedi use 60-ish MB to do

import jedi
s = jedi.Script('import numpy\nnumpy.', 2, 6)
c = s.completions()

I once wrote my own program using the inspect module to build a cache containing token lists + doc strings + function/method args. The cache, once built and loaded from disk, only used 30-ish MB to hold the entire standard library, gtk, numpy, statsmodels and pandas in deeply nested dicts/tuples. Jedi is obviously storing a lot more information about modules, but I'm not sure what...

PS: I can move this to a separate issue if you think it's off topic.

davidhalter commented 11 years ago

Is there a way to measure the memory size being used, even if inaccurately?

Line count? I don't know. That doesn't translate into megabytes, but it helps if you just want to compare.
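
For a rough outside measurement, the standard library can help; a sketch (tracemalloc landed in Python 3.4, after this discussion, and it only sees Python-level allocations, not memory held by compiled extensions):

import tracemalloc
import jedi

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

# The same completion call shown above (old-style Script API).
script = jedi.Script('import numpy\nnumpy.', 2, 6)
completions = script.completions()

after, peak = tracemalloc.get_traced_memory()
print('delta: %.1f MB, peak: %.1f MB'
      % ((after - before) / 2 ** 20, peak / 2 ** 20))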

Jedi is obviously storing a lot more information about modules, but not sure what...

Yes it does. It stores all the locations of variables, a lot of information about functions and so on. You can look at the parser if you want. That's what it stores.

DXist commented 10 years ago

I use jedi as the backend for the YouCompleteMe vim plugin. top says it uses 1.3GB of resident memory and 2119MB of virtual. I only need to open one Python file. I would like to have a limit on this size.

davidhalter commented 10 years ago

@DXist Can you show me which script causes it?

I've been thinking a little bit about this. One solution could be to not create a parser tree for builtin modules, but to just analyze them quickly. That would mean they are just imported in memory - which uses way less...

DXist commented 10 years ago

@davidhalter Actually I'm not sure it's caused by jedi itself. On another system of mine I don't see such a Python process. I probably need more time to investigate the problem.

Here is the main YouCompleteMe script, and here is the jedi backend completer script.

davidhalter commented 10 years ago

@DXist I'm not interested in the Jedi script. But I would be very interested to see extreme use cases - big libraries, etc. Especially if they are well known and use more memory than e.g. PySide (100-200 MB?).

blink1073 commented 10 years ago

We may get the boot from Spyderlib over this. Spyderlib has a list of modules that it pre-imports for Rope, so I ran them against Jedi with a script that checks for memory usage. The gist is here: https://gist.github.com/blink1073/7824342

I had to break it into pieces so I did not run out of memory. It only prints modules that take over 10MB of memory. PyQt4.Qt is really scary.

MB  | Import
----------------------
 35 | numpy
 12 | numpy.f2py.crackfortran
 14 | numpy.ma
----------------------
Total MB: 363
Total files: 261

MB  | Import
----------------------
 34 | scipy
 30 | scipy.io.matlab.mio5_utils
 11 | scipy.signal
 18 | scipy.spatial
 34 | scipy.stats
----------------------
Total MB: 486
Total files: 335

MB  | Import
----------------------
 18 | PyQt4
 18 | PyQt4.Qsci
325 | PyQt4.Qt
 69 | PyQt4.QtGui
----------------------
Total MB: 668
Total files: 53

davidhalter commented 10 years ago

Wow. Crazy. Thank you for that. The last time I measured PyQt it wasn't that high, but still around 200 MB. But anyway, that's way too high.

Generally we are doing what we can - but apparently it's not good enough. I also have a few ideas, and we're actively discussing this as part of the planned changes: #346.

AFAIK PyQt4 is a builtin module, not Python code. Static code analysis is useless in these cases anyway. We'll probably change the evaluate logic a lot there in the future, so that there's no unpickling/parsing for these modules anymore. Instead we would just import the modules and leave it at that. That's probably even faster.

The only problem with that is that I don't have time to do it. My top priority for Jedi is to make it thread-safe and more readable. I think that in the long term it is way more important to attract contributors than to push for features and performance right now. If you really want a change soon, you can start to change the "builtin" logic. That touches the evaluate logic, but not the really hard part, so I guess it's possible for someone other than me to change it. I would happily assist you and answer questions. But I won't do it in the next few months. Probably.

blink1073 commented 10 years ago

Yes, PyQt4 is a series of compiled modules. I agree that parsing compiled modules is a bad idea. For example, if I fire up IPython, I start at 12MB of memory usage. If I type from PyQt4 import Qt I am up to only 18MB. When I type Qt.<TAB> I get an instantaneous list of completions, with no additional memory usage.

I appreciate that this is not a primary concern at the moment, so I will start digging into it myself.

asmeurer commented 10 years ago

One also has to consider how that affects Jedi's promise to not execute arbitrary code when it runs.

davidhalter commented 10 years ago

One also has to consider how that affects Jedi's promise to not execute arbitrary code when it runs.

Maybe we should clarify this: As soon as there's builtins involved, we import that code. Otherwise no completion is possible. However that is usually not the problem, since importing the builtins at some point is normal anyway.

spillz commented 10 years ago

For an extension lib, if you cache just the tokens that an end user can see, why wouldn't it use LESS memory than the imported lib itself? E.g. numpy takes about 6MB after import, so if you just keep classes, methods/functions and doc strings, how can that not require less than 6MB?

davidhalter commented 10 years ago

It takes more because it's a very simple process right now. It generates a string (basically a generated Python file) and parses it through the normal parser again. Sounds stupid, but it is very simple and makes a lot of sense, because we just need one evaluation, one AST, and a small piece that generates Python code from builtins.
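
A sketch of that string-generation idea (the stub format here is invented for illustration; real code would need more careful docstring escaping):

import ast
import inspect
import math

def fake_source(module):
    # Render a compiled module as generated Python source: one stub
    # per attribute, with docstrings preserved.
    lines = []
    for name in dir(module):
        attr = getattr(module, name)
        doc = (inspect.getdoc(attr) or '').replace('"""', "'''")
        if inspect.isroutine(attr):
            lines.append('def %s(*args, **kwargs):\n    """%s"""' % (name, doc))
        elif inspect.isclass(attr):
            lines.append('class %s(object):\n    """%s"""' % (name, doc))
        else:
            lines.append('%s = None' % name)
    return '\n'.join(lines)

tree = ast.parse(fake_source(math))  # feed it to one ordinary parser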

asmeurer commented 10 years ago

Surely there's more to it than importing. How do you get the list of methods of an object?

By the way, "builtins" is a very confusing adjective to describe such modules. To me, the builtins are those compiled modules that come with Python and are always available (i.e., the Python module of the same name). Or at the very least a module from the standard library. Calling PyQt a builtin is confusing. I would just call it a compiled library.

davidhalter commented 10 years ago

Surely there's more to it than importing. How do you get the list of methods of an object?

Yes, but we could do that on the fly. Would probably even be faster than what we do now.

I would just call it a compiled library.

I'm going to use that from now on. I called them builtins because of the c_builtin stamp they get (not even sure anymore if that's true). http://docs.python.org/2/library/imp.html#imp.C_BUILTIN

But "compiled library" seems to be a good choice for everybody involved (even if people don't know Python).

blink1073 commented 10 years ago

I am part-way through with this implementation. I can successfully get the definition for from PyQt4 import Qt without any of the accompanying infrastructure (I simply use mod = imp.load_module(...)), and a call to Qt.<TAB> generates the completion tokens by simply calling dir(mod). I think I can get there...

blink1073 commented 10 years ago

By "without any of the accompanying infrastructure" I mean I have replaced the Parser with a simpler one for the BuiltinModule.

asmeurer commented 10 years ago

Yes, but we could do that on the fly. Would probably even be faster than what we do now.

But again, I'm concerned about arbitrary code execution.

Anyway, how would it work? Would

Qt.Object().<TAB>

give any completions? That is, how smart will the backtracking be?

I'm going to use that from now on. I called them builtins because of the c_builtin stamp they get (not even sure anymore if that's true). http://docs.python.org/2/library/imp.html#imp.C_BUILTIN

It looks like C_BUILTIN is deprecated since 3.3 (http://docs.python.org/3.3/library/imp.html#imp.C_BUILTIN).

blink1073 commented 10 years ago

IPython does not make that completion, either. I think a lack of backtracking on compiled modules is a small price to pay for not filling up the system memory.

davidhalter commented 10 years ago

@blink1073 We will not make that sacrifice, because it's easily possible to still make everything work. Basically keeping the current features involves the following steps:

  1. Checking the imports: return a Compiled class or something if it's just a module; if it's part of a module, e.g. a function, continue with the next steps directly.
  2. Checking the mixin directory (which should probably be renamed; a friend of mine finds the name hugely confusing). In there we write Python code for some builtins - yes, really builtins. But I don't expect you to do that; just ignore this step for now, it's really easy to add later.
  3. Being asked by imports/evaluate whether you have an attribute with a given name, and returning a CompiledAttribute object with a reference to the attribute inside.
  4. a) If the next call is an execution, just use execute within CompiledAttribute to execute builtins. This execute function looks at the docstrings of compiled objects and tries to infer types from them. b) If the next call is a name again (e.g. builtins.str.upper), use the follow method, which also returns a list of CompiledAttribute elements.

That's how I imagine it, feel free to change the names that I've chosen, change the implementation, whatever. However this implementation would solve all the problems we now have: No unpickling, small memory footprint, while keeping the full feature set.

And it would probably also replace jedi.interpret, which is another compiled code evaluation... (It's ugly to have two different builtin/compiled ast implementations.)
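
For illustration, a bare-bones sketch of how those pieces could fit together (class names from the steps above; the method bodies are invented, not jedi code):

import builtins
import inspect

class CompiledAttribute(object):
    # Step 3: a wrapper holding a reference to the real attribute.
    def __init__(self, obj):
        self.obj = obj

    def follow(self, name):
        # Step 4b: a name access returns more wrapped objects.
        return [CompiledAttribute(getattr(self.obj, name))]

    def execute(self):
        # Step 4a: look at the compiled object's docstring to guess
        # what executing it would return.
        return inspect.getdoc(self.obj) or ''

class CompiledModule(CompiledAttribute):
    # Step 1: what an import of a compiled module resolves to.
    def names(self):
        return dir(self.obj)

upper = CompiledModule(builtins).follow('str')[0].follow('upper')[0]
print(upper.execute())  # the docstring of builtins.str.upper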

blink1073 commented 10 years ago

I am trying to figure out where to plug in to start. Where would this logic reside? I had previously yanked out fast.FastParser, and provided only the basic logic I needed to get goto_definitions and completions to return the objects I was looking for.

davidhalter commented 10 years ago

Just start with imports.py. There's a logic there that controls how files are being imported (search for builtin). It's probably in a function called _follow_file_system or something similar.

In there you should basically replace that builtin call with a "new" call to that CompiledModule (which doesn't exist yet). Is that enough? I could also try to explain it on Skype or something, if that helps. Just keep asking!

blink1073 commented 10 years ago

I am officially lost again. Following that line of thought got me back to my original implementation. Looking at ImportPath._follow_sys_path, it looks like I need to override the object returned by f.parser.module on line 374, which is governed by CachedModule._load_module on line 58 of modules. I override that method in builtin.BuiltinModule to return a custom parser instead of fast.Parser, which results in the code seen in my original PR. I need to take a break and clear my head on this...

spillz commented 10 years ago

Really excellent to see so much progress on this. This is going to make a huge difference for all the big toolkit libs (which, in my usage, are the ones I most need good code completion for).

But what does this new approach mean for caching/pickling? I couldn't tell whether it means no cache at all for compiled libs, just importing them at runtime. If so, that sounds horrible. It's pretty much impossible to free the memory from an imported lib, so the ideal to me is that the main jedi process always works with cached objects and can free memory as needed, while a separate process is spawned to generate those cache objects as needed.

Re pickling speed issues: presumably that becomes much less of an issue when the data for those libs is an order of magnitude smaller?

blink1073 commented 10 years ago

As David said, there is no way around importing compiled modules (otherwise they are opaque), but you could import them in a one-off subprocess, extract the metadata, and then free the memory by closing the subprocess.

blink1073 commented 10 years ago

You would then pickle that metadata to avoid the whole subprocess dance afterwards.
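
A sketch of that subprocess dance (written with modern idioms - subprocess.run is Python 3.5+ - and the metadata kept here, names plus docstrings, is just an example of what could be extracted):

import pickle
import subprocess
import sys

# Runs in the throwaway child: import the heavy module, dump names
# and docstrings, then exit so the OS reclaims all the memory.
CHILD = r"""
import importlib, inspect, pickle, sys
mod = importlib.import_module(sys.argv[1])
meta = {n: inspect.getdoc(getattr(mod, n)) for n in dir(mod)}
sys.stdout.buffer.write(pickle.dumps(meta))
"""

def extract_metadata(module_name):
    out = subprocess.run([sys.executable, '-c', CHILD, module_name],
                         stdout=subprocess.PIPE, check=True).stdout
    return pickle.loads(out)

meta = extract_metadata('math')      # child exits, memory is freed
with open('math.cache', 'wb') as f:  # cache it to skip the dance next time
    pickle.dump(meta, f)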

davidhalter commented 10 years ago

@spillz Importing it at some point is probably going to be one of the most memory-efficient solutions. Maybe we could make a very simple "ast" tree with just docstrings and attributes, but that would probably consume just as much memory.

@blink1073 Well the problem is that from that point on there's pretty much a new design needed. I would probably try to create a new BuiltinModule and return that instead of f = builtin.BuiltinModule(path=path) and later return f.parser.module. Basically it would look more like return builtin.NewBuiltinModule(path=path). I don't really know how to help there, because it's really about a new design. You can probably reuse parts of the builtin code to do the docstring parsing and maybe look at the dir stuff to get an impression how to traverse builtin modules.

blink1073 commented 10 years ago

What about a jedi-level preference (something like parse-compiled-modules) that, when False, would simply import the module and use dir() and __doc__, reverting to an IPython-like experience? That is pretty much what I have implemented thus far.

davidhalter commented 10 years ago

Why would that be helpful? I don't think that my proposed solution is much more memory/CPU intensive. It's just quite some work to implement. It really means rewriting the whole builtin module (or maybe starting fresh and taking some bits and pieces from the builtin module).

I really don't want to cut back Jedi to just doing "repl-completion". Not even with an option, as long as another solution would perform (almost) equally well.

blink1073 commented 10 years ago

Sorry, my Spyderlib PR is in active review, and I was reaching for a short-term solution. Parsing is not my strong suit (I wrote a fairly complete parser for Matlab a while back, but it was painful), so I think a Skype call might clear some things up. In the meantime, they might accept the PR with a big hairy warning, shown when the user activates the plugin, that Jedi can soak up a lot of memory.

davidhalter commented 10 years ago

Sorry, my Spyderlib PR is in active review, and I was trying to reach for a short-term solution.

That's fine.

Parsing is not my strong suit

Well, don't write one! :) You shouldn't be writing a parser. We have one, and that's quite enough.

davidhalter commented 8 years ago

Memory usage should be much better now and nobody has complained in a long time.

aomader commented 8 years ago

It's still an issue. I use jedi in Atom via autocomplete-python, and it freezes my PC from time to time because it uses all of my 16GB of RAM.

The code I write is mostly using the SciPy stack, e.g., scipy, numpy, SimpleITK, etc.

davidhalter commented 8 years ago

Which Jedi version?

Viele commented 7 years ago

I also have this problem of a Python process all of a sudden taking up all 24GB of RAM. I've tried Sublime with Anaconda and Atom with autocomplete-python; both use Jedi. The problem arises when I add an extra path in the package, pointing to my Autodesk Maya 2017 devkit code completion folder. In there are just 60MB of stub files for autocompletion (pyMel, PySide2 and OpenMaya).

Then, when I want to use the autocomplete, a Python process starts that takes up all memory and writes cache files to \AppData\Roaming\Jedi. It stops when there are about 12GB of cache files, with Python still taking ~3GB of RAM. This is on Win10; the jedi version is 0.10.0.

davidhalter commented 7 years ago

Oh gosh. That is unfortunate. Can you open a new ticket about this? It's not really related to the original issue.

I can see the problems you are facing, and I think there should be a limit on how many files Jedi crawls.