Closed gkaemmer closed 11 months ago
How many files are open in vscode? Or do you have imports that transitively pull in hundreds of files?
Also, have you set python.analysis.diagnosticMode to "workspace"? By default, it's "openFilesOnly".
I think the 2GB heap limit comes from the instance of node that VS Code launches to run its language servers. I don't think Pylance can override this. We should confirm my understanding.
Even if we cannot increase the 2GB limit, Pylance still shouldn't be running out of heap space. As @heejaechang mentioned above, we have code in place to monitor memory usage and discard any in-memory caches when we reach a high-water mark. It sounds like there may be some problem with that logic.
I don't think Pylance can override this. We should confirm my understanding.
So, 4GB (64-bit vscode) is a maximum decided at compile time for Electron. The only way to get around it seems to be allowing users to provide their own Node.js, built without pointer compression, and running our language server on the user-supplied Node instead of VS Code's.
Tagging @luabud @judej, what do you think? For users with huge workspaces, should we let them provide their own Node, which is not capped at a 4GB heap size on 64-bit machines?
I tested it (https://github.com/microsoft/pyrx/pull/3310) and it works as expected.
Let's figure out why our in-memory cache management is not working before we consider exposing an option like this.
Answering questions above:
1) python.analysis.include is set to my team's directory (see settings.json below)
2) diagnosticMode is set to openFilesOnly
{
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.linting.enabled": true,
    "python.analysis.typeCheckingMode": "basic",
    "python.formatting.provider": "black",
    "editor.formatOnSave": true,
    "editor.codeActionsOnSave": {
        "source.organizeImports": true
    },
    "python.analysis.autoImportCompletions": false,
    "python.analysis.autoImportUserSymbols": false,
    "python.analysis.completeFunctionParens": false,
    "python.analysis.diagnosticMode": "openFilesOnly",
    "python.analysis.indexing": false,
    "python.linting.flake8Enabled": false,
    "python.analysis.include": [
        "app/my_teams_code/**"
    ],
    "python.analysis.useLibraryCodeForTypes": false
}
Thanks for the additional information.
Is most of your code base untyped? In particular, are return type annotations missing for most of the functions and methods in your code base? If so, then a recent change I made in pyright could significantly reduce the amount of code analysis performed in this situation. This change will be in this week's prerelease version of pylance. Please give it a try and let us know if that eliminates the heap issue you're seeing.
Pyright is able to log additional heap usage information that would be useful in tracking down the problem you're seeing. Please create a pyrightconfig.json file at the root of your project and add the following: { "verboseOutput": true }. Then repro the memory issue. You should see the text Heap stats: ... in the log Output window for pylance. Please paste the heap stats that you observe prior to the out-of-memory crash.
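As described above, the file is plain JSON at the project root; a minimal sketch:

```json
{
    "verboseOutput": true
}
```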
Is most of your code base untyped?
Why yes! I'll try out the prerelease version.
Please create a pyrightconfig.json file at the root of your project and add the following: { "verboseOutput": true }
Weirdly, I've already done that and I don't see the heap stats in the output 🤔 -- do I need to install the Pyright extension for this to work?
Okay, I logged in this morning and... it didn't crash. I checked top and the memory went up to 3.1g and held steady, and intellisense was working fine.
It turns out that export NODE_OPTIONS="--max-old-space-size=8192" actually does work, but requires a full restart of the vscode server process on the remote instance (presumably so that it actually reloads the .bashrc). This must have happened on my dev server overnight. Good to know.
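For anyone else on a remote setup, a sketch of that workaround (the 8192 MB value is just an example; pick what fits the host's RAM):

```shell
# Append to ~/.bashrc (or the remote shell's profile) so new
# vscode-server sessions export it to the language server process:
export NODE_OPTIONS="--max-old-space-size=8192"
# The option only affects processes started after it is set, so the
# vscode-server process on the remote host must be fully restarted
# (not just "Reload Window") before Pylance inherits it.
echo "$NODE_OPTIONS"
```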
Still probably good to diagnose why the crash is happening, though. @erictraut I tried with the prerelease version after putting --max-old-space-size back to 2048, and the crash still happens :/
Let me know about the verboseOutput setting -- I agree that would be helpful if I could figure out how to enable it.
do I need to install the Pyright extension for this to work?
No, pylance is built on top of pyright.
I tried with the prerelease version
This week's prerelease version of pylance hasn't been released yet. It should be released within the next 24 hours if everything goes as planned.
This week's prerelease version of pylance hasn't been released yet. It should be released within the next 24 hours if everything goes as planned.
It's available now.
export NODE_OPTIONS="--max-old-space-size=8192" actually does work
It probably works for the remote server since it uses real Node rather than the Electron that vscode is based on. Electron is compiled with pointer compression on, so 4GB is a hard limit you can't cross at runtime.
With vscode (Electron), --max-old-space-size will work up to 4GB.
Weirdly, I've already done that and I don't see the heap stats in the output 🤔 -- do I need to install the Pyright extension for this to work?
That's probably because our heap management code didn't kick in for your case. In your case, I believe, it is not that you are running a solution-wide feature such as find all references or workspace symbols.
Rather, I think something like the type evaluator is resolving alias (imported) symbols or getting symbols from other modules your file references, and that causes parsing/binding to happen for the files your file depends on transitively. (At least, that's what your log shows, I believe.)
In that case, it is not easy for us to dump caches (type cache, binding info and parse trees) since we are in the middle of type evaluation.
That being said, I think the change @erictraut mentioned above should help, since it reduces the number of files we analyze while type evaluating (which in turn helps us decide what we show in completion/hover/signature help etc., even though you don't consume type info directly).
But if that doesn't work, using Node rather than Electron would be the only option. And in your case, it sounds like the remote server already uses Node, so you should be good.
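A later comment in this thread mentions a python.analysis.nodeExecutable setting for exactly this; a sketch of how it might look in settings.json (the path is an example):

```json
{
    // Run the Pylance server on a user-supplied Node instead of
    // VS Code's bundled Electron runtime:
    "python.analysis.nodeExecutable": "/usr/bin/node"
}
```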
I've been having the same issue for quite a while.
Here is the log I noticed after Pylance sent a notification of server shut down:
[Info - 5:05:38 PM] (25524) Reloading configuration file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:05:38 PM] (25524) No configuration file found.
[Info - 5:05:38 PM] (25524) pyproject.toml file found at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo.
[Info - 5:05:38 PM] (25524) Setting pythonPath for service "microservices-monorepo": "c:\Users\user\Desktop\Projects\Insane\backend\.venv\Scripts\python.exe"
[Info - 5:05:38 PM] (25524) Loading pyproject.toml file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:05:38 PM] (25524) Assuming Python version 3.11
[Info - 5:05:38 PM] (25524) Assuming Python platform Windows
[Info - 5:05:38 PM] (25524) No include entries specified; assuming c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo
[Info - 5:05:38 PM] (25524) Auto-excluding **/node_modules
[Info - 5:05:38 PM] (25524) Auto-excluding **/__pycache__
[Info - 5:05:38 PM] (25524) Auto-excluding **/.*
[Warn - 5:05:38 PM] (25524) stubPath c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\typings is not a valid directory.
[Info - 5:05:38 PM] (25524) Searching for source files
[Info - 5:05:39 PM] (25524) Found 1240 source files
[Info - 5:05:49 PM] (25524) Indexer background runner(13) root directory: c:\Users\user\.vscode\extensions\ms-python.vscode-pylance-2023.3.30\dist (refresh)
[Info - 5:05:49 PM] (25524) Indexing(13) started
[Info - 5:05:49 PM] (25524) scanned(13) 219 files over 1 exec env
[Info - 5:05:49 PM] (25524) indexed(13) 0 files over 1 exec env
[Info - 5:05:49 PM] (25524) Indexing finished(13).
[Warn - 5:05:56 PM] (25524) Workspace indexing has hit its upper limit: 2000 files
[Info - 5:06:43 PM] (25524) Reloading configuration file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:06:43 PM] (25524) No configuration file found.
[Info - 5:06:43 PM] (25524) pyproject.toml file found at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo.
[Info - 5:06:43 PM] (25524) Setting pythonPath for service "microservices-monorepo": "c:\Users\user\Desktop\Projects\Insane\backend\.venv\Scripts\python.exe"
[Info - 5:06:43 PM] (25524) Loading pyproject.toml file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:06:43 PM] (25524) Assuming Python version 3.11
[Info - 5:06:43 PM] (25524) Assuming Python platform Windows
[Info - 5:06:43 PM] (25524) No include entries specified; assuming c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo
[Info - 5:06:43 PM] (25524) Auto-excluding **/node_modules
[Info - 5:06:43 PM] (25524) Auto-excluding **/__pycache__
[Info - 5:06:43 PM] (25524) Auto-excluding **/.*
[Warn - 5:06:43 PM] (25524) stubPath c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\typings is not a valid directory.
[Info - 5:06:43 PM] (25524) Searching for source files
[Info - 5:06:43 PM] (25524) Found 1240 source files
[Info - 5:07:17 PM] (25524) Reloading configuration file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:07:17 PM] (25524) No configuration file found.
[Info - 5:07:17 PM] (25524) pyproject.toml file found at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo.
[Info - 5:07:17 PM] (25524) Setting pythonPath for service "microservices-monorepo": "c:\Users\user\Desktop\Projects\Insane\backend\.venv\Scripts\python.exe"
[Info - 5:07:17 PM] (25524) Loading pyproject.toml file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:07:17 PM] (25524) Assuming Python version 3.11
[Info - 5:07:17 PM] (25524) Assuming Python platform Windows
[Info - 5:07:17 PM] (25524) No include entries specified; assuming c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo
[Info - 5:07:17 PM] (25524) Auto-excluding **/node_modules
[Info - 5:07:17 PM] (25524) Auto-excluding **/__pycache__
[Info - 5:07:17 PM] (25524) Auto-excluding **/.*
[Warn - 5:07:17 PM] (25524) stubPath c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\typings is not a valid directory.
[Info - 5:07:17 PM] (25524) Searching for source files
[Info - 5:07:17 PM] (25524) Found 1240 source files
[Warn - 5:07:33 PM] (25524) Workspace indexing has hit its upper limit: 2000 files
[Info - 5:07:39 PM] (25524) Reloading configuration file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:07:39 PM] (25524) No configuration file found.
[Info - 5:07:39 PM] (25524) pyproject.toml file found at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo.
[Info - 5:07:39 PM] (25524) Setting pythonPath for service "microservices-monorepo": "c:\Users\user\Desktop\Projects\Insane\backend\.venv\Scripts\python.exe"
[Info - 5:07:39 PM] (25524) Loading pyproject.toml file at c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\pyproject.toml
[Info - 5:07:39 PM] (25524) Assuming Python version 3.11
[Info - 5:07:39 PM] (25524) Assuming Python platform Windows
[Info - 5:07:39 PM] (25524) No include entries specified; assuming c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo
[Info - 5:07:39 PM] (25524) Auto-excluding **/node_modules
[Info - 5:07:39 PM] (25524) Auto-excluding **/__pycache__
[Info - 5:07:39 PM] (25524) Auto-excluding **/.*
[Warn - 5:07:39 PM] (25524) stubPath c:\Users\user\Desktop\Projects\Insane\backend\microservices-monorepo\typings is not a valid directory.
[Info - 5:07:39 PM] (25524) Searching for source files
[Info - 5:07:40 PM] (25524) Found 1240 source files
[Warn - 5:07:43 PM] (25524) Workspace indexing has hit its upper limit: 2000 files
[Info - 5:07:54 PM] (25524) virtual workspace:
[Info - 5:07:54 PM] (25524) Starting service instance "<default>"
[Info - 5:07:54 PM] (25524) No pyproject.toml file found.
[Info - 5:07:54 PM] (25524) Setting pythonPath for service "<default>": "c:\Users\user\Desktop\Projects\Insane\backend\.venv\Scripts\python.exe"
[Warn - 5:07:54 PM] (25524) stubPath typings is not a valid directory.
[Info - 5:07:54 PM] (25524) Assuming Python version 3.11
[Info - 5:07:54 PM] (25524) Assuming Python platform Windows
[Info - 5:07:54 PM] (25524) Searching for source files
[Info - 5:07:54 PM] (25524) No source files found.
[Warn - 5:08:29 PM] (25524) Workspace indexing has hit its upper limit: 2000 files
[Warn - 5:09:19 PM] (25524) Workspace indexing has hit its upper limit: 2000 files
<--- Last few GCs --->
[25524:0000758800D78000] 2236230 ms: Scavenge 2902.1 (3032.9) -> 2896.5 (3034.8) MB, 8.9 / 0.0 ms (average mu = 0.996, current mu = 0.999) allocation failure;
[25524:0000758800D78000] 2236255 ms: Scavenge 2905.4 (3036.3) -> 2900.5 (3037.5) MB, 5.4 / 0.0 ms (average mu = 0.996, current mu = 0.999) allocation failure;
[25524:0000758800D78000] 2236597 ms: Scavenge 2907.0 (3037.9) -> 2901.9 (3038.4) MB, 30.1 / 0.0 ms (average mu = 0.996, current mu = 0.999) allocation failure;
<--- JS stacktrace --->
FATAL ERROR: NewSpace::Rebalance Allocation failed - JavaScript heap out of memory
1: 00007FF70C3A7E56 node::Buffer::New+50054
2: 00007FF70C3A805F node::OnFatalError+463
3: 00007FF70B7D9B90 v8::internal::WebSnapshotDeserializer::object_count+864
4: 00007FF70F091957 v8::CppHeap::CollectGarbageInYoungGenerationForTesting+55255
5: 00007FF70B883D9D v8::CppHeap::GetAllocationHandle+176397
6: 00007FF70B87FC94 v8::CppHeap::GetAllocationHandle+159748
7: 00007FF70B86FDE3 v8::CppHeap::GetAllocationHandle+94547
8: 00007FF70B86DAEC v8::CppHeap::GetAllocationHandle+85596
9: 00007FF70B86ADE6 v8::CppHeap::GetAllocationHandle+74070
10: 00007FF70D9C6F7C v8::CppHeap::GetHeapHandle+69292
11: 00007FF70D9B8BEC v8::CppHeap::GetHeapHandle+11036
12: 00007FF70B95C82B v8::internal::OSROptimizedCodeCache::TryGet+201515
13: 00007FF68FECC93C
[Error - 5:10:07 PM] Client Pylance: connection to server is erroring. Shutting down server.
[Error - 5:10:07 PM] Client Pylance: connection to server is erroring. Shutting down server.
[Error - 5:10:08 PM] Connection to server got closed. Server will not be restarted.
[Error - 5:10:08 PM] Stopping server failed
Message: Pending response rejected since connection got disposed
Code: -32097
[Error - 5:10:08 PM] Stopping server failed
Message: Pending response rejected since connection got disposed
Code: -32097
[Error - 5:10:08 PM] Stopping server failed
Message: Pending response rejected since connection got disposed
Code: -32097
A funny thing to notice is that it says Reloading configuration file 4 times in 2 minutes and then proceeds to do some sort of indexing. Not sure if it's relevant, just stating it since doing the same thing that frequently seems weird.
Answering questions above:
1) I never have more than 8 files open, I have big OCD.
2) diagnosticMode is set to openFilesOnly
3) Here is my settings.json:
{
    "python.formatting.provider": "black",
    "python.analysis.typeCheckingMode": "basic",
    "editor.formatOnSave": true,
    "python.formatting.blackArgs": [
        "--line-length=120"
    ]
}
What happens: after VSCode has been open for a while (1h max), I notice that pylance is not working anymore, since autocomplete + ctrl hover is stuck on "Loading...". The only way I've been solving this is by restarting VSCode. This has been happening for +/- 2 months.
@heejaechang, do we know whether it's the foreground or background (indexing) process that's running out of memory? I can't tell from this log trace. I suspect it's the foreground, but it would be good to confirm.
@dynalz if you think indexing is the problem, you can turn it off with "python.analysis.indexing": false. Let us know if that solves the problem.
@erictraut I am not sure whether we can distinguish that, since the log only outputs the process id, not the thread id. If the user enables logLevel: Trace we do put an id in to distinguish threads, but I don't think JS's own error message, such as FATAL ERROR: NewSpace::Rebalance Allocation failed - JavaScript heap out of memory, will. But if it doesn't crash after turning indexing off, then we know it is the BG crashing; otherwise the FG.
I'm suffering from the same issue... in particular, pylance crashes analysing the aws_cdk python library... Following is the trace output:
VSCode Version: 1.77.3 (Universal)
Commit: 704ed70d4fd1c6bd6342c436f1ede30d1cff4710
Date: 2023-04-12T09:19:37.325Z
Electron: 19.1.11
Chromium: 102.0.5005.196
Node.js: 16.14.2
V8: 10.2.154.26-electron.0
OS: Darwin arm64 22.3.0
Sandboxed: No
Pylance version 2023.4.10 (pyright d7616109)
I noticed that adding the --max-old-space-size=8192 node option solves my issue.
@panilo can you create a new issue? Also, we just released a new update. Please try the latest prerelease 2023.4.21.
The log shows OOM during type evaluation, so our current heap threshold won't work for this case. But pyright's recent change to type evaluation might mitigate the issue.
Same issue for me on a Windows 1x Pro VM; letting VSCode run ends in an OOM'd VM. I've had the issue for more than 6 months (and on every recent patch).
"C:\Program Files\Microsoft VS Code\Code.exe" --ms-enable-electron-run-as-node c:\Users\jpadmin\.vscode\extensions\ms-python.vscode-pylance-2023.4.41\dist\server.bundle.js --cancellationReceive=file:b9af8a1da1c9ebe16abf89178735d3540ad8d8b884 --node-ipc --clientProcessId=1464
This process uses 2.1GB of RAM (until my VM is OOMed)
@panilo can you create a new issue? Also, we just released a new update. Please try the latest prerelease 2023.4.21.
So the problem persists on 2023.4.41 release
I'm suffering from the same issue... in particular pylance crashes analysing the aws_cdk python library... I noticed that adding the --max-old-space-size=8192 node option solves my issue
Same thing here with CDK.
Hey @judej, this issue might need further attention.
@gkaemmer, you can help us out by closing this issue if the problem no longer exists, or adding more information.
@dynalz if you think indexing is the problem, you can turn it off with "python.analysis.indexing": false. Let us know if that solved the problem.
Disabling python.analysis.indexing did not solve the issue. It has been disabled since your message, but it still happens to this day.
I'm not sure if there is something I can check to provide more info -- please let me know.
I have the same problem with aws_cdk
@ben-elsen, the problem you're seeing with aws_cdk is likely this one, which should be addressed in last week's prerelease version of pylance.
@erictraut the description of the problem is exactly the same, but I tested the prerelease version of pylance and the problem remains.
Is there a way to just provide a setting to increase memory? I have a huge repo in my workspace; I'm currently working in a workspace with 3799 folders, 8971 files, and 795500 lines of code.
I'm not sure if this is a normal workspace size and whether it is causing the issue or not, but having the ability to allocate more resources would probably be nice either way.
I can go OOM with 14 files and 4 folders, for example; maybe we're following a false scent.
Same issue for AWS CDK. I'm running VSCode + WSL2 Ubuntu-20.04. The project is fairly small, ~30 files (max ~500 lines per file).
I tried providing a larger size, --max-old-space-size=6000, but the more memory I provide, the more the pylance server consumes, and the result is the same.
My WSL has 9GB of memory allocated, and I can see the pylance server consuming 5.5GB.
Pylance pre-release version v2023.7.21
VSCode version June 2023 (version 1.80)
CDK version 2.85.0
OS Windows 10
TL;DR: downgrade Pylance to v2023.4.40. CDK and Pylance work in this version without OOM. Hope this helps someone.
Though I do see getSemanticTokens as full in the output, it doesn't impact CDK and Pylance:
2023-07-19 10:02:51.234 [info] (40862) [BG(1)] getSemanticTokens full at XXXXXXX
2023-07-19 10:02:51.235 [info] (40862) Background analysis message: getSemanticTokens range
2023-07-19 10:02:51.244 [info] (40862) [BG(1)] getSemanticTokens range 0:0 - 64:13 at XXXXXXX
2023-07-19 10:02:51.245 [info] (40862) Background analysis message: analyze
This is pretty frustrating as it's really beneficial to have pylance working when working with CDK, and it worked without errors in previous versions.
Hope this gets fixed soon.
Seems better with Pylance v2023.4.40, but at the end of the day the issue remains.
Steps to reproduce: leave Pylance running, and it will eat 1GB over the weekend doing nothing.
With default parameters:
"C:\Program Files\Microsoft VS Code\Code.exe" --ms-enable-electron-run-as-node c:\Users\jpadmin\.vscode\extensions\ms-python.vscode-pylance-2023.4.40\dist\server.bundle.js --cancellationReceive=file:fb54b59cd212920eaf247255a756a1ca661f9e98ed --node-ipc --clientProcessId=6432
Is there any way I can provide debug info to help solve the issue? I've been having this issue for months.
I can safely say I only have this issue in this workspace; other workspaces work fine.
This issue has been closed automatically because it needs more information and has not had recent activity. If the issue still persists, please reopen with the information requested. Thanks.
Why was this closed?
It was marked as requiring more information and the bot auto closes things if nobody responds. Not sure what the information we were waiting for was, so maybe it was marked that way by mistake.
If you're having an out-of-memory crash with Pylance, it's better to open a new issue instead of adding to this one, though. Most memory issues require a specific repro, so it's likely these are all different.
I came here via a Google search when I hit a similar problem: pylance taking huge amounts of memory, leading to OOM.
I suggest that language servers (not only pylance) expose a command in the command palette to examine memory profiling, so that the language server could take effective action to limit memory.
@crackevil if you're having an OOM crash, could you open a new issue? We'd need to reproduce it in house in order to debug.
I've followed the pylance-is-crashing troubleshooting markdown, set python.analysis.nodeExecutable to /usr/bin/node with node v21, put export NODE_OPTIONS="--max-old-space-size=10000" in my .bashrc, and reloaded my vscode by pkill'ing it myself.
But I still see the command line /usr/bin/node --max-old-space-size=8192 <path to my home vscode pylance extension/server.bundle.js -- -- ........> and pylance still crashes when the memory is used up.
Here is how I mitigated memory usage - vscode by default scans too many files (if you're not careful):
BTW, if you still have a python.linting.ignorePatterns setting, that's now replaced with python.analysis.ignore (I think).
Other than that, use something like the settings below (which are aimed at Python, but I presume you can figure out which folders to add for your project).
"files.exclude": {
    "**/*-report.*/**": true,
    "**/*.egg-info/**": true,
    "**/.coverage/**": true,
    "**/.git/**": true,
    "**/.mypy_cache/**": true,
    "**/.pytest_cache/**": true,
    "**/.tox/**": true,
    "**/__pycache__/**": true,
    "**/htmlcov/**": true
},
"files.watcherExclude": {
    "**/*.egg-info/**": true,
    "**/.egg-info/**": true,
    "**/.git/**": true,
    "**/.mypy_cache/**": true,
    "**/.pytest_cache/**": true,
    "**/.tox/**": true,
    "**/.venv/**": true,
    "**/__pycache__/**": true,
    "**/htmlcov/**": true
},
"python.analysis.exclude": [
    "**/__pycache__",
    "**/.git",
    "**/.mypy_cache",
    "**/.pytest_cache",
    "**/.tox",
    "**/htmlcov",
    "**/*.egg-info"
],
"python.analysis.ignore": [
    "**/.vscode/**",
    "**/__pycache__/**",
    "**/.egg-info/**",
    "**/.git/**",
    "**/.mypy_cache/**",
    "**/.pytest_cache/**",
    "**/.tox/**",
    "**/.venv/**",
    "**/*.egg-info/**",
    "**/htmlcov/**",
    "**/site-packages/**/*.py"
],
"search.exclude": {
    "**/*.egg-info/": true,
    "**/*.html": true,
    "**/.git": true,
    "**/.mypy": true,
    "**/.tox": true,
    "**/htmlcov/": true,
    "**/repos/**": true,
    "**/site-packages/**": true,
    "**/test-report.xml": true
}
Bonus if you use the coverage-gutters extension:
"coverage-gutters.ignoredPathGlobs": "**/{node_modules,venv,.venv,vendor,.git,.tox,.*_cache,__pycache__}/**",
If you don't set that, the performance of coverage-gutters is going to suffer dramatically too.
Environment data
Repro Steps
Sadly I don't have repro steps 😞 -- it's quite a large private repo.
Expected behavior
Pylance hits a 2GB heap limit after loading just about 700 files from a private repo, which I'd estimate is under a million lines of code. I would expect Pylance to allow you to increase the memory limit up to, say, 8GB or 16GB. I am not 100% sure this would fix the problem, but it makes sense that users might want to hold more than 2GB of type information in a single workspace.
FWIW, it is possible to load this repo into Pycharm, although it's required that you up the memory limit to 8GB from Pycharm's default of 4GB.
If that doesn't help, I'd also expect to be able to fully ignore whole packages outside of my particular area. Even though I've set python.analysis.include to my own team's directory, Pylance still crawls the entire dependency tree of all of my files.
Actual behavior
Pylance provides no way to increase the limit above 2GB (at least not that I can find). I have tried adding export NODE_OPTIONS="--max-old-space-size=8192" to my .bashrc but that didn't change anything.
Logs
I've redacted the file names from the repo in this snippet, but it should give an idea. The main thing to notice here is there are no "whale" files that are blowing it up, it's just hundreds of reasonably sized files that all need to be loaded.
pylance_out_redacted.txt