Open jacob314 opened 2 years ago
We should add an analysis_options.yaml setting specifying the maximum allowed memory usage for the project.
I don't believe that this makes sense, given that every package can have its own analysis options file. Which value would we use? What happens if a new package is opened, or an existing package is closed? If we're going to add user controls for this, it seems much more reasonable to add a command-line flag to the analysis server so that there's only one value. (Unless we really want the memory limit to fluctuate with the number of open packages, in which case I suppose the limits could be additive and could be changed dynamically by the user while the server is running.)
But it isn't clear to me that it makes sense to do this, at least not as the code is currently written. When the server crashes, clients typically restart it automatically. If the server is leaking memory, that might be a reasonable solution, because when it's restarted it will initially use less memory and only gradually get back to a state where it needs to be restarted again. But if the user is attempting to develop in a code base that's too large for the server to handle, it will be killed during the initial analysis, then restarted, only to be killed again during the initial analysis, ad infinitum.
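The two failure modes above (a slow leak, where restarting helps, versus a code base too large to ever finish initial analysis, where restarting loops forever) suggest that any client-side restart logic needs crash-loop detection. A minimal sketch of such a policy, with hypothetical thresholds (`min_uptime_s`, `max_fast_crashes` are illustrative names, not part of any existing client):

```python
# Hypothetical restart policy sketch: a client that auto-restarts the
# analysis server should stop restarting if the server keeps dying
# shortly after startup, e.g. because initial analysis always blows
# past the memory limit.
class RestartPolicy:
    def __init__(self, min_uptime_s=60.0, max_fast_crashes=3):
        # A crash is "fast" if the server ran for less than min_uptime_s.
        # After max_fast_crashes consecutive fast crashes, give up
        # instead of killing and restarting ad infinitum.
        self.min_uptime_s = min_uptime_s
        self.max_fast_crashes = max_fast_crashes
        self.fast_crashes = 0

    def record_crash(self, uptime_s):
        if uptime_s < self.min_uptime_s:
            self.fast_crashes += 1
        else:
            # A long-lived run (the slow-leak case) resets the counter:
            # there, a restart is a reasonable mitigation.
            self.fast_crashes = 0

    def should_restart(self):
        return self.fast_crashes < self.max_fast_crashes
```

For example, a server that ran for an hour before being killed would be restarted, while a server killed three times within its first minute would not be.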
There is evidence that the analysis server can sometimes leak a significant amount of memory (https://github.com/dart-lang/sdk/issues/48875). When analysis server memory usage gets high, it degrades overall system performance, particularly for users on lower-end machines. It is better to kill and automatically restart the analysis server, accepting a short outage of analysis functionality, than to leave users wondering why Dart IDEs are not performant.
We should add an analysis_options.yaml setting specifying the maximum allowed memory usage for the project. The very few users who need more than the default ~8GB(?) of memory can modify their analysis options files. We could also express the limit as a percentage of physical memory on the machine; users with only 8GB of memory would probably prefer a 4GB limit.
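A sketch of what such a setting might look like. Both keys are hypothetical; neither exists in the analyzer's options today, and the names and defaults are illustrative only:

```yaml
analyzer:
  # Hypothetical absolute cap; the few users who need more than the
  # default could raise it here.
  memory-limit-mb: 8192
  # Hypothetical alternative: cap as a percentage of physical memory,
  # so an 8GB machine would get a ~4GB limit.
  memory-limit-percent: 50
```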
It would be nice if we could kill the main analysis server isolate while leaving the other isolates running, but that is not crucial, as IDEs already need to restart the analysis server when it crashes.
We already have proof from g3 workflows that the analyzer can be made generally robust in large code bases, even when we kill it for excessive memory usage at a fairly low threshold.