@joecrop I tried to look into it, but couldn't figure it out. Do you know if there is a way to avoid running Actions twice in scenarios where the branch is part of the repo and a PR exists?
@joecrop I'm seeing that potential_references can easily grow into millions of indexed objects. I'm debating whether to set the default value of the setting to exclude potential_references, or at the very least to limit it when there are more than a (flat) 100 files, similar to how the indexing is limited (rough sketch after the snippet below). Ref:
if (total_files >= 1000 * this.parallelProcessing || this.forceFastIndexing) {
    return this.parser.get_all_recursive(doc, 'fast', 0);
}
if (total_files >= 100 * this.parallelProcessing) {
    return this.parser.get_all_recursive(doc, 'declaration', 0);
}
if (doc.lineCount > this.maxLineCountIndexing) {
    window.showInformationMessage(
        `The line count of ${workspace.asRelativePath(uri)} is larger than ${this.maxLineCountIndexing}. `
    ); // prettier-ignore
    return this.parser.get_all_recursive(doc, 'fast', 0);
}
// Otherwise, we parse the file with the precision requested by the user
return this.parser.get_all_recursive(doc, this.documentSymbolPrecision, 1);
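Roughly what I have in mind for the limit, as a sketch (the setting and function names here are placeholders, nothing like this exists yet):

// Hypothetical sketch: gate potential_references behind a flat file-count
// threshold, mirroring the fast/declaration fallbacks in the snippet above.
function shouldIndexPotentialReferences(
    totalFiles: number,
    parallelProcessing: number,
    indexPotentialReferences: boolean // would come from a new user setting
): boolean {
    if (!indexPotentialReferences) {
        return false; // user opted out entirely
    }
    // Above roughly 100 files per parallel slot the reference index can grow
    // into millions of objects, so stop collecting potential references there.
    return totalFiles < 100 * parallelProcessing;
}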
EDIT* Or we can make the regular expression more restrictive
@joecrop I tried to look into it, but couldn't figure it out. Do you know if there is a way to avoid running Actions twice in scenarios where the branch is part of the repo and a PR exists?
Yes, there's a way to limit this, I have done it before. Let me find out what I did in the past.
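For reference, the usual trick is to restrict the push trigger so that a branch with an open PR is only built once, via the pull_request event. This is just the general shape of it, not necessarily exactly what I did before:

# Only build pushes to the default branch; feature branches are covered by
# the pull_request trigger, so a branch with an open PR runs once, not twice.
on:
  push:
    branches:
      - master   # adjust to the repo's default branch
  pull_request: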
EDIT* Or we can make the regular expression more restrictive
Yeah, I'm all for making the regular expressions more restrictive, but I think it might not help performance because we would be parsing each file roughly 2X as many times. And if you think about it, probably 90% of the symbols in a file are indeed potential references. I think the best solution is as follows:
1. Like you said, if the file count is small enough, leave references on unless the user explicitly disables them in settings.
   * This should be a user-controlled setting, with a default of 100 like you suggested. My workspaces have 1M+ objects and 1000+ files and my computer can easily handle it, so I don't want to lose that capability.
2. If the file count is larger than the threshold, or the setting is disabled, we ideally still want the reference provider to work. I haven't gotten around to this, but the ideal scenario is to run an indexer-like algorithm that finds all the matching symbols in all the files (rough sketch below). This would be a relatively slow operation, but that is the tradeoff. This is how Sublime Text works today.
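To illustrate what I mean by the indexer-like fallback, here is a minimal sketch against the VS Code API; the function name, the file glob, and the word-boundary regex are simplifications, and cancellation/progress handling is left out:

import { Location, Position, Range, TextDocument, Uri, workspace } from 'vscode';

// Rough sketch: instead of keeping a huge potential_references index in memory,
// grep every SystemVerilog file in the workspace for the symbol on demand.
async function findReferencesSlow(symbol: string): Promise<Location[]> {
    const locations: Location[] = [];
    const uris: Uri[] = await workspace.findFiles('**/*.{sv,svh,v,vh}');
    // Escape regex metacharacters, then match the symbol as a whole word
    const escaped = symbol.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    const pattern = new RegExp(`\\b${escaped}\\b`, 'g');
    for (const uri of uris) {
        const doc: TextDocument = await workspace.openTextDocument(uri);
        const text = doc.getText();
        let match: RegExpExecArray | null;
        while ((match = pattern.exec(text)) !== null) {
            const start: Position = doc.positionAt(match.index);
            const end: Position = doc.positionAt(match.index + match[0].length);
            locations.push(new Location(uri, new Range(start, end)));
        }
    }
    return locations;
}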
Good points, I fully agree.
The proper fix for the performance issues is to use a real parser for all parsing that is not the fast path. That parser could be our own ANTLR solution, or another tool that produces an AST (e.g. Verible). When that change happens, the parser and indexer should also be moved to the server. Unfortunately that job is very large (and too large for me to take on at this moment 😢)
Yes, there's a way to limit this, I have done it before. Let me find out what I did in the past.
@eirikpre I modified test.yml in this branch to hopefully limit this. If it doesn't do what you are hoping, you can easily tweak it now.
Plugin extension for testing: systemverilog-0.13.2.vsix.zip
@eirikpre I did some testing and I don't see any problems with performance or functionality with the changes in this branch.
@eirikpre I moved the one leftover action from this PR to #186.
An attempt at closing down some bugs where the extension would misbehave.