Gimly / vscode-matlab

MATLAB support for Visual Studio Code
MIT License

2.2.0 freezing VS code on large files #142

Closed RobinTournemenne closed 2 years ago

RobinTournemenne commented 2 years ago

Hi Gimly,

sorry to tell you that this otherwise interesting 2.2.0 update seems to freeze VS Code on a large MATLAB file (during colorization, I guess): 689 lines defining one structure, notably containing large cell arrays (~20 cells usually) with matrices inside (usually 2×20).

I cannot provide you with this confidential file, but maybe, if you really need it, I can create an identical facsimile.

Best regards,

Robin

PS: I switched back to 2.0.1, which scans files 10 times faster.

Gimly commented 2 years ago

I have the feeling I will have to revert that version; it looks like something was really messed up.

Gimly commented 2 years ago

Looks like I can't easily remove a version from the marketplace. If it isn't too complicated for you, I'd be glad to have a sample file to reproduce the issue.

RobinTournemenne commented 2 years ago

You will find the file attached. I only had to create a facsimile of the first 200 lines or so to provoke the freeze. I think the last lines, with 700 columns or more, might be the culprit. (I saved it as .txt for GitHub upload, but it is a MATLAB .m file.)

PARAMETRES_BUG2.txt

rlivings39 commented 2 years ago

@SNDST00M Profiling, this looks like it happens during tokenization:

(profiler screenshot)

I can't say for certain, but it is likely related to the latest grammar changes, refactoring, language service, etc. Could you please take a look?

Attached is the Chrome profiler output from the file that @RobinTournemenne provided.

bigProfile.txt

ghost commented 2 years ago

That `tokenizeLine2` function could actually be vscode-textmate being called from the VSC TextMate language service.

Perhaps this would all work better if that service had access to token data from the vscode API itself.

ghost commented 2 years ago

Work is underway. Fix will land in vscode-textmate-language@0.1.1 on 28/10.

RobinTournemenne commented 2 years ago

amazing, thanks!

ghost commented 2 years ago

Just to confirm, this fix doesn't relate to vscode API. The problem is that tokenization occurs in 5 different providers until one terminates. I plan to amend this by implementing a shared tokenization queue and polling it.
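The shared-queue idea described above could look roughly like the following TypeScript sketch. All names here (`SharedTokenQueue`, `getTokens`, the split-based tokenizer) are hypothetical stand-ins, not the extension's actual code: the point is that providers request tokens keyed by document URI and version, so only the first request tokenizes and the other four reuse the cached result.

```typescript
// Hypothetical sketch of a shared tokenization cache: several providers
// ask for tokens, but the document is tokenized at most once per version.
type TokenizeFn = (text: string) => string[];

class SharedTokenQueue {
  private cache = new Map<string, string[]>();
  // Counts how many times the underlying tokenizer actually ran.
  tokenizeRuns = 0;

  constructor(private tokenize: TokenizeFn) {}

  getTokens(docUri: string, version: number, text: string): string[] {
    const key = `${docUri}@${version}`;
    let tokens = this.cache.get(key);
    if (!tokens) {
      tokens = this.tokenize(text);
      this.tokenizeRuns++;
      this.cache.set(key, tokens);
    }
    return tokens;
  }
}
```

With five providers each calling `getTokens` for the same document version, the tokenizer would run once instead of five times, which matches the bottleneck described in this comment.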

ghost commented 2 years ago

A fix for this is now published in the TM-based language service (0.1.1). I am planning major improvements to the API and gonna make it more stable too 🙏🏾 landing 14/11!

RobinTournemenne commented 2 years ago

Nice! I hope you will succeed!

ghost commented 2 years ago

@Gimly as things are, there has been a massive amount of extension churn (partly my bad), so would you be in favour of pushing a new version with this fix and the improved language service API in two weeks' time?

ghost commented 2 years ago

As with #140, I was adding a test suite in the backend TM-based service and that's now done. This will be fixed soon™

ghost commented 2 years ago

Had some regressions, sorry for the delay. I'll be making QOL changes for package devs, then will try to publish tomorrow.

ghost commented 2 years ago

Please give us an update on this issue - it seems to have improved but not vanished entirely for me

RobinTournemenne commented 2 years ago

Nice to see that it goes forward!

Does the file that I shared above work for you? (PARAMETRES_BUG2.m)

On my fairly decent machine it is still not usable, even if VS Code does not completely freeze anymore.

RobinTournemenne commented 2 years ago

"Go to definition" is also lagging, and unfortunately also in "normal" situations. See the attached example that I made; it is a facsimile. When I try to go to youpi.m (via Alt+click, for example, or F12), it takes a few seconds (it is instantaneous in version 2.0.1). In the original file (which is longer) it takes up to 30 seconds...

script.zip

ghost commented 2 years ago

Save takes 2 seconds for me, and Go to Definition as well. But the COUCOU function combined with the extractorMidiarpege1 logic actually times out when I Alt+click youpi().

I think it would be beneficial to create a new tracking issue for big files, and also note there that MathWorks is working on a vastly more performant LSP that they keep teasing in the Discord, goddamnit.

RobinTournemenne commented 2 years ago

+1

And also for Go to definition: it took 2 seconds on the file that I created, but on a file with more lines it will increase to an even more undesirable number of seconds ^^

I am reverting to 2.0.1 while waiting for better days.

Gimly commented 2 years ago

@SNDST00M unfortunately it looks like the issue isn't only with big files, given the new issues that we are getting about slowness.

ghost commented 2 years ago

I decided to have the engine hash files with SHA-1 so that documents are no longer compared by indexing the entire document text. I suspect this caused the slowdown, but I'll run some tests to see if it's significantly faster without it.
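For illustration only (this is a sketch, not the engine's actual code, and `contentKey` is a made-up name), content hashing along these lines would use Node's built-in crypto module:

```typescript
import { createHash } from "crypto";

// Hypothetical sketch: reduce a document's text to a short SHA-1 digest,
// so change detection compares 40 hex characters instead of full text.
function contentKey(text: string): string {
  return createHash("sha1").update(text, "utf8").digest("hex");
}
```

The trade-off is that computing the digest still reads every byte of the document, so on very large files the hashing pass can itself become a cost, which would be consistent with the slowdown suspected here.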

ghost commented 2 years ago

Okay, thanks to #152 and @fschwaiger we've isolated the issue to the TextMate scope parsers, and it'll be a tougher nut to crack. I hope to break ground on this one by the weekend, and that it remains feasible to keep the Atom parsers.

ghost commented 2 years ago

As I understand it, this is still outstanding along with #162. But we await perf testing after #157. Note potential workaround - https://github.com/Gimly/vscode-matlab/issues/157#issuecomment-1030228546 - in the meantime.

zm-cttae-archive commented 1 year ago

Should be fixed by #174. This necessitated a full rewrite of the library:

Looking back, I cobbled the earlier iterations together and threw in code that was synchronous when it should have been async from the start; the async keyword was a great gift in the rewrite. I've also learnt a lot in the time I was busy, which helped bring sanity by killing off the technical debt. Sorry I couldn't do it before, but I'm very happy with my 100 or so commits to clean the library up.
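In spirit, a sync-to-async restructuring like the one described might look like this hypothetical chunked tokenizer (the whitespace-splitting logic is a stand-in for the real grammar, and the function name is invented): by awaiting between chunks, a long tokenization pass yields back to the event loop instead of blocking the extension host on one giant synchronous run.

```typescript
// Hypothetical sketch: tokenize a large file in chunks, yielding to the
// event loop between chunks so the extension host stays responsive.
async function tokenizeLinesAsync(
  lines: string[],
  chunkSize = 100
): Promise<string[][]> {
  const result: string[][] = [];
  for (let i = 0; i < lines.length; i += chunkSize) {
    for (const line of lines.slice(i, i + chunkSize)) {
      // Stand-in tokenizer; a real grammar would run here.
      result.push(line.split(/\s+/).filter(Boolean));
    }
    // Give other work (user input, other providers) a chance to run.
    await new Promise<void>((resolve) => setImmediate(resolve));
  }
  return result;
}
```

This is the kind of shape the `async` keyword makes natural: the per-line work is unchanged, but a 689-line file no longer monopolizes the thread for the whole pass.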