microsoft / vscode


Automatic classification: more aggressive debounce and perf considerations #129607

Open isidorn opened 3 years ago

isidorn commented 3 years ago

Testing #129436

On my machine it takes around 300 ms for the language classification to compute. However, some of our users have slower machines, so here are some ideas on how to make sure we do not slow things down for users who are just scribbling in an untitled editor.

Both 3 and 4 require some knowledge about what counts as a bigger change to a file. We can figure something out. Maybe @bpasero has ideas.
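One rough way to tell a "bigger change" apart from ordinary typing (purely illustrative; the field names mirror the content change events shown later in this thread, and the threshold is made up) would be to look at the size of the change events the text model emits:

    // Illustrative heuristic only: treat an edit as "big" when it inserts or
    // replaces a lot of text at once (paste, multi-cursor edit), as opposed to
    // a single keystroke. The threshold is arbitrary.
    interface SimpleContentChange {
        readonly rangeLength: number; // length of the replaced text
        readonly text: string;        // newly inserted text
    }

    function isBigChange(changes: readonly SimpleContentChange[], threshold = 50): boolean {
        const touched = changes.reduce(
            (sum, change) => sum + Math.max(change.rangeLength, change.text.length),
            0
        );
        return touched > threshold;
    }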

bpasero commented 3 years ago

💯

We have to be very, very careful about running an expensive operation on the same thread the user is typing in (see https://github.com/microsoft/vscode/issues/27378 for some inspiration). At a minimum we should debounce in a way that continuous typing further delays the computation, up to the point where the user stops typing for a moment.
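A minimal sketch of such a typing-aware debounce (illustrative only, not the actual implementation; `detectLanguage` and `model` are stand-in names):

    // Illustrative sketch: every change resets the timer, so the expensive
    // detection only runs once the user has paused typing for `idleDelayMs`.
    function createTypingAwareTrigger(run: () => void, idleDelayMs = 600): () => void {
        let handle: ReturnType<typeof setTimeout> | undefined;
        return () => {
            if (handle !== undefined) {
                clearTimeout(handle);
            }
            handle = setTimeout(run, idleDelayMs);
        };
    }

    // Usage (names are stand-ins): call the returned function from the editor's
    // content change listener.
    // const scheduleDetection = createTypingAwareTrigger(() => detectLanguage(model));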

This becomes even more important if we decide to make language detection enabled by default.

Further thoughts:

rebornix commented 3 years ago

Thanks @bpasero for bringing this to my attention. I don't think running it on the main thread scales in terms of performance, especially now that we are also adding bracket matching on typing. Adding a few ms doesn't seem to hurt typing, but if every feature does so, a single keystroke can easily exceed 16 ms (the frame budget at 60 fps) and we can't finish rendering within a single frame.

Note that we already have tokenization and multiple event listeners (auto save trigger), which take a couple of ms each.

isidorn commented 3 years ago

All good points, and I think we should explore all our options here.

  1. We should definitely not do this on every keystroke but have a smart debounce strategy (some ideas are mentioned in my initial comment)
  2. Moving this to the worker sounds reasonable (a rough sketch of the idea is below). @rebornix is there a good example you could share of some worker-side computation?
  3. @bpasero the model can only handle strings of up to 100 000 characters. So @TylerLeonhardt we should just take a subset of the whole content, and not hold the full string in memory. fyi @yoeo: an idea would be for the model to handle streams, not only strings
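For item 2, a bare-bones sketch of what the offload could look like, using plain Web Worker APIs for illustration (VS Code's actual editor-worker infrastructure is more involved; 'languageDetectionWorker.js' and the message shapes are made up):

    // Main thread (illustrative): send the bounded text to a worker and await a guess.
    function detectInWorker(text: string): Promise<string | undefined> {
        return new Promise(resolve => {
            const worker = new Worker('languageDetectionWorker.js'); // hypothetical bundle
            worker.onmessage = event => {
                resolve(event.data.languageId as string | undefined);
                worker.terminate();
            };
            worker.postMessage({ text });
        });
    }

    // Worker side (illustrative):
    // self.onmessage = async event => {
    //     const languageId = await runDetectionModel(event.data.text); // stand-in for the library call
    //     self.postMessage({ languageId });
    // };
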
rebornix commented 3 years ago

I can chat with @TylerLeonhardt on how to move it to a worker; we already have workers for word and link computation.

btw, here is some data on current typing performance with the latest Insiders, which we should improve:

[screenshot: typing performance measurements]

So @TylerLeonhardt we should just take a subset of the whole content, and not hold the full string in memory

@TylerLeonhardt note that the text model supports fetching a string within a range, which is more efficient than getting the whole string.

bpasero commented 3 years ago

Yeah, this is an unfortunate consequence that is specific to untitled files, where we use the content of the untitled buffer as the title for the tab. If you want to file an issue for it where we track offenders, we can look into that individually 👍

I think it is limited to changes to the first line only though, not any line after that. So it might not be an issue for typical usage.

TylerLeonhardt commented 3 years ago

Updated what's done and what's not

bpasero commented 3 years ago

Nice. I think from my end the biggest ask is to support streaming, to prevent producing a string of the entire editor contents, and maybe cancellation support, though I am not entirely sure that is even possible in the library itself...

TylerLeonhardt commented 3 years ago

I'm trying to understand how streaming would work if the work is done in a Worker.

Wouldn't the data need to be serialized to be transferred to the worker and lose the benefit of streaming?

Maybe I'm misunderstanding.

bpasero commented 3 years ago

Yeah you are right, that suggestion is from the time before we even thought about workers...

Still, even when IPC is involved you can avoid creating a string from the contents by simply sending over the chunks that you get from the editor itself. The editor contents are partitioned into chunks (not sure if line by line or at a fixed size), so you could send these existing chunks over to the worker and pass them on into the detection library.
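A sketch of that chunk-forwarding idea (illustrative only; the message shapes are made up, and the 100 000 character limit comes from the library discussion above):

    // Illustrative: forward the editor's existing chunks to the worker instead of
    // building one big string on the UI thread. Stop once the limit the detection
    // library can handle is reached.
    const MAX_DETECTION_CHARS = 100_000;

    function sendChunksToWorker(worker: Worker, chunks: Iterable<string>): void {
        let sent = 0;
        for (const chunk of chunks) {
            if (sent >= MAX_DETECTION_CHARS) {
                break;
            }
            const piece = chunk.slice(0, MAX_DETECTION_CHARS - sent);
            worker.postMessage({ type: 'chunk', piece }); // made-up message shape
            sent += piece.length;
        }
        worker.postMessage({ type: 'detect' }); // tell the worker to run the model
    }
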

bpasero commented 3 years ago

The encoding library we use works with that model:

https://github.com/microsoft/vscode/blob/83f4dfdff121dee60169620e738dc2b9dfea0bec/src/vs/workbench/services/textfile/common/encoding.ts#L181-L182

...

https://github.com/microsoft/vscode/blob/83f4dfdff121dee60169620e738dc2b9dfea0bec/src/vs/workbench/services/textfile/common/encoding.ts#L224-L224

TylerLeonhardt commented 3 years ago

Still, even when IPC is involved you can avoid creating a string from the contents by simply sending over the chunks that you get from the editor itself.

I'm using the same code that the EditorWorker uses, which sends over change events that look like this:

export interface IModelContentChange {
    /**
     * The range that got replaced.
     */
    readonly range: IRange;
    /**
     * The offset of the range that got replaced.
     */
    readonly rangeOffset: number;
    /**
     * The length of the range that got replaced.
     */
    readonly rangeLength: number;
    /**
     * The new text for the range.
     */
    readonly text: string;
}

And the MirrorModel handles syncing resources. https://github.com/microsoft/vscode/blob/c2f6aa497f7462c293e1f6301707bf0c091b1712/src/vs/editor/common/model/mirrorTextModel.ts

So we're already only sending changes over to the worker when the content changes... though not the chunks you're referring to, I think. If we're talking about optimizing that further, it should probably be done for all of our editor worker code.
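Roughly, such a mirror applies each replacement at its character offset (a heavily simplified sketch over a flat string; the real MirrorModel works line by line and also tracks versions):

    // Heavily simplified sketch of mirroring (not the real MirrorModel): keep a
    // flat string copy and splice each replacement in at its offset.
    interface SimpleChange {
        readonly rangeOffset: number; // start offset of the replaced range
        readonly rangeLength: number; // length of the replaced range
        readonly text: string;        // replacement text
    }

    function applyChange(value: string, change: SimpleChange): string {
        return (
            value.substring(0, change.rangeOffset) +
            change.text +
            value.substring(change.rangeOffset + change.rangeLength)
        );
    }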

and pass them on into the detection library.

The model doesn't have any internal concept of what the contents of a file look like (in other words, it doesn't keep state), so we need to send as much content as we can to it so that we have the best chance of guessing correctly. I worry that running the model on anything smaller than the whole contents of the file will come with a steep decrease in confidence.

bpasero commented 3 years ago

Oh yeah that is good, wasn't aware you also benefit from the mirror model code.

And yeah, I agree the library needs to run detection on the entire contents for a confident decision but should probably have an upper limit of how large these contents can be. That way you could avoid allocating a string of the entire file (which can be GBs) and only use up to the maximum supported chunk size.

TylerLeonhardt commented 3 years ago

but should probably have an upper limit of how large these contents can be

It does :) 100000 characters: https://github.com/microsoft/vscode-languagedetection/blob/main/lib/index.ts#L82

because after that, tfjs throws some error: https://github.com/microsoft/vscode/issues/129597

Ideally I would only create a 100000 character string instead of a string of the entire file... but I believe the file is represented as lines, so I'd have to roll my own "get the first 100000 characters of this string" function... which I haven't done yet.
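What such a hand-rolled helper could look like over an array of lines (illustrative only; the approach that actually landed is shown at the end of this thread):

    // Illustrative only: join lines until the character budget is reached, then
    // trim to exactly the limit.
    function firstCharsFromLines(lines: readonly string[], limit = 100_000): string {
        let result = '';
        for (const line of lines) {
            if (result.length >= limit) {
                break;
            }
            result += result.length === 0 ? line : '\n' + line;
        }
        return result.slice(0, limit);
    }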

bpasero commented 3 years ago

Yeah, the closest I see is getValueInRange. @rebornix is there a way to get the value of the text buffer up to a maximum length without having to allocate the entire buffer into a string?

I guess one solution could be to model.snapshot and then iterate over the chunks until max length is reached, but I am not sure these APIs exist in the mirror model.
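A sketch of that snapshot idea (assuming a createSnapshot()/read() shape like the main-thread text model offers; as noted, the mirror model may not have an equivalent):

    // Sketch: read snapshot chunks until the character budget is reached, so the
    // whole buffer never has to be materialised as a single string.
    interface TextSnapshotLike {
        read(): string | null; // next chunk, or null when exhausted
    }

    function readPrefixFromSnapshot(
        model: { createSnapshot(): TextSnapshotLike },
        limit = 100_000
    ): string {
        let result = '';
        const snapshot = model.createSnapshot();
        let chunk: string | null;
        while (result.length < limit && (chunk = snapshot.read()) !== null) {
            result += chunk;
        }
        return result.slice(0, limit);
    }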

TylerLeonhardt commented 3 years ago

Moving the other things to the backlog as I don't think we need to do them right this second. I'm going to have a TPI for this so that testers can try out the perf.

TylerLeonhardt commented 3 years ago

@rebornix gave me the solution:

    // Grab the first 10000 characters
    const end = model.positionAt(10000);
    const content = model.getValueInRange({
        startColumn: 1,
        startLineNumber: 1,
        endColumn: end.column,
        endLineNumber: end.lineNumber
    });

Added in f28e845cbbc6fc4230bff99e083df18c481f6d62 (with a code clean up commit 430e98bf678e9b0e8f69365c87688dfd3ceb9778)