microsoft / vscode

Visual Studio Code
https://code.visualstudio.com
MIT License

Tokenization overhaul #77140

Open alexdima opened 5 years ago

alexdima commented 5 years ago

The current tokenization story of VS Code is based on TM grammars, which are pretty powerful, but we are running into their limits when we want to do more than a top-down scanner can do. Also, once you implement a TM interpreter, you realize how inefficiently regular expressions must be evaluated and how TM grammars were never meant to do much more than simple coloring using just a few rules... The fact that we now have these complex grammars that end up producing beautiful tokens is more a testament to the amazing computing power available to us than to the design of the TM grammar semantics.

At the time when TM grammars were introduced and became popular, there were no language servers available that understand the semantics of a language. Therefore, TM grammars were also used to color semantic constructs. The introduction of LSP has brought us language servers for many languages, and we want to leverage this power to reduce the complexity of the tokenizer/classifier. There is already an effort under way to specify how such an API might look under LSP at https://github.com/microsoft/vscode-languageserver-node/pull/367

In any case, for smarter languages where we offer great extensions, such as for TypeScript or C++, we have noticed two different patterns emerge to try and compensate for these limitations.


Complex TM grammar approach (TypeScript)

This approach was taken by TypeScript, where we now have immense regular expressions, which are a testament to the smartness of the author, but which are potentially very slow to evaluate on the UI thread.


Text Editor Decorations (C++)

This approach was taken by C++, where we now receive potentially unbounded amounts of text editor decorations, used to represent semantic tokens, which are pushed by the C++ extension to correct or enhance the TM grammar. The limits of text editor decorations are starting to show; I have collected some of the issues under this query. Due to their memory cost, complexity, and breadth of usage (i.e. we cannot touch the editing logic around them at this point), text editor decorations are not the right tool for this job...


Both approaches show that there is a real need for something more, and that folks who care can get really creative and smart in tackling this need even where we as a platform fall short. This issue is about overhauling how tokenization works in VS Code and tries to address multiple goals at once:

1. Move tokenization off the UI thread

Today, TM tokenization runs on the UI thread. Even more interesting, we have numerous features (such as typing a } or typing (, ', ", etc.) where we need to know synchronously, at the time we interpret the typed character, whether we are in a comment, in a string, in a regex, or somewhere else... So we have code paths where we end up tokenizing the current line synchronously (given that the line above is tokenized) in order to find out the exact context we are in, and then we make a decision based on that.

We have looked into this and built a prototype where we removed the synchronous tokenization... Moving this kind of classification off the UI thread entirely would result in severe flakiness... In other words, sometimes pressing ' would insert '|' and sometimes only '|, in the same file, in the same location, based purely on typing speed and the time it takes to send tokens over from the web worker. Having an editor where typing something does one thing 90% of the time and another thing 10% of the time based on typing speed would IMHO be completely unacceptable.

As a one-off approach, we have written a fast classifier for comments, strings and regexes for TS, in TS. We will experiment to see if this classifier could be used synchronously on the UI thread to determine what to do when typing these characters (}, ', etc.). The challenge here lies in making it incremental (not starting from the beginning of the file for each keystroke). Also, since these syntax constructs are "rare" relative to the entire body of text, a line-based representation would not be a good one. Yet another idea is that perhaps we shouldn't store the locations of strings, comments, etc., but only the save-points between them, given that the classifier would be fast enough to compute them again.
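As an illustration of the save-point idea, here is a minimal sketch (not VS Code's actual classifier; all names and the toy three-state grammar are made up) of a synchronous line scanner that stores only one end-of-line state per line, so a keystroke only needs to re-scan from the edited line:

```typescript
// Minimal sketch of a synchronous classifier with per-line "save points".
// Tracks only three contexts: plain code, single-quoted string, block comment.

enum State { Code, String, BlockComment }

// Scan one line starting in `state` and return the state at the line's end.
function scanLine(line: string, state: State): State {
  let i = 0;
  while (i < line.length) {
    const ch = line[i];
    if (state === State.Code) {
      if (ch === "'") state = State.String;
      else if (ch === "/" && line[i + 1] === "*") { state = State.BlockComment; i++; }
      else if (ch === "/" && line[i + 1] === "/") return State.Code; // rest of line is a comment
    } else if (state === State.String) {
      if (ch === "\\") i++;                   // skip the escaped character
      else if (ch === "'") state = State.Code;
    } else if (state === State.BlockComment) {
      if (ch === "*" && line[i + 1] === "/") { state = State.Code; i++; }
    }
    i++;
  }
  // Single-quoted strings do not span lines in this sketch, so reset them.
  return state === State.String ? State.Code : state;
}

// The "save points": index i holds the classifier state after line i.
// After an edit, only lines from the edited one onward need re-scanning,
// and scanning can stop as soon as a line's new end state matches the
// previously stored one.
function computeSavePoints(lines: string[]): State[] {
  const states: State[] = [];
  let s = State.Code;
  for (const line of lines) {
    s = scanLine(line, s);
    states.push(s);
  }
  return states;
}
```

A real implementation would also have to handle double quotes, template literals, and regex literals; the point is only that one small integer per line is enough to resume scanning from anywhere.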

Another idea circulating was to enable writing Monarch classifiers and contributing them from extensions. This would avoid some of the bad design choices of TM, but would still mean evaluating regexes written by extensions on the UI thread. Yet another idea was to have a "fast" TM grammar that only deals with strings, comments and regexes, and another normal one for tokens -- again with the same problem of running extension-written regexes on the UI thread. Another idea was to build some base parser with components such as C-style comments, C-style strings, etc., which could be exercised by extensions (i.e. some kind of higher-order constructs than regexes). Or maybe we should just hand-write parsers for the top 90% of our languages to detect strings, comments and regexes... We have not yet taken any clear decision, as we still need to experiment in this area to learn more...

2. Accept tokens from the extension host (semantic tokenization)

Moving TM grammars off the UI thread is good for reducing our freezes and crashes, but it still does not address the fundamental limitations of TM. Here we need to add API such that semantic tokens can be pushed by the extension host. These tokens should behave very much like text editor decorations, but with a different implementation where we can represent them with a lot less memory (just 2 or 3 32-bit numbers, like we do with the other tokens). We should also tweak the way they are adjusted around typing to make the most sense for tokens...
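To make the memory point concrete, here is a sketch of one possible packed representation (a hypothetical layout, not the final API): three 32-bit numbers per token in one flat typed array, instead of one decoration object per token.

```typescript
// Hypothetical packed layout: [line, startChar, (length << 8) | tokenTypeId]
// per token, stored contiguously in a Uint32Array.

interface SemanticToken { line: number; char: number; length: number; typeId: number; }

function packTokens(tokens: SemanticToken[]): Uint32Array {
  const out = new Uint32Array(tokens.length * 3);
  for (let i = 0; i < tokens.length; i++) {
    const t = tokens[i];
    out[3 * i] = t.line;
    out[3 * i + 1] = t.char;
    out[3 * i + 2] = (t.length << 8) | t.typeId; // length in high bits, type id in low 8 bits
  }
  return out;
}

function unpackToken(data: Uint32Array, i: number): SemanticToken {
  return {
    line: data[3 * i],
    char: data[3 * i + 1],
    length: data[3 * i + 2] >>> 8,
    typeId: data[3 * i + 2] & 0xFF,
  };
}
```

Compared to a decoration object (with its range object, options, and bookkeeping), twelve bytes per token is a large constant-factor saving, and the flat array is cheap to transfer between threads.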

They also need to be updatable incrementally and only as needed. There are discussions about using the visible-ranges APIs to prioritize the regions which should receive semantic tokens first. We have not yet begun drafting an API or a reference implementation.

3. (low priority) Enable the integration of other tokenization engines

This is just here to remind us that we might want to move away from the inefficient TM grammars completely at some point in the future. There is a lot of love for Tree-sitter nowadays and we might want to investigate using it, or we might want to roll our own story, since we do actually have a lot of experience in this area...


Tasks

paulyoung commented 5 years ago

It would be great if the proposal to add syntax highlighting to the language server protocol could be considered. I think the latest on that is here: https://github.com/microsoft/vscode-languageserver-node/pull/367

fwcd commented 5 years ago

Tree-Sitter support would be amazing! It does have a fast parser and grammars for many popular languages (including syntax-highlighters that map Tree-Sitter AST nodes to TextMate scopes).

jeff-hykin commented 5 years ago

Excellent post @alexandrudima. Having maintained the C++ TM grammar and helped get Tree-sitter working, I can say I've experienced the limitations of the Decorations API and TextMate first-hand.

To add to your point about complex regexes: the C++ grammar has a single pattern (regex + capture groups) that is 20,000 characters long, more characters than the entire Go TM grammar file.

Base Parser Idea

Another idea was to build some base parser, with components such as C-style comments, C-style strings, etc

I'm confident in saying a base parser is most likely impractical.

If there were a functional base parser, it would not be very generic; it would be more akin to a repo of every language combined.

Normal + Streamlined Grammar Idea

a "fast" TM grammar that only deals with strings, comments and regexes and another normal one for tokens ... we still need to experiment in this area to learn more...

I do think this is realistic, and even easy. Although it doesn't fix the underlying problem, it could prevent or reduce many of the severe slowdowns. It is only a matter of removing patterns from an existing grammar. Defaulting to a single synchronous grammar, but allowing a language to specify both a synchronous (semantic) and an asynchronous (highlighting) grammar, could provide a lot of benefits for heavyweight grammars like C++ and TypeScript.

Adding this would allow exploring the move of TM grammars off the UI thread without the full risk. And making it a toggleable feature would let users help debug the non-UI-thread path early on.
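As a sketch of what such a streamlined grammar could contain (hypothetical scope names; a real grammar would live in a tmLanguage JSON file), the synchronous half would keep only the string and comment patterns and drop everything else:

```typescript
// Hypothetical "fast" grammar: only the rules needed to answer
// "is the cursor in a string or comment?", expressed as a TextMate-style
// grammar object. All other rules from the full grammar are dropped.
const fastGrammar = {
  scopeName: "source.ts.fast", // made-up scope name
  patterns: [
    { name: "comment.line.double-slash.ts", match: "//.*$" },
    { name: "comment.block.ts", begin: "/\\*", end: "\\*/" },
    { name: "string.quoted.single.ts", begin: "'", end: "'", patterns: [{ match: "\\\\." }] },
    { name: "string.quoted.double.ts", begin: "\"", end: "\"", patterns: [{ match: "\\\\." }] },
  ],
};
```

A handful of begin/end rules like these evaluate orders of magnitude faster than a full grammar, which is exactly why this split could be run synchronously.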

[x] write a fast TS classifier of comments, strings, regex (done here, with tests here)

Is there any performance data on this, in comparison to the normal TS tmLanguage and/or a comments/regex/strings-only tmLanguage?

Non-Synchronous Solutions

we have numerous features (such as typing a } or typing (, ', ", etc) where we need to know synchronously, at the time we interpret the typed character if we are in a comment, in a string, in a regex,

Is there a way to make those features asynchronous as well? Part 1 makes it seem like those features must stay synchronous. However:

sometimes pressing ' would insert '|' and sometimes only '| based purely on typing speed and the time it takes to send tokens over from the web worker

this makes it seem like web workers are a non-viable solution. But then the checklist (especially the last item) makes it sound like web workers are a planned and viable solution. Am I missing something?

Tokens from the Extension Host

how do we manage two tokens stores, one owned by TM, and one owned by the extension host, how do we merge them to give a consistent picture to the editor view?

It seems to me that the extension tokens will need a start and an end, and that the extension tokens would always override/separate the TM tokens.

Semantic Representation

TM scopes cannot effectively express basic concepts; a TM token cannot be both a constant and a function, or both a variable and a function, or both a class and an object. Whatever parser is used, whether it be a language server extension or Tree-sitter, it would be very unfortunate if its semantic information were crippled by having to use TM scopes to communicate meaning.

If grammar scopes and extension semantic-token information are known statically, there could be a way to generate an efficient function (similar to the trie used currently) that accepts a scope + semantic information as input and returns an enumerated style as a 32-bit output. If the semantic information is needed on the token (for uses other than color), the semantic info could be made immutable, placed in a set, and a 32-bit enumeration stored on the token itself. This would introduce one extra step in looking up semantic information. Storing semantic information directly on the token would instead mean limiting the amount/kind of semantic information; there would need to be a universal standard.
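The interning step described above could look roughly like this (a sketch; `intern`, `lookup`, and the `SemanticInfo` shape are made-up names, not a proposed API):

```typescript
// Immutable semantic info objects are deduplicated in a registry; a token
// then stores only the registry index as a single 32-bit number.

type SemanticInfo = Readonly<{ tokenType: string; modifiers: readonly string[] }>;

const registry: SemanticInfo[] = [];
const indexByKey = new Map<string, number>();

function intern(info: SemanticInfo): number {
  const key = info.tokenType + "|" + info.modifiers.join(",");
  let id = indexByKey.get(key);
  if (id === undefined) {
    id = registry.length;
    registry.push(Object.freeze(info));
    indexByKey.set(key, id);
  }
  return id; // small integer, cheap to store on the token
}

function lookup(id: number): SemanticInfo {
  return registry[id]; // the "1 extra step" mentioned above
}
```

Because identical info always interns to the same id, any number of tokens can share one registry entry, which keeps per-token storage constant regardless of how rich the semantic information is.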

alexzielenski commented 5 years ago

Having recently crafted a language server using text decorations for semantic highlighting as well, I'm in agreement with @jeff-hykin that the most practical and not too difficult solution would be the "Normal + Streamlined Grammar Idea".

I've found that writing a very simple and barebones TM grammar for common/context-insensitive tokens like keywords and comments works very well. I then override the default highlight where necessary with semantic coloring using text decorations.

Thus, a standard API that lets an extension identify ranges of text by a token id number, while also letting the extension statically map those token id numbers to either tmScopes or a custom tokenColorCustomization (for extensions which might like to provide rainbow coloring or some other innovation I can't imagine), would be a more than sufficient solution for this task.

This has the added benefit of allowing tmThemes to take part in this semantic colorization via the familiar tmScope concept. tmScopes are not perfect, however, and it would be nice to attach more information to them beyond what's possible with the "dot"-separated identifier.

jeff-hykin commented 5 years ago

API for an extension to identify ranges of text as a token id number,

@alexzielenski When you say token id, are you talking about the tokens from this? For example:

{ startIndex: 8, scopes: ['source.js', 'meta.function.js'] }

Which gets converted into:

                 // bbbbbbbbb fffffffff FFF TTT LLLLLLLL
[ 8, 16793623 ]  // 000000010 000000001 000 000 00010111
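For readers decoding that number: assuming the bit layout shown in the comment (9 bits background color id, 9 bits foreground color id, 3 bits font style, 3 bits token type, 8 bits language id, from most- to least-significant bit), the fields can be extracted with shifts and masks:

```typescript
// Decode the packed token metadata under the bit layout shown above.
function decodeMetadata(metadata: number) {
  return {
    background: (metadata >>> 23) & 0x1FF, // 9 bits
    foreground: (metadata >>> 14) & 0x1FF, // 9 bits
    fontStyle: (metadata >>> 11) & 0b111,  // 3 bits
    tokenType: (metadata >>> 8) & 0b111,   // 3 bits
    languageId: metadata & 0xFF,           // 8 bits
  };
}
```

For 16793623 this yields background id 2, foreground id 1, and language id 23, matching the binary comment.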




would be a more than sufficient solution for this task.

There likely needs to be a way to feed text changes/updates to the extension too: telling it what line and what character(s) changed, and allowing the extension to update its previously provided tokens.
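Mechanically, a sketch of that update step could look like this (hypothetical shapes, not a proposed API): on a whole-line edit, tokens on the edited line are dropped (to be re-requested from the extension) and tokens below it are shifted, rather than recomputing everything.

```typescript
// Hypothetical token shape matching the { startIndex, endIndex, tokenId }
// idea from this thread, plus a line number.
interface Token { line: number; startIndex: number; endIndex: number; tokenId: number; }

// Apply an edit that touched `editedLine` and inserted/removed `lineDelta`
// lines: stale tokens on the edited line are dropped, later tokens shift.
function applyLineEdit(tokens: Token[], editedLine: number, lineDelta: number): Token[] {
  return tokens
    .filter(t => t.line !== editedLine)
    .map(t => (t.line > editedLine ? { ...t, line: t.line + lineDelta } : t));
}
```

Only the dropped tokens need to travel back over the extension-host boundary, which is what makes the update incremental.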

map these token id numbers to either tmScopes or a custom tokenColorCustomization

I'd highly recommend side-stepping TM scopes and instead allowing extensions to add labels to tokens. A TM token can't be both a variable and a function, even if semantically that's what the token is. It would create bad habits right off the bat to have extensions be unable to accurately inform VS Code about the token.

alexzielenski commented 5 years ago

@jeff-hykin When I said token id, I just meant an arbitrary number to which the extension could assign meaning. So, the extension could provide:

{ startIndex: 8, endIndex: 12, tokenId: 5 }

And also specify a map to the api:

registerTokenTypes(tid => {
    switch (tid) {
        case 5: return "entity.name.function" // or a TextDecoration
    } 
})

Regarding incremental text updates: I am already making use of an API provided by vscode to its language server to receive text deltas. I'm unsure if there exists an equivalent API for client-side vanilla vscode extensions, though.

I'd highly recommend side-stepping TM scopes and instead allowing the extensions to add labels to tokens.

I'm especially all for this - to allow the language-server/extension to assign arbitrary labels to a given token.

jeff-hykin commented 5 years ago

the extension could provide: { startIndex: 8, endIndex: 12, tokenId: 5 } And also specify a map to the api:

Ah I see, that's a great idea. That would greatly reduce the amount of data needed for communication. If there were a static way to provide such a mapping, I bet it could be combined with the trie generated from the theme to make for a direct way to go from tokenId: 5 to style data.

I am already making use of an API provided by vscode to its language server to receive text deltas.

Thanks for correcting me. I guess a functioning token API really is the only thing needed to get this working on a language server.

Since there is already a text delta API, I guess it would be best to figure out how to port it to normal extensions (or it may already be available, I'm not sure either).

mattacosta commented 5 years ago

For comparison, I benchmarked a few engines:

Tokenization performance (PHP) First tokenization run only. Does not include V8 optimizations from subsequent runs (like after editing a file).

File: PHPMailer.php Average of 10 iterations. All times in milliseconds (lower is better).

|              | WASM  | JS    | TreeSitter | TextMate |
|--------------|-------|-------|------------|----------|
| Average      | 24.95 | 38.73 | 78.24      | 795.75   |
| Min          | 23.58 | 36.48 | 75.81      | 789.64   |
| Max          | 26.84 | 41.59 | 80.21      | 809.50   |
| Standard dev | 1.01  | 1.57  | 1.39       | 6.07     |
| Tokens       | 25921 | 25921 | 17559      | 30607    |

There are a number of implementation details to consider, but even the slowest non-TM engine is ~10x faster. Instead of creating a fast tokenizer for each language, I think it would be more efficient to create an API for other engines and immediately get the benefit of whatever is available. This doesn't even factor in that the non-TM implementations all support incremental changes as well.
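A pluggable-engine API along those lines might look roughly like this (hypothetical interface; none of these names exist in VS Code), with each engine, whether TextMate, Tree-sitter, or hand-written, behind the same incremental, line-oriented contract:

```typescript
// Hypothetical contract every tokenization engine would implement; the
// editor stays agnostic of the implementation behind it.
interface TokenizationEngine {
  // Tokenize one line given the engine-specific state at the line start;
  // returns packed tokens plus the state to feed into the next line.
  tokenizeLine(line: string, priorState: unknown): { tokens: Uint32Array; endState: unknown };
  // Optional incremental hook so engines like Tree-sitter can reuse their tree.
  handleEdit?(startLine: number, deletedLines: number, insertedLines: number): void;
}

// Trivial engine for illustration: one token per whitespace-separated word,
// emitted as [startChar, length] pairs.
const wordEngine: TokenizationEngine = {
  tokenizeLine(line: string) {
    const tokens: number[] = [];
    const re = /\S+/g;
    for (let m = re.exec(line); m; m = re.exec(line)) {
      tokens.push(m.index, m[0].length);
    }
    return { tokens: new Uint32Array(tokens), endState: null };
  },
};
```

The packed-integer output and opaque per-line state mirror how the existing TM pipeline already works, which would let engines be swapped without touching the renderer.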

AnyhowStep commented 5 years ago

Will this overhaul fix syntax highlighting for multiline generic function/method calls?


It regularly happens to me that I try to split a generic function/method call across multiple lines and the syntax highlighter just... freaks out.

It highlights stuff in white, blue, green, and other exotic colors. The colors chosen are seemingly random and differ from case to case.

jeff-hykin commented 5 years ago

@AnyhowStep You might want to report that to the TypeScript grammar repo; I think that's solvable with the current tools.

To answer your question though: Tree-sitter would likely not have that issue. This thread is currently solving a prerequisite problem: Tree-sitter (and other parsers) don't have an efficient real-time way to tell VS Code about the structure of the code. We need a way to identify tokens, a way to store information about each token, a way for extensions to incrementally update those tokens, a way for themes to assign styles to tokens, and likely a way to do all of this asynchronously to prevent freezing.

AnyhowStep commented 5 years ago

Ah. I didn't even know where to go to report it. I looked it up and it looks like others have reported it and the issues have been marked as "won't fix" and "design limitation".

https://github.com/microsoft/TypeScript-TmLanguage/issues/767 https://github.com/microsoft/TypeScript-TmLanguage/issues/745 https://github.com/microsoft/TypeScript-TmLanguage/issues/479

https://github.com/microsoft/TypeScript-TmLanguage/issues/475#issuecomment-309905772

This is by design as the grammar lookup doesn't allow multiple lines at a time.

So, not solvable with current tools =(

jeff-hykin commented 5 years ago

Ah. I didn't even know where to go to report it.

@AnyhowStep, yeah it's definitely not obvious where to go. Mentioning it and being redirected is a valid way to find out.

Thanks for including the issue links. I understand the details of the problem now: the fundamental problem isn't perfectly solvable with TM, and this situation is exactly what Tree-sitter is designed to fix, so this issue is (very indirectly) working on that problem. However, I've written the syntax for the very similar case of C++ typed function calls, and I can tell you the TM grammar is very capable of doing a better job and could prevent the cascade of broken colors. The last issue you link can be solved without making any assumptions by fixing the JSX pattern (as the reporter mentions). The first two issues you link, which are less serious, can be imperfectly fixed with a convenient assumption. I might open an issue there and see what can be done.

DanTup commented 5 years ago

This is by design as the grammar lookup doesn't allow multiple lines at a time.

Multiline comments and strings work ok, is this different to those?

(though this is probably a bit off-topic for here)

jeff-hykin commented 5 years ago

@alexandrudima Would it be too much of a performance hit to consider having tokens refer to an enumerated/hashed style rather than storing the foreground, background, and font directly on the token?

If that is in the realm of possibility, I have an idea for a way to allow multiple sources to contribute semantic meaning efficiently and effectively without needing multiple token stores.

rmunn commented 5 years ago

Semantic tokenization would also provide a good way to get #1751 implemented: in nearly every scenario listed in #1751, the "root" language server would also know what other syntax was embedded in the file, and where that syntax ends. E.g., in the onclick="event.preventDefault()" example, the HTML language server would be able to say "the text from X to Y should be parsed as Javascript", where X and Y are the position immediately after the first " and immediately before the second " respectively.

So if semantic tokenization is implemented, it's worth keeping #1751 in mind and building something into the protocol to allow the language server to say "The text in this range should be parsed with this parser".
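For illustration, such a protocol message might carry something like the following (entirely hypothetical field names; nothing like this exists in LSP today):

```typescript
// Hypothetical payload: the "root" language server reports a range that
// should be handed off to another language's parser.
interface EmbeddedRegion {
  languageId: string;                          // e.g. "javascript"
  start: { line: number; character: number };  // just after the opening quote
  end: { line: number; character: number };    // just before the closing quote
}

// Example for an onclick="..." attribute value on some line of an HTML file
// (positions are illustrative only).
const onclickRegion: EmbeddedRegion = {
  languageId: "javascript",
  start: { line: 3, character: 18 },
  end: { line: 3, character: 41 },
};
```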

octref commented 4 years ago

One place to dogfood this could be hover / completion / parameterHints.

  1. They don't have color now anyway so this wouldn't disrupt users too much.
  2. I remember @DanielRosenwasser said VS is using semantic coloring for highlighting code in these widgets, so TS server already has this info.
  3. These are incomplete code so they can't be colorized by the full language TM grammar.
  4. These are read-only code, should be easier to colorize than code in editor.


DanielRosenwasser commented 4 years ago

One place to dogfood this could be hover / completion / parameterHints.

These things aren't valid code, so I'd say you're really better off using the symbol display parts that we return there.

You could request our highlighting for document previews if you're trying to limit broader UX impact.

segevfiner commented 4 years ago

Had this one with Go #76073. The TM grammar for it was sourced from the Atom extension, which is no longer willing to maintain that grammar since they have moved on to implementing syntax highlighting using Tree-sitter, that is, semantic coloring. So I guess this is just one more syntax-highlighting issue that will be resolved if VS Code (probably the Go extension) adopts semantic coloring for Go.

thekingofspain commented 3 years ago

https://github.com/mechatroner/vscode_rainbow_csv/issues/4

jasonwilliams commented 3 years ago

Hey @alexdima, is there still ambition for the overhaul, or was it deemed not possible? Can the synchronous assumptions be solved?

ebkgne commented 3 years ago

Hello, I've read in this thread https://github.com/microsoft/vscode/issues/64681 that this issue would solve our problem. Is that the case? And is this issue being worked on, or is it idle at the moment? Thank you.