Closed: compojoom closed this issue 3 years ago
Can you please share some example code or a small example project
Also, when you are typing out the import, is the entire line just import {
without any import path yet?
here I made a gif for you:
I have a something.js - in that file I export the const something
In index.js I try to type import { some -> and I was hoping that autocomplete would kick in and complete the line. That doesn't happen.
If I go down and type something -> IntelliSense suggests something -> I hit it and the import at the top of the file is inserted...
And another funny thing is that if I use flowjs, whenever I define a type and IntelliSense suggests it -> instead of importing at the top, the import happens inline? I guess this is another issue, but I just wanted to mention it in case it is related.
This is a feature request. The requested behavior is to support auto imports while typing out the import itself, such as:
import { ab|
Could show a completion for an auto import for abc
which would complete the line to:
import { abc } './abc'
Ok. That's standard with intelliJ so I expected it with vscode as well. Sorry then.
the completion should be
import { abc } from './abc'
Since we are talking about import features: when I try to import get from 'lodash-es', the generated import is `import {get} from '../../../../lodash-es'`. Is there a way for it to be just `import {...} from 'node_module'`?
Agree with @compojoom I was surprised that this didn’t work by design, it’s pretty standard to have import suggestions in other editors, I’ve linked my issue as I raised the same thing in VSCode.
Steps to Reproduce:
test.tsx
into page pages/index.tsx. It does work if I just type it out in the document without being inside of import braces.
Looks like the difference is: a completionInfo request is sent to the TypeScript server when I type in the document, and I get a response with suggestions, but inside of import braces I get:
[Trace - 14:23:48.406] <semantic> Response received: completionInfo (1416). Request took 5 ms. Success: false . Message: No content available.
@RyanCavanaugh does the above help?
Can you please share some example code or a small example project
Also, when you are typing out the import, is the entire line just
import {
without any import path yet?
@mjbvz comment above
Spoke with @andrewbranch about this a little bit. We think this can do a lot to alleviate the pain of ECMAScript's import statement ordering (identifiers before path), which a lot of users are unhappy with.
Here are some rules for imports that I'm currently thinking about.
1. Named imports: `import { /**/`, `import { Yadd/**/`, `import { Yadd/**/, Stuff } from`. Offer matching exports and complete the rest of the statement (including the `from` keyword if necessary).
2. Default imports: `import Identifi/**/`. Offer matching exports and complete the rest of the statement (including the `from` keyword if necessary).
3. Namespace imports: `import * as Identif/**/`. Complete the rest of the statement (including the `from` keyword if necessary). Most of the same logic applies except for named exports, which I think we would not want to power this feature for.
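To make the three contexts above concrete, here is a hypothetical sketch of classifying a partially-typed import line into those rule categories. The function name and regexes are illustrative only, not TypeScript's real implementation, and the `... } from` variant of rule 1 is omitted for brevity.

```typescript
// Hypothetical sketch only: classify a partially-typed import statement
// into the three completion contexts described above.
type ImportContext = "named" | "default" | "namespace" | "none";

function classifyImportPrefix(line: string): ImportContext {
  // Rule 3: `import * as Identif`
  if (/^import\s*\*\s*as\s+\w*$/.test(line)) return "namespace";
  // Rule 1: `import {`, `import { Yadd`, `import { Yadd, Stuff`
  if (/^import\s*\{[^}]*$/.test(line)) return "named";
  // Rule 2: `import Identifi`
  if (/^import\s+\w+$/.test(line)) return "default";
  return "none";
}
```

For example, `classifyImportPrefix("import { Yadd")` yields `"named"`, so the editor would offer named-export completions and then complete the rest of the statement.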
One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down? It's not strictly necessary since it's probably an edge case, but it would be a nice UX improvement to try that, and if no modules are found, fall back to module specifiers based on the current identifier.
Curious to hear feedback. We're also open to pull requests, though it's probably not a good first issue.
This is great; I agree with the logic you have come up with for section 3.
One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down? It's not strictly necessary since it's probably an edge case, but it would be a nice UX improvement to try that, and if no modules are found, fall back to module specifiers based on the current identifier.
I don't follow; isn't this logic already covered in 1? If you have existing identifiers in a named import list then you've already set the path, so the logic can just remain as it is today. But I feel like I'm missing something here.
If you have existing identifiers in a named import list then you've already set the path
Well, potentially not 🙂. That's why the filtering logic is probably unnecessary. Why would you ever be in this situation? But it would be neat if we could leverage existing intent.
Why would you ever be in this situation?
The only scenario I can think of is if you start listing modules without providing the path: `import {moda, modb, modc...}`. Usually this happens when you know what you want to import but haven't added the path yet, and you want to add some more modules.
One open question is how this works when you already have existing identifiers in a named import list - do you try to filter the possible module specifiers down?
I'm interested to know how you would do this without a path. How do you know what to filter down to? Do you base this off the modules that are already in the braces? Do you then use these heuristics to both:
Today with auto-imports, we add an entry for every module that exports something with the appropriate name. That gets served up in the completion list.
In that proposal, you could start with that logic, then do a post-pass to look at what other identifiers are in the braces, and ask "does this module export all of these other identifiers as well?" If you end up with more than 0, those are your suggestions; if you end up with none, fall back to the original list for the current identifier.
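The post-pass described above can be sketched roughly like this. This is a minimal illustration under invented names (`exportsOf`, `filterModules`), not TypeScript's actual code:

```typescript
// Hypothetical sketch of the proposed post-pass: keep only modules that
// export every identifier already present in the braces; if that leaves
// nothing, fall back to the original list for the current identifier.
function filterModules(
  exportsOf: Map<string, Set<string>>, // module specifier -> exported names
  existingIdentifiers: string[],       // identifiers already in the braces
  currentMatches: string[]             // modules exporting the identifier being typed
): string[] {
  const narrowed = currentMatches.filter(mod => {
    const exps = exportsOf.get(mod);
    return exps !== undefined && existingIdentifiers.every(id => exps.has(id));
  });
  return narrowed.length > 0 ? narrowed : currentMatches;
}
```

For example, if `./components` exports both `Button` and `camelCase` while `lodash` only exports `camelCase`, then with `Button` already in the braces only `./components` survives; with an identifier neither module exports, the full list is returned unchanged.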
Today with auto-imports, we add an entry for every module that exports something with the appropriate name. That gets served up in the completion list.
For my understanding: what do these entries look like? Is this a list of filenames or paths for every file (module) that exports something? Or some inverted index of every module and the file it links back to?
If you end up with more than 0, those are your suggestions
When you say more than 0, are you referring to modules (files) that export those identifiers, or the identifiers themselves? I'm assuming you're referring to identifiers left in this file that match, since you're within the braces, so the path doesn't matter at this point.
Assuming the above is correct, there's also enough information to autocomplete the `from "./file/path/to/module"` when adding the closing brace.
I agree that if 0 modules match, you can fall back to today's behaviour and just provide the (global completion?) list and do nothing on closing brace.
@DanielRosenwasser that sounds like a plan to me.
Will this still be part of https://github.com/microsoft/TypeScript/issues/41601 ?
👍 already started working on it
One open question is whether we should be willing to convert a named import to a default/namespace and vice versa:
`import useSt` → `import { useState } from "react";`?
`import { Rea` → `import * as React from "react";`?
The former seems more solid than the latter to me, since mapping the module specifier `"react"` to the uppercase identifier `React` is already a fuzzy heuristic.
Completely agree; if I were using this, my expectation would be the former and not the latter. Why would you change it to `import *` over a default `import React`?
Because React has no default export. You can't write `import React from "react"` in TypeScript without `esModuleInterop`. That's beside the point though; either the namespace or the default import is a fuzzier match than a named export.
`import { Rea`
Why would someone type this? Are you saying that if someone accidentally started typing this thinking it's a named import, you would switch it? As you said, `React` doesn't exist as an export, so how would you know to swap it to `import * as React from "react";`? Or would you swap any import that does not exist to be `* as Etc`?
Looking at `import { Rea`, I don't see how there's enough information here for you to infer I want `import * as React from "react";`.
It's kind of like being proactive about people going down a mental garden path. A similar experience is using speech-to-text and realizing that you've started dictating a half-formed thought that isn't consistent with what you actually wanted to ask.
I don't know if we need to add it right away, but I do actually like the idea.
Are you saying that if someone accidentally started typing this thinking it's a named import, you would switch it?
Right, this is the question.
how would you know to swap it to `import * as React from "react"`
Same way we know to offer it when you type `React`. Actually, in that case, the exported namespace (from the `@types` package) is named `React`, so we can identify that as a possible match. When the default/export assignment is anonymous, we camel case the module specifier and treat that as a possible match.
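As a rough illustration of the "camel case the module specifier" heuristic mentioned above (this is a simplified sketch with an invented function name, not TypeScript's actual algorithm):

```typescript
// Hypothetical sketch: derive a candidate identifier from a module specifier
// by taking the last path segment, dropping any extension, and camel-casing
// across non-identifier characters. Matching against what the user actually
// typed (e.g. React) would then be done loosely, e.g. case-insensitively.
function moduleSpecifierToIdentifier(spec: string): string {
  const base = spec.split("/").pop()!.replace(/\.[a-z]+$/, "");
  return base
    .split(/[^a-zA-Z0-9]+/)
    .filter(Boolean)
    .map((part, i) => (i === 0 ? part : part[0].toUpperCase() + part.slice(1)))
    .join("");
}
```

For example, `"lodash-es"` becomes `lodashEs` and `"react"` becomes `react`, which a fuzzy comparison could then relate to the typed identifier `React`.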
I started prototyping this, and quickly realized that it might take a little bit of work on the editor side to do it properly. There were two possible approaches using the protocol API that exists today:
1. Use `insertText` and `replacementSpan` to add/replace more of the import statement than the current identifier. I didn't try this approach because I identified several problems it would have.
2. Treat the entry like an auto-import (giving it a `source` property), and then resolve different code actions on the details response to add/replace more of the import statement. This should solve all the problems with approach (1). However, it seems that the code actions around the completion conflict with the actual completion of the identifier. I was having trouble building VS Code locally so I couldn't tell exactly what was happening, but neither calculating positions relative to the pre-completed identifier nor calculating them relative to the post-completed identifier worked consistently. It would probably be easiest if the completion entry details could overwrite the original entry's `insertText` and `replacementSpan`, or specify that the code actions should be executed without actually completing the identifier, and/or indicate where the caret should end up after the completion is done. I'm not sure what would be best on the editor side. @mjbvz @jessetrinity @uniqueiniquity thoughts/ideas? Let me know if I need to explain further; I know this is probably hard to grok in text.

I figured out exactly what VS Code is doing. Using the example where you want to go from `import ca` to `import { camelCase } from "lodash"`, the process goes like this:
1. You type `import `; no completions yet.
2. You type `c`. Completions are triggered, but `camelCase` isn't the top one yet, so details are not resolved.
3. You type `a`. That filtering makes `camelCase` the top hit, so details are resolved. During this process we build up the code actions to be applied. The first edit is to insert `{ ` at position 7 (at `ca`). The second is to insert ` } from "lodash"` at position 9 (at the end of `ca`).
4. The edits are applied (below, `+` shows the position of insertions and `^` shows the position of the caret). First:
import { ca
       +   ^
then the second:
import { ca } from "lodash"
           +               ^
Already we have a problem: the caret moved to the end of the statement. Since our insertion was at the position of the caret, it was ambiguous whether the edit should have gone before or after it.
Then the completion itself is applied, which should replace the `ca` that was already typed with `camelCase`. But what actually happens is it replaces the two characters behind the caret with `camelCase`:
import { ca } from "lodascamelCase
It would solve the problem if we could make VS Code apply the second edit after the caret position instead of moving the caret to the right with the edit. However, there's another problem: VS Code may not re-request details with every keystroke, so we can also end up in a position where we calculate the edits to be in the exact right place for the partially-typed identifier `ca`, but if the user quickly types `mel` and then hits Tab before VS Code requests new details, the edits we calculated are now out of date, and will try to insert at an incorrect position.
So, I don’t think this feature is possible without a new editor API to support it. And if we add something to the completions API, I’m not sure what implications that has for the LSP side of things on VS.
So, I don’t think this feature is possible without a new editor API to support it. And if we add something to the completions API, I’m not sure what implications that has for the LSP side of things on VS.
@andrewbranch it sounds like this should be broken off as a separate feature (but still something that should be pursued). From what I understand, https://github.com/microsoft/TypeScript/issues/31658#issuecomment-731341254 is a nice-to-have. Is https://github.com/microsoft/TypeScript/issues/31658#issuecomment-692365135 still possible to do without VSCode changes? (I would assume it is, as we already do this sort of intellisense/editing in other parts of the file.)
No, every possible version of this whole feature would be affected by the problems I've outlined. It all needs editor work to be done well.
Well, I should clarify: we could make it work only for named imports, and only for people who have brace completion turned on. We can go from `import { ca }` to `import { camelCase } from "lodash"`, but we cannot go from `import { ca` to `import { camelCase } from "lodash"`. The important factor is whether or not the `from "lodash"` insertion goes after the caret or right on top of it. And that doesn't seem like a good enough solution to settle for.
Well, I should clarify: we could make it work only for named imports, and only for people who have brace completion turned on.
I’m sure it’s on by default; I’ve never switched it on, so I’m assuming almost everyone has this on. (I can only speak for VSCode.)
We can go from import { ca } to import { camelCase } from "lodash", but we cannot go from import { ca to import { camelCase } from "lodash"
When I type `import {` I get the closing `}` brace automatically, so then I start typing the name to import. This alone would be a big improvement over nothing at all. I’m a sample set of one, but for me it seems worth doing just for that.
The important factor is whether or not the from "lodash" insertion goes after the caret or right on top of it
Are you referring to the cursor? If so I would expect it to be on the end of the line after the insert so the latter.
And that doesn’t seem like a good enough solution to settle for.
Yeah, I agree that what was shown in the thread is better. But if that’s not possible, offering autocomplete between the braces is still a big upgrade compared to what exists today, and it would be a shame to throw that out.
@uniqueiniquity pointed me toward the upcoming (3.16) LSP spec that can handle this nicely, through the combination of two mechanisms:
- The `resolveSupport` client capability means that we could lazily resolve (i.e., get during the details request) not just code actions, but any CompletionItem properties, like `textEdit` (which is analogous to TS Server's `insertText` and `replacementSpan`). This means we could take the more straightforward route of specifying a single edit that replaces the default behavior of the identifier completion, without worrying about the performance concerns I mentioned earlier of calculating the exact edit up front.
- The `snippetSupport` client capability, combined with the `insertTextFormat` CompletionItem property, would allow us to control where the text insertion cursor lands.
Together, these two features solve all of the problems I mentioned earlier. Implementing the former in typescript-language-features for VS Code should be trivial, and it should either Just Work™ or also be trivial for VS, since VS is already speaking LSP to its TypeScript support layer.
Unfortunately, I couldn’t find anything resembling the latter in `vscode.CompletionItem`, and I believe @uniqueiniquity told me he doesn’t think it exists yet in VS either. That part might take some investment from editor teams, as it would reach outside the scope of my knowledge for either editor. But since it’s a first-class LSP API, I feel fairly confident this is a good way forward for an implementation, and controlling the text insertion cursor position may not be a blocker for the feature, since it could be added later when clients support it without redesigning the implementation.
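For reference, the two LSP 3.16 mechanisms described above might fit together roughly like this. This is a sketch of the shapes involved, reusing the earlier `camelCase`/`"lodash"` example; the exact positions and property lists are illustrative, not actual TS Server output:

```typescript
// Hypothetical sketch of the LSP 3.16 pieces discussed above.

// Client capabilities: the client declares it can lazily resolve textEdit
// and that it understands snippet-format insertions.
const clientCapabilities = {
  textDocument: {
    completion: {
      completionItem: {
        snippetSupport: true,
        resolveSupport: { properties: ["textEdit", "additionalTextEdits"] },
      },
    },
  },
};

// A completion item after resolve: a single textEdit replaces the typed `ca`
// with the full statement tail, and the snippet tab stop $0 marks where the
// caret should land after insertion.
const resolvedItem = {
  label: "camelCase",
  insertTextFormat: 2, // InsertTextFormat.Snippet
  textEdit: {
    range: { start: { line: 0, character: 7 }, end: { line: 0, character: 9 } },
    newText: '{ camelCase$0 } from "lodash"',
  },
};
```

Under this scheme the expensive edit is only computed for the item actually being resolved, and the caret problem from the earlier walkthrough disappears because `$0` pins the final cursor position.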
- The `resolveSupport` client capability means that we could lazily resolve (i.e., get during the details request) not just code actions, but any CompletionItem properties, like `textEdit` (which is analogous to TS Server's `insertText` and `replacementSpan`). This means we could take the more straightforward route of specifying a single edit that replaces the default behavior of the identifier completion, without worrying about the performance concerns I mentioned earlier of calculating the exact edit up front.
Note that this is not completely true. Resolve lets you lazily evaluate a lot; however, most platforms will not allow you to lazily evaluate the text edit, because it impacts how a completion item is evaluated at display time. For instance, the spec acknowledges this:
All other properties (usually `sortText`, `filterText`, `insertText` and `textEdit`) must be provided in the `textDocument/completion` response and must not be changed during resolve.
@NTaylorMullen I thought that was the point of the new `resolveSupport` client capability, though? That is, the platform could specify that it does allow it. Notably, VS does allow for this, and should be specifying as such in the client capabilities.
The full text of the discussion is:
By default the request can only delay the computation of the `detail` and `documentation` properties. Since 3.16.0 the client can signal that it can resolve more properties lazily. This is done using the `completionItem#resolveSupport` client capability which lists all properties that can be filled in during a `completionItem/resolve` request. All other properties (usually `sortText`, `filterText`, `insertText` and `textEdit`) must be provided in the `textDocument/completion` response and must not be changed during resolve.
I read “all other properties” to mean “all properties not specified by `resolveSupport`.” Without any other indication from the spec, I would expect the only property that could not be resolved lazily is `label`, since that is the only required property in `CompletionItem`. Is that not the case? Why would the `textEdit` be needed up front to display the item in the completion list?
@NTaylorMullen I thought that was the point of the new `resolveSupport` client capability, though? That is, the platform could specify that it does allow it. Notably, VS does allow for this, and should be specifying as such in the client capabilities.
Wasn't sure if VS did, but I know that VSCode didn't, not that long ago 😄; @333fred went down that path and ran into headaches.
I read “all other properties” to mean “all properties not specified by `resolveSupport`.” Without any other indication from the spec, I would expect the only property that could not be resolved lazily is `label`, since that is the only required property in `CompletionItem`. Is that not the case? Why would the `textEdit` be needed up front to display the item in the completion list?
I should have been more specific. Yes, `resolveSupport` can specify that the edit can be replaced; however, VSCode doesn't, which I presume is a blocker?
Why would the textEdit be needed up front to display the item in the completion list?
From my understanding it influences when to show/dismiss the completion list based off of the potential "replaceable" completion span. I don't 100% recall, though, to be honest.
Looking at the typescript-language-features source code for VS Code, I don’t see why it would be a problem to set `vscode.CompletionItem`’s `insertText` and `range` (which together are analogous to LSP’s `textEdit`) during the details resolution, but I haven’t tried it yet. At this stage I’m happy to move forward as long as the functionality is theoretically supported by the relevant specs, as I think I’m capable of putting the missing pieces into VS Code myself. Maybe I’ll find out why it’s a problem soon 😄
I don’t see why it would be a problem to set `vscode.CompletionItem`’s `insertText` and `range` (which together are analogous to LSP’s `textEdit`) during the details resolution, but I haven’t tried it yet.
I'd also imagine that in VSCode, `insertText` would also be required to be in its "final" state on the initial completion request, unfortunately. That being said, if the TypeScript code is factored in a way where updating one platform doesn't require updating another, then you're good! 😄
Wasn't sure if VS did, but I know that VSCode didn't, not that long ago 😄; @333fred went down that path and ran into headaches.
Indeed, vscode only supports resolving detail, documentation, and additionalTextEdits. https://github.com/microsoft/vscode-languageserver-node/blob/f1bc8dba5b8192ce8730aaeb05ce823cfff8c9b5/client/src/common/client.ts#L1514
This might be where the confusion lies—TypeScript does not speak LSP to VS Code. As far as I can tell, I could make any imperative change in here in conjunction with a TS Server protocol change.
This might be where the confusion lies—TypeScript does not speak LSP to VS Code. As far as I can tell, I could make any imperative change in here in conjunction with a TS Server protocol change.
Don't you wire your "custom" LSP messages up via VSCode's hooks though? Aka their completion resolve hook? If that's the case it'd have the same restrictions.
You’re right, I didn’t see this comment:
Only a couple of possible ways forward here. It seems like we have to calculate the full edit for all completions we offer up front, which adds a little bit of overhead¹ for module specifier resolution on every item. Options are:
¹ Or currently, a lot of bit for pnpm users, a la #40584
Can we be concrete about the specific overhead of each item? From the top level scope, we already provide auto import completions, so what sort of work has to be done per module to generate the full statement?
Use a stricter filter than we currently do to reduce the number of completions returned. For normal auto-import completions, we don’t process any symbols whose names do not contain the token text characters in order, case-insensitive.
I think this might be reasonable here too.
Can we be concrete about the specific overhead of each item?
When auto import completions are not cached, the details request for a single item is usually an order of magnitude faster than the completion list request. Obviously there are huge variables that could affect this relationship, but that’s typical based on old notes of mine. Some portion of the details request is spent redoing work that was done in the completion list request, and the rest is spent on module specifier resolution and building display parts and code actions—so if we tried to do all that work in the completion list request, we’d be able to skip the former part. But let’s still call the details work an order of magnitude faster than the full list work—200ms for the list and 20ms for the details might be a ballpark guess for a small program. This order of magnitude relationship means the total response time for the list request could easily double if we tried to do the details work for 10 auto imports.
My gut feeling is that analysis is coming out overly pessimistic for typical cases, but 10 auto imports is not very many. One of the things that can make module specifier resolution more complicated is symlinks, which play a big role in lerna, yarn workspaces, and pnpm. I don’t want to write a feature that appears miserably slow for power users with big multi-project workspaces, even if it is purely adding functionality where none exists now. I’m open to a smarter filter only if the experience feels really seamless. But part of my hesitation is that this is a non-trivial amount of work that would largely not be reusable if the limitation of not being able to lazily resolve the main edit were removed. Conversely, because normal auto-imports are lazily resolved now, this would be a pretty small feature that could reuse a lot of existing code if that were an option for these kinds of completions. So trying to resolve everything eagerly feels risky, because it’s a decent amount of work and there’s likely to be a performance/experience tradeoff.
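The ballpark numbers from the analysis above can be plugged in directly. These are the comment's own guesses for a small program, not measurements:

```typescript
// Rough cost model for doing details-level work eagerly in the list request.
const listMs = 200;      // full completion list request (small program)
const detailsMs = 20;    // details work for a single item (~10x faster)
const autoImports = 10;  // auto-import entries needing eager resolution

// Doing the details work for 10 items adds ~200ms, doubling the response time.
const eagerTotalMs = listMs + detailsMs * autoImports;
```

This is why the order-of-magnitude relationship matters: once the number of auto-import candidates reaches roughly the ratio between the two request costs, the eager approach doubles the total list latency.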
I know this issue is already prioritised but I thought it was worth sharing this thread https://twitter.com/markdalgleish/status/1356404787244195840?s=21 showing just how frustrating users find the import intellisense experience at the moment. bmeck also gathered more here
Just for some anecdotal evidence.
What are the reasons someone might want to write an import statement manually, getting good completions, vs. using auto imports as they exist today?
@mjbvz @jrieken this feature is getting close, but here’s the next UX hiccup in practice: the existence of the `import statement` snippet completion blocks new completions from being requested once you start typing the identifier you want to import, for as long as the characters you type are an ordered subset of “statement.” (I forgot to turn on screencast mode in this GIF, but to finally get completions, I press Esc and then Ctrl+Space.)
I could change the snippet prefix to just “import,” but that would give it the same label as the plain keyword completion for `import`, provided from TS Server. Alternatively, could we find a way to not block a new completions request in this case? It’s blocking because continuing to type is simply filtering down the list that was returned when typing the `import` keyword, and it usually does make sense not to request new completions when an existing item is still a match. But I think snippets that include a space in their prefix kind of break that logic.
The `vscode.CompletionList` has the `isIncomplete` property for that. This means that we will ask for new completions, for that provider, after each keystroke.
@jrieken how would that work in this example? From `i` to `impor`, it looks like we should just be offering normal global completions. It’s only once the user has gotten to `import ` (with trailing space) that we want to start offering import statement completions, but because of the snippet, we never get a chance to tell VS Code that the list is incomplete, unless we said it was incomplete from the very beginning.
unless we said it was incomplete from the very beginning.
That. Also note that we only re-request completions from those providers that signaled incompleteness. So you want to split this up into a provider that is complete and one which isn't. The trigger context will also tell you if you are being asked to complete an incomplete result (which again can be another incomplete result).
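A minimal model of the contract @jrieken describes might look like the following. This is not the real `vscode.CompletionItemProvider` API, just an illustration of how an `isIncomplete` list keeps the editor re-requesting until the `import ` keyword has been typed:

```typescript
// Sketch: a provider that signals incompleteness until the user has typed
// "import ", so the editor keeps re-requesting completions on each keystroke;
// once the keyword is present, it returns a final (complete) list.
interface CompletionList {
  items: string[];
  isIncomplete: boolean;
}

function provideImportCompletions(linePrefix: string): CompletionList {
  if (!/^import\s+/.test(linePrefix)) {
    // Nothing to offer yet, but mark the list incomplete so we get asked again.
    return { items: [], isIncomplete: true };
  }
  // Hypothetical import-statement completions once the keyword is typed.
  return { items: ['{ camelCase } from "lodash"'], isIncomplete: false };
}
```

With this split, typing `i` through `impor` yields empty-but-incomplete results, and the first keystroke after `import ` triggers a fresh, complete list of import statement completions.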
Edit from @DanielRosenwasser
Given
or
The suggestion is to look for auto-imports to and to fully complete the paths with
Steps to Reproduce:
But IntelliSense just shows the text Buttons (with abc in front of it), and hitting Enter doesn't add the path to the import.
If I try to directly import the component from a function, then auto-import correctly adds the component to the top of the file with an `import {Button} from 'correct_path'`.
Does this issue occur when all extensions are disabled?: Yes