PowerShell / vscode-powershell

Provides PowerShell language and debugging support for Visual Studio Code
https://marketplace.visualstudio.com/items/ms-vscode.PowerShell
MIT License
1.69k stars 486 forks

IntelliSense should work across dot-sourced file references #144

Open jdkang opened 8 years ago

jdkang commented 8 years ago

Is it possible to support dot sourcing to pull Intellisense across files?

e.g.

\
\a.ps1
\b.ps1

# a.ps1
function Get-Foo($baz) { Write-Host $baz }

# b.ps1
. $PSScriptRoot\a.ps1

Get-Foo -baz

daviwil commented 8 years ago

Right now the only way to make that work is to select the contents of b.ps1 and run them with F8. This causes the dot-source reference to be evaluated so that the Get-Foo function is in your session.
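
Equivalently, evaluating the dot-source line by hand in the integrated console has the same effect (a sketch based on the example above, assuming the console's current location is the folder containing a.ps1):

```powershell
# Run once in the console; afterwards Get-Foo and its parameters
# complete normally in that session, because the function now exists there.
. .\a.ps1
```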

I've been thinking about how I could make this work by default without having to evaluate the file but it might require changes to PowerShell's underlying completion engine. Will definitely look into this in the future but probably won't get to it for a little while. Thanks!

Leon99 commented 7 years ago

This is an extremely important feature. It will significantly improve PowerShell development experience. I wonder if there is a way to "trick" the completion engine to treat the modules in the opened folder the same way it treats the standard library?

daviwil commented 7 years ago

@Leon99 Not sure, I'll have to ask around and see. Good idea though!

rkeithhill commented 7 years ago

@daviwil Wouldn't this work in the new system where we have a PowerShell interactive console as long as you F8 the lines that dot source other scripts? That works in ISE.

Leon99 commented 7 years ago

If it's possible to modify $profile file and then re-init auto-completion engine, it'd be an easy job. Hook onto opening a folder, get the list of .psm1 files available, import them through $profile, init auto-complete. Done.
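
A rough sketch of that idea, purely hypothetical: the extension exposes no such hook today, and `$workspaceRoot` here is an assumed stand-in for whatever folder VS Code has open.

```powershell
# Hypothetical sketch of the idea above: import every module found in the
# opened folder so the completion engine can see their commands.
$workspaceRoot = 'C:\src\MyProject'   # assumption: the folder VS Code opened

Get-ChildItem -Path $workspaceRoot -Filter *.psm1 -Recurse |
    ForEach-Object {
        # -Force re-imports the module if an older copy is already loaded
        Import-Module -Name $_.FullName -Force
    }
```

Re-initialising the completion engine afterwards is the part with no public API; this only covers the import half.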

Glober777 commented 7 years ago

Approach suggested by @Leon99 sounds interesting, although the import would have to also automatically reoccur every time those module files change, as IntelliSense may continue suggesting things based on the previous version of the module.

daviwil commented 7 years ago

@rkeithhill Yep, that'd work, though I think the expectation is that the user wouldn't need to do that.

@Leon99 @Glober777 That's one option I suppose, but I'd rather find a way that either allows the existing completion engine to become aware of file relationships or find another way using my own analysis so that no code needs to be evaluated before the IntelliSense results come back.

I've got plans for creating a module which can analyze a folder of script files and create an updatable index of everything in those files, taking into account relationships between files (like those dot-sourced into a .psm1, etc). Once I have this index, I might be able to use it in addition to the completion results that come back from PowerShell's completion engine.

However, I think it might be better to first see if this is something desirable to add to the built-in completion engine itself before running off to build my own solution :)

Leon99 commented 7 years ago

@daviwil Sounds like a proper plan. My only concern is that it may take a lot of effort to implement, that's why I suggested a "low budget" idea. Well, at least I think it is. In fact, for me it'd be okay even if there was a command that I need run manually to import all the modules in the current folder plus do some magic to make sure that IntelliSense sees it.

mvvsubbu commented 7 years ago

Is this something in your radar to be implemented anytime soon?

Arunideas commented 6 years ago

This is a really important feature. After seeing VS Code, ISE is less preferred by PowerShell developers :) I am one of those eagerly waiting 😀

SeeminglyScience commented 5 years ago

@rjmholt About this comment:

> This may aid PowerShell/vscode-powershell#144 btw. That issue wants full intellisense I think (which would essentially require us to pre-emptively run the script or otherwise reimplement our completions), but will still help.

Just throwing an idea out here: we might be able to get that without actually running the script. If we rebuild the AST, we could replace each dot-sourced CommandAst with its actual parsed content. Not sure how that would be performance-wise for things like completion (I'm guessing not great), but the actual rebuild should be pretty quick.

I have a proof of concept ICustomAstVisitor for rebuilding ASTs that I've been playing with. I'll try to see how feasible it would be or just throw it up somewhere.
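
For context, finding the dot-source invocations is the easy half and needs no execution; a sketch using the public parser API (the file path is a placeholder):

```powershell
using namespace System.Management.Automation.Language

# Sketch: locate dot-source invocations in a script without running it.
$tokens = $null; $errors = $null
$ast = [Parser]::ParseFile('C:\src\MyProject\b.ps1', [ref]$tokens, [ref]$errors)

$dotSources = $ast.FindAll(
    { param($node)
        $node -is [CommandAst] -and
        $node.InvocationOperator -eq [TokenKind]::Dot },
    $true)  # $true: recurse into nested script blocks

# The first command element is the path expression being dot-sourced
$dotSources | ForEach-Object { $_.CommandElements[0].Extent.Text }
```

Splicing each target's parsed AST back in place of the CommandAst is the part a rebuild visitor would have to handle.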

rjmholt commented 5 years ago

That would be very cool. I'd be interested to see how far we could take in-place AST rebuilding.

ChrisLynchHPE commented 5 years ago

> @rkeithhill Yep, that'd work, though I think the expectation is that the user wouldn't need to do that.
>
> @Leon99 @Glober777 That's one option I suppose, but I'd rather find a way that either allows the existing completion engine to become aware of file relationships or find another way using my own analysis so that no code needs to be evaluated before the IntelliSense results come back.
>
> I've got plans for creating a module which can analyze a folder of script files and create an updatable index of everything in those files, taking into account relationships between files (like those dot-sourced into a .psm1, etc). Once I have this index, I might be able to use it in addition to the completion results that come back from PowerShell's completion engine.
>
> However, I think it might be better to first see if this is something desirable to add to the built-in completion engine itself before running off to build my own solution :)

I know this is quite a long time ago, but has anything been done with this? This is a major knock against VSCode with split module or script files within the same workspace. I'd like to break apart my very large PSM1 into smaller ones, but still have IntelliSense across all PSM1 and PS1 files without needing to dot source everything for dev work.

PLogFinder commented 4 years ago

For a user like me, this is what I can envisage, borrowing from old Visual Basic for Applications (VBA) in Office: we had a References menu item and dialogue, could select the modules we wanted to use, and IntelliSense for those modules then worked while we edited our VBA macros.

To adapt this to the VS Code/PowerShell extension context: would you consider a References menu item that collects a list of module/script files, either selected by the user or auto-populated by your automated method? Concretely: a VS Code menu item 'References' that displays the list; a command to auto-populate it (perhaps also run when the user clicks References while the list is empty); and a dialogue where the user can refine the list if it already exists. The list would be saved in the folder's workspace settings, so the references are preset the next time the user opens that folder. It would also need to be decided whether the References list applies per file (references and IntelliSense active for the currently open file) or per folder (active for any open file from that folder), or has two parts, file-level and folder-level references (with the dialogue split accordingly).

This gives the user control of the extent/scope of references, rather than forcing them to accept everything that's thrown in by an automated analyser (I do think that starting off with the analyser is super convenient, though)

About PSModulePath: I differentiate between my development environment and the intended execution environment, where I don't control PSModulePath. So I'd want to keep my PSModulePath clean (as in a fresh PowerShell install) and use another way (the References list I suggest above) to make use of modules in my development editor. When test-running, the script then runs in a clean environment similar to production.

Hope this makes sense.

Glober777 commented 4 years ago

I like the approach suggested by @PLogFinder, with one minor addition. I'd like to avoid updating this list very often for each project, so it would be really great if it supported wildcards, where I'd specify something like:

{
   "ReferencesList": [
    "${WorkspaceRoot}\\ModuleName\\*.ps1"
   ]
}

or:

{
   "ReferencesList": [
    "${WorkspaceRoot}\\ModuleName\\Public\\*.ps1",
    "${WorkspaceRoot}\\ModuleName\\Private\\*.ps1"
   ]
}

That would probably not catch things that may be declared directly within the ModuleName.psm1 file, but still would surely be better.

SeeminglyScience commented 4 years ago

@rjmholt btw I did experiment with rebuilding the AST of multiple files. It works... sort of. Too much in PowerShell's completion API relies on extent. So either we'd have to fake the extents (which while doable seems like a whole other can of worms) or just have it be very buggy.

I'm not confident that there's anything this extension can do to accomplish this, short of lifting the entire completion API and rewriting large parts of it.

drlsdee commented 4 years ago

Yes, I would like to see such an opportunity too. I think this would be especially useful when writing custom classes, inheriting them, and separating the code of the classes from the code of the functions using these classes. Something like this:

\$PSScriptRoot\             # The module's root folder.
\Classes\                   # All classes are here. Each script contains only one class; each script is named exactly like
                            # the class it contains. Classes inherited from others should be placed in subfolders, i.e.:
\Classes\Foo\Foo.ps1        # The "Foo" class has no descendants or ascendants or any other references to other classes.
\Classes\Bar\Bar.ps1        # The class "Bar" has descendants.
\Classes\Bar\Com\Com.ps1    # The class "Com" inherited from the class "Bar": "Com : Bar"
\Functions\                 # All functions are here. Each script contains only one function; each script is named exactly
                            # like the function it contains.
\Functions\Public\*.ps1     # Public functions.
\Functions\Private\*.ps1    # Private functions.
...
...                         # "DSCResources", "Tests", nested modules et cetera.
...
\"$($moduleName).psm1"      # The main module. The code in this file should list all the applicable files in the module's
                            # folder structure and import them during module import: first classes
                            # (in directory nesting order), then functions, etc.
\"$($moduleName).psd1"      # The module manifest.

As a workaround, instead of pressing "F5" (or "F8") I now have to write an additional function that imports the module with all its classes and then calls some function from the module. Is this called TDD? :)
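
A minimal version of that workaround might look like this (the module path and the `Get-Foo` call are placeholders, not part of any real module):

```powershell
# Dev-loop helper: force-reimport the module so edited classes and functions
# are current in the session, then exercise one entry point.
function Invoke-DevTest {
    param([string]$ModulePath = "$PSScriptRoot\MyModule.psd1")  # placeholder

    Import-Module -Name $ModulePath -Force   # -Force picks up edited files
    Get-Foo -baz 'test'                      # placeholder call into the module
}
```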

XabiBeltza commented 3 years ago

So, little chance of this getting developed?

SydneyhSmith commented 3 years ago

@XabiBeltza Working on this issue is not currently in our short- to medium-term plans.

rjmholt commented 3 years ago

Honestly a feature like this is only possible to make work in scenarios where the dot-sourced path is known statically anyway.

Consider this:

$path = "$PSScriptRoot/script/is/here.ps1"

. $path

In this case, we must do some form of constant-folding to determine the outcome of the dot-sourcing. So it's possible here, but requires non-trivial effort (i.e. special AST processing) to make work. At the point where we're manipulating an AST, we now have to juggle how that works with PowerShell, since PowerShell itself is providing the intellisense (we just call into its API). So we get to the point where what the user sees isn't what PowerShell sees, adding considerable complexity.
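
The statically-resolvable case can in principle be detected from the AST alone. A sketch that only folds a literal `$PSScriptRoot` prefix and gives up on everything else (the function name and behaviour are illustrative, not part of any existing API):

```powershell
using namespace System.Management.Automation.Language

# Sketch: try to resolve a dot-sourced path at edit time. Returns $null for
# anything that would need runtime evaluation (variables, pipelines, etc.).
function Resolve-DotSourcePath {
    param([CommandAst]$Command, [string]$ScriptDir)

    $arg = $Command.CommandElements[0]
    if ($arg -is [StringConstantExpressionAst]) {
        return $arg.Value                     # bare or quoted literal path
    }
    if ($arg -is [ExpandableStringExpressionAst] -and
        $arg.Value -like '$PSScriptRoot*') {
        # Fold the one variable whose value is known at edit time
        return $ScriptDir + $arg.Value.Substring('$PSScriptRoot'.Length)
    }
    return $null                              # dynamic: cannot resolve safely
}
```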

But then there are worse cases:

$path = "./script/is/here.ps1"

. $path

In this case the outcome depends on the session's current location.

Or indeed:

$path = Get-Content -Raw ./paths.json | ConvertFrom-Json | Select -First 1

. $path

This is a rather pathological case (although I can come up with significantly worse), but the point is that there's no general approach to making intellisense work across dot-sourcing that doesn't run the script. Dot-sourcing is a runtime concept.

Worse than this though, there's no guarantee that the filesystem layout at edit-time will be the same as the layout after deployment (at runtime). If you have a module where you lay all your pieces out in individual files and then have a main thing that dot-sources them all in, but then you lay all the files out differently when you "build" the module, then this feature isn't going to help you.

Finally, I'll also say that we are an open-source project. We don't have any near-term plans to attempt an implementation here internally, but we're always open to contributions and have accepted them in the past.