Closed alcarney closed 1 month ago
Your "Goals" list is amazing!
The only one I don't understand is:
> Have something ready to try when the next major lsprotocol version is ready - that way if this works out, server authors only have to deal with breaking changes once.
Is that a one-off thing that lsprotocol is planning? Or a more general solution you're considering?
> Or a more general solution you're considering?
No, nothing that fancy :)
Just with the breaking changes coming in lsprotocol v2024.x (#410), I think it would make sense to change to the updated approach at the same time.
I'm trying to decide if it's worth trying to ship a version of pygls with both approaches in the same package, or if we ship them separately as a stable 1.x series and an unstable 2.x series, both available at the same time. Do you have any thoughts on that?
I now have a failing test: the client is able to talk to a server over stdio, but the server is not sending the correct capabilities (yet).
> I'm trying to decide if it's worth trying to ship a version of pygls with both approaches in the same package
Now that I think about it, we'll probably have to go the pre-release branch approach - we won't be able to have lsprotocol 2023.x and 2024.y installed at the same time.
Ah yes, a pre-release is probably the best approach.
Looking good! So, pygls/lsp/_base_client.py is autogenerated right?

> So, pygls/lsp/_base_client.py is autogenerated right?
Yes
> Ah yes, a pre-release is probably the best approach.
Cool, I'll slowly start replacing the previous approach with the new stuff then, rather than trying to maintain both in the same codebase.
Quick update. I think most of the fundamentals are nearly there. I have both sync (for WASM) and async (flagship) implementations of the base JSON-RPC server, and pygls will choose between them automatically based on the current environment - though I'm not sure what the type checkers will think of that... :thinking:
I also have proof of concept end-to-end tests working for CPython, Pyodide, and WASM-WASI over stdio (on my machine at least). I want to see if I can get CI for these working before moving to some of the higher level stuff.
Again happy to elaborate more on any specific details and let me know if you have any feedback! :)
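For anyone curious, a minimal sketch of what such environment-based selection could look like. The function name and the selection logic here are hypothetical illustrations, not pygls' actual code; the `sys.platform` values for Pyodide (`"emscripten"`) and WASM-WASI (`"wasi"`) are real, though.

```python
import sys

def use_sync_implementation() -> bool:
    """Decide whether to fall back to a synchronous server implementation.

    Under Pyodide, sys.platform reports "emscripten"; under a WASM-WASI
    build of CPython it reports "wasi". In those environments the full
    asyncio machinery may not be available, so a sync fallback is used.
    """
    return sys.platform in ("emscripten", "wasi")

# Hypothetical usage, purely for illustration:
if use_sync_implementation():
    print("selecting the sync (WASM) server implementation")
else:
    print("selecting the async (flagship) server implementation")
```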
It's looking great! The WASI stuff is especially impressive.
Only very minor comment right now is that I think it'd be good to keep the Github Action definitions as minimal as possible and have them call out to scripts/* where possible.
I'm very curious about the Nix stuff, if it's at all relevant I'd personally like to see it in the repo.
> Only very minor comment right now is that I think it'd be good to keep the Github Action definitions as minimal as possible and have them call out to scripts/* where possible.
I agree, now that I have something that works it should be easy enough to extract as a script.
> I'm very curious about the Nix stuff, if it's at all relevant I'd personally like to see it in the repo.
I want to make sure it's not forced on anyone/CI, but if you're open to having some Nix in the repo I'll see if I can get it to a nice place :)
In other news... I've finally figured out the plumbing to get a WASI version of pygls into VSCode! :smile:
Taking this branch, a tweaked version of our playground extension, and piecing together a bunch of code snippets from the microsoft/vscode-wasm repo, I was able to use the example code_actions.py in the insiders version of vscode.dev.
If you're interested in the details I've put the code in this repo for now.
What's interesting is that the same extension can work with a local VSCode instance as well.... so if we wanted to polish it up and publish it as an official pygls-playground extension, it would offer a one-click install option for people to get up and running with pygls!
> I was able to use the example code_actions.py in the insiders version of vscode.dev.
OMG 😲 That's HUGE! Wow, really impressive. And like you say, it's useful on local VSCode too. What would be the quickest way to give it a spin right now?
> but if you're open to having some Nix in the repo I'll see if I can get it to a nice place :)
Personally I'd really like to see some Nix. I've been getting more into it, and have found it very useful.
> What would be the quickest way to give it a spin right now?
It's a little fiddly to get up and running (mostly because it doesn't come with a server implementation out of the box). One of the following should work.

VSCode Desktop

- Enable the `logging.basicConfig(level=logging.DEBUG, filename="server.log", filemode="w")` line in `examples/servers-next/code_actions.py`
- Change the `pygls.server.launchScript` option to `examples/servers-next/code_actions.py`
- Open `examples/servers/workspace/sums.txt` and have a play around!

vscode.dev

- Enable the `logging.basicConfig(level=logging.DEBUG, filename="server.log", filemode="w")` line in `examples/servers-next/code_actions.py`
- Change the `pygls.server.launchScript` option to `examples/servers-next/code_actions.py`
- Open `examples/servers/workspace/sums.txt` and have a play around!

You might have to manually run the `pygls: Restart language server` command a few times as you get things set up, as the error handling is not brilliant yet.
Great, thanks for the detailed instructions. I tried them both, and they both get stuck at installing the extension with this error message:
Can't install release version of 'swyddfa.pygls-wasi-playground' extension because it has no release version.
For the vscode.dev approach I'd add this step:
:man_facepalming: Sorry, I should've said, when installing you need to pick the "Pre-Release" version.
I think if you went to https://insiders.vscode.dev, it will default to the pre-release version
This is slowly... coming together I think. I'm quite happily going end-to-end with a few simple language servers now, there's just a long tail of details to backfill, including:

- `@server.thread()` support
- the updated `lsprotocol` type definitions (#431)

But I'm reasonably confident that this approach can go all the way now. :crossed_fingers:
If you are on Linux, then this may be at a point where it's worth trying it out and seeing what you think - but expect a fair amount to be broken still. Also, I've started putting more detail into the commit messages so feel free to dig into those for more info :)
Wow, you're making amazing progress. Those commits are seriously awesome 🤩
I tried the WASI playground again, and got a bit further. The pre-release installs fine. Locally I get this error after the install:
```
2024-01-28 12:06:43.988 [info] [Error - 12:06:43] Client pygls: connection to server is erroring.
Reader received error. Reason: unknown
Shutting down server.
2024-01-28 12:06:43.990 [info] [Error - 12:06:43] Stopping server failed
2024-01-28 12:06:43.990 [info] Error: Client is not running and can't be stopped. It's current state is: starting
    at LanguageClient2.shutdown (/home/tombh/.vscode/extensions/swyddfa.pygls-wasi-playground-0.1.0/dist/node/extension.js:15677:17)
    at LanguageClient2.stop (/home/tombh/.vscode/extensions/swyddfa.pygls-wasi-playground-0.1.0/dist/node/extension.js:15654:21)
    at LanguageClient2.stop (/home/tombh/.vscode/extensions/swyddfa.pygls-wasi-playground-0.1.0/dist/node/extension.js:18187:22)
    at LanguageClient2.handleConnectionError (/home/tombh/.vscode/extensions/swyddfa.pygls-wasi-playground-0.1.0/dist/node/extension.js:15897:16)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```
On vscode.dev, it seems to be getting stuck downloading the repo in the first place, but I'm sure that's just something I'm doing wrong.
BTW, examples/servers-next is just examples/servers right?
> BTW, examples/servers-next is just examples/servers right?
Sorry yes, only just dropped the servers-next folder :sweat_smile:
> Locally I get this error after the install:
Hmm, it's not immediately obvious to me what the error is from that. How far did you get with setting up the code for the server? There's a chance the extension tried to start the server before you were ready...
> Hmm, it's not immediately obvious to me what the error is from that. How far did you get with setting up the code for the server? There's a chance the extension tried to start the server before you were ready...
It happens as soon as I open VSCode in the folder. I get this little dialog in the bottom right:
BTW, I was just talking to David Greisen, and he was wondering if this branch will still support multi-threading? That's what you mean with having @server.thread() in your remaining tasks list, right?
> It happens as soon as I open VSCode in the folder.
The extension will eagerly try to get a server up and running, so it will probably launch the server before you've configured everything. Have you tried doing each step in this comment and then running `pygls: Restart language server`?
If you've done all that and get an error like `Client is not running and can't be stopped. It's current state is: starting`, then try running `Developer: Reload Window` from the command palette.
In theory, since everything will now be configured, it should be able to start correctly with a clean slate. :crossed_fingers:
If it still doesn't work then I'm really stumped.... :sweat_smile:
> BTW, I was just talking to David Greisen, and he was wondering if this branch will still support multi-threading? That's what you mean with having @server.thread() in your remaining tasks list, right?
Yes! :) Though I'm hoping to get `@server.thread()` to use the thread pool managed by the event loop rather than have pygls maintain a separate `ThreadPool` of its own.
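A rough sketch of how a decorator like that could hand work to the event loop's default thread pool via `loop.run_in_executor(None, ...)`. The decorator below is an illustration of the idea, not pygls' actual implementation:

```python
import asyncio
import functools
import threading

def thread():
    """Mark a handler so it runs in the event loop's default executor
    (a shared ThreadPoolExecutor) instead of blocking the loop."""
    def decorator(func):
        @functools.wraps(func)
        async def wrapper(*args, **kwargs):
            loop = asyncio.get_running_loop()
            # Passing None as the executor uses the loop's shared thread pool.
            return await loop.run_in_executor(
                None, functools.partial(func, *args, **kwargs)
            )
        return wrapper
    return decorator

@thread()
def slow_handler(n):
    # Runs on a worker thread, so blocking work won't stall the event loop.
    return (n * 2, threading.current_thread() is threading.main_thread())

result, on_main_thread = asyncio.run(slow_handler(21))
print(result, on_main_thread)  # 42 False
```

The upside of reusing the loop's executor is that there is only one pool of worker threads to size and shut down, rather than one managed by the framework and another by the loop.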
Thanks for the tips again. Still no access I'm afraid. I still get that same error:
```
2024-02-21 09:40:58.659 [info] [Error - 09:40:58] Client pygls: connection to server is erroring.
Reader received error. Reason: unknown
```

That's after doing the `pygls: Restart language server` and the `Developer: Reload Window`.
The only thing I'm not 100% clear on is the step: "Change the `pygls.server.launchScript` option to `examples/servers-next/code_actions.py`". I do that in VSCode's User Settings, right?
> Yes! :) Though I'm hoping to get @server.thread() to use the thread pool managed by the event loop rather than have pygls maintain a separate ThreadPool of its own.
Great!
> I do that in VSCode's User Settings right?
Yes, though don't forget it's `examples/servers/code_actions.py` again now... :sweat_smile:

I'll look to see if I can push an updated version of the extension soon-ish; this branch has moved on a lot and I'll enable the `stderr` channel, which should at least make it easier to debug...
@tombh Pushed an update to the extension. If you run it in a local copy of VSCode you should be able to see the server's `stderr` output in the `pygls-server` output window - hopefully that makes it easier to figure out what's going on :crossed_fingers:

The `stderr` won't be available on the web version though :/
Great thanks, I'll check it out soon.
I'm still just seeing that same error:
```
2024-02-21 09:40:58.659 [info] [Error - 09:40:58] Client pygls: connection to server is erroring.
Reader received error. Reason: unknown
```
But one time I did get this error!
```
<--- Last few GCs --->

[1663866:0x38004e0000]    42 ms: Mark-Compact (reduce) 0.7 (2.9) -> 0.7 (1.9) MB, 0.53 / 0.00 ms  (average mu = 0.291, current mu = 0.022) last resort; GC in old space requested
[1663866:0x38004e0000]    42 ms: Mark-Compact (reduce) 0.7 (1.9) -> 0.6 (1.9) MB, 0.53 / 0.00 ms  (average mu = 0.169, current mu = 0.004) last resort; GC in old space requested

<--- JS stacktrace --->

#
# Fatal JavaScript out of memory: CALL_AND_RETRY_LAST
#
```
Unfortunately though I could never reproduce it 🫤
Though my intuition is that it's all just something wrong with my setup, like the virtualenv or paths or something.
> I'm still just seeing that same error:
If I'm not mistaken, that's from the `pygls-client` output channel - there should now also be something in the `pygls-server` channel :crossed_fingers:
> Though my intuition is that it's all just something wrong with my setup, like the virtualenv or paths or something.
In theory at least, there shouldn't be any issues like that with this approach; the extension bundles everything it needs to run - well, apart from the server's .py file, that is :sweat_smile:
Oh! I didn't realise there was that drop down for choosing channels. And now I've seen a useful error 🥳
It can't find any of the example servers defined in `examples/servers/.vscode/settings.json`'s `"pygls.server.launchScript": "..."`, because `examples/servers/.vscode/launch.json`'s `configurations.pathMappings.localRoot: "${workspaceFolder}"` gets set to `/workspace`.

When I set `configurations.pathMappings.localRoot` to `"."`, I can put whatever path I want in `"pygls.server.launchScript"` and everything works perfectly!
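To illustrate, a fragment of the kind of `launch.json` being described, with the `localRoot` change applied. The `localRoot`/`/workspace` values come from the comment above; the surrounding structure (including the `remoteRoot` field that `pathMappings` entries conventionally pair with `localRoot`) is an assumption, so treat this as a sketch rather than the repo's actual file:

```json
{
  "configurations": [
    {
      "pathMappings": [
        {
          "localRoot": ".",
          "remoteRoot": "/workspace"
        }
      ]
    }
  ]
}
```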
> It can't find any of the example servers defined in examples/servers/.vscode/settings.json's "pygls.server.launchScript": "...", because examples/servers/.vscode/launch.json's configurations.pathMappings.localRoot: "${workspaceFolder}" gets set to /workspace.
>
> When I set configurations.pathMappings.localRoot to ".", I can put whatever path I want in "pygls.server.launchScript" and everything works perfectly!
Interesting... can't say I've ever come across pathMappings before... :thinking:
Glad to hear that you managed to get it working though!
I'm going to close this.
TL;DR: This PR is too big (just look at the list of goals! :sweat_smile:), not ready, and the rest of the world is moving on.

I'm glad I tried this, but after some time away from it I think it's worth another shot, hopefully as something a little more incremental.
As mentioned in #334, I don't think we need to base our JSON-RPC implementation on an asyncio.Protocol object anymore. Since all we have to do is read and write some bytes, I think it makes more sense to base it on high-level asyncio Reader and Writer objects instead.

I'm opening this PR (very!) early so that people have the chance to provide feedback, but bear in mind that there are still many details to consider! If it looks like this approach might pan out I will, of course, put some more effort into commit messages etc. so that it's more digestible.

I'd like to think that if this new approach proves successful, we can ship it alongside the current one so that people can opt in to try it out.
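As a rough sketch of the streams-based idea, here is what reading a Content-Length framed JSON-RPC message from an `asyncio.StreamReader` could look like. This is illustrative only, not pygls' actual code; the framing format (headers terminated by a blank line, then a JSON body) is the one used by the LSP base protocol:

```python
import asyncio
import json

async def read_message(reader: asyncio.StreamReader) -> dict:
    """Read one Content-Length framed JSON-RPC message from the stream."""
    content_length = 0
    # Headers are terminated by an empty line.
    while True:
        line = await reader.readline()
        if line in (b"\r\n", b"\n", b""):
            break
        name, _, value = line.decode("ascii").partition(":")
        if name.strip().lower() == "content-length":
            content_length = int(value.strip())
    body = await reader.readexactly(content_length)
    return json.loads(body)

async def main():
    # Feed a framed message into a bare StreamReader to demonstrate parsing.
    reader = asyncio.StreamReader()
    payload = json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}).encode()
    reader.feed_data(b"Content-Length: %d\r\n\r\n" % len(payload) + payload)
    reader.feed_eof()
    message = await read_message(reader)
    print(message["method"])  # ping

asyncio.run(main())
```

Because `StreamReader`/`StreamWriter` expose plain `await`-able reads and writes, the same parsing logic works whether the bytes arrive over stdio, a socket, or an in-memory test harness like the one above.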
Goals

- `JsonRPCProtocol` does a lot, including some details that are actually LSP specific
- Have something ready to try when the next major `lsprotocol` version is ready - that way if this works out, server authors only have to deal with breaking changes once.
- `_async` flavours of every method

So far I only have a proof of concept working, where a JSON-RPC client is able to send a request to a simple JSON-RPC server and get a response.
Happy to elaborate further if needed, or even throw it all in the bin if it turns out to be a horrible idea!
TODO

- `lsprotocol` 2024.x (#431)

Code review checklist (for code reviewer to complete)