ebkalderon / tower-lsp

Language Server Protocol implementation written in Rust
Apache License 2.0

Redesign core architecture, support custom server requests #313

Closed ebkalderon closed 2 years ago

ebkalderon commented 2 years ago


This redesign has been a very long time coming, but it's finally here! :tada: This pull request shouldn't affect too much code downstream, except the sacred initialization incantation is slightly different:

```diff
- let (service, messages) = LspService::new(|client| ...);
+ let (service, socket) = LspService::new(|client| ...);
- Server::new(stdin, stdout)
+ Server::new(stdin, stdout, socket)
-     .interleave(messages)
      .serve(service)
      .await;
```

Most importantly, support has been added for custom server requests via the following API:

```rust
impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }

    async fn request_params(&self, params: ...) -> Result<...> { ... }

    async fn notification(&self) { ... }

    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes:
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();
```

In the process, this pull request also lands a slew of architectural improvements (under the hood and otherwise) and fixes a long-standing critical concurrency bug. `Client` now finally implements `tower::Service`, making it possible to fire off requests and notifications at the client "the Tower way". The `jsonrpc` module has seen much consolidation too, with a new and improved `Request` type replacing `Message`, `ClientRequest`, and `ServerRequest`. Finally, as part of the ground-up rewrite of `Server`, the server's concurrency level can now be customized by the user, if desired.

I didn't originally plan on all of these improvements landing in a single PR, but they nonetheless arose as a consequence of rewriting the internal JSON-RPC router to support custom requests.

Closes #226. Closes #256. Closes #177. Partially addresses #284.

I'd love any feedback on this before merging, @silvanshade @Timmmm @IWANABETHATGUY @samscott89 @lgalabru! :heart:

Sorry for the noisy pings, but I'd appreciate any willing real-world testers.

ebkalderon commented 2 years ago

It seems that there are still concerns around #284, with stalls occurring when `max_concurrency` long-running server tasks are spawned, causing backpressure. This stops the `read_input` task from advancing, thereby preventing any `$/cancelRequest` messages sent by the client from being received.

Unlike the code present in the master branch today, this situation heals itself over time as the buffered futures eventually resolve and return responses, freeing `read_input` to continue advancing. However, if any of these futures happen to be waiting on inbound responses from the client, they will block forever and cause the transport to stall once again.

A reproducible example of this issue can be created by adding this line to the `execute_command()` handler in `examples/stdio.rs`, just before the `self.client.log_message` call:

```rust
tokio::time::sleep(std::time::Duration::from_secs(10)).await;
```

and then, in your LSP client of choice, execute the `dummy.do_something` command on the server over and over again, as quickly as you can. The server should slowly grow unresponsive and eventually stall completely, with the client reporting that the server has timed out (usually after 5000 ms for most language clients).

It would be nice if we could set the transport concurrency level to something very high (100+, akin to the `SETTINGS_MAX_CONCURRENT_STREAMS` setting in HTTP/2) and then have a separate concurrency level specifically for processing server requests (customizable by the user and defaulting to a much lower value, like 4). I believe an approach like this would fix the issue for the vast majority of users.

I believe this can be achieved by raising the buffer size of the `server_tasks_{tx,rx}` channel from 0 to 100, thereby allowing up to 100 server requests to be queued for processing at a time while still being executed only in batches of `max_concurrency`.

EDIT: Addressed in 534d770, and it seems to be working.

ebkalderon commented 2 years ago

Ahh, looks like 7920a99 cannot be implemented right now on "runtime-agnostic" because async-codec-lite has a private `Error` type that cannot be matched upon directly. There should be no need for this to exist if `<T as Decoder>::Error` already has a `From<std::io::Error>` bound, though. :raised_eyebrow: I'll see if I can open up a pull request to fix that. In the meantime, I'll drop that commit from this PR.

EDIT: Actually, I realize now that I should be able to retrieve the source and then downcast it into `ParseError`. It's a bit ugly, but it should still work. Restored in d5bb6f9.

silvanshade commented 2 years ago

> Ahh, looks like 7920a99 cannot be implemented right now on "runtime-agnostic" because async-codec-lite has a private `Error` type that cannot be matched upon directly. There should be no need for this to exist if `<T as Decoder>::Error` already has a `From<std::io::Error>` bound, though. 🤨 I'll see if I can open up a pull request to fix that. In the meantime, I'll drop that commit from this PR.

Okay, I'll see if I can implement a change for that but your solution seems reasonable for now.

silvanshade commented 2 years ago

These changes sound great! I only skimmed through the source diffs so far but I think all the proposed changes make sense given previous discussions around the various issues.

IWANABETHATGUY commented 2 years ago
```rust
impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }

    async fn request_params(&self, params: ...) -> Result<...> { ... }

    async fn notification(&self) { ... }

    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes:
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();
```

In the demo above, what is the type of `params`? Is it `serde_json::Value`, or some struct that derives `serde::Deserialize`?

IWANABETHATGUY commented 2 years ago

Please ignore me, I didn't see the docs. By the way, your custom request implementation is awesome, thanks for your work!

samscott89 commented 2 years ago

I tried this branch out on the code I was working on that I mentioned back in https://github.com/ebkalderon/tower-lsp/issues/284#issuecomment-908420168.

As far as I can tell, this seems like it would have fixed the issue I had. So that's cool!

ebkalderon commented 2 years ago

Okay, I think this pull request is largely ready to merge! With some light integration testing out of the way and some external feedback addressed, it seems that the time has come to pull the trigger on this. It may take a bit to prepare the changelog entries for the next release, given all the changes, and it's very early in the morning where I am, so pushing the green button will have to wait another day or two. :sleeping:

ebkalderon commented 2 years ago

I've chosen to rename .method() to .custom_method(), making it a bit clearer that this builder method does not redefine preexisting registered RPC methods, but can only define custom methods on top of the LSP specification-defined ones.

ebkalderon commented 2 years ago

Looks like this is ready to merge. Let's go! :tada:

IWANABETHATGUY commented 2 years ago

Could you please publish a new version? Currently I'm using the git repo as my dependency, which may break when you land breaking changes.

ebkalderon commented 2 years ago

Yup, I'm waiting for PR #321 to finish in CI as I type this. :smile: