Closed · ebkalderon closed this 2 years ago
It seems that there are still concerns around #284, with stalls occurring when `max_concurrency` long-running server tasks are spawned, causing backpressure. This stops the `read_input` task from advancing, thereby preventing any `$/cancelRequest` messages sent by the client from being received.
Unlike the code present in the `master` branch today, this situation heals itself over time as the buffered futures eventually resolve and return responses, freeing `read_input` to continue advancing. However, if any of these futures happen to be waiting on inbound responses from the client, they will block forever and cause the transport to stall once again.
A reproducible example of this issue can be created by adding this line to the `execute_command()` handler in `examples/stdio.rs`, just before the `self.client.log_message` call:

```rust
tokio::time::sleep(std::time::Duration::from_secs(10)).await;
```
Then, in your LSP client of choice, execute the `dummy.do_something` command on the server over and over again, as quickly as you can. The server should slowly grow unresponsive and eventually stall completely, with the client reporting that the server has timed out (usually after 5000 ms for most language clients).
It would be nice if we could set the transport concurrency level to something very high (100+, akin to the `SETTINGS_MAX_CONCURRENT_STREAMS` setting in HTTP/2) and then have a separate concurrency level for processing server requests specifically (customizable by the user and defaulting to a much lower value, like 4). I believe an approach like this would fix the issue for the vast majority of users.
I believe this can be achieved by raising the buffer size of the `server_tasks_{tx,rx}` channel from 0 to 100, thereby allowing up to 100 server requests to be queued for processing at a time, but only executed in batches of `max_concurrency`.
EDIT: Addressed in 534d770, and it seems to be working.
Ahh, looks like 7920a99 cannot be implemented right now on `runtime-agnostic` because `async-codec-lite` has a private `Error` type that cannot be matched upon directly. There should be no need for this type to exist if `<T as Decoder>::Error` already has a `From<std::io::Error>` bound, though. :raised_eyebrow: I'll see if I can open a pull request to fix that. In the meantime, I'll drop that commit from this PR.
EDIT: Actually, I realize now that I should be able to retrieve the error's source and then downcast it into `ParseError`. It's a bit ugly, but it should still work. Restored in d5bb6f9.
> Ahh, looks like 7920a99 cannot be implemented right now on `runtime-agnostic` because `async-codec-lite` has a private `Error` type that cannot be matched upon directly. There should be no need for this type to exist if `<T as Decoder>::Error` already has a `From<std::io::Error>` bound, though. 🤨 I'll see if I can open up a pull request to fix that. In the meantime, I'll drop that commit from this PR.
Okay, I'll see if I can implement a change for that, but your solution seems reasonable for now.
These changes sound great! I only skimmed through the source diffs so far but I think all the proposed changes make sense given previous discussions around the various issues.
```rust
impl LanguageServer for Foo { ... }

impl Foo {
    async fn request(&self) -> Result<...> { ... }
    async fn request_params(&self, params: ...) -> Result<...> { ... }
    async fn notification(&self) { ... }
    async fn notification_params(&self, params: ...) { ... }
}

let (service, socket) = LspService::build(|client| Foo { ... }) // Wow! :heart_eyes:
    .method("custom/request", Foo::request)
    .method("custom/requestParams", Foo::request_params)
    .method("custom/notification", Foo::notification)
    .method("custom/notificationParams", Foo::notification_params)
    .finish();
```
In the demo above, what is the type of `params`? Is it `serde_json::Value`, or some struct that derives `serde::Deserialize`?

Please ignore me, I hadn't seen the docs. By the way, your custom request implementation is awesome, thanks for your work!
I tried this branch out on the code I was working on that I mentioned back in https://github.com/ebkalderon/tower-lsp/issues/284#issuecomment-908420168.
As far as I can tell, this seems like it would have fixed the issue I had. So that's cool!
Okay, I think this pull request is largely ready to merge! With some light integration testing out of the way and some external feedback addressed, it seems that the time has come to pull the trigger on this. It may take a bit to prepare the changelog entries for the next release, given all the changes, and it's very early in the morning where I am, so pushing the green button will have to wait another day or two. :sleeping:
I've chosen to rename `.method()` to `.custom_method()`, making it clearer that this builder method does not redefine preexisting registered RPC methods, but can only define custom methods on top of the LSP specification-defined ones.
Looks like this is ready to merge. Let's go! :tada:
Could you please publish a new version? Currently I use the git repo as my dependency, which may break when you add breaking changes.
Yup, I'm waiting for PR #321 to finish in CI as I type this. :smile:
**Added**

- `LspService::build().method(...).finish()`.
- `Client` now implements `Service<Request, Response = Option<Response>>`, just like `LspService`.
- `concurrency_level` setting on `Server`.
- `jsonrpc::Request` object with convenient builder interface.
- `.result()`/`.error()` and `.is_ok()`/`.is_error()` methods to `jsonrpc::Response`.
- `From` implementations for `jsonrpc::Id`.
- `Loopback` trait to allow for mock service/client to be used with `Server`.

**Changed**

- `LspService` now implements `Service<Request, Response = Option<Response>>`.
- `LspService::new()` now returns a `ClientSocket` instead of a `MessageStream`.
- `Server::new()` now requires a third `ClientSocket` argument instead of using `.interleave()`.
- Renamed `jsonrpc::Response::{ok, error}` to `jsonrpc::Response::{from_ok, from_error}`.
- Renamed `Client::send_custom_{request,notification}` to `Client::send_{request,notification}`.

**Fixed**

- `Server` occasionally stalling by processing client responses separately from client-to-server requests.
- Return `-32600` (invalid request) if incoming JSON parses, but isn't a JSON-RPC request or response.

**Removed**

- `Server::interleave()` in favor of a third `socket` argument to `Server::new()`.
- `jsonrpc::{ClientRequest, Incoming, Outgoing, ServerRequest}`.
- `MessageStream`.

This redesign has been a very long time coming, but it's finally here! :tada: This pull request shouldn't affect too much code downstream, except the sacred initialization incantation is slightly different:
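For illustration, here is a sketch of that difference. The `Backend`, `stdin`, and `stdout` names are placeholders; the shape follows the changes listed above (`LspService::new()` returning a `ClientSocket`, and `Server::new()` taking it as a third argument in place of `.interleave()`):

```diff
-let (service, messages) = LspService::new(|client| Backend { client });
-Server::new(stdin, stdout).interleave(messages).serve(service).await;
+let (service, socket) = LspService::new(|client| Backend { client });
+Server::new(stdin, stdout, socket).serve(service).await;
```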
Most importantly, support has been added for custom server requests via the `LspService::build(...).method(...)` API demonstrated earlier in this thread.
In the process, this pull request also lands a slew of architectural improvements (under the hood and otherwise) and fixes a long-standing critical concurrency bug.
`Client` now finally implements `tower::Service`, making it possible to fire off requests and notifications at the client "the Tower way". The `jsonrpc` module has seen much consolidation too, with a new and improved `Request` type replacing `Message`, `ClientRequest`, and `ServerRequest`. Finally, as part of the ground-up rewrite of `Server`, the server's concurrency level can now be customized by the user, if desired.

I didn't originally plan on all of these improvements landing in a single PR, but they nonetheless arose as a consequence of rewriting the internal JSON-RPC router to support custom requests.
Closes #226. Closes #256. Closes #177. Partially addresses #284.
I'd love any feedback on this before merging, @silvanshade @Timmmm @IWANABETHATGUY @samscott89 @lgalabru! :heart:
Sorry for the noisy pings, but I'd appreciate any willing real-world testers.