Open rockstar opened 2 years ago
Thanks for opening this ticket! If you check the `Stream` implementation of `ClientSocket` today (source), it delegates down into the underlying `futures::channel::mpsc::Receiver<_>` implementation of `Stream`. This is pretty much the same as how the old `Client` and `MessageStream` worked, albeit with a new abstraction around it.

> How did the interface change so that `messages.next().await` doesn't ever loop, no matter how many messages the server sends to the client?
Do you think you could elaborate on what you mean by this? In the code snippet above, I cannot see how `service` is being driven. The `LspService` must be actively emitting server-to-client messages for `messages.next().await` to yield them, and since there isn't any `service.call(...).await`, nor is `service` passed into `Server::new`, I can't understand how the loop would ever progress as described. I'd love it if you could provide more details!
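The point above, that something must drive the sending side before `messages.next().await` can ever yield, can be illustrated with a plain standard-library channel. This is only an analogy, not tower-lsp's actual internals (which use the async `futures::channel::mpsc` rather than `std::sync::mpsc`):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Analogy: the receiving half of a channel (like the Receiver backing
// ClientSocket's Stream impl) yields nothing until the sending half is
// actually driven by some task.
fn demo() -> String {
    let (tx, rx) = mpsc::channel::<String>();

    // Nothing has been sent yet: the receiver reports Empty, just as
    // `messages.next().await` stays pending when nobody drives the service.
    assert!(rx.try_recv().is_err());

    // Only once something drives the sending side (a thread here, standing
    // in for whatever calls the service) does a message become available.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        tx.send("window/logMessage".to_string()).unwrap();
    });

    // Blocks until the message arrives, then returns it.
    rx.recv().unwrap()
}

fn main() {
    println!("{}", demo());
}
```

The takeaway is the same in the async setting: if no task ever invokes the service, the receiver end of the channel never has anything to yield, and the loop appears to "never fire".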
Thanks for the reply. I've been fighting this bug for quite a while, and I don't see much functional difference. We have an exported `run` function that essentially fakes an event loop and runs forever. It calls `server.processor.process().await` (with some error handling). Our `send` function is where `service.call` is invoked, but I didn't think that had much to do with messages originating from the server; your asking about it makes me wonder if I need to look at this solution holistically, and potentially evaluate whether our solution is hacky or not.
@rockstar I'm not really sure why the approach you described worked before but doesn't work now, but I am wondering if the problem might go away if you could avoid using the custom `LspServer` interface and instead set things up through `Server::new` as usual.
In the case of a wasm target, I believe you should be able to use the web Streams API. There is a wasm-streams crate that provides adapters between the Streams API from `web-sys` and the usual `futures` streams, along with impls for `AsyncRead` and `AsyncWrite`, which you will need for connecting everything up with `Server::new`.
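That wiring might look roughly like the following sketch. This is hypothetical glue code, not the demo's actual source: the `Backend` type, the adapter method names (`into_async_read` / `into_async_write`), and the way the raw streams arrive from JS are assumptions based on my reading of the wasm-streams and tower-lsp docs, so check them against the crates' documentation.

```rust
use tower_lsp::{LspService, Server};
use wasm_streams::{ReadableStream, WritableStream};

// Hypothetical glue: assumes `raw_input` / `raw_output` are handed in
// from JS as web_sys stream handles, and that `Backend` is a type
// implementing tower_lsp::LanguageServer, as in the tower-lsp examples.
async fn start(raw_input: web_sys::ReadableStream, raw_output: web_sys::WritableStream) {
    // Adapt the web streams into futures-compatible AsyncRead / AsyncWrite.
    // (wasm-streams exposes fallible variants too; hedging elided here.)
    let input = ReadableStream::from_raw(raw_input).into_async_read();
    let output = WritableStream::from_raw(raw_output).into_async_write();

    // Build the service and let Server drive it, instead of a custom loop.
    let (service, socket) = LspService::new(|client| Backend { client });
    Server::new(input, output, socket).serve(service).await;
}
```

The key design point is that `Server::serve` takes ownership of driving the service, so server-initiated messages flow out through `output` without any hand-rolled event loop.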
I've created a quick minimal example that demonstrates this approach at tower-lsp-web-demo.
The wasm-based project that I linked to above is now a fairly complete example which fully demonstrates how to use `tower-lsp` with the intended API in a wasm context. I'd be happy to answer any questions about how that was set up, but otherwise I think it's safe to close this, since the original motivation was to be able to use `tower-lsp` for a wasm target.
I'm not sure I agree that the bug is fixed. Sure, you've provided a workaround, but there's a disconnect where "what worked then doesn't work now" in an absolutely public API. The workaround is definitely something to explore, but it would require a re-architecture of a large portion of more than one application. As it stands, the migration from `lspower` to `tower-lsp` is a no-go for us.
Reopening this ticket in order to ensure this gets addressed! I happened to notice this was closed a while ago, despite noticeable reservations about whether it was actually resolved.
In influxdata/flux-lsp#427, the flux-lsp moved from `lspower` to `tower-lsp` because the merge had occurred. The reason we had used `lspower` in the first place is that we needed to be able to export the LSP server via wasm. We have a wasm interface that looks like this (expressed in TypeScript):

The idea here was that you could request things from the server via `send`, and any time that the server needed to initiate diagnostics, logs, notifications, etc., it would do so via the `onMessage` handlers. The way we implemented this is something similar to this (you can read the full source in the PR/repo):

Somewhere between the change from `lspower::MessageStream` to `tower_lsp::ClientSocket`, that message firing from the server broke. While trying to dig into the issue, a lot of the introspection isn't possible because `ClientSocket` has a lot of private interfaces (which probably isn't related to the issue, but maybe something to consider for debugging going forward). What am I missing? How did the interface change so that `messages.next().await` doesn't ever loop, no matter how many messages the server sends to the client?