Closed — dgreisen closed this issue 6 years ago
I have already thought about this, and a Flask-like API comes to my mind, too. So, I have an idea similar to yours:
```python
from pygls import LanguageServer, lsp

ls = LanguageServer(logger)

@ls.register(lsp.COMPLETIONS)
def get_completions(completionParams):
    ...
    return completionItems

@ls.register(lsp.COMMAND, 'myls.cmd_add')
def cmd_add(p1, p2):
    return p1 + p2

if __name__ == '__main__':
    ls.start_tcp('127.0.0.1', 5000)
    # ls.start_io()
```
Here, the `lsp.COMPLETIONS` constant is the JSON-RPC method name from the LSP specification: `'textDocument/completion'`.
When the server is started and the initial request is sent from the client, the server should look at the registered capabilities (these will also contain custom commands) and return them to the client as part of the `InitializeResult` response.
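A minimal sketch of that capability lookup could look like the following. This is my own illustration, not pygls internals; `FEATURE_TO_CAPABILITY` and `build_initialize_result` are made-up names:

```python
# Hypothetical sketch: deriving server capabilities for the
# InitializeResult from the registered JSON-RPC feature names.
# The mapping table and function names are assumptions, not pygls API.
FEATURE_TO_CAPABILITY = {
    'textDocument/completion': 'completionProvider',
    'textDocument/hover': 'hoverProvider',
}

def build_initialize_result(registered_features, registered_commands):
    capabilities = {}
    for feature in registered_features:
        cap = FEATURE_TO_CAPABILITY.get(feature)
        if cap is not None:
            capabilities[cap] = {}
    if registered_commands:
        # Custom commands are advertised via executeCommandProvider.
        capabilities['executeCommandProvider'] = {
            'commands': sorted(registered_commands),
        }
    return {'capabilities': capabilities}
```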
I also think we should keep both ways for starting the server (TCP & IO), so we can easily debug it.
Multiple workspace roots should be considered too, as @dgreisen said.
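With multiple roots, one recurring task is figuring out which workspace folder a given document belongs to. A possible sketch (the helper name and longest-prefix strategy are assumptions on my part):

```python
# Hypothetical sketch: resolving which workspace folder a document
# belongs to when the client sent several roots during 'initialize'.
def find_workspace_folder(doc_uri, folder_uris):
    """Return the longest folder URI that is a prefix of doc_uri, or None."""
    matches = [f for f in folder_uris
               if doc_uri.startswith(f.rstrip('/') + '/')]
    return max(matches, key=len) if matches else None

folders = ['file:///home/user/proj_a', 'file:///home/user/proj_a/vendor']
find_workspace_folder('file:///home/user/proj_a/vendor/lib.py', folders)
# → 'file:///home/user/proj_a/vendor'
```

Taking the longest matching prefix handles nested roots, so a file in a nested folder resolves to the inner root rather than the outer one.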
@danixeee Multi-root workspaces are something our current extension should fully support. We definitely need this to be in place from the beginning.
Closing the issue since pygls supports the mechanism from the above discussion:
```python
# In pygls versions of this era, the LSP types live in pygls.types.
from pygls.types import DidOpenTextDocumentParams

@ls.feature('textDocument/didOpen')
async def did_open(ls, params: DidOpenTextDocumentParams):
    pass

@ls.thread()
@ls.command('exampleCommand')
def show_python_path(ls, *args):
    pass
```
How are people going to implement their own server on top of pygls? Are they going to create plugins, similar to Django? Extend an instance, similar to Flask? Subclass? Some other way that I'm not thinking of?
I kinda like the Flask API (the simple one, not all the complex blueprint stuff). I am just totally making this up :)
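The Flask-style decorator idea can be boiled down to a tiny registry. This is a toy sketch of the pattern only, with invented names (`_SketchServer`, `dispatch`), not how pygls actually implements it:

```python
class _SketchServer:
    """Toy registry illustrating the Flask-style decorator idea."""
    def __init__(self):
        self._features = {}

    def register(self, method):
        # Returns a decorator that stores the handler under the
        # JSON-RPC method name, e.g. 'textDocument/completion'.
        def decorator(func):
            self._features[method] = func
            return func
        return decorator

    def dispatch(self, method, params):
        # In a real server this would be driven by incoming
        # JSON-RPC messages over TCP or stdio.
        return self._features[method](params)

ls = _SketchServer()

@ls.register('textDocument/completion')
def get_completions(params):
    return ['completion-item']
```

Because `register` returns the handler unchanged, the decorated function stays directly callable and testable, which is part of what makes the Flask style pleasant.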