docker / labs-ai-tools-vscode

Run & debug workflows for AI agents running Dockerized tools in VSCode

Sync rendering of jsonrpc notifications with neovim #42

Closed slimslenderslacks closed 2 weeks ago

slimslenderslacks commented 2 weeks ago

We have 6 different jsonrpc notification methods, and each must have its own rendering function.

We are writing this content into a markdown buffer. In neovim, a treesitter plugin is always updating the treesitter AST as the buffer changes, and this allows a highlight module to create syntax highlighting for different sections (different roles), code fences, and errors. I think this makes it easier to distinguish the different sections.

If vscode cannot use treesitter for this, we can add semantic tokens to the LSP to get the same effect - I assume vscode can apply different color schemes based on the LSP semantic tokens. In practice, I find the default treesitter implementation in neovim to be good enough, so if we add semantic tokens, it'd only be to support special syntax highlighting in vscode. @ColinMcNeil to find out what vscode requires.
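If we do go the semantic-token route, the wire format is fixed by the LSP spec: each token is five integers (deltaLine, deltaStart, length, tokenType index, tokenModifiers bitset), with positions encoded relative to the previous token. A minimal sketch of that encoding, assuming hypothetical token types for our legend (e.g. a "role" type at index 0 and an "error" type at index 1):

```typescript
// One semantic token with absolute positions, before LSP encoding.
interface Token {
  line: number;      // absolute 0-based line
  start: number;     // absolute 0-based start character
  length: number;
  type: number;      // index into the legend's tokenTypes array
  modifiers: number; // bitset over the legend's tokenModifiers array
}

// Encode tokens into the flat relative format required by
// textDocument/semanticTokens/full responses.
function encodeSemanticTokens(tokens: Token[]): number[] {
  const data: number[] = [];
  let prevLine = 0;
  let prevStart = 0;
  for (const t of tokens) {
    const deltaLine = t.line - prevLine;
    // deltaStart is relative to the previous token only on the same line
    const deltaStart = deltaLine === 0 ? t.start - prevStart : t.start;
    data.push(deltaLine, deltaStart, t.length, t.type, t.modifiers);
    prevLine = t.line;
    prevStart = t.start;
  }
  return data;
}
```

The relative encoding is what makes vscode's incremental semantic-token updates cheap; the legend (which index means which token type) is whatever we declare when registering the provider.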

None of the methods should put their json params directly into the buffer.

```
content from error
```

This will allow us to highlight this code block with a red color to indicate an error.
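The dispatch could look like the sketch below: one rendering function per notification method, each producing markdown rather than dumping raw params. The method names here ("message", "error") are placeholders, not the actual six methods, and the fenced-error convention is the one described above.

```typescript
type Renderer = (params: any) => string;

const FENCE = "```";

// Placeholder renderers - the real set would cover all six notification
// methods. None of them writes the raw json params into the buffer.
const renderers: Record<string, Renderer> = {
  // a chat message: content under a role heading so treesitter can
  // highlight the different sections
  message: (p) => `## ${p.role}\n\n${p.content}\n`,
  // an error: wrap the content in a fence so the highlighter can color it red
  error: (p) => `${FENCE}error\n${p.content}\n${FENCE}\n`,
};

function render(method: string, params: any): string {
  const r = renderers[method];
  // fall back to a fenced json dump only for unknown methods
  return r ? r(params) : `${FENCE}json\n${JSON.stringify(params, null, 2)}\n${FENCE}\n`;
}
```

Keeping the fallback fenced (rather than inline) means even an unhandled method never leaks bare json into the markdown.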

ColinMcNeil commented 2 weeks ago

In neovim, is the behavior when receiving a debug notification to skip the message if debug mode is off? Making that a toggle would be tricky here.

slimslenderslacks commented 2 weeks ago

Yes, in neovim, it's a per-session toggle. So if I have two neovim sessions, I can be in debug mode in one and not in debug mode in the other.

ColinMcNeil commented 2 weeks ago

If you toggle after a prompt has run, do you still get debug messages? I'm trying to see if I need to maintain the debug messages even if debug mode is off, so that they can be re-added to an existing buffer when entering debug mode.

slimslenderslacks commented 2 weeks ago

I don't do that. But that would be super cool. Prompts too. Ultimately, I guess we're updating a model of the current state of the conversation and you're just rendering it to a buffer. But no, I didn't go that far :)
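The "model of the conversation" idea above can be sketched as: retain every entry (debug messages and prompts included) regardless of the toggle, and re-render the buffer from the model whenever the toggle changes. The entry kinds here are assumptions for illustration.

```typescript
// Hypothetical entry kinds; the real set would mirror the notification methods.
interface Entry {
  kind: "prompt" | "assistant" | "debug";
  text: string;
}

class Conversation {
  private entries: Entry[] = [];

  // Always retain the entry, even when debug mode is currently off,
  // so it can reappear if debug is toggled on later.
  append(e: Entry): void {
    this.entries.push(e);
  }

  // Re-render the whole buffer from the model; debug entries only
  // appear when the per-session toggle is on.
  render(debug: boolean): string {
    return this.entries
      .filter((e) => debug || e.kind !== "debug")
      .map((e) => e.text)
      .join("\n");
  }
}
```

With this shape, toggling debug after a prompt has run is just a re-render, which is exactly the behavior ColinMcNeil was asking about.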