idg10 closed this issue 3 years ago
Related: #35.
Currently, the .NET Interactive API's commands and events can be sent over HTTP. To keep things unified, it might be that not only should this new mechanism use the existing web server but also the existing types (e.g. KernelCommand and KernelEvent), serialization protocols, subkernel routing, and so on. The idea has been proposed in the past of making the set of commands and events extensible. If the programming model is the same for these custom channels, it will keep things simpler. Separate channels will be needed in any case because the kernel enforces serial execution of commands in the existing channel. Allowing for multiple, parallel channels might be desirable as well for some existing commands (completions, hover text) that have no need to block behind code execution commands.
So something like this:
https://github.com/idg10/interactive/commit/88785215e9c6df3de821ee386dc67181149ce0bb
This adds a new message type, KernelChannelMessageType, to KernelEventType, and a corresponding KernelChannelMessage extending KernelEvent.
This only implements the kernel to UI half. For the other direction, the available existing mechanism seemed very command-focused, and these events don't feel like commands. So my initial thought was that these should be events but not commands. Does that sound right? If so, this would require adding some way to send an event that isn't a command—as far as I can tell the code currently in place seems to presume that there's always going to be a command in the event.
There are three aspects of the design proposal that look like they depart from what we have so far, so I'd like to discuss them a bit. I have a number of questions but these are just a way of thinking through the design, so don't feel like you need to address them all.
It seems like the KernelChannelMessage is coupling channels with messages. Is this to provide routing? What happens, for example, if I send a message where ChannelName == "A" to channel B? I wonder if keeping these concepts separate will provide more flexibility over time. How should we think about when to use a user-defined channel versus a user-defined message type?
In what ways do these channels differ from the existing single default "channel" that commands and events are sent on?
Currently, we don't have the concept of a "generic" message type. KernelCommand and KernelEvent are abstract. The existing concrete implementations are differentiated by their type until serialization, at which point the command or event type name is serialized (example). The proposed KernelChannelMessage looks more like our existing KernelEventEnvelope and KernelCommandEnvelope types in this sense.
Would it be desirable for user-defined message types to depart from this pattern? Would it add friction to make people define types?
It would be helpful to have some examples of expected message types in order to better understand whether they fit within the command and event model.
Just to clarify terms, at the risk of over-explaining, a command is a message that:
An event is a message that:
Do these categories work for the user-defined message types we have in mind?
It seems like the KernelChannelMessage is coupling channels with messages.
With hindsight, I named that class badly. It is essentially a serialization format, and not one that client code in the notebook would use. The usage model for sending a message is this (note the use of KernelChannelMessageData, not KernelChannelMessage):
channel.SendMessage(new KernelChannelMessageData { Type = "TestMessage", Content = $"Test content {messageId++}"});
So the problematic scenario you identify (send a message where ChannelName == "A" to channel B) doesn't arise. The target channel is determined entirely by which particular channel you send on.
Whether that's the right model is debatable, of course. In the conversation we had with Diego, if I remember correctly there was some brief discussion of possibilities around hierarchical message routing—in that case a channel wouldn't be so much a distinct entity as a per-message label. That's definitely a possibility, but I suspect you'd still end up with the same basic on-wire representation. (Although you might decide you didn't need the Type at that point, because you could represent that with a suffix on the label. E.g., instead of a channel per se, you might subscribe to all messages with a label matching /foo/bar/123/updates/* and if the application wanted different kinds of updates, individual messages might have labels such as /foo/bar/123/updates/typeA and /foo/bar/123/updates/type1.)
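To make that label-matching idea concrete, here's a minimal TypeScript sketch. Everything in it (LabelRouter, subscribeToPrefix, publish) is a hypothetical illustration of per-message labels with prefix subscriptions, not anything that exists in the codebase:

```typescript
// Hypothetical sketch of label-based routing with prefix subscriptions.
// A prefix subscription on "/foo/bar/123/updates/" plays the role of the
// "/foo/bar/123/updates/*" pattern described above.
type MessageHandler = (label: string, payload: unknown) => void;

class LabelRouter {
    private subscriptions: { prefix: string; handler: MessageHandler }[] = [];

    // Subscribe to every message whose label starts with the given prefix,
    // e.g. ".../updates/" matches ".../updates/typeA" and ".../updates/type1".
    subscribeToPrefix(prefix: string, handler: MessageHandler): void {
        this.subscriptions.push({ prefix, handler });
    }

    // Deliver a message to all matching subscribers (broadcast semantics:
    // every subscription whose prefix matches sees the message).
    publish(label: string, payload: unknown): void {
        for (const s of this.subscriptions) {
            if (label.startsWith(s.prefix)) {
                s.handler(label, payload);
            }
        }
    }
}
```

With this shape, the message's label carries both the routing information and, via its suffix, the application's message-kind discrimination.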
So KernelChannelMessageData is the basic representation of a message, and KernelChannelMessage is how I wrap that with enough additional context to be able to know where to deliver the message when it pops out at the other end. So conceptually, KernelChannelMessage is, as you say, closer in spirit to your ...Envelope types. So perhaps KernelChannelMessageData and KernelChannelMessage should have been named KernelChannelMessage and KernelChannelMessageEnvelope respectively.
In what ways do these channels differ from the existing single default "channel" that commands and events are sent on?
It's a service layered on top of that channel. So it's different from that single channel in precisely the same way that commands and events are different from that channel: all three are shaped services sitting on top of a general-purpose message passing layer. The next obvious question is then: how are these channels different from commands or events?
Starting with commands, the main differences are that channels:
(That last one is debatable, in that you could imagine the 'send' returning a task or promise that completes when the message has been delivered, or fails if it was not. But that then implies partial failure modes; I'm currently of the view that if reliable communication between the notebook and the kernel has been lost, all bets are probably off. It also feels like a slippery slope from here to having the channel effectively report back a user-defined result. And while that might be useful, that feels to me like a service that should be layered over the communications channel. A basic duplex asynchronous message sending service is elemental, and feels like the right starting point, although maybe I'm only saying that because I spent the first part of my career writing kernel mode device drivers for network cards, and was even the co-author of an incredibly obscure networking standard.)
So how do they compare to events? They're closer to those, but we still have the same first two items. Channels:
So as long as I haven't drastically misunderstood your codebase (which is entirely possible—I've spent less than a day with it so far) the symmetry is the single biggest difference between this 'message channel' concept and the existing implementation. You don't actually have an abstraction for a uniform underlying message mechanism: the notebook ➞ kernel mechanism (commands) is really quite different from the kernel ➞ notebook mechanism (events). And the reason I'm under the impression that the basic communication abstraction in your system is asymmetric is because of this:
export interface KernelTransport extends Disposable {
subscribeToKernelEvents(observer: KernelEventEnvelopeObserver): DisposableSubscription;
submitCommand(command: KernelCommand, commandType: KernelCommandType, token: string): Promise<void>;
waitForReady(): Promise<void>;
}
Although...you do have a uniform underlying message mechanism: SignalR. But presently that's an implementation detail, and the abstraction you layer on top of that is asymmetric.
The approach I took in that initial commit is upside down—it builds a message passing service on top of the event delivery service, which is obviously backwards when that event delivery service itself needs an underlying message passing service to be able to work—because I was trying to build on top of what was already there. I was almost tempted to do something more radical: introduce a symmetric message passing abstraction (essentially a model for what SignalR is doing for you), which in the TypeScript we might call MessageTransport, with KernelTransport then becoming a layer on top of that. Commands and events would then become peers of any user code using this proposed new service—they would all just be consumers of the underlying message passing service.
(Although in practice you almost certainly wouldn't want to expose the MessageTransport directly to user code. In practice I suspect that KernelTransport would add an extra member to provide mediated access to the underlying message passing service, and client code in turn would get at that through some new member(s) on DotnetInteractiveClient. So the consumption model wouldn't look so different from my example. It would just be a different layering in the implementation.)
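A TypeScript sketch of that layering might look like the following. MessageTransport is the proposed name from the paragraph above; the member shapes and the KernelTransportOverMessages class are assumptions for illustration, not the repo's actual API:

```typescript
// Sketch of the proposed layering: a symmetric, general-purpose message
// passing abstraction, with the existing command/event protocol re-expressed
// as just one consumer of it. All member shapes here are assumptions.
interface Disposable { dispose(): void; }
type DisposableSubscription = Disposable;

// Symmetric layer: either end can send or subscribe.
interface MessageTransport extends Disposable {
    sendMessage(label: string, payload: unknown): Promise<void>;
    subscribe(label: string, observer: (payload: unknown) => void): DisposableSubscription;
}

// The existing asymmetric protocol as a layer on top: commands and events
// become messages sent with well-known labels, peers of any user messages.
class KernelTransportOverMessages {
    constructor(private readonly transport: MessageTransport) {}

    submitCommand(command: unknown, commandType: string, token: string): Promise<void> {
        // Wrap in an envelope-shaped object and send on a well-known label.
        return this.transport.sendMessage("kernelCommands", { token, commandType, command });
    }

    subscribeToKernelEvents(observer: (envelope: unknown) => void): DisposableSubscription {
        return this.transport.subscribe("kernelEvents", observer);
    }
}
```

User code would be a peer of this class, using other labels on the same MessageTransport.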
I am under the impression that the new functionality is actually able to broadcast to multiple listeners. Sending a message over a channel (or topic) will let all of the listeners for that channel see the message. The mechanism is more like a pub/sub.
I am probably wrong, but I wasn't thinking of this as implemented on top of the command/event protocol we already have.
An advantage I can see to modeling this as a command/event is that it will be able to flow over the stdio interface too without breaking previous integrations.
Probably we are looking at something where we have a 'DeliverMessage' command and a 'MessageProduced' event. I still believe these should have broadcast behaviour.
wasn't thinking of this as implemented on top of the command/event protocol we already have.
No, that wasn't my long-term plan either. It was more a case of trying to get something demonstrable as quickly as possible.
I've now taken a crack at "uninverting" this—so instead of implementing a general-purpose message passing mechanism as a layer on top of the more specialized command/event protocol, I've put together a basic illustration of how we could introduce an abstraction for a general-purpose message channel, and then layer the existing protocols on top of that.
This modifies both the STDIO and the SignalR channels to implement this new generic message passing mechanism, and then adds code to adapt that to the existing KernelTransport interface. Unlike the previous spike, this supports bidirectional communication: I've been able to add a button in an HTML cell that invokes JavaScript that sends a user message to the kernel, with kernel-side C# code that subscribes to user messages, and relays them back to the client side again, where they're picked up by the notebook-client-side JavaScript and displayed back in the HTML. (Obviously it'd be a lot simpler just to have the JavaScript update the HTML directly...but the point of this is to illustrate that user messages can now flow into and back out of the kernel.)
In this example, absolutely everything—user messages, commands, kernel events—is implemented on top of the underlying message passing layer:
https://github.com/idg10/interactive/commit/176886a8f97858656f37b7c85992099c3e27fd0b
I am under the impression that the new functionality is actually able to broadcast to multiple listeners.
Sorry, yes, it is. I should have mentioned that in the list of differences. The model with this implementation is that both ends get a "sendMessage" operation, in which they can send any JSON-able payload along with an arbitrary label (a text string), and either end can subscribe based on either a specific label or a label prefix.
So the notebook client side receive in JS looks like this:
interactive.subscribeToMessagesWithLabel(
"test",
message => {
let msg = `Message type '${message.type}' received: '${message.content}'`;
console.log(msg);
output.innerText = output.innerText + '\n' + msg;
});
The "type" and "content" here are specific to this particular example by the way, and not part of the model. The basic model for a message is "label" and "payload". The send side in JS looks like this:
let nextClientMessageId = 1;
sendMessageToKernel.onclick = function() {
interactive.sendMessage("test", { id: nextClientMessageId++, message: "Hello, this originated from script" });
}
Note that this example has chosen a different payload format. (Although it has chosen the same channel name. There's no need for a channel to use the same format for notebook ➞ kernel and kernel ➞ notebook; this test notebook from which I've taken these examples doesn't.) Here's the .NET kernel-side send:
await kernel.SendMessage("test", new { Type = "TestMessage", Content = $"Test content {nextKernelMessageId++}"});
Here's .NET kernel-side code that receives messages from the JavaScript shown above, and relays them back to the client:
public class MessageFromClient
{
public int id { get; set; }
public string message { get; set; }
}
var relay = new System.Reactive.Subjects.Subject<MessageFromClient>();
relay.Subscribe(m => kernel.SendMessage(
"test",
new
{
Type = "RelayClient",
Content = $"Kernel received message with id {m.id} from client: {m.message}"
}));
kernel.SubscribeToMessagesWithLabel<MessageFromClient>("test", relay);
Here's another spike, arising from our recent discussion:
https://github.com/idg10/interactive/commit/f4681741d8821e7064588e6b8ae175d82ec9166c
This is on a branch in my fork of the repo: https://github.com/idg10/interactive/tree/feature/messages-as-commands
This implements exactly the same consumer programming model as before. Script running on the notebook side can subscribe to messages based on a label:
interactive.subscribeToMessagesWithLabel(
"test",
message => {
let msg = `Message type '${message.type}' received: '${message.content}'`;
console.log(msg);
output.innerText = output.innerText + '\n' + msg;
});
Kernel-side code can ask the kernel to send a message with a label and a payload (of any JSON-serializable type):
await kernel.SendMessage("test", new { Type = "TestMessage", Content = $"Test content {nextKernelMessageId++}" });
Information can flow in the other direction too. This is script on the notebook side sending messages into the kernel:
sendMessageToKernel.onclick = function() {
interactive.sendMessage("test", { id: nextClientMessageId++, message: "Hello, this originated from script" });
}
Here's C# code that runs in the kernel, receives messages of the form sent by that last snippet, and sends them straight back up to the client:
public class MessageFromClient
{
public int id { get; set; }
public string message { get; set; }
}
var relay = new System.Reactive.Subjects.Subject<MessageFromClient>();
relay.Subscribe(m => kernel.SendMessage(
"test",
new
{
Type = "RelayClient",
Content = $"Kernel received message with id {m.id} from client: {m.message}"
}));
kernel.SubscribeToMessagesWithLabel<MessageFromClient>("test", relay);
On the kernel side subscription is modeled with Rx.NET, as you'd expect. (I don't think we currently have a dependency on that on the client, so client-side code makes do with something that sort of resembles Rx.)
This is the exact same test notebook code I was using with the previous spike, and the observable behaviour is no different.
So what changed?
This version implements custom message passing as a command. So when the client side sends a custom notification that can be received by code running in the kernel, it sends a new kind of command, currently called SendMessage, although I think that name could use some work. And critically, it works the same way in the opposite direction: the big change here is that it's now possible for the kernel to send commands back up to the client. (I've not attempted a full-blown implementation of anything you could call a client-side kernel, but logically, you could think of it as being the very bare bones of such a thing.)
So communication between the notebook and the .NET Interactive process is now effectively symmetrical: either can send the other a command. (And in principle, either could send the other events, but right now, nothing on the notebook side sends events to the kernel.) You can see this in the two extra members of the TypeScript KernelTransport interface:
subscribeToCommands(observer: KernelCommandEnvelopeObserver): DisposableSubscription;
submitKernelEvent(event: KernelEvent, eventType: KernelEventType, associatedCommand?: { command: KernelCommand, commandType: KernelCommandType, token: string }): Promise<void>;
These are essentially the mirror image of the subscribeToKernelEvents and submitCommand members that were already there. (This development is less easily seen in the C# code, because there isn't such a direct representation of this kernel transport abstraction there. But it's logically the same thing.)
The transports currently in main on .NET Interactive don't support this: they are designed so that the notebook sends commands to the kernel, and the kernel reports events back to the notebook. To enable the symmetry required by this new approach, I've retained the new MessageTransport interface from the previous spike. Whether you'd necessarily do it that way starting from scratch is not clear—I did this just because I'd already done the work that enabled symmetric communication over both the stdio and SignalR mechanisms. (I've still not tackled the Jupyter transport.) Unlike the previous spike, this MessageTransport is no longer a central element: it is now just an implementation detail for a couple of the channels. Architecturally, the significant change here is the newly-symmetric kernel transport.
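Pulling the two existing members and the two new mirror-image members together, the full symmetric interface looks something like this. The five member signatures come from the snippets quoted in this thread; the supporting types are sketched here as minimal placeholders, since the real definitions live in the repo:

```typescript
// The symmetric KernelTransport, assembled from the members quoted above.
// Placeholder supporting types, for illustration only:
interface Disposable { dispose(): void; }
type DisposableSubscription = Disposable;
type KernelCommand = unknown;
type KernelEvent = unknown;
type KernelCommandType = string;
type KernelEventType = string;
type KernelEventEnvelopeObserver = (envelope: unknown) => void;
type KernelCommandEnvelopeObserver = (envelope: unknown) => void;

interface KernelTransport extends Disposable {
    // Existing members: this side submits commands and observes events.
    subscribeToKernelEvents(observer: KernelEventEnvelopeObserver): DisposableSubscription;
    submitCommand(command: KernelCommand, commandType: KernelCommandType, token: string): Promise<void>;
    waitForReady(): Promise<void>;

    // New mirror-image members: this side observes commands and submits events.
    subscribeToCommands(observer: KernelCommandEnvelopeObserver): DisposableSubscription;
    submitKernelEvent(
        event: KernelEvent,
        eventType: KernelEventType,
        associatedCommand?: { command: KernelCommand, commandType: KernelCommandType, token: string }): Promise<void>;
}
```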
Thoughts on using "command" instead of "message" in the APIs, per the semantic distinctions here? I'm wondering whether the concepts are necessarily different. Are we introducing new concepts or just new participants, i.e. that the client can receive commands and the kernel can consume events?
Having stepped away from the code for a few days, on returning I realised that I've got two completely different things in here both called "message" which wasn't very helpful... Sorry!
I see four ideas, although I'm not sure of the right name for two of them:
To send an appmessage you send the command currently called SendMessage. This command is wrapped as a message (MessageEnvelope) that gets sent over the transport. The receiving end recognizes that the message contains a command, unwraps it, recognizes that it's actually a SendMessage command containing an app message, works out whether the application has subscribed to messages of that kind, and if so delivers it.
appmessage (currently also called "messages" in that spike): an application-defined unit of information comprising a label and a payload, delivered by executing a particular command; the command has no behaviour besides making the appmessage available to the app
Is this second level of indirection necessary? My impression is it would lead to a duplication of serialization, pattern matching, and dispatch implementations, one for the command and event types and another for the various message types. Right now we have, for example:
SubmitCode
RequestCompletions
Each has a corresponding JSON representation which, with our standard envelope, looks like this:
{
"token": "the-token",
"commandType": "SubmitCode",
"command": {
"code": "123",
"submissionType": 0,
"targetKernelName": "csharp"
}
}
An approach we've discussed is to allow this list to become extensible, so people could introduce custom command types, e.g.:
{
"token": "the-token",
"commandType": "MyCustomCommandType",
"command": {
"property1": "123",
"property2": 456
}
}
It would be a peer to the existing commands and use the same code paths for serialization and so on. It can be mapped to strongly-typed commands, but of course you could still create a looser approach within your own custom messages.
But it would be nice to avoid creating messages like the following, where JSON contains more encoded JSON, and deserialization has to happen twice:
{
"token": "the-token",
"commandType": "SendMessage",
"command": {
"Type": "MyCustomMessage",
"Content": "{\"property1\":\"123\",\"property2\":456}"
}
}
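To see why that second shape is undesirable mechanically, here's a small self-contained illustration (not repo code) of the two forms. The field names mirror the JSON examples above:

```typescript
// Illustration of embedding a payload as an encoded JSON string (double
// serialization) versus as a plain object (single serialization pass).
const payload = { property1: "123", property2: 456 };

// Double-serialized shape: the inner payload is itself a JSON string, so the
// receiver has to parse twice to get at the application data.
const doubled = JSON.stringify({
    token: "the-token",
    commandType: "SendMessage",
    command: { Type: "MyCustomMessage", Content: JSON.stringify(payload) },
});
const innerAfterTwoParses = JSON.parse(JSON.parse(doubled).command.Content);

// Flat shape: the payload stays a plain object until the single outer
// serialization, and one parse recovers everything.
const flat = JSON.stringify({
    token: "the-token",
    commandType: "MyCustomCommandType",
    command: payload,
});
const innerAfterOneParse = JSON.parse(flat).command;
```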
The example as it stands doesn't add an additional serialization layer. It's true that there is double serialization in the kernel-to-notebook direction over SignalR, e.g. this:
{"type":1,"target":"messages","arguments":["{\u0022label\u0022:\u0022kernelCommands\u0022,\u0022payload\u0022:{\u0022token\u0022:\u002216\u0022,\u0022commandType\u0022:\u0022SendMessage\u0022,\u0022command\u0022:{\u0022label\u0022:\u0022test\u0022,\u0022content\u0022:{\u0022type\u0022:\u0022TestMessage\u0022,\u0022content\u0022:\u0022Test content 1\u0022},\u0022targetKernelName\u0022:null}}}"]}
but as far as I can tell that was happening already. If I revert to the latest preview build of .NET Interactive and your notebook extension (instead of using my fork), here's an example of some communication over SignalR:
{"type":1,"target":"kernelEvent","arguments":["{\u0022event\u0022:{},\u0022eventType\u0022:\u0022CommandSucceeded\u0022,\u0022command\u0022:{\u0022token\u0022:\u00225\u0022,\u0022commandType\u0022:\u0022RequestHoverText\u0022,\u0022command\u0022:{\u0022code\u0022:\u0022IObserver\u003Cstring\u003E visibleObserver = await ReactiveNotebookService.CreateReactiveOutputAsync\u003Cstring\u003E();\u0022,\u0022linePosition\u0022:{\u0022line\u0022:0,\u0022character\u0022:43},\u0022targetKernelName\u0022:\u0022csharp\u0022}}}"]}
So the double-serialization already existed (apparently because KernelHubConnection passes a string when it calls hubContext.Clients.All.SendAsync; that SignalR method goes on to perform its own additional JSON serialization step; this was the existing behaviour, and not something I've added). You can see I'm adding a layer of wrapping to support multiple message types. (The first JSON above, from my modified version, has a label at the top level and wraps everything else in payload; the second JSON, from your current implementation, is a plain event envelope with no further wrapping.) But there's no additional serialization beyond what you were already doing.
And the reason we don't get any more layers of serialization than already existed is that when it comes to serialization, my code essentially just flattens everything, so although there's an additional layer of wrapping, we only serialize once. Here's how that first example above (plucked from the WebView developer tools Network tab's display of the SignalR activity) looks once we've done the initial 'extract string from /arguments[0] and deserialize as JSON' step that your code was apparently already doing:
{
"label": "kernelCommands",
"payload": {
"token": "16",
"commandType": "SendMessage",
"command": {
"label": "test",
"content": {
"type": "TestMessage",
"content": "Test content 1"
},
"targetKernelName": null
}
}
}
We've got four levels of nesting here (but all handled with a single level of serialization):
- The outermost wrapper: MessageEnvelope as it's called in TS, KernelChannelMessage as it's called in C# in that spike (because apparently I was in a hurry and didn't notice I'd ended up with different names for these things).
- /payload: a KernelCommandEnvelope as it's called in TS, which actually corresponds to your KernelCommandEnvelope.SerializationModel in C#.
- /payload/command: as happens with any command, the serialized instance of the particular command type; in this case, the newly added, badly named SendMessage command.
- /payload/command/content: the application-supplied data, the 'real' payload here in the sense that this is the actual data we're trying to pass from our kernel-side application code back up to the UI.
Here's the code (in a C# notebook cell) that caused that to be sent:
await kernel.SendMessage("test", new { Type = "TestMessage", Content = $"Test content {nextKernelMessageId++}"});
So we've got a C# anonymous type which the Kernel.SendMessage<T> method wraps in a SendMessage command object:
var sendMessageCommand = new SendMessage(label, message);
SendCommandToClient(sendMessageCommand);
Notice that this doesn't serialize its payload at this point. That SendMessage command type defines the payload as object, so it'll just hang onto whatever reference you give it. Serialization is deferred until the last possible moment, avoiding any double serialization.
It continues like this. The SendCommandToClient method promptly wraps that command in a CommandKernelMessage:
_kernelMessages.OnNext(new CommandKernelMessage(command));
The serialization takes place in KernelHubConnection.PublishMessageToContext (which is subscribed to the _kernelMessages subject referred to in the preceding example):
string data = KernelChannelMessage.Serialize(kernelMessage);
await hubContext.Clients.All.SendAsync("messages", data);
(The KernelServer class also calls KernelChannelMessage.Serialize, so what I'm about to describe for the SignalR path also follows for that.) The command handling in that KernelChannelMessage.Serialize looks like this:
var envelopeModel = KernelCommandEnvelope.SerializeToModel(commandMessage.Command);
return JsonConvert.SerializeObject(
new { label = channelMessage.Label, payload = envelopeModel },
Serializer.JsonSerializerSettings);
and that's the only point at which the JSON serialization actually occurs. The trick I employed here was to have KernelChannelMessage.Serialize use the KernelCommandEnvelope.SerializationModel directly. So when I ask KernelCommandEnvelope to convert the command into envelope form, I'm only asking it to wrap the KernelCommand in the object model used for serialization; I don't ask it to perform the actual serialization step. And that's what enables it to keep a single, flat serialization operation. (And I've realised that I could employ the same trick one level up: if KernelChannelMessage offered a similar SerializeToModel method, it could return an object instead of a string, which we could then pass to SignalR, avoiding the double-serialization that occurs in your current implementation.)
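In TypeScript terms the trick amounts to this: every layer builds a plain object (a serialization model), and JSON.stringify runs exactly once, at the outermost layer. The function names below are illustrative, not the actual API:

```typescript
// Sketch of the "wrap in a serialization model, don't stringify yet" trick.
// Each layer returns a plain object; JSON.stringify runs once, at the
// outermost layer, however many envelopes we nest.
interface CommandEnvelopeModel {
    token: string;
    commandType: string;
    command: unknown;
}

// Analogue of KernelCommandEnvelope.SerializeToModel: wrap, don't serialize.
function commandEnvelopeToModel(token: string, commandType: string, command: unknown): CommandEnvelopeModel {
    return { token, commandType, command };
}

// Analogue of KernelChannelMessage.Serialize: the single, flat serialization
// step over the whole nested structure.
function serializeChannelMessage(label: string, envelope: CommandEnvelopeModel): string {
    return JSON.stringify({ label, payload: envelope });
}
```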
And here's an example going in the other direction:
{
"label": "kernelCommands",
"payload": {
"commandType": "SendMessage",
"command": {
"label": "test",
"content": {
"id": 1,
"message": "Hello, this originated from script"
}
},
"token": "6cf7c93a-0bd2-7c06-c8c9-d8667ce56159::1"
}
}
That was caused by code in a JavaScript cell (which runs on the client side, of course) doing this:
interactive.sendMessage("test", { id: nextClientMessageId++, message: "Hello, this originated from script" });
Notice that in this direction it's exactly as you'd hope. Although your SignalR transport double-serialized all kernel-to-client messages, client-to-kernel messages avoided this, and so the new command type I added works as you'd expect, with no additional serialization overhead.
I've updated the branch to resolve the naming ambiguities: https://github.com/idg10/interactive/commit/b87ae90fae78cd7653a1cfc5a8b90492fc36ca12
The command enabling you to send user-defined data is now called ApplicationCommand (not SendMessage). And the underlying messaging now talks about KernelMessageChannelMessage. So it's a bit easier to see what's happening, but the same basic structure is there.
I took a crack at fixing the existing double-serialization that occurs for kernel-to-client messaging, but may have discovered why it's there: SignalR wants to use System.Text.Json to serialize messages, but your code is all based around Json.NET, and SignalR chokes if we pass it, say, a command envelope, because all the JsonIgnore attributes are the Json.NET ones, so it attempts to serialize things it shouldn't. So that's presumably why the existing code runs everything through a Json.NET serialization layer and then just hands a string to SignalR. Fixing that didn't look straightforward, and in any case is probably slightly out of scope for what I'm doing.
But if we go down the path of message sending as commands we can just do SendMessage(topic, payload) => SendCommand(new DeliverMessageCommand(topic, payload)); No?
I've updated the branch to resolve the naming ambiguities: https://github.com/idg10/interactive/commit/b87ae90fae78cd7653a1cfc5a8b90492fc36ca12
The command enabling you to send user-defined data is now called ApplicationCommand (not SendMessage). And the underlying messaging now talks about KernelMessageChannelMessage. So it's a bit easier to see what's happening, but the same basic structure is there.
I took a crack at fixing the existing double-serialization that occurs for kernel-to-client messaging, but may have discovered why it's there: SignalR wants to use System.Text.Json to serialize messages, but your code is all based around Json.NET, and SignalR chokes if we pass it, say, a command envelope, because all the JsonIgnore attributes are the Json.NET ones, so it attempts to serialize things it shouldn't. So that's presumably why the existing code runs everything through a Json.NET serialization layer and then just hands a string to SignalR. Fixing that didn't look straightforward, and in any case is probably slightly out of scope for what I'm doing.
Migrating to System.Text.Json is on our list. That should simplify things.
So the double-serialization already existed (apparently because KernelHubConnection passes a string when it calls hubContext.Clients.All.SendAsync; that SignalR method goes on to perform its own additional JSON serialization step; this was the existing behaviour, and not something I've added).
I'd call this an implementation detail of the SignalR-specific adapter and potentially a bug.
Leaving serialization aside for a moment, what's the utility of ApplicationCommand, versus allowing people to define their own types directly inheriting KernelCommand?
The utility of ApplicationCommand is that it means nobody has to define their own types directly inheriting from KernelCommand. But that's begging the question, of course. So why was I reluctant to impose a requirement to define a custom command?
At the moment, is KernelCommand intentionally part of the API available to code running in .NET cells? I saw it as part of the mechanism by which you make notebooks work, and not currently an application-visible feature. Is there any documentation around the rules for what does or doesn't go in them? They need to be serializable, but is there a commitment around what the rules are for that? Currently you're using Json.NET, and the base KernelCommand type uses the [JsonIgnore] from that library. Anyone looking at the library can see this on public members of the type, but does that imply a commitment to using Json.NET serialization going forward? This would tie your hands against using System.Text.Json (and given that this is how serialization in SignalR now works, I think moving over to System.Text.Json-based serialization might actually be a prerequisite to fixing the double-serialization that currently occurs; from a comment above it sounds like you might already be planning to do this).
Are any of the built-in commands meant to be publicly visible? They're all defined as public types right now, so de facto they are, but do you mean for them to be used by applications? They seem to me more like details of the protocol between client and kernel, and as such, not necessarily something you'd want to cast in stone as part of the API. So if defining and handling commands becomes something application code gets to do, does that mean there needs to be a more clearly-defined boundary between the bits that are necessarily public (e.g., KernelCommand would have to be, so that you can derive from it) and types you might not want to include in a public commitment to what will be available in the long term?
In other words, opening up the possibility of application-defined commands seemed like it was opening a much bigger can of worms than a single specialized addition: it puts you on the hook for a whole heap of extra backwards compatibility in the future. If you're up for that, then arbitrary user-defined custom commands are definitely the more general mechanism (albeit at the cost of making the simplest scenarios more complex). It just seems to me like you'd need to nail down quite a lot to enable this to work without causing a headache in the long term. You're taking what's currently an internal mechanism that you rely on heavily, and making it a feature of your public API. This sounds like it could be making a rod for your back.
It's not my decision to make of course, but this was the reason I had been taking a more conservative approach. I'll take a crack at another spike that throws all caution to the wind, in which both sides are able to both handle and invoke commands, completely removing the simpler, specialized methods in this version.
Overall, I'm in favor of taking an approach that aims for the ideal API. Let's not worry about the size of the work. We've intended to add functionality like this from the outset, so we fully expect to work right alongside you to get this done. I'd like to hear more specifically what your backwards compatibility concerns are.
At the moment, is KernelCommand intentionally part of the API available to code running in .NET cells?
They're intended to be public, and are already a public API for integration via dotnet interactive stdio. If people want to use them within a notebook cell, e.g. for extension development, that won't be surprising. Another potential goal is to allow people to implement handlers for custom commands (like IKernelCommandHandler<MyCustomCommand>) directly on a Kernel implementation.
Is there any documentation around the rules for what does or doesn't go in them?
Not yet.
They need to be serializable, but is there a commitment around what the rules are for that?
The requirement is round-trip serializability. We currently test the existing implementations for that and would need to work out how to communicate and enforce this for user-defined commands.
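By "round-trip serializability" we mean something like the following sketch, which uses System.Text.Json directly. (MyCustomCommand is a hypothetical example type; the real implementation routes commands through envelope types rather than serializing them bare.)

```csharp
using System.Text.Json;

// Round-trip requirement (sketch): serialize, deserialize, and expect an
// equivalent object back. MyCustomCommand is purely illustrative.
var original = new MyCustomCommand { Payload = "hello" };
var json = JsonSerializer.Serialize(original);
var roundTripped = JsonSerializer.Deserialize<MyCustomCommand>(json);
System.Diagnostics.Debug.Assert(roundTripped.Payload == original.Payload);

public class MyCustomCommand
{
    public string Payload { get; set; }
}
```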
Currently you're using Json.NET, and the base KernelCommand type uses the [JsonIgnore] from that library. Anyone looking at the library can see this on public members of the type, but does that imply a commitment to using Json.NET serialization going forward? This would tie your hands against using System.Text.Json (and given that this is how serialization in SignalR now works, I think moving over to System.Text.Json-based serialization might actually be a prerequisite to fixing the double serialization that currently occurs; from a comment above it sounds like you might already be planning to do this).
Agreed. Using Newtonsoft.Json was a point-in-time decision because System.Text.Json wasn't ready for some of what we needed, but replacing the Newtonsoft.Json dependency with System.Text.Json is the goal.
I'd like to hear more specifically what your backwards compatibility concerns are.
It's mainly that by working directly with commands, notebooks are more exposed to the plumbing, which makes it harder for you to change that plumbing. Right now, I don't have a clear idea of how more interesting distributed scenarios would work here—if the kernel is communicating with things on other servers (e.g., other nodes in a Spark cluster, or, for a scenario of interest to us, nodes in a Service Fabric cluster). Does that require a more complex conception of command routing? And if so, does it become harder to retrofit that if you've allowed user code to send and handle commands?
I've updated the branch. Here's the latest commit and I've created a draft PR at https://github.com/dotnet/interactive/pull/888 just to make it easier to see the accumulated changes so far. (I've got no intention of turning it into a non-draft PR, because I now think that the KernelChannelMessage concept is probably the wrong way to go.)
This latest commit makes two changes:
- now using just System.Text.Json in the SignalR channel, removing the double serialization that was occurring before
That latter point mainly turns out to be a case of doubling up all the JsonIgnore attributes, so that we put both the Json.NET and the System.Text.Json flavours of these attributes everywhere. (I'm not suggesting that this fully implements the transition to System.Text.Json, by the way. I still consider all these spikes purely illustrative.) It also meant extending the SerializeToModel concept that was already in the envelope handling out to the KernelChannelMessage level.
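Concretely, "doubling up" means each serialization-excluded member carries both attribute flavours, something like this (a sketch, not the actual source; Handler is used here as a representative excluded member):

```csharp
public abstract class KernelCommand
{
    // Excluded from serialization under both serializers for as long as
    // the Json.NET -> System.Text.Json transition is in progress.
    [Newtonsoft.Json.JsonIgnore]
    [System.Text.Json.Serialization.JsonIgnore]
    public KernelCommandInvocation Handler { get; set; }
}
```

Fully qualifying both attributes also sidesteps the name collision between the two JsonIgnore types.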
But obviously, the main point of interest is that this removes the "application message/notification" abstraction, and instead, it's explicit in the programming model that we're just invoking commands here.
Here's how that changes things from the consuming application's perspective.
In these examples, kernel-side code is able to display output in the notebook, and not just as the one-shot result of running a cell. Code in the kernel can continue to add new output at any time. This is the main thing we wanted for our scenario.
The way this works is that our notebook has an HTML cell that creates a <div>, and a JavaScript cell that hooks up a handler to receive the relevant notifications from the kernel and write them to that <div>.
Here's how the client side of that changes with this update.
Before:
interactive.subscribeToApplicationCommandsWithLabel(
"test",
command => {
let message = command.content;
let msg = `Message type '${message.type}' received: '${message.content}'`;
console.log(msg);
output.innerText = output.innerText + '\n' + msg;
});
Now:
interactive.subscribeToCommands(
"MyKernelToClientAppCommand",
command => {
let msg = `Message type '${command.type}' received: '${command.content}'`;
console.log(msg);
output.innerText = output.innerText + '\n' + msg;
});
That gets slightly simpler because there's no longer any need to dig the embedded information out of the command: the properties we want are now just right there on the command. However...
This is how things change on the kernel side.
Before:
await Microsoft.DotNet.Interactive.Kernel.Current.SendMessage("test", new { Type = "TestMessage", Content = $"Test content {nextKernelMessageId++}" });
Now:
Initialization:
var kernel = Microsoft.DotNet.Interactive.Kernel.Current;
public class MyKernelToClientAppCommand : Microsoft.DotNet.Interactive.Commands.KernelCommand
{
public string Type { get; set; }
public string Content { get; set; }
}
kernel.RegisterCommandType<MyKernelToClientAppCommand>(nameof(MyKernelToClientAppCommand));
Per message:
await kernel.SendCommandToClientAsync(new MyKernelToClientAppCommand { Type = "TestMessage", Content = $"Test content {nextKernelMessageId++}"});
The obvious downside is that we've got more code now. Before, we were able to use an anonymous type, and were implicitly using the ApplicationCommand that I had built in with the previous steps. Now, we've had to define a type deriving from KernelCommand and register it (so that the kernel is able to deserialize it).
Admittedly, we were still defining a type before, albeit through C#'s anonymous type mechanism, so conceptually it's not a big shift. But in practice it made the code a lot simpler. When relying on the built-in ApplicationCommand we just needed a single statement in C#, whereas if we make people define a custom command, we've got all the extra ceremony of that initialization cell. Whether that matters much in practice I'm not sure—for the use case we have in mind we'd probably not be using anonymous types anyway, at which point the only extra complication here is the call to kernel.RegisterCommandType, which is fine.
The reason this wouldn't be a problem for us is that we'd be wrapping all this in a library anyway, so none of this extra work would pollute the notebook itself. So from our perspective, that's not a reason to argue against this "application defines the command type" approach. But for scenarios where you don't want to reference a helper DLL from NuGet, so everything's happening in the notebook, putting this machinery on show will distract from the notebook's main work. There's something to be said for the simplicity of a client-side single method call to attach a handler, and a single kernel-side statement to send a message.
This next set of examples illustrates the opposite flow, in which client-side script wants to communicate with something in the kernel. As it happens, these examples actually do client-to-kernel-and-back-again. In the test, there's an HTML button, and its click handler sends a message to (before), or invokes an application-defined command (now) on the kernel, but the kernel-side handler immediately sends a message back to/invokes a command back on the client.
The effect of this is that you click a button in the output of an HTML cell, and a message appears in a <div> also in that HTML cell. But the client-side click handler didn't update the <div> directly; it went via these new mechanisms.
Before:
sendMessageToKernel.onclick = function() {
interactive.sendMessage("test", { id: nextClientMessageId++, message: "Hello, this originated from script" });
}
Now:
sendMessageToKernel.onclick = function() {
interactive.submitCommand("MyClientToKernelAppCommand", { id: nextClientMessageId++, message: "Hello, this originated from script" });
}
There's no change in complexity here. All that has changed is that we're now talking in terms of commands instead of the somewhat nebulously-defined "messages".
Before:
var kernel = Microsoft.DotNet.Interactive.Kernel.Current;
public class MessageFromClient
{
public int id { get; set; }
public string message { get; set; }
}
var relay = new System.Reactive.Subjects.Subject<MessageFromClient>();
relay.Subscribe(m => kernel.SendApplicationCommand(
"test",
new
{
Type = "RelayClient",
Content = $"Kernel received message with id {m.id} from client: {m.message}"
}));
kernel.SubscribeToApplicationCommandsWithLabel<MessageFromClient>("test", relay);
Now:
var kernel = Microsoft.DotNet.Interactive.Kernel.Current;
public class MyClientToKernelAppCommand : Microsoft.DotNet.Interactive.Commands.KernelCommand
{
public int Id { get; set; }
public string Message { get; set; }
}
kernel.RegisterCommandType<MyClientToKernelAppCommand>(nameof(MyClientToKernelAppCommand));
kernel.RegisterCommandHandler<MyClientToKernelAppCommand>(async (command, context) =>
{
var back = new MyKernelToClientAppCommand
{
Type = "RelayClient",
Content = $"Kernel received message with id {command.Id} from client: {command.Message}"
};
await context.SendCommandToClientAsync(back);
});
So for this direction, not very much changed. We were having to define a type anyway—we couldn't use anonymous types in this direction because there's no way to get C# to infer the type's shape from an Rx subscription delegate's contents. (Anonymous types require you to new up an instance, and since this is the receive side, we didn't get to do that.) Since we already needed a nominal type, changing that to derive from KernelCommand wasn't a huge deal. The only real downside is that it now confronts developers writing this sort of code with a more complex concept: rather than merely defining a POCO type to hold the data they want to pass, they now have to know that there's a thing called a command. (And there's also the registration step, to support serialization. We could subsume that into the RegisterCommandHandler, but I think I prefer the symmetry with the kernel-to-client case.)
Having implemented this, I'm wondering if we have a problem. Earlier in this thread, Diego wrote:
I am under the impression that the new functionality is actually able to broadcast to multiple listeners.
But commands always have a single handler don't they?
Overall I like these changes because, differences in code complexity aside, the concept count is lower than in the previous iterations. Once someone understands the relationship between commands and events, that understanding applies throughout the system.
There are complexities in the distributed routing mechanisms (i.e. a composite kernel of composite and/or proxy kernels) that need to be worked out but this was already the case.
the only extra complication here is the call to kernel.RegisterCommandType
I think we can probably eliminate this as well by making this registration implicit.
One question I have here is whether we expect an error (in the form of a CommandFailed event) to propagate back to the caller when there is not exactly one handler for that command discoverable by the command routing mechanism.
But for scenarios where you don't want to reference a helper DLL from NuGet, so everything's happening in the notebook, putting this machinery on show will distract from the notebook's main work.
I agree it might be valuable to provide a reusable command type like ApplicationCommand for the lower-ceremony use cases. I don't think this should be the foundation case, though, and I expect the more complex and interesting use cases like custom widgets will typically be delivered via NuGet or an associated project (#890) and not wired up in the notebook, in which case the more strongly-typed approach will help code maintainability.
The only real downside is that it now confronts developers writing this sort of code with a more complex concept: rather than merely defining a POCO type to hold the data they want to pass, they now have to know that there's a thing called a command. (And there's also the registration step, to support serialization. We could subsume that into the RegisterCommandHandler, but I think I prefer the symmetry with the kernel-to-client case.)
I think these are pretty solvable via API and conventions. Implicit registration and serialization support are worthwhile. We might look into doing this all by convention. I've used reflection for this in the past but a source generator could be useful here.
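As a sketch of what convention-based registration via reflection could look like (hypothetical: it assumes a non-generic RegisterCommandType(Type, string) overload, which doesn't exist in the current spike):

```csharp
using System.Reflection;
using Microsoft.DotNet.Interactive;
using Microsoft.DotNet.Interactive.Commands;

public static class CommandRegistrationConventions
{
    // Registers every concrete KernelCommand-derived type in the given
    // assembly under its simple type name. Purely illustrative.
    public static void RegisterCommandsByConvention(this Kernel kernel, Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            if (typeof(KernelCommand).IsAssignableFrom(type) && !type.IsAbstract)
            {
                // Hypothetical non-generic overload of RegisterCommandType.
                kernel.RegisterCommandType(type, type.Name);
            }
        }
    }
}
```

A source generator could produce the same registrations at compile time, avoiding the reflection scan at startup.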
I am under the impression that the new functionality is actually able to broadcast to multiple listeners.
But commands always have a single handler don't they?
This is a good question. I think working through this with some real examples will help test whether this rule should remain absolute.
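As a toy model of the distinction being discussed (nothing to do with the real implementation): event-style publication notifies every subscriber, whereas command dispatch expects exactly one handler and otherwise fails, analogous to a CommandFailed event propagating back to the caller.

```javascript
// Toy model contrasting broadcast (event) semantics with single-handler
// (command) semantics. Purely illustrative; not .NET Interactive's code.
class Bus {
  constructor() {
    this.subscribers = new Map();
  }

  subscribe(type, handler) {
    const list = this.subscribers.get(type) ?? [];
    list.push(handler);
    this.subscribers.set(type, list);
  }

  // Event semantics: every subscriber is notified.
  publishEvent(type, payload) {
    for (const handler of this.subscribers.get(type) ?? []) {
      handler(payload);
    }
  }

  // Command semantics: exactly one handler must exist, otherwise the
  // dispatch fails (analogous to a CommandFailed event).
  sendCommand(type, payload) {
    const handlers = this.subscribers.get(type) ?? [];
    if (handlers.length !== 1) {
      throw new Error(
        `CommandFailed: expected 1 handler for '${type}', found ${handlers.length}`);
    }
    handlers[0](payload);
  }
}
```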
OK, so are we ready for me to start a new branch in my fork to move towards a real implementation?
Unless there's some particular process you'd like me to follow, I suggest that I should 1) start by adding a doc to the docs folder describing the feature and illustrating the client- and kernel-side usage, 2) set out a plan for all the areas that would need to be modified to support this across all transports, and then 3) write tests alongside implementation.
I think we're in a good place to move forward with one or more PRs. I'm happy to work incrementally. If you want to include documentation in the PR that's definitely helpful but I tend to focus more on tests as documentation of the intent and API at first, so I'm also happy to wait until the complete feature is ready before documenting it.
Also, there's going to be some churn in the Microsoft.DotNet.Interactive.Http project and some new infrastructure for setting up routes that I think will be very helpful for this. Because there's a high likelihood for overlap there, it might be useful for us to focus first on the changes in the Microsoft.DotNet.Interactive library and work our way out. A quick punch list:
- … at the Kernel level. (#895)
- … (Assent tests, but this really only works for the types we know of at compile time.) (#896)

FYI @halter73
We want to be able to create 'live' UI elements in a notebook that can be updated by ongoing activity elsewhere in the notebook. Today, we can produce HTML output and client-side JavaScript to enable interactivity, but there is currently no built-in mechanism to connect this in any way to code running in the kernel.
To support this, we would like it to be possible to open channels of communication by which code running inside the kernel (either code cells, or extensions or other components loaded by those cells) is able to send messages to and receive messages from JavaScript (or TypeScript) code running in the client hosting the notebook.
It's not completely impossible to achieve this today, but it's messy and limited. We have built a prototype that provides limited support for such connectivity. It works by running a simple web server inside the kernel process, and then connecting to this from JavaScript code on the client side. UI-to-kernel communication can be achieved with fetch operations from the client, and we used SignalR to enable the kernel to send messages to the client code. We have been able to demonstrate the kind of 'live' notebook cells we have in mind. However, there are a number of issues with this technique. For example, currently it only works when the kernel process is local; we have not yet made it work for remote scenarios. We have only attempted this in Visual Studio Code, and not yet in other hosts. Nor have we attempted to get it working in Codespaces yet. In principle, these are all just challenges which could be overcome with sufficient engineering effort. However, it is likely to be fragile, because there may be scenarios in which our chosen communication mechanism might not be open to us. In any case, it duplicates work already being done by .NET Interactive.
.NET Interactive already maintains a channel enabling communication between the UI process and the kernel process. (As it happens, it even uses SignalR today.) Instead of building a parallel mechanism, it would be far more robust and reliable to piggyback on the channel that .NET Interactive has necessarily already established.
We are requesting that a message passing service be layered on top of this.
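As a rough sketch of the shape we have in mind on the kernel side (GetChannel, Messages, and SendMessage are the names we propose; everything else here, including IKernelChannel itself, is hypothetical and not part of the actual API):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical sketch of the proposed kernel-side channel API.
public interface IKernelChannel
{
    // The channel's name, as passed to Kernel.GetChannel.
    string Name { get; }

    // Messages sent from the client side of this named channel.
    IObservable<object> Messages { get; }

    // Sends a message to the client side of this named channel.
    Task SendMessage(object message);
}

// The proposal is that the Kernel type would gain something like:
//     public IKernelChannel GetChannel(string channelName);
```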
In the kernel, the existing Microsoft.DotNet.Interactive API's Kernel type would offer a method (say, GetChannel) enabling an object representing a named channel to be acquired. It might look something like this:

A similar facility would exist for client-side code. JavaScript in the notebook would be able to obtain a very similar sort of service. The KernelClient interface would offer a method (say, getChannel) which would return an object providing the same two services: an observable source of messages, and a method by which a message can be sent.

When the UI-side code sends a message, this would result in the Messages observable in the kernel reporting the message. Likewise, when the kernel invokes SendMessage, it would cause the observable source on the UI side to report the message.

Issues to be resolved include: