mkovatsc closed this issue 5 years ago
Can I assume that we all agree that many IoT technologies like Bluetooth, ZigBee and LPWAN are out of scope and will need gateways?
The requirement for a protocol to appear in a TD is that it has an associated URI scheme. Payloads must be identified by media types. This is a central constraint of REST, and hence the Web architecture: the uniform interface.
As a side note, be aware that LPWANs are accessed through Application Servers, which offer HTTP and CoAP interfaces. The LPWAN Working Group at the IETF is working on a transparent compression of CoAP messages to be sent directly to the end nodes, similar to 6LoWPAN with edge routers.
Can WoT WG members explain what the business case is for declarative protocol bindings, along with a clear description of the scope for which they are relevant?
Siemens needs this feature in TD to be able to apply a common model for metadata (the TD Information Model) to many heterogeneous IoT systems already deployed in the field and continuously being deployed until a hoped-for convergence has completed. The rich descriptive features in the TD -- as opposed to a constrained profile that needs exactly matching implementations -- are required to collect metadata from field-level devices, and to model and transfer it in a uniform format to integration systems such as local gateways or the cloud. Individual software adapters do not scale and are too expensive to maintain with a huge set of heterogeneous devices, often containing only slight variations in the API ("API noise"). This requirement is also central in Platform Industrie 4.0 and the IIC.
These systems often use HTTP, but their interfaces are subject to a lot of noise. This API noise can be canceled out by the descriptive power of the declarative protocol bindings, as demonstrated in several past PlugFests and also confirmed by SmartThings.
There are also non-HTTP systems such as OPC UA, which still have a URI scheme (opc.tcp), and we also showed that defining a URI scheme for MQTT topics makes sense: these resources still have a fixed set of methods and hence fulfill the uniform interface constraint. MQTT is massively deployed in IoT systems, yet could benefit from more metadata, which we can provide using TDs.
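To make the URI-scheme argument concrete, here is an illustrative sketch of a TD property affordance whose form points at an MQTT topic. The vocabulary terms follow the general shape of the TD drafts, but the fragment, the broker hostname, and the topic path are assumptions for illustration, not normative examples. The point is that a generic consumer can select a protocol stack from the URI scheme alone.

```python
import json

# Hypothetical TD fragment: a property whose form targets an MQTT topic
# via a URI scheme. Broker host and topic are made up for illustration.
td_fragment = json.loads("""
{
  "properties": {
    "temperature": {
      "type": "number",
      "forms": [
        { "href": "mqtt://broker.example.com/sensors/temp1" }
      ]
    }
  }
}
""")

# A generic consumer discovers the protocol from the URI scheme alone,
# without any driver-specific knowledge.
href = td_fragment["properties"]["temperature"]["forms"][0]["href"]
scheme = href.split(":")[0]
print(scheme)  # → mqtt
```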
@mjkoster SmartThings also confirmed this need for declarative bindings to cancel the noise of smart home systems as well as the numerous cloud services. They do exist, they use HTTP and WebSockets, but they are not interoperable because of API noise.
Constraining the TD now to an exact implementation requirement does not help convergence. It just burns bridges to the existing and established ecosystems. We need to show the power of an interoperable metadata model that can automate the integration of different systems; for this we need a powerful and versatile TD. Once we have this, our plan is to use the Interest Group to collect enough experience and strengthen our liaisons to derive the right "profile" for the narrow waist at the northbound API. Please just try to see that "the Web" also started with various plugins, different scripting languages, etc. and required time and experience to converge. This is a necessary step in the evolution and cannot be skipped just because @benfrancis can do it in a single project.
Many thanks @mkovatsc for the long explanation. Rephrasing what you say as a test of my understanding: declarative protocol bindings only apply to REST-based platforms where message payload formats are associated with standard content types. Declarative protocol bindings then provide the additional information needed by a driver to use those content types on the designated URIs.
A generalisation would be for the thing description to simply provide an identifier for a driver. This identifier should be a link to a human or machine readable specification for how the target platform uses standard protocols. In my implementation, it will be much easier to provide a number of drivers for known WoT platforms than to implement a fully general solution.
Perhaps we can agree to a compromise where the thing description provides such a link, and optionally the declarative protocol bindings?
A generalisation would be for the thing description to simply provide an identifier for a driver
It is always possible to externalize parts of a Thing Description to save bandwidth or otherwise optimize. We should first ensure that we can explicitly model and contain all necessary information in a TD, so that a canonical version can always be serialized -- because omitting parts always carries the risk that those parts are not public, voiding the benefits of a TD.
With drivers this is even worse. Users become dependent on the availability of such a driver, which might still have some hidden knowledge. Thus, it is better to have declarative metadata that can be applied by any implementation. Writing a driver has always been easy when the developer knows what configuration has to be applied to a specified protocol. This is again the goal of a declarative TD: providing this information in a uniform format.
In some sense, we already have what you describe with the subProtocol field. Here the details must be known from a hopefully public specification. This only makes sense, though, when there is an actual protocol behind it and not just the mentioned API noise.
In summary, yes, such optimizations are possible, but we should only apply them later when we know that the identifiers really indicate shared and publicly available information. In fact we already made a big step toward this by introducing defaults, so that in common cases, no declarative binding information is needed in a TD.
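To illustrate how defaults remove the need for declarative binding information in the common case, here is a sketch of how a consumer might derive the HTTP method for an operation when the TD form omits it. The default table and the `htv:methodName` override are modeled on the TD drafts, but this mapping is an illustrative assumption, not a normative specification.

```python
# Illustrative sketch: protocol-binding defaults for HTTP. When a TD form
# carries no explicit binding information, the consumer falls back to a
# per-operation default. The table below is an assumption modeled on the
# TD drafts, not a normative mapping.
HTTP_DEFAULTS = {
    "readproperty": "GET",
    "writeproperty": "PUT",
    "invokeaction": "POST",
}

def method_for(form, op):
    # Explicit binding information in the form wins; otherwise use the
    # default for the operation type.
    return form.get("htv:methodName", HTTP_DEFAULTS[op])

# A form with no binding metadata at all -- the common case after defaults:
form = {"href": "https://device.example/properties/temperature"}
print(method_for(form, "readproperty"))  # → GET
```

With such defaults in place, a plain `href` is all a TD needs for a well-behaved HTTP device; the declarative vocabulary is only pulled in to cancel API noise.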
Perhaps we can agree to a compromise where the thing description provides such a link, and optionally the declarative protocol bindings?
We already agreed on defaults in the last iteration, so that Mozilla's implementation does not require any declarative information in a TD -- except for specialized features such as reading multiple Properties; we are still waiting for good proposals here.
Rephrasing what you say as a test of my understanding: declarative protocol bindings only apply to REST-based platforms where message payload formats are associated with standard content types. Declarative protocol bindings then provide the additional information needed by a driver to use those content types on the designated URIs
Yes, although "REST-based" might be a bit too narrow, as most HTTP APIs out there are not RESTful. And we have PubSub heavily deployed in the IoT, which can be covered when the uniform interface constraint of REST applies.
The HATEOAS part we can retrofit with TD, as most IoT resources (in particular on existing devices) are dead ends with only sensor values or one-step actuation.
Here are examples of how three existing implementations support WoT messaging over HTTP:
Platform A
This platform currently limits the duration of actions and lacks the means to cancel them.
Platform B
This platform currently doesn't support data for action responses.
Platform C
This platform currently doesn't support getting the set of exposed things, nor the current state of all properties in a single transaction.
All three platforms use JSON for messages, but differ slightly in some of the message structures. I have yet to investigate the error handling characteristics.
This variation in how to use HTTP can be explained in terms of the platform developers making slightly different assumptions, in the absence of collaborative work on assessing requirements across a broad set of use cases, and evolving a shared approach that meets these requirements.
For WebSockets there is even more variation. Some platforms just use it for an event stream. Others allow you to also stream property updates from the client to the server, and for the client to invoke or queue actions. One platform pushes the state of all properties when the WebSocket connection opens.
Very few of the use cases implemented for plugfests have focused on high data rates, e.g. a medical monitoring application that provides multichannel data at hundreds of samples a second, and an industrial equipment application that streams 50 thousand data points a second. Less common are applications that involve high-speed streaming from the client to the server, e.g. for 3D printing. Using HTTP to send each data point as a separate transaction isn't going to scale well!
A further challenge is how to describe how one Web Hub exports a thing to another Web Hub, e.g. a vendor application on a home gateway behind the firewall that exports a thing to a cloud-based hub for personal access when you are away from home.
One approach is to use HTTP with POST to /things where the response is the URL for the exported thing on the cloud hub. Property updates by the IoT device can be pushed to the cloud via an HTTP PUT transaction. Updates from client applications can be polled using an HTTP GET with long poll, or a server-sent events stream (HTTP GET with text/event-stream for the response). But when it comes to client applications invoking an action, the cloud hub can't directly make an HTTP client request to the gateway behind the firewall. In principle, the gateway app could poll the cloud hub for action invocations and then use an HTTP POST to deliver the responses back to the cloud, but that is rather awkward. I find it considerably more elegant for the gateway to maintain a WebSocket connection with the cloud and use this for property updates in either direction, to send event notifications, to listen for action invocations and to send the responses.
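The polling alternative described above relies on the gateway consuming a text/event-stream response from the cloud hub. As a concrete sketch, here is a minimal parser for such a stream; it covers only the `event:` and `data:` fields of the SSE wire format, and the event name `propertyStatus` is a made-up example, not a standardized message type. A production client would also handle `id:`, reconnection, and comments.

```python
# Minimal parser for a text/event-stream (Server-Sent Events) body, as a
# gateway might use when long-polling a cloud hub for property updates.
# Sketch only: handles "event:" and "data:" fields and blank-line dispatch.
def parse_sse(stream_text):
    events = []
    event_type, data_lines = "message", []
    for line in stream_text.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "":  # blank line dispatches the accumulated event
            if data_lines:
                events.append((event_type, "\n".join(data_lines)))
            event_type, data_lines = "message", []
    return events

# "propertyStatus" is a hypothetical event name for illustration.
body = 'event: propertyStatus\ndata: {"temperature": 21.5}\n\n'
print(parse_sse(body))  # → [('propertyStatus', '{"temperature": 21.5}')]
```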
How would binding templates provide a fully declarative solution to this? I would be much happier with a standardised approach for using HTTP and WebSockets that addresses requirements for a broad range of use cases. To allow for evolution, we would have a means for the gateway to query the cloud platform to determine what protocols it supports. The Web Hub would expose a URL for this as a Linked Data identifier that can be dereferenced to obtain RDF metadata for that Web Hub, whether as JSON-LD or some other RDF serialisation format. The simplest and most elegant solution is to treat the Web Hub as a thing in its own right. Thing descriptions could then link to the Web Hub they are hosted on, but we could also standardise a default path such as "/hub". You can then do an HTTP GET on /hub with "accept: application/json" to obtain the JSON-LD description of that hub.
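The hub-introspection step proposed above can be sketched as follows. The `/hub` path, the hub base URL, and the shape of the returned metadata (`@type`, `protocols`) are assumptions taken from this discussion, not a standardized interface; a canned response stands in for a live HTTP request.

```python
import json

# Sketch of the proposed hub introspection: dereference a well-known
# "/hub" path with an Accept header, as suggested above. The response
# shape and field names are hypothetical, for illustration only.
def describe_hub(fetch, base_url):
    raw = fetch(base_url + "/hub", headers={"accept": "application/json"})
    return json.loads(raw)

# Canned response standing in for a real HTTP GET.
def canned_fetch(url, headers):
    return '{"@type": "WebHub", "protocols": ["https", "wss"]}'

hub = describe_hub(canned_fetch, "https://hub.example.com")
print(hub["protocols"])  # → ['https', 'wss']
```

A gateway could then pick a mutually supported protocol from this list before exporting a thing.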
@draggett
For WebSockets there is even more variation
This is not surprising at all, as WebSockets only offer a data pipe between two pieces of the same application. It is designed for maximum flexibility for the application developer. Hence, WebSockets alone have no value for interoperability. This is why there is a need for sub-protocols, which can provide the missing transfer semantics. Common ones are MQTT-over-WS or the new CoAP-over-WS, which offers RESTful transfer.
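For reference, sub-protocols are negotiated during the HTTP upgrade handshake via the Sec-WebSocket-Protocol header (RFC 6455): the client offers the identifiers it supports and the server picks one, which then defines the transfer semantics on top of the raw data pipe. A minimal sketch of the client-side offer:

```python
# Sketch of the client side of WebSocket sub-protocol negotiation
# (RFC 6455, Sec-WebSocket-Protocol). "mqtt" and "coap" are the
# IANA-registered identifiers for MQTT-over-WS and CoAP-over-WS.
def offer_subprotocols(protocols):
    return {
        "Upgrade": "websocket",
        "Connection": "Upgrade",
        "Sec-WebSocket-Protocol": ", ".join(protocols),
    }

headers = offer_subprotocols(["mqtt", "coap"])
print(headers["Sec-WebSocket-Protocol"])  # → mqtt, coap
```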
@draggett
A further challenge is how to describe how one Web Hub exports a thing to another Web Hub, e.g. a vendor application on a home gateway behind the firewall that exports a thing to a cloud-based hub for personal access when you are away from home. How would binding templates provide a fully declarative solution to this?
This is independent of the declarative protocol bindings feature. It is about management interfaces on Servients, which can be described with TD vocabulary (some discussion in https://github.com/w3c/wot-scripting-api/issues). The Management Thing can have an annotated Action to instantiate a proxy Thing by sending a TD of the Thing to be proxied. This then requires a feasible binding between local and remote hub. Out of the box, CoAP-over-TLS or MQTT would work, or HTTP/2 with Call Home.
The WG is fully aware of this, but we have to finish the basics first to not be sidetracked with too much open business. This is a central part for re-chartering.
I guess we have a difference of opinion over the best way to use WebSockets, as it seems natural to me to define a subprotocol with JSON messages corresponding directly to property updates, events, action invocations and responses, etc. The Mozilla Things Gateway provides an example for how to do this. Of course you also need to be able to notify errors, but that is pretty easy to specify. This isn't rocket science. What is important is that the design choices are motivated by the requirements for a broad range of commercially interesting use cases. This includes high data rate use cases for telemetry, e.g. machine tools that stream 50 thousand data points a second, something that I've learned from European manufacturing companies.
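To illustrate the kind of sub-protocol described above, here is a sketch of JSON messages with a type discriminator plus payload, loosely modeled on the Mozilla Things Gateway's WebSocket API. The field names (`messageType`, `data`) and the handler types are assumptions for illustration, not a published specification.

```python
import json

# Sketch of a JSON WebSocket sub-protocol: each message carries a
# "messageType" discriminator and a "data" payload. Field names are
# illustrative, loosely modeled on the Mozilla Things Gateway.
def make_message(message_type, data):
    return json.dumps({"messageType": message_type, "data": data})

def dispatch(raw, handlers):
    # Route an incoming message to the handler for its type; a real
    # implementation would also handle unknown types and error replies.
    msg = json.loads(raw)
    return handlers[msg["messageType"]](msg["data"])

handlers = {
    "setProperty": lambda d: ("set", d),
    "requestAction": lambda d: ("invoke", d),
}
result = dispatch(make_message("setProperty", {"on": True}), handlers)
print(result)  # → ('set', {'on': True})
```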
In respect to exporting a thing to the cloud from inside a firewall, this impacts the protocols used, e.g. the cloud hub is blocked from sending HTTP requests into the firewall. Property updates corresponding to sensor readings could be pushed from the gateway inside the firewall, to the cloud hub. The gateway could poll for updates to properties by applications in the cloud. But actions are more awkward since although the gateway could push action responses to the cloud, it would need to poll for the action requests and use some ID to associate each response to the corresponding request. This is much cleaner when using WebSockets. My point is that a declarative protocol binding framework would need to account for the details of how the underlying transport protocols are used to support this, and given the wide variety of different approaches, this can get quite complicated. I believe that a much simpler solution is to use a URL to identify the protocol conventions, where the URL dereferences to human or machine readable descriptions of those conventions. I further think that it makes sense to standardise the conventions to avoid unnecessary implementation effort.
@draggett
I guess we have a difference of opinion over the best way to use WebSockets, as it seems natural to me to define a subprotocol with JSON messages
...
My point is that a declarative protocol binding framework would need to account for the details of how the underlying transport protocols are used to support this, and given the wide variety of different approaches, this can get quite complicated
You are simply talking about defining a new sub-protocol for WebSockets. This would be identified through the subProtocol field we already have, given that it will be a properly defined protocol with an IANA-registered identifier.
Such a new protocol is just not in the current charter and it should not block or distract us from fulfilling the current charter.
Almost half a year with no further activity here. Since the main thread #179 is already closed, I will close this issue as well.
This is a continuation of the side-tracked discussion from https://github.com/w3c/wot-thing-description/issues/179.