w3c / wot-scripting-api

Web of Things (WoT) Scripting API
http://w3c.github.io/wot-scripting-api/

General API discussion (was: "Proposal: readMultipleProperties and writeMultipleProperties") #132

Closed sebastiankb closed 5 years ago

sebastiankb commented 6 years ago

In terms of performance and consistency it would be helpful if the API offered a readMultipleProperties and a writeMultipleProperties (or similar naming). So far, we are only able to read / write a single Property with the API. However, there are protocols such as Modbus and CAN that allow reading / writing multiple datapoints at once, e.g., to meet consistency requirements.

So, it would be nice to have something like this:


consumedThing.readProperties(["propName1", "propName2"]);
consumedThing.writeProperties({"propName1": "value1", "propName2": "value2"});

// alternative:
consumedThing.properties["propName1"]
    .and(consumedThing.properties["propName2"])
    .get();

Protocols such as HTTP which do not support such operations can simply read / write the passed properties sequentially.
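For protocols without a batch operation, the sequential fallback could look roughly like this (a sketch; `ConsumedThingShim` and `readOne` are hypothetical names, not part of any spec):

```javascript
// Hypothetical sketch: readProperties() falls back to sequential
// single-property reads when the protocol has no batch operation.
class ConsumedThingShim {
  constructor(readOne) {
    this.readOne = readOne; // protocol-specific single read, returns a Promise
  }
  async readProperties(names) {
    const result = {};
    for (const name of names) {
      result[name] = await this.readOne(name); // one request per property
    }
    return result;
  }
}

// Usage with a fake backend that "reads" the property name's length:
const shim = new ConsumedThingShim(async (name) => name.length);
shim.readProperties(["propName1", "propName2"])
    .then(values => console.log(values)); // { propName1: 9, propName2: 9 }
```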

What do you think?

danielpeintner commented 6 years ago

Even though I understand the desire (for example, to write multiple values at once), I think it causes lots of other issues.

Let's suppose we allow writing multiple values at once and one write fails: is the whole operation rejected, and what state is the Thing left in?

I think properties that belong together should be modeled as one compound resource... I am not sure if that matches with Modbus/CAN though?
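To make the partial-failure problem concrete, here is an illustrative best-effort sketch (all names hypothetical): if writes are dispatched individually under the hood, the caller gets per-property results and has to cope with a partially applied state.

```javascript
// Illustrative sketch: dispatching writes individually means some can
// succeed while others fail, leaving the Thing in a mixed state.
async function writeProperties(writeOne, updates) {
  const results = {};
  for (const [name, value] of Object.entries(updates)) {
    try {
      await writeOne(name, value);
      results[name] = "ok";
    } catch (e) {
      results[name] = "failed: " + e.message;
    }
  }
  return results;
}

// Fake backend that rejects writes to "locked":
const writeOne = async (name, value) => {
  if (name === "locked") throw new Error("read-only");
};
writeProperties(writeOne, { temp: 21, locked: true })
    .then(r => console.log(r)); // { temp: 'ok', locked: 'failed: read-only' }
```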

What do others think?

draggett commented 6 years ago

I believe that this API is unnecessary for both consumed things and exposed things, as it presumes a coupling between the application programming API for objects and the protocols used by the platform to synchronise exposed and consumed things, something that the Web of Things is intended to avoid.

This reminds me of the API design for TCP/IP where the specification designers deliberately chose to avoid giving application developers direct control over when packets are sent. For new developers, that sounds rather annoying, but it is outweighed by the flexibility it gives to network developers for making the most efficient use of the network, and for packet fragmentation for different transport protocols. This design choice has proved itself with the exponential expansion of the Internet, and we should follow their wisdom.

We should therefore look at what kinds of communications metadata would be needed for smarter Web of Things platforms. This could enable delayed transmission of updates, so that a single message can convey multiple updates, whether for different properties or for the same property, something of interest for telemetry with a rapid stream of updates.

When using JavaScript, the apartment threading model ensures that an application can read the value of several properties without having them change under its feet during the course of that event handler. This works fine if applications are able to read property values synchronously rather than being forced into waiting for a promise to be resolved. This is why in my WoT proposal, and node-js implementation, property reads are synchronous and immediate, relying on the platform to synchronise values in the background. This gives control to the platform in respect to network optimisation. The same applies to applications that update the value of several properties in the same event handler. A further advantage is the ability to use getters and setters for those languages like JavaScript that support them.
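A minimal sketch of this synchronous-replica model (my own illustration, not the actual node implementation): reads and writes touch a local copy via getters and setters, and the platform flushes queued writes later, possibly batching several into one message.

```javascript
// Sketch: synchronous getters/setters over a local replica; writes are
// queued and flushed by the platform, which may batch them into one message.
class LocalThing {
  constructor(initialState, flush) {
    this._state = { ...initialState };
    this._pending = {};
    this._flush = flush; // platform hook that transmits batched updates
    for (const name of Object.keys(initialState)) {
      Object.defineProperty(this, name, {
        get: () => this._state[name],   // synchronous read
        set: (v) => {
          this._state[name] = v;        // immediate local update
          this._pending[name] = v;      // queued for the network
        },
      });
    }
  }
  commit() { // called by the platform, e.g. at the end of the event turn
    const batch = this._pending;
    this._pending = {};
    this._flush(batch); // one message may carry several property updates
  }
}

// Usage: two writes in the same event handler become one flush.
const sent = [];
const thermostat = new LocalThing({ target: 20, mode: "auto" }, b => sent.push(b));
thermostat.target = 22;
thermostat.mode = "heat";
thermostat.commit();
console.log(thermostat.target, sent.length); // 22 1
```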

zolkis commented 6 years ago

I would prefer to support either/all of the following:

6d77 commented 6 years ago

Thanks for your ideas and suggestions!

There are interesting arguments to consider.

These are valid arguments, and indeed, perhaps there are more possibilities, which do not have these problems. Dave mentioned delayed execution, and also proactive execution could help.

The JavaScript threading model could indeed be helpful as Dave mentioned. We could create a set of Promises for read or write access to multiple (related) properties. The underlying protocol binding could combine these into a single (e.g.) MODBUS transaction, which is executed later. Success or failure of this transaction will resolve or reject all these promises.

So perhaps something like this will already work:

let p1promise = thing.properties.p1.get();
let p2promise = thing.properties.p2.get();

Promise.all([p1promise, p2promise])
    .then(result => {
        // result is an array of results
    })
    .catch(reason => {
        // reason is the first thrown error, others are ignored
    });

There is just one point: There is no guarantee for the application that the grouping is actually performed by the binding; the above code just says "I want to have these results together". Different binding implementations may choose different strategies. Is it necessary to express the grouping strategies more formally? Zoltan mentioned a metadata annotation, which could formally describe such groups of properties. What could this look like?

Daniel's problem of partially successful writes would vanish if the individual property.set() calls were actually combined into a single MODBUS transaction. Do we have to signal this explicitly to the user?

BTW: I'm just sitting in the same office as Sebastian and trying to apply the WoT concept to somewhat more complex MODBUS scenarios. Reading and writing multiple properties with one MODBUS transaction is a MUST here, and I'm trying to figure out how this could best be done with node-wot.
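The promise-grouping idea above could be sketched like this (hypothetical binding code, not the node-wot API): set() calls made in the same tick are collected and sent as one transaction, and all returned promises settle together with that transaction.

```javascript
// Sketch: set() calls in the same tick are batched; the flush happens in a
// microtask and sends one transaction, settling all promises together.
class BatchingBinding {
  constructor(sendTransaction) {
    this._send = sendTransaction; // e.g. one MODBUS write-multiple request
    this._batch = null;
  }
  set(name, value) {
    if (!this._batch) {
      this._batch = { updates: {}, resolvers: [] };
      queueMicrotask(() => this._flush()); // flush after the current tick
    }
    this._batch.updates[name] = value;
    return new Promise((resolve, reject) =>
      this._batch.resolvers.push({ resolve, reject }));
  }
  async _flush() {
    const { updates, resolvers } = this._batch;
    this._batch = null;
    try {
      await this._send(updates);           // single transaction
      resolvers.forEach(r => r.resolve()); // all succeed together
    } catch (e) {
      resolvers.forEach(r => r.reject(e)); // or all fail together
    }
  }
}

// Usage: two writes in the same tick yield a single transaction.
const sentBatches = [];
const binding = new BatchingBinding(async (u) => { sentBatches.push(u); });
Promise.all([binding.set("p1", 1), binding.set("p2", 2)])
    .then(() => console.log(sentBatches)); // [ { p1: 1, p2: 2 } ]
```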

draggett commented 6 years ago

I would like an explanation as to why get and set for properties return a promise, as this seems to make an assumption about the underlying protocols, e.g. "get" and "set" involve sending a request and waiting for a response. I assume that you still need an event to signal when a property value has been updated. When setting a value in a consumed thing, this event would be delayed until the exposed thing has successfully applied the value, which may involve waiting if the target IoT device is sleeping until its next scheduled communication slot.

With the synchronous property API, get and set return immediately, and you still get the update event as described above. Applications may use this to provide user interface affordances that signal to the user when a change in setting has been applied. The WoT platform is responsible for protocol optimisations, e.g. gathering several updates into the same message. One scenario is where an application updates several different properties. Another scenario is a telemetry application where sensor readings are made at regular intervals, e.g. every 10 milliseconds. The WoT platform buffers a sequence of updates to the same property into a single message. Telemetry is an important class of use cases for IoT.

draggett commented 6 years ago

I forgot to explain the relationship to the means to request or transmit the value for multiple properties. The WoT platform needs to be able to initialise the state of a consumed thing when creating the object for that. The approach I took in my node-js implementation is to include the state (i.e. the value of all properties) in the response to a request to register a consumed thing with an exposed thing. The WoT platform automatically subscribes for update events for properties, and I chose to define an event that passes all of the property values as well as events for individual property values. If the underlying protocol doesn't support push notifications for events, then the WoT platform needs to poll for the events. The frequency for polling could be specified as part of the metadata for that protocol.
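The poll-and-emit fallback could be sketched as follows (illustrative only; the interval would come from the protocol metadata, e.g. a suggested refresh time):

```javascript
// Compute which properties changed between two polled state snapshots.
function diffState(prev, next) {
  const changed = {};
  for (const [name, value] of Object.entries(next)) {
    if (prev[name] !== value) changed[name] = value;
  }
  return changed;
}

// Poll at the metadata-suggested interval and emit per-property updates.
function startPolling(fetchState, onUpdate, intervalMs) {
  let last = {};
  const timer = setInterval(async () => {
    const state = await fetchState();
    for (const [name, value] of Object.entries(diffState(last, state))) {
      onUpdate(name, value); // per-property update event
    }
    last = state;
  }, intervalMs);
  return () => clearInterval(timer); // call to stop polling
}

console.log(diffState({ on: true, level: 50 }, { on: true, level: 60 })); // { level: 60 }
```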

For some protocols, if you set a particular property, you may need to also pass the values for related properties as part of the request. This needs to be declared in the metadata for that protocol. The WoT platform can then use this metadata to construct the expected request messages. There is no need for this to be exposed to the application for a consumed thing, as I explained in my previous comment.

I don't believe that there is a strong business case for declarative protocol bindings, which in any case only work for a small set of IoT protocols, and which introduce considerable complexity for authoring thing descriptions and for the Web of Things platforms. Applications running on a gateway/hub that expose things at the network edge are in a good position to encapsulate the IoT communications technology whether it is Bluetooth, ZigBee, z-wave, OCF, oneM2M, ECHONET or whatever else is needed for the gateway to communicate with the IoT devices. For communications across the Internet, i.e. away from the network edge, we only need a tiny handful of protocols, for instance, HTTP with long polling/server-sent events for push notifications, and an equivalent sub-protocol for WebSockets. This would allow us to minimise the need for communications metadata as the protocol binding would be part of the standard. It would also allow us to provide stronger security as compared to layering on a heterogeneous set of protocols and security standards.

mkovatsc commented 6 years ago

Note that this is one side of a two-sided problem: Here the client view through the Scripting API is discussed, while https://github.com/w3c/wot-thing-description/issues/151 discusses the server view of how to express such a feature in a TD.

benfrancis commented 6 years ago

@draggett wrote:

I would like an explanation as to why get and set for properties return a promise, as this seems to make an assumption about the underlying protocols,

Not to mention an assumption that the implementation scripting language supports the concept of promises.

I don't believe that there is a strong business case for declarative protocol bindings, which in any case only work for a small set of IoT protocols, and which introduce considerable complexity for authoring thing descriptions and for the Web of Things platforms. Applications running on a gateway/hub that expose things at the network edge are in a good position to encapsulate the IoT communications technology whether it is Bluetooth, ZigBee, z-wave, OCF, oneM2M, ECHONET or whatever else is needed for the gateway to communicate with the IoT devices. For communications across the Internet, i.e. away from the network edge, we only need a tiny handful of protocols, for instance, HTTP with long polling/server-sent events for push notifications, and an equivalent sub-protocol for WebSockets. This would allow us to minimise the need for communications metadata as the protocol binding would be part of the standard. It would also allow us to provide stronger security as compared to layering on a heterogeneous set of protocols and security standards.

+1

@mkovatsc wrote:

Note that this is one side of a two-sided problem: Here the client view through the Scripting API is discussed, while w3c/wot-thing-description#151 discusses the server view of how to express such a feature in a TD.

If existing APIs for HTTP & WebSockets were used instead of defining a new Scripting API then one side of that problem is already solved.

For example, in JavaScript...

HTTP:

fetch('/things/lamp/properties',
  {
    method: 'PUT',
    headers: {
      'Content-type': 'application/json',
      'Accept': 'application/json',
      'Authorization': 'Bearer eyJhbGc...',
    },
    body: JSON.stringify({'on': true, 'level': 50})
  }
);

WebSockets:

const socket = new WebSocket(
  'wss://mywebthingserver/things/lamp', 
  'webthing');

socket.send(JSON.stringify({
  messageType: 'setProperty',
  data: {
    on: true,
    level: 50
  }
}));

By way of example, Mozilla now has open source implementations of the server side of this approach in NodeJS, Python, MicroPython, Java, Rust and C++ (Arduino), in addition to our gateway implementation which bridges HTTP/WebSockets to other protocols including ZigBee, Z-Wave and HomeKit. Clients can then use their existing APIs for HTTP & WebSockets to request property changes. It seems to be working OK so far.

Edit: Actually, tell a lie, getting multiple properties landed in our implementations last week, but setting multiple properties with HTTP hasn't landed yet. But you get the idea!

mjkoster commented 6 years ago

For some protocols, if you set a particular property, you may need to also pass the values for related properties as part of the request. This needs to be declared in the metadata for that protocol.

I don't see how protocol metadata is different from protocol bindings. In the DataSchema part of the protocol binding, we describe multiple properties to be passed. For example, brightness data and transition time data.

The form construct is also protocol metadata. It describes how to construct protocol options which may be important for some use cases.

I don't believe that there is a strong business case for declarative protocol bindings, which in any case only work for a small set of IoT protocols, and which introduce considerable complexity for authoring thing descriptions and for the Web of Things platforms.

This is misleading at best. There is essentially zero added complexity if the thing conforms to reasonable defaults, where only an address (URI) is needed in the protocol binding.

In addition, protocol bindings can work with most IP connected things that exist, and the number of IP connected things is continuing to increase and replace many non-IP connected things.

The business case is the same as the gateway, but even more generically useful in the case of expanding IP networking as we move into the future and gateways are no longer needed.

Applications running on a gateway/hub that expose things at the network edge are in a good position to encapsulate the IoT communications technology whether it is Bluetooth, ZigBee, z-wave, OCF, oneM2M, ECHONET or whatever else is needed for the gateway to communicate with the IoT devices.

All of these will be exposed via IP networks, and it will be possible to adapt to them using protocol bindings where necessary. The adaptation can take place in a gateway, but it shouldn't require anyone's box if we define the adaptation in a protocol binding. Maybe the adaptation only needs to run in the gateway in your system.

I don't agree with the requirement for everyone to implement an opaque gateway with hidden translations when they are all adopting IP networking.

For communications across the Internet, i.e. away from the network edge, we only need a tiny handful of protocols, for instance, HTTP with long polling/server-sent events for push notifications, and an equivalent sub-protocol for WebSockets. This would allow us to minimise the need for communications metadata as the protocol binding would be part of the standard. It would also allow us to provide stronger security as compared to layering on a heterogeneous set of protocols and security standards.

Cloud to cloud integration requires as much, if not more, adaptation than just interacting with devices. There are at least as many data schemas as there are products. At least with OCF and dotdot there are consistent patterns.

I see the ability and agility of protocol bindings to adapt to diverse Swagger definitions and other APIs to be a definite use case for the Web of Things.

Really, most of this isn't any harder than divs and lists and css classes are for the text web. Devices need adaptation just like web page styles, etc. need to be supported. It's even more important because the device networks really do have special requirements.

Another comment I'd like to make is we don't all think of servers as only the big things in data centers. Typically, IoT places servers at the most constrained end of the hardware. The client hosts the application layer and is responsible for integrating and orchestrating many servers.

benfrancis commented 6 years ago

@mjkoster wrote:

There is essentially zero added complexity if the thing conforms to reasonable defaults, where only an address (URI) is needed in the protocol binding.

But potentially infinite additional complexity for Web of Things clients, which now need to support an arbitrary number of IoT protocols which can (theoretically, though not in practice) be described by declarative protocol bindings in the Thing Description. This would prevent the kind of client-agnostic ad-hoc interoperability which characterises the web today.

Applications running on a gateway/hub that expose things at the network edge are in a good position to encapsulate the IoT communications technology whether it is Bluetooth, ZigBee, z-wave, OCF, oneM2M, ECHONET or whatever else is needed for the gateway to communicate with the IoT devices.

All of these will be exposed via IP networks, and it will be possible to adapt to them using protocol bindings where necessary. The adaptation can take place in a gateway, but it shouldn't require anyone's box if we define the adaptation in a protocol binding.

Zigbee, Z-Wave and Bluetooth are not IP protocols; they require a gateway to bridge them to the Internet and the web. This cannot be achieved with just a few lines of JSON in a Thing Description.

All of the example SmartThings Thing Descriptions from the last PlugFest use either HTTP, CoAP or a non-standard URI scheme for MQTT. Is your intention that all Web of Things clients should support HTTP, CoAP and MQTT? Do they need to support Zigbee, Z-Wave, Bluetooth, Modbus and CAN too? If so, then how is that practical? If not, then how does that help with interoperability? Only some clients will be able to access some web things. That breaks basic principles of the web: to be part of the web you need to be 1) linkable and 2) client-agnostic.

Really, most of this isn't any harder than divs and lists and css classes are for the text web. Devices need adaptation just like web page styles, etc. need to be supported.

I don't understand your analogy. divs, lists and CSS classes are defined in concrete W3C specifications, are transferred over a single protocol (HTTP) and are supported by all web browsers.

To try and get back on topic, if interoperability is at the web protocol layer rather than the scripting layer, then we already have the scripting APIs we need to get and set properties. No new Scripting API is necessary.

mkovatsc commented 6 years ago

@draggett wrote: I would like an explanation as to why get and set for properties return a promise, as this seems to make an assumption about the underlying protocols,

The initial wrong assumption is that the Promise abstraction must be related to a (network) protocol operation. It is an abstraction for the asynchronous behavior of a function call that may also fail and that yields a single result (as opposed to an Event abstraction, for instance).

@benfrancis wrote: Not to mention an assumption that the implementation scripting language supports the concept of promises. ... fetch('/things/lamp/properties', { method: 'PUT', ...

"It returns a Promise that resolves to the Response". Hmm.

It is good to see that you also started to make use of form metadata with which a server can tell a client how to formulate the request message (method: 'PUT', ...). The original motivation of the Scripting API is to make the client code agnostic of the protocol aspects, so that the runtime takes care of applying the form metadata. With your API examples, the client code has to be aware of the protocol details. Maybe @draggett remembers that he originally also motivated this.

If existing APIs for HTTP & WebSockets were used instead of defining a new Scripting API then one side of that problem is already solved

I am not sure how your examples solve the problem laid out in this issue. Maybe it is because the thread was diverted too much.

benfrancis commented 6 years ago

"It returns a Promise that resolves to the Response". Hmm.

That's the JavaScript API. Basically every programming language on the planet has its own API for making HTTP requests. No new Scripting API is necessary.

It is good to see that you also started to make use of form metadata with which a server can tell a client how to formulate the request message (method: 'PUT', ...)

I didn't. That's just how HTTP works. The Thing Description need only provide a URL.

The original motivation of the Scripting API is to make the client code agnostic of the protocol aspects

But the client would still need to talk multiple protocols under the hood. If this scripting API was implemented by a web browser (unlikely given no browser vendors have expressed an interest in implementing it) then the browser would have to support those protocols. Any other kind of client which implements this API would have to support an arbitrary number of protocols in order to make readProperties or writeProperties actually happen.

mkovatsc commented 6 years ago

That's the JavaScript API

Yes. We define a JavaScript API. It is optional. It has appeal for those who do not want to care about the protocol parts in the application logic. Anyone else can start from the TD and implement against the network API.

draggett commented 6 years ago

It is an abstraction for asynchronous behavior of a function call that may also fail and that yields a single result (opposed to an Event abstraction, for instance).

An asynchronous model for accessing properties does introduce problems and feels awkward for people who are used to software objects where read/write access to object properties is synchronous. Perhaps this is a matter of a design pattern: if you want to support an asynchronous model, use methods; otherwise use properties.

mkovatsc commented 6 years ago

@draggett How does your runtime behave when a Property is read synchronously, but the representation is stale, i.e., no valid value can be returned?

draggett commented 6 years ago

@mkovatsc You need to explain what you mean by "stale" together with the assumptions under which it can occur. If the underlying protocol relies on a pull mechanism, the WoT platform should poll at an interval suggested by the previous response (e.g. the HTTP expires header) or the TD metadata. If the underlying protocol supports a push mechanism, the consumed thing's state is updated whenever a new state is pushed to it. The initialisation of state should occur prior to resolving the promise from the method used to create the object for the consumed thing. What am I missing here?

mkovatsc commented 6 years ago

"Stale" is a term from caching and means that an available representation is not valid anymore and should not be used by the client. This can happen with both pull and push mechanisms when there is a short network outage, delay due to congestion, or worse. These challenges are in particular present in the IoT. Ignoring those might almost work when you expect Things only to exist on full-fledged Internet hosts, but it will not work when people want to include actual IoT Things in the Web of Things. Most Members active in the WG want WoT to work with actual IoT Things.

draggett commented 6 years ago

Thanks for the clarification. Clients may prefer the presumed stale value to being forced into an indefinite wait, and I can see a requirement for clients to determine if a property is currently stale, and potentially to know how old the current value is as well as the expected refresh time. This could be implemented in terms of the framework for reporting errors.
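One conceivable shape for such staleness metadata (a sketch with hypothetical names; the clock is injected purely for illustration):

```javascript
// Sketch: a replica that records when its last update arrived, so clients
// can inspect how old the current value is instead of blocking on the network.
class ReplicatedProperty {
  constructor(value, now = Date.now) {
    this._now = now;
    this.update(value);
  }
  update(value) {          // called by the platform on each pushed update
    this.value = value;    // last known good value, read synchronously
    this.lastUpdated = this._now();
  }
  ageMs() {                // how old is the current value?
    return this._now() - this.lastUpdated;
  }
  isStale(maxAgeMs) {      // client-chosen freshness policy
    return this.ageMs() > maxAgeMs;
  }
}

// Usage with a fake clock:
let t = 0;
const temp = new ReplicatedProperty(21.5, () => t);
t = 5000;
console.log(temp.value, temp.isStale(3000)); // 21.5 true
```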

mkovatsc commented 6 years ago

Trying to take something constructive from @benfrancis's comments, I see a preference for re-use of existing APIs. This helps a bit with a discussion we currently have in the Scripting API call. The original requirement of having an API that makes use of the Thing abstraction to decouple application code from protocol specifics still holds as chartered in the WG.

Taking @draggett's proposal into account, let's investigate some possible API alternatives. As a first step, let's look into taking properties, actions, and events more literally:

With the above, we lose introspection capabilities. Yet we could do this on a separate object like this:

let res = await fetch(tdUri); // reuse an existing API instead of defining a WoT.fetch()
let tdText = await res.text(); // the Response body can only be consumed once
let myThing = WoT.consume(tdText); // could also allow consume() to take the parsed object

let myMetadata = JSON.parse(tdText);
console.log(myMetadata.properties.status.type);
console.log(myMetadata.properties.status["@type"]);
console.dir(myMetadata.actions.fade.input);

One issue here is that parsing the TD directly does not leave a chance to preprocess it, e.g. to apply defaults.
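Such a preprocessing step could be wrapped in a small helper. The specific defaults below are illustrative assumptions, not normative TD defaults:

```javascript
// Sketch: parse the TD text once and fill in assumed defaults before use,
// instead of handing the raw parsed JSON to the application.
function parseTD(tdText) {
  const td = JSON.parse(tdText);
  for (const prop of Object.values(td.properties || {})) {
    if (prop.writable === undefined) prop.writable = false;     // assumed default
    if (prop.observable === undefined) prop.observable = false; // assumed default
  }
  return td;
}

const td = parseTD('{"properties": {"status": {"type": "string"}}}');
console.log(td.properties.status.writable); // false
```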

zolkis commented 6 years ago

Periodic polling of devices is not efficient, especially if it needs to be done because of a software convention. IMHO developers who write code for IoT are familiar with asynchronous programming.

Nevertheless the model proposed by Dave works for me (fetch the Thing state when instantiating the object, then update the Thing state via events, allowing properties to be accessed synchronously, meaning the "last known good value", and possibly include a meta-property that tells when the last state update was made).

mkovatsc commented 6 years ago

How would this meta-property fit in, how can it be used? Can you give code examples?

draggett commented 6 years ago

@zolkis, periodic polling may be needed for some IoT technologies, e.g. non-IP devices that sleep to conserve battery life, and need to be polled at regular intervals when they are scheduled to wake up. However, this is at the network edge and Internet protocols could use more efficient push mechanisms. For HTTP, this can include long polling and chunked encoding as per server-sent events, or the alternative of Web Hooks where an HTTP server URI is provided for pushing notifications.

We should also allow for batching of property updates and associated events, something that is valuable for telemetry, where the API deals with blocks that contain a time series of updates. Each update could involve multiple properties, or a single compound property with sub-properties.

draggett commented 6 years ago

@mkovatsc writes: A common requirement is to know when the write has actually finished (think of spinners or temporally gray buttons in UIs); not sure how to accomplish this with this approach.

The application for a consumed thing can update the value and show the spinner until it sees the corresponding property update event signalling that the update has been applied by the associated IoT device, e.g. a ZigBee based thermostat. This assumes that the scripting API allows applications to listen for update events on individual properties. It is also convenient to have an event that is called whenever the consumed thing's state is updated. This event could tell you which properties were updated. Note that this implies a need for identifiers for sub-properties, e.g. a path syntax.
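The spinner pattern described above could look roughly like this (a sketch; the thing and ui interfaces are hypothetical):

```javascript
// Sketch: write synchronously, show a busy indicator, and clear it when the
// per-property update event confirms that the device applied the change.
function setWithSpinner(thing, name, value, ui) {
  ui.showSpinner(name);
  const handler = (e) => {
    if (e.property === name && e.value === value) {
      ui.hideSpinner(name);
      thing.removeListener("update", handler);
    }
  };
  thing.on("update", handler);
  thing.write(name, value); // returns immediately; confirmation is an event
}

// Minimal fake thing and ui for illustration:
const listeners = [];
const fakeThing = {
  on: (ev, fn) => listeners.push(fn),
  removeListener: (ev, fn) => listeners.splice(listeners.indexOf(fn), 1),
  // pretend the device confirms immediately:
  write: (property, value) =>
    listeners.slice().forEach(fn => fn({ property, value })),
};
const events = [];
const fakeUi = {
  showSpinner: (n) => events.push("show " + n),
  hideSpinner: (n) => events.push("hide " + n),
};
setWithSpinner(fakeThing, "targetTemperature", 22, fakeUi);
console.log(events); // [ 'show targetTemperature', 'hide targetTemperature' ]
```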

draggett commented 6 years ago

@mkovatsc writes: Events with the Thing being an EventEmitter: myThing.on("change", (e) => { ... }); Here we get a problem when the Thing should also have an Action "on", which feels to be a very likely name choice.

One way to avoid such name clashes is to use a reserved name for a sub-object, e.g. myThing["@meta"], as the host for meta properties and methods. In my implementation I was able to do that by insisting that thing property and action names do not start with an "@" character. Thus:

  myThing["@meta"].subscribe("targetTemperature", "update", function (e) { ...});

An alternative approach that avoids this particular restriction is to define a method on the WoT object where you would pass the thing you are interested in as an argument, e.g.

  wot.subscribe(myThing, "targetTemperature", "update", function (e) { ...});

where targetTemperature is a property name, and "update" is an event name.

Introspection on the thing description is possible using either approach. In my implementation I expose the thing description as a meta property whose value is the parsed JSON for the TD. In principle errors and the timestamp for the last update to a property etc. could be exposed as meta properties accessible via the introspection object.

I can also see a benefit from exposing thing descriptions as a named RDF graph with a JavaScript API that makes it easy to traverse triples etc. I have such an API using a JavaScript module I wrote a couple of years back. I have combined that with another JavaScript module to visualise TDs with GraphViz using a web worker to do the rendering in the background of a web page. Such an API would allow applications to dynamically adapt to variations across TD's for devices from different vendors by inspection of their semantic descriptions.

mkovatsc commented 6 years ago

@draggett Based on your answers, I see your proposal at a higher level, as it makes some assumptions that do not always apply and introduces some implicit behavior that is different from the model the WG needs in the TD:

Overall, I am not objecting to these ideas. I just want to point out the conflict with the WG's mission to complement existing standards and ecosystems, and the issues also pointed out by @zolkis around standardizing complex behavior.

A possible way forward could be to implement your expected behavior on top of the API we are currently standardizing. The latter is rather narrow and closer to the technological basis, hence easy to specify. It standardizes a secure way for applications to interact with devices from a sandbox and already abstracts from protocol details. Your solution could be put on top, similar to jQuery being on top of the browser APIs. @mjkoster follows a similar approach for the "semantic API" that focuses on bridging between different ecosystems.

What do you think about this? It would be great to have you in the Scripting API call on Monday to discuss this as well as to clarify some other details about your proposal (e.g., available implementation).

draggett commented 6 years ago

@mkovatsc writes:

Full replication of state across systems:

  • synchronous access without means to deal with network problems assumes very good connectivity, which is not always the case

The WoT platform is responsible for the networking and works in the background to deal with any network problems. Decoupling applications from the underlying networking is a key benefit of the Web of things.

  • replicating the complete state is often not possible, as devices often require high effort to retrieve many properties and often applications are not even interested in them (think of a home router that can produce all kinds of statistics and detailed configuration parameters)

For the Web of Things we require the replication of the property values. If there are large numbers of values that are only of occasional interest, the solution is to access them via actions rather than properties. I see this as a matter of best practice guidelines.

Implicit events

  • you propose to split reads and writes into a synchronous call that returns immediately and an event that informs later about success or failure

Yes, given that the WoT platform is handling the networking in the background, it needs a way to signal when errors occur, e.g. when connectivity with the exposed thing is lost. However, in principle, we could synchronously report data type errors when an application tries to write a value to a property that conflicts with the type declaration in the thing description. In JavaScript this would traditionally be handled by throwing an exception.

Device implementations have to follow API

  • features such as batching properties as well as messages are not generally available, so that devices actually have to be implemented in a way to satisfy your API

Batching is key to telemetry use cases and to devices that have to buffer readings and periodically transmit the buffer as a means to conserve power and prolong battery life perhaps for years. The WoT API should provide efficient support for buffered data, e.g. use cases where there are many thousands of sensor readings per second. If a given thing/property doesn't support buffering, then the WoT Platform could send each of the updates separately. The WoT platform in any case needs to know about the network/messaging requirements for a given thing, and this comes from the binding templates/communication metadata.
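The buffering idea could be sketched as follows (hypothetical names): readings accumulate locally and are transmitted as one block, so a device sends fewer, larger messages and conserves power.

```javascript
// Sketch: readings are buffered and transmitted as one block once the buffer
// is full; one message then carries a whole time series of updates.
class TelemetryBuffer {
  constructor(capacity, transmit) {
    this.capacity = capacity;
    this.transmit = transmit; // platform hook sending one message per block
    this.buffer = [];
  }
  push(timestamp, value) {
    this.buffer.push({ timestamp, value });
    if (this.buffer.length >= this.capacity) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    this.transmit(this.buffer); // one message, many readings
    this.buffer = [];
  }
}

// Usage: 7 readings with capacity 3 produce blocks of 3, 3 and 1.
const blocks = [];
const buf = new TelemetryBuffer(3, block => blocks.push(block));
for (let i = 0; i < 7; i++) buf.push(i, i * 10);
buf.flush(); // transmit the remaining partial block
console.log(blocks.map(b => b.length)); // [ 3, 3, 1 ]
```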

Complex transaction models and optimizations

  • from a specification perspective it is hard to fully standardize such complex behavior (nail down the algorithms in the text)

I think you are misunderstanding and see complexity where it doesn't exist. Could you provide some examples of what you envisage?

The points made by @zolkis reflect what I have been saying about the network edge - that there are many IoT technologies that address different requirements, and that this can be addressed by applications at the network edge that then expose things using regular Web protocols like HTTP. This means that we can create a market of services that work with existing IoT devices by using smart hubs/gateways for WoT applications. I see binding templates as nice but not essential, and not always practical given the diversity of IoT technologies.

As regards a practical way forward: I plan to support both flavours of API on my NodeJS WoT platform and will work on this in August and September as part of my efforts on interoperability testing for F-Interop. I should then be able to demonstrate interoperation with node-wot along with example applications.

zolkis commented 6 years ago

One issue that comes to mind with a replicating, synchronous representation of Things is how exactly the implementations are going to do this. An application script may want to control that, for instance be able to give hints to the implementation on how to deal with local resources (frequency and size of updates, timing/jitter issues etc). We may need to figure out a way to give such hints when instantiating a SW object that represents a replicated Thing.

I am not sure what kind of developer profiling we have done for WoT, but my impression so far is that developers are able to handle asynchronous programming, and they do need domain knowledge of the system they are programming. The problem more often is whether the API is good enough to permit realizing the functionality they want.

Therefore I wonder whether forcing a synchronous paradigm will really have the expected benefits for the future developer community, or whether it is not the main thing that really matters.

However, simplicity is good, so we should continue the effort on this. Nevertheless, it would be nice to see the code (repository) with the work in progress (if this work belongs to the WoT WG), together with discussion of eventual issues. Since Scripting is optional, there could be many takes on it that could learn from each other.

draggett commented 6 years ago

Rather than complicating the scripting API, a simpler solution is to use the thing description to provide metadata, e.g. sampling rate and buffering latency. This is what I used for my medical telemetry example with electrocardiogram, breathing etc. The buffering latency together with the sampling rate informs the consuming application how much data needs to be buffered when it wants to provide a continually scrolling display of samples. In short, we can support telemetry in a simple manner. There is no need for complexity here. I am committed to provide further working examples to demonstrate this.

zolkis commented 6 years ago

That is valid for the exposed Things, but I was talking about consumed Things, more specifically the WoT runtime vs application constraints (for whatever reason) on the client side.

Maybe you want to instantiate a consumed thing that has one tenth of the read (not sampling) frequency (and this is application specific), so it cannot be covered by the TD. Of course, when the runtime is polling, it could be adapted to client needs transparently between the client code and the runtime (which still needs API support), but when data is pushed from the remote thing, a protocol negotiation is needed (and the TD can indeed tell whether this is possible).

Anyway, the client application should be able to give the runtime hints about read and possibly sampling frequencies, event frequency, buffer limits etc. Whether these trickle down locally or remotely will depend on the Thing.

The main point is that with an asynchronous API this is less of an issue than with a synchronous, replicating API (I say "less" because event frequency needs to be solved in both, but at least getting property updates is controlled by the client with the asynchronous API, whereas it is automated with the synchronous API).

This could be regarded as an advantage or disadvantage, depending on viewpoint, but these issues need to be discussed and solved.

draggett commented 6 years ago

@zolkis writes:

The main point is that with an asynchronous API this is less of an issue than with a synchronous, replicating API

Please provide concrete details as I don't see why that is the case.

I see property updates as a class of events that are implicit in the thing description, and essential if applications are to receive asynchronous notifications, for instance, for an app for a consumed thing to be alerted when the app for the exposed thing updates a property.

I agree in principle with the idea that an app for a consumed thing might want to choke the rate of property update events, or any events for that matter, and I would like to see some well motivated concrete use cases to justify that. The ability for the API to handle batched events (i.e. an array of events) would improve efficiency.

We have a choice for how to pass preferences, one is via the event subscription API, another is via an API to update the metadata. We agreed in Korea to generalise thing descriptions to allow declaration of the data type for information passed as part of event subscription. We've also started to talk about the need for an API to inspect the thing description and to update the metadata where appropriate.

Of course we still need to talk about errors, e.g. when trying to set a value for a property that is invalid according to the property's declared data type as given in the thing description, when the connectivity is lost, when there are security errors, and so forth, including attempts to set metadata that is deemed invalid by the app that exposes a thing.

mkovatsc commented 6 years ago

@draggett You mention "WoT platform", "Decoupling applications from the underlying networking" (not only protocols!), and "create a market of services that work with existing IoT devices by using smart hubs/gateways for WoT applications".

Would you agree that you want to build a concrete WoT ecosystem?

draggett commented 6 years ago

I definitely want W3C to enable open ecosystems for suppliers and consumers of services that are grounded on the Web of Things. I am hoping that the WoT IG and WG can pave the way. Open source software for hubs/gateways would help. These would need to make it easy for people to install, setup and run services. I can certainly provide a proof of concept, but was hoping that node-wot could evolve into such an app platform with a sustainable development community around it.

zolkis commented 6 years ago

@draggett

We have a choice for how to pass preferences, one is via the event subscription API, another is via an API to update the metadata. We agreed in Korea to generalise thing descriptions to allow declaration of the data type for information passed as part of event subscription. We've also started to talk about the need for an API to inspect the thing description and to update the metadata where appropriate.

Of course we still need to talk about errors

We agree in all these points.

zolkis commented 6 years ago

I think we have discussed several times the layered WoT architecture concept, that would make possible to have

So basically we are not in conflict in these goals IMHO. But we need to clarify the layers and the scope of each layer.

mkovatsc commented 6 years ago

Would you agree that you want to build a concrete WoT ecosystem?

@draggett wrote:

I definitely want W3C to enable open ecosystems for suppliers and consumers of services that are grounded on the Web of Things. I am hoping that the WoT IG and WG can pave the way.

We should clarify this, because I think it is the source of many unnecessary discussions and misunderstandings. By "concrete WoT ecosystem" I mean one that has requirements on the infrastructure such as gateways/hubs that exhibit a particular behavior. To my understanding, this is what you are describing.

What we currently do in the WoT WG is "paving the way". We are chartered to counter the fragmentation by describing and complementing existing ecosystems and standards. This is what most stakeholders need at the moment and what most WG Participants want. This is different from yours! We need to fully understand this to find a productive way forward. This mismatch has cost us a lot of time and energy already, and I hope with this understanding we can improve this.

@draggett I hope you can agree with my statement that you want to go beyond what is currently done in the WG. Then let us treat it like this and not discuss across two different levels. My proposal is that you use the current WG goals -- which are goals of many W3C Members! -- as a basis for your concrete WoT Platform that enables a market of services. But first things first. For instance, with actual devices, you will need the same network handling as done in the current runtime for the Scripting API. That API can make it much easier for you to put your network-abstracting API on top, and it also answers many lower-level issues that you have not yet considered.

Forcing your next step goals into the current WG activity (for which it is not even chartered) is only threatening its success. I hope you see the different levels of goals and we can find a productive way forward.

benfrancis commented 6 years ago

@mkovatsc wrote:

We are chartered to counter the fragmentation by describing and complementing existing ecosystems and standards. This is what most stakeholders need at the moment and what most WG Participants want.

I disagree with this assertion, or at least your specific interpretation of how best to counter fragmentation and complement existing standards.

I believe that most stakeholders are here to counter fragmentation and complement existing standards, but I don't think everyone agrees with the current approach of the Working Group with the Protocol Binding Templates and Scripting API deliverables. I've certainly spoken to multiple web and IoT organisations who are not participating in the current Working Group because of the current direction.

This is really a discussion for the re-chartering process.

zolkis commented 6 years ago

For the re-chartering, given the current unbending divergence in opinions, in my personal view we should:

This will add to the overhead, but if opinions are divergent and rigid, we won't move forward without breaking anyway. Separation of concerns, layering and scoping should help, also showing tolerance to others' use cases.

draggett commented 6 years ago

@mkovatsc - thanks for clarifying an opaque question - I had no real idea what you meant and agree with @benfrancis that it sounds like part of the rechartering discussion. How about starting a separate issue?

mkovatsc commented 6 years ago

@draggett This part is not about re-chartering, but about figuring out what exactly your goals are to find a constructive way to address the issues.

I want to know if you can see the different levels I pointed out: standardizing building blocks that counter fragmentation by describing and complementing (current WG work) vs building a concrete ecosystem and market.

mkovatsc commented 6 years ago

@benfrancis FYI from our charter:

These building blocks will complement existing and emerging standards by focusing on enabling cross-platform and cross-domain interoperability, as opposed to creating yet another IoT standard

draggett commented 6 years ago

These building blocks will complement existing and emerging standards by focusing on enabling cross-platform and cross-domain interoperability, as opposed to creating yet another IoT standard

That isn't inconsistent with encouraging convergence on a tiny number of protocols for WoT platforms communicating across the Internet, and indeed within the WoT IG/WG there has been convergence on the use of HTTP together with long polling for asynchronous notifications. Imagine the Web without HTTP where every browser had to support a wide suite of protocols to span different proprietary ecosystems, such a Web would have been far less successful. Convergence on protocols and common Linked Data vocabularies will help the Web of Things grow and reach its full potential.

benfrancis commented 6 years ago

@mkovatsc wrote:

@benfrancis FYI from our charter:

I've read the charter, thank you.

You seem to be missing the point that you failed to address objections to the charter, continued on anyway, and are then surprised when everyone doesn't just fall in line.

But I would prefer to focus on the areas we can agree on, specifically the Thing Description.

I'm pleased to say that Mozilla's 0.5 gateway release which was announced today is much closer to the latest Editor's Draft of the Thing Description, we're making real progress on converging on agreement there :)

draggett commented 6 years ago

@mkovatsc wrote:

standardizing building blocks that counter fragmentation by describing and complementing (current WG work) vs building a concrete ecosystem and market.

W3C Working Groups have a narrow focus on developing the standards that companies need to build open ecosystems and grow the markets. Open source communities play a complementary role -- what are you seeking to accomplish with node-wot and the Eclipse thingweb project?

W3C Interest Groups can play a broader role, e.g. developing a vision, outreach, use cases, requirements, proof of concept interoperability demonstrations, prenormative work on evaluating technologies, etc.

I may be mistaken, but it seems like there is still some work to do towards a shared vision of what a successful Web of Things would look like and how we get there from here, and how to attract participation from a wider set of stakeholders. This is very much something in scope for the Web of Things Interest Group.

mkovatsc commented 6 years ago

@draggett wrote:

This is very much something in scope for the Web of Things Interest Group.

Yes, and nobody is challenging this. This is an issue about the WoT Scripting API, and I am trying to organize the goals so that we can have a productive way forward.

In the WG we have been defining an API that allows connecting JavaScript code to the central WoT Thing Description (TD), which describes a network-facing API but also provides abstractions from the protocol-specific operations. There is no further abstraction step between the TD and the Scripting API, as this helps with the security considerations and with defining the algorithms correctly.

Your API proposal introduces further abstractions between TD and API. This is valuable for your goal to grow the market by abstracting WoT applications completely from the underlying networking. Please continue with this!

What I do not understand is why you insist on replacing the valuable work we already have for the WoT Scripting API with an approach that needs further research, as completely and correctly abstracting from networking is challenging. The current API is not blocking your vision; it even paves the way, as you said. It is a perfect analogy to the XHR API with the much richer jQuery on top.

I again would like to ask you if you can see the difference in goals and abstraction level between the current WoT Scripting API and your proposal.

draggett commented 6 years ago

It is perfectly legitimate to discuss ideas for the scripting API such as batched events for telemetry, synchronous property read/write, and using metadata to avoid complicating the scripting API, e.g. with respect to protocol constraints on multiple values. I see this as complementary to the existing API specification and as helping to draw out unstated assumptions about the networking layer. The following provides further technical details.

Synchronous read and write of property values can be layered on top of the existing promise-based API, assuming there is a means to listen for property value update events. This further covers scenarios where dependencies between property values require multiple property updates to be sent in the same protocol message.

Synchronous reads of property values are easy to implement using getters. The WoT platform itself subscribes to property update messages in order to update property values in the background. If the underlying protocol provides expiry times for values, then the platform can set timers to refresh property values when they become (or will soon become) stale. Applications may listen for events that signal changes to the connectivity between consumed and exposed things.
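As a sketch of this getter approach (all names here are illustrative assumptions, not the specified API): the platform keeps a local cache that it refreshes from property-update notifications, and the application reads the cached value synchronously through a getter.

```javascript
// Illustrative sketch: synchronous reads via getters over a cache that the
// platform refreshes in the background from update notifications.
function makeConsumedThing(propertyNames, subscribe) {
  const cache = {};
  const thing = {};
  for (const name of propertyNames) {
    Object.defineProperty(thing, name, {
      get() { return cache[name]; }, // synchronous read of replicated state
      enumerable: true
    });
  }
  // the platform updates the cache whenever the exposed thing notifies us
  subscribe((name, value) => { cache[name] = value; });
  return thing;
}

// Simulated update stream standing in for the network layer.
let notify;
const lamp = makeConsumedThing(["on"], cb => { notify = cb; });
notify("on", true);
// lamp.on can now be read as true, without any await
```

Refresh timers for stale values and connectivity-loss events would sit behind the same `subscribe` seam in this sketch.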

Synchronous writes are likewise easy to implement using setters. The platform throws an exception if the value is disallowed by the type declaration in the thing description, e.g. an out of range numeric value, or an attempt to set a string value for a numeric property. The setter then initiates an asynchronous update of the property value on the exposed thing. Applications can listen for update events that signal when the value has been applied by the exposed thing. This enables applications to display a spinner or other user experience affordance for the time between the user changing a value and it being applied by the remote IoT device.
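The setter approach can be sketched similarly, again with assumed, illustrative names: assignment validates synchronously and throws on a type error, then the write propagates asynchronously, with a callback standing in for the "value applied" notification.

```javascript
// Illustrative sketch: synchronous writes via setters that validate against
// the TD data schema, then hand the value to asynchronous propagation.
function defineWritableProperty(thing, name, schema, sendUpdate) {
  let local;
  Object.defineProperty(thing, name, {
    get() { return local; },
    set(value) {
      if (schema.type === "number" && typeof value !== "number") {
        throw new TypeError(`property "${name}" expects a number`);
      }
      local = value;
      sendUpdate(name, value); // asynchronous propagation to the device
    }
  });
}

const applied = [];
const dimmer = {};
defineWritableProperty(dimmer, "level", { type: "number" },
  (name, value) => applied.push([name, value]));

dimmer.level = 50;      // ok: queued for the exposed thing
// dimmer.level = "hi"  would throw a TypeError synchronously
```

The spinner scenario above would hang off the point where `sendUpdate` reports the value as applied by the remote device.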

Batched events involve the delivery of an array of events rather than a single event. This could be controlled through information passed when subscribing to events. We should discuss the details further, given the recent agreement on extending thing descriptions to allow such information to be declared as part of the thing description for events.

One use case for batched events is for telemetry where you might get thousands of property updates per second. This raises the possibility of an interface for applications to write batched updates involving an array of values. Thing descriptions may include metadata for the sampling frequency and the buffering latency. @zolkis talked about the need for applications to be able to slow down the rate of events. One way to do that would be to pass rate limits as part of event subscription, and potentially to allow for a means to update an existing subscription.
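One possible shape for such batched delivery, purely as a sketch (the `batchSize` subscription option is an assumption, not an agreed API): the subscriber asks for events in groups of a given size, and the handler receives an array instead of individual events.

```javascript
// Illustrative sketch: an event source that groups events per subscriber
// and delivers them as arrays, controlled by a per-subscription option.
function makeEventSource() {
  const subs = [];
  return {
    subscribe(handler, options = {}) {
      subs.push({ handler, batchSize: options.batchSize || 1, pending: [] });
    },
    emit(event) {
      for (const sub of subs) {
        sub.pending.push(event);
        if (sub.pending.length >= sub.batchSize) {
          sub.handler(sub.pending); // deliver one array of events
          sub.pending = [];
        }
      }
    }
  };
}

const batches = [];
const source = makeEventSource();
source.subscribe(batch => batches.push(batch), { batchSize: 2 });
[10, 20, 30, 40].forEach(v => source.emit(v));
// batches is now [[10, 20], [30, 40]]
```

A rate limit passed at subscription time, as suggested above, could be modelled the same way, with the platform draining the pending array on a timer instead of by count.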

In respect to protocols that require related properties to be included when updating a given property: a good way to support this is via metadata in the thing description. The WoT platform should then consult this metadata to determine which, if any, related properties need to be added to a given property update message. JavaScript uses an event-driven programming model that guarantees that only one event handler is executed at any one time. This means that applications could synchronously update several properties in the same event handler. The WoT platform can then queue the updates and process them when the handler exits, dealing with any dependencies regardless of the order in which the application wrote the property updates. This makes the application developer's life much easier than if he or she had to be aware of the dependencies between properties.
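This queuing behaviour can be sketched with a microtask, so that all writes made within one event handler are flushed together once the handler returns; the helper names here are illustrative assumptions, not a proposed API surface.

```javascript
// Illustrative sketch: collect property writes made during one handler
// turn and flush them as a single batch after the handler exits.
function makeWriteQueue(sendBatch) {
  let pending = null;
  return function queueWrite(name, value) {
    if (pending === null) {
      pending = {};
      queueMicrotask(() => {   // runs after the current handler exits
        const batch = pending;
        pending = null;
        sendBatch(batch);      // one message for all queued writes
      });
    }
    pending[name] = value;     // later writes to the same name win
  };
}

const messages = [];
const write = makeWriteQueue(batch => messages.push(batch));
// within one "event handler" turn:
write("hue", 120);
write("saturation", 80);
// after the microtask runs, messages is [{ hue: 120, saturation: 80 }]
```

In this sketch, `sendBatch` is where the platform would consult the TD metadata to order dependent properties and pack them into one protocol message.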

Protocols may provide the means to update all property values with the same message and similarly to request the value of all property values. This needn't be exposed to the application developer, and should rather be seen as something the WoT platform makes use of as appropriate.

zolkis commented 6 years ago

For the record, this issue has been discussed in https://www.w3.org/2018/09/17-wot-minutes.html. Main points: