w3c / sensors

Generic Sensor API
https://www.w3.org/TR/generic-sensor/

API must allow Web devs to easily create fused sensors in JS with an API consistent with native sensors #42

Open · tobie opened this issue 9 years ago

tobie commented 9 years ago

The platform may expose both fused and raw(?) sensors. For example, Geolocation is generally a fused sensor (fusing GPS, WiFi, and network data in a way that's totally opaque to the consumer of the API). It's also possible that the platform decides to give access to the raw sensors directly (e.g. the GPS sensor, etc.). In that case, it should be easy to polyfill the Geolocation sensor using those raw sources.

Similarly, it would be useful for Web developers (or 3rd-party library builders) to be able to create new sensors from the combination of existing sensor data, other relevant contextual info, algorithms such as the Kalman filter, etc., and to expose them to the platform with the same API as native sensors, as per the Extensible Web Manifesto.

gmandyam commented 9 years ago

Need more specificity. On many mobile platforms, this is handled at the HW-accelerator level. Would the User Agent also be providing its own sensor fusion?

tobie commented 9 years ago

Agreed, @gmandyam. This was really a note to self so this doesn't get lost. To answer your question, it seems that when the fusion happens at the HW or underlying-platform level, the fused sensors are exposed as if they were a single data source to the Web platform. Afaik, this is how Geolocation already works (fusing GPS, WiFi, and network data in a way that's totally opaque to the consumer of the API). This is good and should stay that way.

What might be worth enabling, however, would be to give (Web) developers the power to create their own sensors from the fusion of multiple sensors and to expose them as a single data source. Imagine, for example, combining various sensors to create a high-level AirQualitySensor based on a given air quality index, which would have the same interface as if it were built at the HW or underlying-platform level.
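
Purely as a sketch of the idea (none of these interfaces exist: TemperatureSensor, HumiditySensor, AirQualityReading, and computeIndex below are made up for illustration, and only Sensor itself comes from the spec), such a fused sensor might look like this:

class AirQualitySensor extends Sensor {
    constructor(options) {
        super(options);
        this.temperature = new TemperatureSensor(); // hypothetical source sensor
        this.humidity = new HumiditySensor();       // hypothetical source sensor
    }
    start() {
        let update = () => {
            if (this.temperature.reading && this.humidity.reading) {
                // computeIndex() stands in for whatever air quality index is chosen.
                let index = computeIndex(this.temperature.reading, this.humidity.reading);
                this.reading = new AirQualityReading(index);
                this.dispatchEvent(new SensorReadingEvent("change", this.reading));
            }
        };
        this.temperature.onchange = update;
        this.humidity.onchange = update;
        this.temperature.start();
        return this.humidity.start();
    }
}

The key point is that consumers of AirQualitySensor would see exactly the same surface (start(), onchange, reading) as they would for a sensor built at the platform level.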

robman commented 9 years ago

From the point of view of a web dev who's trying to fuse sensor data to create Augmented Reality, this seems really useful (at the higher web API level; the lower level should obviously stay as is). But we view this more along the lines of what's discussed in issue #4, so that the different sensor data is more closely aligned in time (and also not unnecessarily over-cycling).

tobie commented 9 years ago

But we view this more along the lines of what's discussed in issue #4, so that the different sensor data is more closely aligned in time (and also not unnecessarily over-cycling).

Noted. Thanks for that comment. It helps.

gmandyam commented 9 years ago

Developers can always implement their own sensor fusion algorithms at the app layer by combining discrete sensor data in whatever manner they see fit (assuming that the raw data is sufficiently time-aligned for this purpose, as per issue #4). I believe what is required is not really "sensor fusion" but a way for the developer to request time-alignment of the data returned by discrete sensors. Correct?

If so, it seems the application can create its own timestamps for the data returned by discrete sensors and deal with time-alignment as it sees fit (e.g. interpolating missing values where possible). In other words, sensor fusion at the app layer can be achieved without changes to the existing API.
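
As a sketch of that app-layer approach (assuming each reading carries a timestamp and x/y/z components, and following the onchange/reading style used elsewhere in this thread; linearInterpolate() and fuse() are just illustrative helpers, not proposed API):

let accelerometer = new Accelerometer();
let gyroscope = new Gyroscope();
let accelSamples = [];
let gyroSamples = [];

// Buffer timestamped readings from each discrete sensor.
accelerometer.onchange = e => accelSamples.push(e.reading);
gyroscope.onchange = e => gyroSamples.push(e.reading);
accelerometer.start();
gyroscope.start();

// Linearly interpolate a buffered stream at an arbitrary time t.
function linearInterpolate(samples, t) {
    for (let i = 1; i < samples.length; i++) {
        let a = samples[i - 1], b = samples[i];
        if (a.timestamp <= t && t <= b.timestamp) {
            let w = (t - a.timestamp) / (b.timestamp - a.timestamp);
            return {
                x: a.x + w * (b.x - a.x),
                y: a.y + w * (b.y - a.y),
                z: a.z + w * (b.z - a.z)
            };
        }
    }
    return null; // t falls outside the buffered range
}

// Align gyroscope data onto the accelerometer's timestamps before fusing.
function fuse() {
    return accelSamples.map(accel => ({
        timestamp: accel.timestamp,
        accel,
        gyro: linearInterpolate(gyroSamples, accel.timestamp)
    }));
}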

tobie commented 9 years ago

Agreed, @gmandyam. But my point here was to allow Web devs to expose fused sensors of their own with the same API as native ones (in the spirit of the Extensible Web Manifesto) and to make polyfilling dead easy. Renaming the issue accordingly.

tobie commented 8 years ago

Reopening this issue following a discussion with @slightlyoff during the W3C TAG review.

While the current API absolutely allows building fused sensors in application-level code, it is not quite clear whether (or how) it would be possible to have such sensors be subclasses of the Generic Sensor API.

The idea here would be something along the following lines. Imagine, for example, that you wanted to create a high-level pedometer sensor that just provided a step count. It would filter the output of the accelerometer or perhaps the gyroscope, and might do some sensor fusion between the two.

A JS implementation would look something like this:

class Pedometer extends Sensor {
    constructor(options) {
        super(options);                            // must call super() before using `this`
        this.gyroscope = new Gyroscope();
        this.gyroscopeReadings = [];
        this.reading = new PedometerReading(0);    // start from a step count of zero
    }
    start() {
        this.gyroscope.onchange = e => {
            this.gyroscopeReadings.push(e.reading);
            if (this.gyroscopeReadings.length > 8) {
                // Run the buffered gyroscope readings through the step-detection filter.
                let stepcount = this.filter(this.gyroscopeReadings);
                this.gyroscopeReadings.length = 0;
                if (stepcount > 0) {
                    let r = new PedometerReading(this.reading.stepcount + stepcount);
                    this.reading = r;
                    this.dispatchEvent(new SensorReadingEvent("change", r));
                }
            }
        };
        return this.gyroscope.start();
    }
    filter(readings) {
        // Step-detection over the buffered readings; returns the number of steps found.
    }
}
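
Usage would then look the same as for a native sensor, e.g. (assuming the Sensor base class wires up onchange for "change" events, as the Gyroscope usage above does; the options shown are illustrative):

let pedometer = new Pedometer({ frequency: 50 });
pedometer.onchange = e => console.log(`Steps so far: ${e.reading.stepcount}`);
pedometer.start();
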
marcoscaceres commented 8 years ago

Drive by comment... 🚗 🔫🔫🔫

This is really screaming to be an observable. You could observe changes on the gyroscope, filter out the ones that don't meet the criteria, and then emit each "step". I'm a bit concerned about seeing so many classes being used (e.g., PedometerReading, SensorReadingEvent) when I think one really just wants to be working with the underlying data to create different abstractions on top.

Ideally, you would never need to extend Sensor at all (as it only helps for the purpose of the WebIDL binding to the underlying platform) and could instead compose these observables into different kinds of abstractions (i.e., like "Pedometer", or whatever).

Yes, we would need to sit down and figure out what all this looks like as an implementation using observables... I'm not saying we shouldn't also allow classes to be used (because we need them for the WebIDL binding), but they appear to be totally the wrong abstraction to use in user-land code.
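
To make that concrete, here is a purely hypothetical sketch: there is no platform Observable, so this hand-rolls a minimal subscribe/unsubscribe shape, and detectSteps / myStepDetector stand in for whatever step-detection filter the developer supplies.

// A stream of gyroscope readings, wrapped as a minimal observable-like object.
function gyroscopeReadings() {
    return {
        subscribe(next) {
            let gyroscope = new Gyroscope();
            gyroscope.onchange = e => next(e.reading);
            gyroscope.start();
            return { unsubscribe: () => gyroscope.stop() }; // stop() is assumed
        }
    };
}

// Compose: buffer readings, run a step-detection filter, emit each batch of steps.
function steps(detectSteps, batchSize = 8) {
    return {
        subscribe(next) {
            let buffer = [];
            return gyroscopeReadings().subscribe(reading => {
                buffer.push(reading);
                if (buffer.length >= batchSize) {
                    let count = detectSteps(buffer); // user-supplied step detection
                    buffer.length = 0;
                    if (count > 0) next(count);
                }
            });
        }
    };
}

// No Sensor subclass, no PedometerReading, no SensorReadingEvent: the consumer
// just works with the data and builds whatever abstraction it wants on top.
let subscription = steps(myStepDetector).subscribe(count => console.log(`+${count} steps`));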

tobie commented 8 years ago

Yes, I absolutely agree. The problem is that observables are still 2-3 years out, aren't close to consensus, and we'd like to ship sensors soon.

I'm ready to revisit this if the observables situation has changed. But afaik it hasn't.

We had a long thread on this topic here, btw: #21.

rwaldron commented 8 years ago

https://github.com/w3c/sensors/issues/21

marcoscaceres commented 8 years ago

Argh.. I had forgotten about #21... re-reading it, yeah... events.

kenchris commented 8 years ago

Discussing with Tobie in real life, it seems that this is an issue about how to inherit from built-in components, in a way similar to how Custom Elements do. We should probably have a look at their discussions or talk to the people who worked on that.
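
For reference, the Custom Elements pattern being alluded to (a real, shipping API) lets authors subclass a built-in and register the subclass with the platform; the open question here is whether Sensor subclasses could get a similar registration story.

class FancyButton extends HTMLElement {
    connectedCallback() {
        // Runs when the element is attached to the document.
        this.textContent = "I am fancy";
    }
}
customElements.define("fancy-button", FancyButton);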

alexshalamov commented 7 years ago

We've tried prototyping a few use cases for custom fusion. Wrapping the EventTarget interface for multiple sensors and managing permissions for each fusion source is not a nice thing to leave to web developers.
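
As a rough illustration of the per-source boilerplate involved (this is not code from the prototype; the FusionSource wrapper is made up, and the permission names are assumed to match the sensor names):

class FusionSource extends EventTarget {
    constructor(name, SensorCtor) {
        super();
        this.name = name;             // e.g. "accelerometer" (assumed permission name)
        this.SensorCtor = SensorCtor;
    }
    async start() {
        // Each fusion source needs its own permission check...
        let status = await navigator.permissions.query({ name: this.name });
        if (status.state === "denied") {
            throw new DOMException(`No permission for ${this.name}`, "NotAllowedError");
        }
        // ...and its own event plumbing to re-expose readings to the fusion code.
        this.sensor = new this.SensorCtor();
        this.sensor.onchange = e => {
            this.dispatchEvent(new CustomEvent("reading", { detail: e.reading }));
        };
        return this.sensor.start();
    }
}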

If there is interest in custom sensor fusion, we can introduce a separate fusion interface for it.

For now, moving issue to Level 2 feature set.

anssiko commented 7 years ago

@alexshalamov, thanks for your careful research on the topic. I agree this makes for a good Level 2 issue.