w3c / ambient-light

Ambient Light Sensor
https://www.w3.org/TR/ambient-light/

Request for Ambient Light Sensor web developer feedback #64

Open anssiko opened 3 years ago

anssiko commented 3 years ago

RESOLUTION: Continue monitoring ALS API developer interest and work with more browser vendors. Encourage developers to experiment with existing prototypes.

via https://www.w3.org/2020/10/23-dap-minutes.html#r01

📢 Web developers - To help accelerate the spec's advancement, please share here any pointers to web experiments or prototypes using the Ambient Light Sensor.

I'm cross-linking one such innovative usage here, https://github.com/w3c/ambient-light/issues/13#issuecomment-496935850 from @Joe-Palmer, which was brought to the group's attention earlier.

tomayac commented 3 years ago

(Signal-boosted on Twitter.)

tomayac commented 3 years ago

The comment from @willmorgan at https://github.com/w3c/screen-wake-lock/issues/129#issuecomment-737462720 makes a connection from maximum screen brightness to ALS.

anssiko commented 3 years ago

@willmorgan could you expand on your use case for "QR code scanning, document scanning, and other interactive authentication methods" https://github.com/w3c/screen-wake-lock/issues/129#issuecomment-737462720 and explain how the Ambient Light Sensor would help realize these use cases? Also, what types of interaction with other specs (e.g. Screen Wake Lock API, a yet-to-be-specced brightness control API) do you foresee?

Your feedback will help inform implementers' shipping decisions as well as future work on related specs, so it is much appreciated. Feel free to also share other use cases that could not be realized without the ALS API.

willmorgan commented 3 years ago

Hi @anssiko, gladly.

I've actually been working with @Joe-Palmer and @GlenHughes on the same Web-based product at iProov that uses light reflection from human features to assert identities online, similar to how Face ID works, but more secure and resistant to replay attacks and compromised devices. It is possibly the most complex and cool thing I've ever worked on and hinges on a lot of new web platform tech.

To expand upon Joe's original message, we still rely on a strong signal from the light reflecting back from the user's face in order to perform authentication in a user-friendly way. The easy way to obtain this is to maximise the screen brightness, which we can do with a native app on iOS and Android. We can't currently do this on the mobile (or laptop!) web.

Without that ability, one idea is to fall back to detecting the current environmental conditions and directing the user to orient themselves away from harsh lighting, increasing the signal strength that way. Ideally, one would be able to detect the orientation of the ambient light sensor relative to other devices, but I imagine that would significantly complicate the rollout of any standard.
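As a rough illustration of that fallback, here's a minimal sketch assuming the AmbientLightSensor interface from this spec; the lux threshold and the `showGuidance` helper are purely illustrative, not from any spec or product:

```js
// Sketch: watch ambient light and, when it is too harsh, guide the user
// to reposition before attempting authentication.
const HARSH_LUX = 10000; // roughly "direct daylight"; illustrative only

const sensor = new AmbientLightSensor();
sensor.addEventListener('reading', () => {
  if (sensor.illuminance > HARSH_LUX) {
    // showGuidance is a hypothetical app-defined UI helper.
    showGuidance('Please turn away from bright light before continuing.');
  }
});
sensor.start();
```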

For real-world use cases, consider industries with banking-style "know your customer" (KYC) requirements, travel, and security in general. I would much prefer to scan my passport, driving license or travel document and then use my mobile device to assert my identity against that document, all on the mobile web, without downloading an app. It beats the paper-based process, which is painful at the best of times: try applying for a mortgage in the UK or EU; I believe the US is even more challenging!

QR codes are a slightly separate use case, but for scenarios like boarding a plane with a boarding pass, entry into events or gyms, or granting access to an Amazon locker, the ability to increase brightness would improve the scannability of the displayed QR code and help usability. Failing that, we would fall back to ambient light readings to direct the user accordingly.

I am sure that our competitors in the eKYC space would appreciate the same, but I won't speak for them 😁

Ultimately, we support all initiatives that help close the feature and performance gap between the mobile web and native apps, so we would be happy to assist in any reasonable way.

anssiko commented 3 years ago

Thank you @willmorgan for explaining the use cases. What, in your mind, are the MVP requirements for a Web API for adjusting screen brightness? E.g. do you need to know the minimum, current, or maximum brightness, or is it enough to have a boolean to flip the brightness to its maximum (which might be user-settable, or rejected by the user, for example)?

Often Web APIs do not directly map to the full feature set of the respective platform APIs due to privacy and other reasons, so I'm trying to gauge the minimum feature set that would enable your use cases.

willmorgan commented 3 years ago

Thanks for getting back to me @anssiko.

Right now, our MVP requirements for a Web API are:

- Screen Brightness
- Ambient Light

anssiko commented 3 years ago

@willmorgan, thanks for the MVP requirements!

Your Ambient Light requirements will be taken into consideration as we plan new work in this Working Group.

The proposal for Screen Brightness would need to be incubated in a Community Group first, given it is a new Web API proposal. If you'd like to help get this process started, see the instructions for writing such a proposal, and please drop a link to the proposal here so we can track the interactions. After adequate incubation, this Working Group could consider adopting it.

willmorgan commented 3 years ago

Thanks @anssiko, done ☝️ above.

willmorgan commented 3 years ago

At the 2021 Q2 DAS meeting, we discussed how to advance the ALS spec, potentially by moving the Ambient Light Sensor into getUserMedia in order to benefit from the existing privacy and UI framework that could be used to gate permissions for this data.

ALS inside getUserMedia

This clever hack would potentially provide an expedient foundation to bring ALS into the web platform.

However, having looked into this further, I'm not sure it is worth pursuing:

In short, I'm not sure that getUserMedia would be a fast way of making this data available for these use cases: it would present a new and unique UX challenge for browser vendors, and an unusual, quirky API surface for developers to interact with.

Bringing this capability into the Web Platform using the specs we have today

Per the above, I would prefer to help tackle the remaining problems with generic sensors, to achieve an outcome where we have a sensible, standard way to access this and other sensor APIs.

As we know, existing sensors like the gyroscope are generally accessed by binding to the devicemotion / deviceorientation events on window. In Safari you'd call DeviceMotionEvent.requestPermission() before receiving this information, and the information is further gated by permissions policy (or feature policy 😉).

Today, web developers requiring motion data can obtain it this way, feature-detecting whether further permission prompting is required and handling that as needed. It isn't the cleanest of APIs, but it's fairly uncontroversial, barring a few feature policy issues which are getting fixed.
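For readers unfamiliar with that pattern, a minimal sketch of the access flow described above (the handler body is illustrative):

```js
// Feature-detect Safari's permission gate, then bind to devicemotion.
async function startMotion(onMotion) {
  if (typeof DeviceMotionEvent !== 'undefined' &&
      typeof DeviceMotionEvent.requestPermission === 'function') {
    // Safari: must be called from a user gesture (e.g. a click handler).
    const state = await DeviceMotionEvent.requestPermission();
    if (state !== 'granted') return;
  }
  window.addEventListener('devicemotion', onMotion);
}

startMotion((event) => {
  console.log(event.accelerationIncludingGravity);
});
```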

The way the existing Chrome implementation of ALS works is perfectly suitable for my use case at the moment.
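For reference, a minimal sketch of that interface, the Generic Sensor-based AmbientLightSensor defined by this spec, which Chrome exposes behind a flag:

```js
// Generic Sensor-style ALS usage; frequency is in readings per second.
if ('AmbientLightSensor' in window) {
  const sensor = new AmbientLightSensor({ frequency: 1 });
  sensor.addEventListener('reading', () => {
    console.log(`Ambient light: ${sensor.illuminance} lux`);
  });
  sensor.addEventListener('error', (event) => {
    console.error(event.error.name, event.error.message);
  });
  sensor.start();
}
```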

We also discussed that the concerns around privacy intrusion have been disproven, are generally of low severity, are mitigated by reducing resolution and frequency of readings, and may benefit from further mitigation strategies in the future through the generic sensor spec.

As an example, this blog post shows how, before the reading resolution was reduced, one could infer browser history using devicelight events by inspecting visited-link styles, logos and so on. Lukasz's post is wonderfully creative, and has implications for other areas this group is focusing on, such as the Wake Lock API, but the techniques shown also rely on a lot of modern security tooling like Content Security Policy either being misconfigured or simply not present.

I would point out that the use case I'm advancing works in a similar, but higher-fidelity, way to these scenarios, and that's a major part of its value proposition. It fundamentally differs only in that it needs to produce a correlation score for colours flashed on the screen, yielding a true/false pass result, rather than estimating websites visited, user behaviour, or some other sensitive credential.

In order to achieve acceptable precision, my own use case requires a full RGB camera feed. It requires ambient light readings because it can only reach a high degree of confidence when environmental light isn't introducing too much noise. I can't envisage how a single lux data stream, even at high precision or frequency, could do this alone, but I would honestly be fascinated and grateful if someone out there could show me how! 😉

Summary and my own thoughts on next steps

To summarise, I do not believe there is much risk of harm in introducing ALS as it is today, or even under the devicelight event. What we have today meets my use case, and the security and privacy implications can be mitigated with the appropriate tooling (CSP and Permissions Policy).

In the spirit of moving things forward, perhaps it would make sense to keep on the lookout for use cases, slightly reduce the existing spec's scope if needed, and proceed from there?

Thanks for reading my massive wall of text!

anssiko commented 3 years ago

Thanks @willmorgan for your detailed assessment of the pros and cons of ALS inside getUserMedia versus standalone ALS. This addresses the resolution we took at our recent virtual meeting.

This issue will remain open and continues to accept further use case input.

In parallel, we will look for opportunities to reduce the ALS scope in a way that won't negatively affect the known key use cases. The group welcomes proposals on ways to further reduce privacy risks while still enabling key use cases. Please consider your proposals in the context of the Security and Privacy Considerations, which note potential privacy risks and current mitigation strategies. Note that, given new information, the existing considerations can be revised.

mburrett-jumio commented 3 years ago

Even the most minimal implementation of this - a rough estimate of the ambient light level coupled with an approximate timestamp - would bring significant benefits to web applications performing any kind of biometric analysis via the camera. The lighting itself is, of course, a critical component in this type of process.

anssiko commented 3 years ago

Here's another use case with a proof-of-concept:

https://tonytellstime.netlify.app/ via https://github.com/w3c/ambient-light/issues/69 (thanks @Aayam555) uses getUserMedia to approximate the ambient light level (by reading pixel values off a canvas) and announces the current time using the Web Speech API when it detects changes.

A more privacy-preserving and energy-efficient version of this app would use Ambient Light Sensor instead.
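For context, the workaround that proof-of-concept relies on looks roughly like the sketch below (illustrative, not the app's actual source):

```js
// Approximate ambient light by averaging the luminance of camera frames.
// Unlike a dedicated light sensor, this needs camera permission and keeps
// the camera (and its recording indicator) active the whole time.
async function pollBrightness(onLevel) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement('video');
  video.srcObject = stream;
  video.muted = true;
  await video.play();

  const canvas = document.createElement('canvas');
  canvas.width = 64; // a small sample is enough for an average
  canvas.height = 48;
  const ctx = canvas.getContext('2d');

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let sum = 0;
    for (let i = 0; i < data.length; i += 4) {
      // Rec. 601 luma from the R, G, B channels.
      sum += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    }
    onLevel(sum / (data.length / 4)); // 0 (dark) .. 255 (bright)
  }, 1000);
}
```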

rakuco commented 3 years ago

As we know, the way that existing sensors like the gyroscope are currently accessed is generally to bind to the devicemotion / deviceorientation event on window. In Safari you'd call DeviceMotionEvent.requestPermission() before receiving this information, and the information is further gated by permissions policy (or feature policy ). [...]

  • speculatively in WebKit browsers, calling DeviceLightEvent.requestPermission() if required. [...] To summarise, I do not believe there is much risk of harm in introducing ALS as it is today, or even under the devicelight event.

Just a few clarifications that don't disprove @willmorgan's points:

marcoscaceres commented 3 years ago

A more privacy-preserving and energy-efficient version of this app would use Ambient Light Sensor instead.

Sure, but that example is totally contrived (it's hard to imagine anyone doing that in a real app). It also requires permission for camera access, which mitigates the privacy aspects.

anssiko commented 3 years ago

This is an important discussion as we figure out the path forward for this API.

First, I wouldn't dismiss any proof-of-concept; I encourage all web developers to share their experiments. Thanks to all who have already done so!

Specific to https://github.com/w3c/ambient-light/issues/64#issuecomment-925605677, this is perhaps not the next Instagram, but it is a minimal functional example of a long-running task that wants to react to changes in a specific attribute of the environment: available light, aka ambient light. Its long-running nature imposes further requirements on the energy efficiency of the implementation.

Given this group is committed to privacy-preserving APIs, I think using a camera API to monitor the ambient light level would be a violation of the data minimization principle. I'm not blaming anyone; web developers will use what they have at their disposal to get their job done. But I think we can do better and help web developers do the right thing the right way.

TL;DR: I'd challenge the group to think of appropriate abstractions that map close enough to the real-world use cases.

Another thought. Using an API for a purpose other than its primary function will likely confuse the user. For a camera API, the primary function would be to capture and/or display a visual image. This concern applies to all APIs that are multi-purpose, and is not specific to this case. Just wanted to note how gating an API behind a permission when it is used in unexpected ways will likely lead to a confusing user experience.

I know some native platforms allow prompting with a custom description with more context, but faking that to get the user to grant access is a concern. @marcoscaceres did Permissions API consider adding that feature and how did that discussion go?

larsgk commented 2 years ago

Another thought. Using an API for a purpose other than its primary function will likely confuse the user. For a camera API, the primary function would be to capture and/or display a visual image.

Not only confusion. A concrete case we have is using the ALS to adjust the color scheme (e.g. for day / dusk / night mode) of dashboards in cars and on the bridge of vessels, where it's important not to blind users' night vision and generally to provide the best UI for the ambient light at the time. This requires continuous monitoring of the ALS and would not make sense to bundle with a camera API (including its permissions).
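A minimal sketch of that kind of continuous monitoring, assuming the AmbientLightSensor interface from this spec; the lux thresholds are illustrative, not from any spec:

```js
// Continuously map ambient light readings to a day / dusk / night theme.
const sensor = new AmbientLightSensor({ frequency: 0.5 });
sensor.addEventListener('reading', () => {
  const lux = sensor.illuminance;
  const theme = lux < 3 ? 'night' : lux < 50 ? 'dusk' : 'day';
  document.documentElement.dataset.theme = theme;
});
sensor.start();
```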

marcoscaceres commented 2 years ago

The use case is not in question: it's a great and valid use case. However, what's in question is the solution (ALS) to address the use case.

The use case seems very tied to prefers-color-scheme (literally for UIs, as mentioned). The ALS doesn't have a nice way of hooking into CSS. Wouldn't it make more sense to just add "dusk" or whatever to prefers-color-scheme? That would afford users control over when the UI is applied, without needing ALS at all (ALS can still be used by the browser to make the "dusk" determination, or the user can just choose "I always prefer dusk", or let the system decide (auto), like on macOS).
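For comparison, the existing preference is already observable from script; a hypothetical new value like "dusk" would slot into the same pattern:

```js
// Observe the system/user color-scheme preference via matchMedia.
// A hypothetical 'dusk' value would be queried the same way.
const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');
function applyScheme(isDark) {
  document.documentElement.dataset.theme = isDark ? 'dark' : 'light';
}
applyScheme(darkQuery.matches);
darkQuery.addEventListener('change', (e) => applyScheme(e.matches));
```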

tomayac commented 2 years ago

(Bikeshedding, I know.) Wording it as "dusk" is dangerous, though, as dusk is connected to the time between day and night. You don't call temporary darkness in a tunnel "dusk".

The Google Maps platform-specific apps turn dark when one drives through a tunnel. Experiments suggest it's not using an ALS for doing so, which to me is surprising.

marcoscaceres commented 2 years ago

Yes, I agree... that wasn't to imply that we would use "dusk". I should have been more clear.

Experiments suggest it's not using an ALS for doing so, which to me is surprising.

I think that's correct/good... I think macOS also does it based on the time sunset happens, but I'm not sure. I think having multiple heuristics is actually a good thing (which may or may not include ALS), including just giving users control.

anaestheticsapp commented 1 year ago

I added ALS to my logbook app a year ago to adjust the app's color scheme based on ambient light (https://twitter.com/AnaestheticsApp/status/1499425060402212864). In many countries it is mandatory for anaesthetic doctors to log every case they do, and many people do this in theatre. The problem is that, depending on the type of surgery, light conditions in theatre are either very bright (dark mode becomes unusable) or very dark (light mode is too bright and distracts other people in theatre). Users currently have to manually switch the color scheme multiple times a day or manually change their screen brightness. It would be great to see this implemented!

anssiko commented 1 year ago

@anaestheticsapp thank you for your encouraging feedback! I can't stress enough how important it is for us folks working on new web capabilities to hear directly from forward-looking web developers (you!) who understand the context-sensitive, real-world user needs.

marcoscaceres commented 1 year ago

This still seems like an OS-wide problem, not a web-page-level problem.

@anaestheticsapp, like, what do the rest of the apps in the OS do?

anaestheticsapp commented 1 year ago

Good question. I don't develop native apps, so I don't know if they have access to an ALS. But I wouldn't expect every app to behave this way, just the ones actively used in frequently changing light conditions, and I would expect users to opt in to this behaviour for each app.

willmorgan commented 1 year ago

Google Maps on iOS/CarPlay adapts the display based on the ambient light sensor. If I drive through a tunnel, for example, it knows to switch to dark mode.