w3ctag / design-reviews

W3C specs and API reviews
Creative Commons Zero v1.0 Universal

Personalization Semantics Explainer and Module 1 #476

Closed lseeman closed 3 years ago

lseeman commented 4 years ago

Hello TAG!

We are requesting a TAG review of the Personalization Semantics Explainer and Module 1 - Adaptable Content. It contains supporting syntax that provides extra information, help, and context. We aim to make web content adaptable for users who function more effectively when content is presented to them in alternative modalities, including people with cognitive and learning disabilities.

Our wiki and getting-started page.

Security and Privacy self-review: PING review

GitHub repo: Repo and issues

Primary contacts (and their relationship to the specification):

- Charles LaPierre, charlesl@benetech.org (Personalization Taskforce co-facilitator)
- Lisa Seeman, lisa.seeman@zoho.com (Personalization Taskforce co-facilitator)
- Janina Sajka, janina@rednote.net (APA chair)

Organization(s)/project(s) driving the specification: Accessible Platform Architectures (APA) Working Group - Personalization Taskforce

Key pieces of existing multi-stakeholder review or discussion of this specification: Taskforce discussions and stakeholder discussion (such as wiki vocabulary-in-content discussion and wiki symbol discussion.)

External status/issue trackers for this specification (publicly visible, e.g. Chrome Status): Beyond the discussion in the wiki there is also Easyreading and a Firefox alpha from our demo at TPAC 2019 in Japan.

Special considerations: We would like advice on migrating from the "data-" prefix to a more permanent syntax for CR. Should we suggest a prefix or just plain attributes? Note that as more modules are produced, we could have a sizable number of attributes.
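To make the options concrete, here is a sketch of the three syntax shapes under discussion. The attribute name `purpose` is taken from the Module 1 draft; the `adapt-` prefix is purely a hypothetical placeholder for whatever permanent prefix might be chosen.

```html
<!-- Prototype syntax using the data- prefix (valid HTML today) -->
<a href="/home" data-purpose="home">Home</a>

<!-- Hypothetical permanent syntax: a dedicated prefix scales to many
     modules without clashing with existing or future HTML attributes -->
<a href="/home" adapt-purpose="home">Home</a>

<!-- A plain attribute is shorter, but risks name collisions as the
     number of attributes across modules grows -->
<a href="/home" purpose="home">Home</a>
```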

Further details:

We'd prefer the TAG provide feedback as:

🐛 open issues in our GitHub repo for each point of feedback

alice commented 4 years ago

Hi Lisa et al,

Thank you for bringing this work to us.

@torgo, @atanassov and I have had a chance to go through the explainer in our face to face meeting in Wellington. Unfortunately, we were unable to understand from the explainer what the proposed technology/API is.

Would it be possible to have a document more closely based on our Explainer template?

Specifically:

It seems like perhaps this needs to be three smaller explainers, one for each module, possibly with one meta-explainer framing the larger picture?

We were left unclear on what the actual proposed technology was - the Explainer or Explainers should clearly explain the proposal, without requiring the reader to click through to supplementary material. Supplementary material may go into extra detail or more technical depth.

We are excited for this work, and would like to help review the proposal, but we were left scratching our heads a bit here - we hope the above feedback helps explain what would be more useful for us.

alice commented 4 years ago

We got a draft explainer restructure via email, so we'll be looking at that at our virtual face-to-face.

ruoxiran commented 4 years ago

> We got a draft explainer restructure via email, so we'll be looking at that at our virtual face-to-face.

Hi Alice, thank you for bringing this up. The draft explainer you received is ready for review. With regard to the Content module (AKA Module 1) document mentioned in the Explainer: we want to publish it as a CR this time, but it is not fully ready for review yet, and we may have some updates to it in the next few days. I will leave a comment here when the Content document is ready. Thank you.

alice commented 3 years ago

Our apologies for taking so long to look at this, again.

@atanassov and I just looked at this at our virtual face to face. Here are our thoughts:

snidersd commented 3 years ago

Edited to remove user needs not applicable to Module 1.

Thank you very much for your detailed review. We have restructured the explainer according to your comments (see the Explainer for Personalization Semantics: https://github.com/w3c/personalization-semantics/wiki/Explainer-for-Personalization-Semantics).

Regarding the tools that will be used: we have a JavaScript proof-of-concept implementation, which can be seen in this video prepared for TPAC: https://www.w3.org/2020/10/TPAC/apa-personalization.html

We also expect this technology will be important for education publishers who use EPUB. With an HTML attribute, this information can be embedded within EPUB documents, where reader software or assistive technologies can use it to assist with learning. We also expect that custom or general browser extensions will be developed by third parties to assist the various disability groups, including updates to AAC software tools to take advantage of the additional semantic information.

The addition of personalization information can also enhance machine learning, for example by providing alternatives to idioms ("it's raining cats and dogs") or other ambiguous terms.

Tools would parse the HTML for the personalization attributes and make the necessary substitutions in the DOM or assistive tool, based on the identified user group or individualized need.
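As a rough illustration of the last point, a tool's first step is simply collecting the annotations from the markup. The sketch below scans an HTML string for the prototype `data-purpose` attribute with a naive regex; this is illustrative only (a real extension or reader would walk the DOM, and the attribute name follows the prototype `data-` syntax).

```javascript
// Collect prototype personalization annotations from markup.
// Illustrative only: real tooling would use the DOM, not a regex.
function extractPurposes(html) {
  const re = /data-purpose="([^"]+)"/g;
  return [...html.matchAll(re)].map(match => match[1]);
}

const sample = '<a href="/" data-purpose="home">Home</a>' +
               '<input data-purpose="tel">';
console.log(extractPurposes(sample)); // -> ["home", "tel"]
```

A consumer would then map each collected value to the user's chosen presentation (a symbol, a simplified label, and so on) before substituting it into the page.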

This module can also support a number of needs exposed by COGA's Content Usable (https://www.w3.org/TR/coga-usable/).

To be honest we did not thoroughly investigate microformats. We are wary of relying on a specification that does not fall under the auspices of the W3C. While personalization may be a reasonable use case for this technology, it would slow down the development of the Personalization specification while working to advance microformats to meet our additional, diverse needs. We would also be very interested in hearing @tantek's input on this.

The I18N group raised the same question about the similarities between autocomplete and purpose. While autocomplete can only be used on form fields, the purpose values can be used on other element types. Where there is overlap with the autocomplete values, we have included the definition from the WCAG 2.1 Input Purposes for User Interface Components reference: https://www.w3.org/TR/WCAG21/#input-purposes. We can update the purpose values section of the content module to specify that.
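The distinction can be shown in two lines of markup. The `data-purpose` attribute name follows the prototype `data-` syntax from the Module 1 draft; the token `tel` is one of the WCAG 2.1 input-purpose values.

```html
<!-- autocomplete is limited to form controls -->
<input id="phone" autocomplete="tel">

<!-- purpose can annotate non-form elements too; where values overlap,
     they reuse the WCAG 2.1 input-purpose tokens -->
<label for="phone" data-purpose="tel">Phone number</label>
```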

With regard to your question about distractions: we do understand that advertising constitutes a critical revenue stream for many content providers. However, not all distractions are third-party advertisements, and some may be within the site's ability to allow the user agent to remove. Further, the purpose of allowing users to hide (or systematically show and sequentially review) on-page advertising is simply to give these users the control other users already have over such content. A user without a disability can ignore the ad and complete the task. A user who cannot ignore it, or conveniently TAB past it, is forced to grapple with a stumbling block that prevents them from completing the task.

We believe users will choose to look at advertising because it's informative; it's an important mechanism for learning about options in life. By allowing users to control when and how they see ads, we let them avoid becoming frustrated by processes that prevent task completion, and let them see advertising as potentially useful information rather than a source of frustration. Surely we don't think a frustrated user will follow up on the ad that caused the frustration?

With regard to the crossover with ARIA: aria-live covers distractions such as a ticking clock for screen-reader users, making the interruptions less invasive, but it does not address the COGA use cases where the constant change distracts a person with ADHD (for example) who does not use a screen reader. Whether this content is essential, or whether it can be removed, is not addressed in ARIA.

Please let us know how we can further assist.

Thanking you in advance

The personalization task force

alice commented 3 years ago

Thank you for following up!

I'm still mulling over much of your response, but I wanted to make a clarification to my comments on advertising and the distraction attribute:

I completely agree that there is a real user need around removing distractions, and that advertising which doesn't provide useful information to a user in their context might have little value to the advertiser, and certainly has no value to the user.

However, if an advertiser is creating a distracting ad, and a site is prepared to show it, it's because the distracting design of the ad serves some purpose which in turn serves the purpose of both the advertiser and the site. In that case, the user isn't so much choosing to look at it because it's informative, but because the design makes it difficult or impossible not to.

I can't imagine why the advertiser or the site would be prepared to make it easier for all users to evade such an ad by annotating it, without some way to make up the implied loss of revenue to the site, or some stronger incentive to outweigh the revenue concern.

In other words, unfortunately I think the distraction attribute is at risk of not being used on distracting ads in practice.

Also, I mentioned the prefers-reduced-motion media query, rather than ARIA - did you have a chance to evaluate the overlap in user needs there?

snidersd commented 3 years ago

Dear Alice:

Thank you for raising Issue #476 asking about duplication between Distraction in our Module 1 specification draft and CSS' Media Queries 5 (MQ5).

We have spent a good deal of time considering this in our past several teleconferences and have come to the following conclusions:

While there is some overlap, distraction is much more comprehensive than MQ5. However, our examples have inadvertently focused on the overlap rather than the broader scope supported in our specification. We are creating new examples to remedy this.

We have decided to let the overlap stand for now rather than try to split out the items covered in MQ5. As this is all new technology, we feel this is more appropriate at this stage; let's see how developers and content authors apply both our spec and MQ5.
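The overlap and the difference in scope can be sketched side by side. The `@media (prefers-reduced-motion: reduce)` query is the real MQ5 feature; the `data-distraction` attribute and its value follow the Module 1 prototype `data-` syntax.

```html
<!-- MQ5 covers the motion subset of "distraction": the author stops
     the animation when the user prefers reduced motion -->
<style>
  @media (prefers-reduced-motion: reduce) {
    .ticker { animation: none; }
  }
</style>

<!-- The distraction attribute aims wider: a user agent or extension
     could hide, defer, or sequentially review the element entirely,
     not merely stop its animation -->
<aside class="ticker" data-distraction="sensory">Breaking news…</aside>
```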

Thanks again for bringing this overlap to our attention. We had not previously addressed it in our conversations, and we do need to be aware of the overlap and prepared to respond substantively when questions like yours come up.

Best,

Lisa and Sharon Personalization TF Facilitators

alice commented 3 years ago

Thank you for the update.

At this point, we're not sure how much help we can be here.

We would like to reiterate that the work you have collectively done identifying the problems to solve is extremely good, and that the problems are real and should be solved.

However, we think we need to say more clearly that the shape of the solution as an ARIA-like vocabulary doesn't seem like a good fit.

ARIA had the characteristics that it was based on an existing vocabulary (MSAA) and was intended to be consumed by a single class of technologies (screen readers, later expanded to other types of assistive technologies).

In contrast, this vocabulary seems like it would have a wide range of (programmatic) consumers, and it's not clear that those consumers have been brought in as stakeholders to this design process.

We would like to more strongly suggest that instead of trying to solve all of these problems with one solution, that you could take the excellent work already done identifying problems, and look at solving them individually, working with the relevant stakeholders. The relevant stakeholders may include users, authors, publishers, assistive technology creators, and potentially others as well.

This aligns with our general design principles: we advise all API authors to prefer simple solutions.

Some of these problems may be "stickier" than others; for example, trying to come up with a system to allow users to avoid unwelcome distraction from revenue-generating ads is going to involve buy-in from the publishers who need that revenue in order to keep operating as a business.

However, other problems, like annotating words with the relevant Bliss or other symbol, seem very well scoped in terms of user need, authoring responsibility, and assistive technology implementation requirements.

Not all of these solutions may even need to go through a standardisation process immediately, but may be better suited to incubation to allow prototyping and rapid iteration as a collaborative process with the various stakeholders, before settling on a standard.

We know this is not the feedback you were likely hoping for, but we would like to emphasise how rare it is that we get a proposal with the level of work put into user needs as we have seen here, and that this is one of the most critical parts of the design process.

We would welcome the opportunity to continue working with you on better scoped proposals to address subsets of the user needs you have identified, including very early stage ideas in incubation.

atanassov commented 3 years ago

One more follow-up to your comment about overlapping with MQ5. As you rightfully pointed out, there are already a number of media features intended to reflect user preferences. I would encourage you to open a new issue with the CSSWG in their repo.

Explaining the user problem, the proposal, and the overlap with MQ5 will get their attention and help resolve the set of overlapping features between Personalization and MQ5.

I would be happy to guide you through the process if you @-mention me in the issues.

atanassov commented 3 years ago

After reviewing the overall issue during our "Kronos" VF2F, we resolved to close it. Thank you for working with us.

plehegar commented 1 year ago

See APA/TAG @TPAC2022 : Unblocking the WAI-Adapt Content Module 1.0 CR

ruoxiran commented 1 year ago

Follow up after TPAC 2022 at: https://lists.w3.org/Archives/Group/group-apa-chairs/2022Nov/0066.html