pulibrary / figgy

Valkyrie-based digital repository backend.

Add a link to the harmful content statement from our viewer #5089

Closed · tpendragon closed this issue 2 years ago

tpendragon commented 2 years ago

We can implement this similarly to the Rights button.

This is blocked until there's a public page to link to. Presumably @kevinreiss or @escowles can tell us when that page is up.

Relevant recommendations from the harmful content working group:

The Working Group also recommends adding a “Harmful Content Statement” icon to all Figgy records that links to the full statement (mirroring how the “Rights and Permissions” (Takedown Policy) icon is currently accessible).

Sudden Priority Justification

The output of this group is in direct support of the library's mission, vision, and north star statements. We should get it in place as soon as possible to support that work and ensure it's respected.

kelea99 commented 2 years ago

Checked in with @kevinreiss and @escowles. We wrote to Barbara V. and Jen Hunter to get approval to place the harmful content statement page as a child of the About page at https://library.princeton.edu/about/harmfulcontent. We are awaiting approval.

kelea99 commented 2 years ago

Update: we got approval for the page and its placement. @kevinreiss also had a look and helped with the dropdown placement. The URL is: https://library.princeton.edu/statement-harmful-content

Taking the Block label off of this ticket.

hackartisan commented 2 years ago

The rights and permissions link has a sort of alert icon (which, honestly, I'm not sure is clearly associated with the link, given there's a row of icons before it, but it is there). Do we want an icon for this? If so, what icon? If not, how can we set it apart from the rights and permissions link?

Maybe we should remove the Rights and Permissions icon and put in some shading to make these look like buttons?

Here's the PR that added it: https://github.com/pulibrary/figgy/pull/4578

escowles commented 2 years ago

I do feel like there are a number of ways a user might want to report an object: harmful content, metadata (offensive language, factual error, etc.), rights/privacy, etc. I wonder whether a single alert-style button that brings up a brief page listing the options, and linking out to the harmful language/content pages, the takedown page, etc., would be better.

hackartisan commented 2 years ago

So just have the alert icon without any link text? That does seem like it would fit better with what we've done otherwise.

kelea99 commented 2 years ago

I kind of like the idea of one button that, when clicked, gives you options for what you want to alert folks to. It would be cleaner and I think would serve the same purpose. @sdellis, what are your thoughts?

sdellis commented 2 years ago

@kelea99, my first thought on this is that the Statement on Harmful Content page does not allow users to report harmful content. It just says we are working on a form that will arrive at some point in the future. My thought is that this ticket is blocked until we can get the appropriate forms in place for reporting.

As for the single button, our recent usability testing with Finding Aids found that it was unintuitive for folks to use a single button for both "suggesting a correction" and "reporting harmful language". If the label were generic enough (e.g., "report a problem"), then it might work to have a dropdown of options. I would advocate for an "Other" option for anything we haven't listed.

One additional issue is that we do not always know where a user is seeing the problem. How will we provide enough contextual information if we are sending the user from the viewer off to another site to report the problem? For example, the problem could be in a paragraph on page 325 of an ACLU manuscript in an unknown component, and unless the user can convey all the correct information manually, we may have trouble tracking it down. We also don't know where to forward a report without the contextual information that tells us who is responsible for taking action, so it would become someone's task to locate the problem and then route the correspondence accordingly.

Finally, people who are reporting harmful content and offensive language may want to do so anonymously, as they may fear repercussions. We should have this conversation, because opening up such a channel could expose those who manage the queue of complaints to harmful content: an anonymous form could easily be abused.

tpendragon commented 2 years ago

Our colleagues have worked hard on this statement and recommendation. What can we do to ensure we respect that work and implement the output of their time and research?

Full recommendation from that working group is here: https://lib-confluence.princeton.edu/display/COM/Statement+on+Harmful+Content

My reading of this recommendation is that part of the goal is to show that we're taking harmful content seriously as a profession and masking it behind extra buttons or pages will bury its importance. With that in mind, I'd like to repeat the options I've heard so far with some inline questions:

  1. Do this ticket as written, find a more appropriate icon - maybe a question mark?
    • What impact does this have on the "busy-ness" of the viewer?
  2. Do this ticket, but replace the text with just an icon
    • What impact does having no text have on folks actually clicking these buttons?
  3. Replace the existing "Rights and Permissions" link with "Policies" or something, have it link out to a page that links out to other pages.
    • Will this result in a visible enough showing of our stance on harmful content?
  4. Do nothing until there's a form to link out to - prepopulate the form.
    • How high is the chance that this will result in the working group's recommendations not being undertaken?

escowles commented 2 years ago

I don't know if this is feasible or practical, but one idea I had was to make the alert icon open up a menu, or otherwise expand to show a few sentences that briefly explain the options for reporting items or engaging with us, including links to the takedown form and other forms as they are ready to link to. Doing this in the viewer instead of on a separate page would address the concern that @sdellis raised above about losing the referrer URL (which I agree is essential in those forms to make the feedback actionable). In theory, I could imagine adding even more info to the URL (such as which page the viewer is displaying) to make this even more granular than it is now.
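To make that concrete, here is a minimal sketch of how a viewer could carry context into a report link. The form endpoint and the `referrer`/`manifest`/`canvas` parameter names are assumptions for illustration, not an existing Figgy or library.princeton.edu API:

```typescript
// Hypothetical sketch: build a report URL that carries viewer context.
// The endpoint and parameter names are placeholders, not a real API.
const REPORT_FORM_URL = "https://example.org/report-form"; // placeholder

interface ViewerContext {
  manifestUrl: string; // IIIF manifest loaded in the viewer
  canvasIndex: number; // which page/canvas is currently displayed
}

function buildReportLink(ctx: ViewerContext): string {
  const url = new URL(REPORT_FORM_URL);
  url.searchParams.set("referrer", window.location.href);
  url.searchParams.set("manifest", ctx.manifestUrl);
  url.searchParams.set("canvas", String(ctx.canvasIndex));
  return url.toString();
}

// Usage: refresh the alert link whenever the viewer changes pages,
// so a report always reflects what is on screen.
// reportLink.href = buildReportLink({ manifestUrl, canvasIndex });
```

The same approach extends to any other context the viewer has on hand (a component ID, for instance), as long as the receiving form knows how to read the parameters.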

kelea99 commented 2 years ago

If it helps, I was on this group and can give my two cents:

  1. Is the lack of a form a blocker for the dissemination of our statement? The recommendations are important to see as multiple, separate actions, rather than as all-or-nothing. Disseminating and making discoverable our dynamic statement on harmful content was a priority for the group, and something we felt we could do now rather than waiting. The full statement has already been edited to include the great work folks have done in PULFAlight, and will be updated again when similar forms are available in other platforms. I will say, when we shared our work with folks doing similar initiatives at other institutions that have implemented forms/feedback mechanisms, all parties confirmed that they had received zero feedback through them to date.
  2. That is really good information to have, @sdellis, with regard to the user testing of the buttons combined versus broken out, and what those buttons go to (i.e., how will we know what they are commenting on if it goes to a form?). I wonder if we should ask ourselves what makes sense to have as a link in the viewer versus a button or buttons within the application. What is redundant? What works better where? I hope this is making sense.

sdellis commented 2 years ago

I may have misunderstood. This ticket is not about implementing a way to report harmful content or description; it's about making people aware of our statements and policies. Just to be clear, the Statement on Harmful Content provides no information on who to contact or how. I think if we care about this issue (which we all do), we need to provide a good user experience for actually reporting harmful language. A bad experience or dead end is going to make it look like we threw this together to check a DEI box without actually investing in the goal.

Also, if it's placed on the viewer, where do we put the content warning for content that is not digitized? Or are we only warning users about digital content?

FWIW, the icons I've seen for reporting problems are typically "word bubbles with an exclamation mark" or a flag (for flagging problems).

hackartisan commented 2 years ago

I like the idea of a word bubble with exclamation mark icon that triggers a pulldown menu with each of these links.
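For illustration, a minimal sketch of that interaction, assuming placeholder labels, classes, and a placeholder takedown URL (this is not Figgy's actual viewer markup or routes):

```typescript
// Hypothetical sketch of a word-bubble alert icon toggling a menu of
// report/policy links. Labels, classes, and the takedown URL are
// placeholders, not Figgy's actual markup or routes.
const REPORT_LINKS = [
  { label: "Statement on Harmful Content",
    href: "https://library.princeton.edu/statement-harmful-content" },
  { label: "Rights and Permissions (Takedown Policy)",
    href: "https://example.org/takedown-policy" }, // placeholder
];

function buildReportMenu(): HTMLElement {
  const wrapper = document.createElement("div");
  wrapper.className = "report-menu";

  // The button stands in for the word-bubble-with-exclamation icon.
  const toggle = document.createElement("button");
  toggle.setAttribute("aria-haspopup", "true");
  toggle.setAttribute("aria-expanded", "false");
  toggle.textContent = "\u{1F4AC}!";
  wrapper.appendChild(toggle);

  // Build the hidden pulldown with one entry per link.
  const menu = document.createElement("ul");
  menu.hidden = true;
  for (const { label, href } of REPORT_LINKS) {
    const item = document.createElement("li");
    const link = document.createElement("a");
    link.href = href;
    link.textContent = label;
    item.appendChild(link);
    menu.appendChild(item);
  }
  wrapper.appendChild(menu);

  // Toggle the pulldown when the icon is clicked.
  toggle.addEventListener("click", () => {
    menu.hidden = !menu.hidden;
    toggle.setAttribute("aria-expanded", String(!menu.hidden));
  });

  return wrapper;
}
```

A native `<details>`/`<summary>` element would get much of this for free; the point is just that a single icon can expose all of the links without adding a row of separate buttons.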