Problem statement
The current version of the contribution tool enables partners without technical abilities to add documents in batches. This has been demonstrated to work well.
However, this user interface requires 20-30 minutes of initial training, with regular follow-up. This prevents opening up usage to a wider audience, as the cost of training is too high for the core team.
Opportunity assessment
Until now, this limitation was acceptable because the cost of reviewing and integrating inbound contributions was in any case too high, making review the main bottleneck.
Two points are now changing this dynamic:
The reinforcement of our partnership with ToS;DR, which will provide external reviewing capability from ToS;DR for the terms it manages.
The GitHub Social Impact Community Manager program, which will provide Open Terms Archive with open-source community managers for one semester, increasing reviewing capability and helping structure guidelines and processes.
We intend to build upon these opportunities to enable crowdsourcing, with a progressive opening starting with a collection on generative AI tools. More context and the associated call for funders are available on the Generative AI Domain page.
Existing material
User interviews
@clementbiron & @MattiSG interviewed users in August 2022, with Figma notes and paper notes:
And the following conclusions:
The tool is easy to use.
The hardest part is terms type selection.
The second hardest is entering the service name.
Partners tend to make their own internal agreements on terms types, and to choose the terms type before interacting with the tool.
Most features are not seen or used, even though relevant; discoverability is too low.
Insignificant parts are not understood to be relevant only within significant parts.
Most contributors open the document in its own tab, to read more easily or to use a translation tool.
Pre-filled links from the spreadsheet database lead to laziness, and thus to errors in name checks.
All interviewees used Google Chrome.
Goals
Enable users to add or update terms tracked in an Open Terms Archive collection without technical knowledge (no need to understand JSON or selectors) and without human training.
Reduce the reviewing load following contributions.
Increase the visibility of Open Terms Archive.
Non-goals
Enabling anyone to track any terms they want: the value derived from Open Terms Archive comes not so much from the quantity of tracked terms as from the quality of the datasets and the associated analysis capability.
Open questions
Is this interface made available only on specific occasions, or is it open all the time? ⇒ This mostly depends on the remaining load on maintainers.
Principles
Document limitations and workarounds of the UI rather than trying to handle every case.
The crowdsourcing interface is decentralised and thus provides an entry point to specific collections.
Publish as an NPM module.
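The last principle could translate into a package manifest along these lines. This is purely an illustrative sketch: the package name, entry point, and repository URL shown here are assumptions, not the actual published module.

```json
{
  "name": "@opentermsarchive/contribution-ui",
  "version": "0.1.0",
  "description": "Crowdsourcing interface for adding or updating terms tracked in an Open Terms Archive collection",
  "main": "index.js",
  "keywords": ["open-terms-archive", "crowdsourcing", "terms-of-service"]
}
```

Publishing as a standalone NPM module would let each collection embed its own entry point, consistent with the decentralisation principle above.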
Proposed user journey
The journey is separated into steps, enabling the UI to provide clear guidance and the user to focus on one point at a time.
1. Scope: describe what Open Terms Archive is, the goal of contributing, and the scope of the collection.
2. URL: the entry point is the source document URL. A check for pre-existence in the collection should be carried out.
3. Scripts execution: if the document does not load, offer the option to execute scripts.
4. Content selection: the terms extraction is carried out graphically. Optionally, inform about filters and how to create them.
5. Version review: the resulting version is presented to the user, who confirms the quality checklist.
6. Service name: the service name is typed in. Optionally, an automated check of the presence of the name in the version is carried out, and alternatives (in casing, for example) are offered.
7. Terms type: the type of the terms is selected.
8. Comments & authentication: the user can enter comments explaining limitations, and signs off their contribution (email, name, GitHub handle).
9. Thank you: a page with next steps and a link to the created pull request. Optionally, links to more Open Terms Archive material and to adding a new document.
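Two of the automated checks in the journey above (the pre-existence check in the URL step, and the name-in-version check with casing alternatives in the service name step) can be sketched in plain JavaScript. The helper names and the assumed declarations shape are illustrative only, not the actual contribution tool API:

```javascript
// Step "URL": check whether a source document URL is already tracked in the
// collection. Assumed (hypothetical) declarations shape:
// { serviceName: { termsType: { fetch: url } } }
function urlAlreadyTracked(declarations, url) {
  return Object.values(declarations).some((terms) =>
    Object.values(terms).some(({ fetch }) => fetch === url)
  );
}

// Step "Service name": check that the typed name appears verbatim in the
// version text; if not, look for differently-cased variants to offer as
// alternatives.
function suggestNameCasings(versionText, typedName) {
  if (versionText.includes(typedName)) {
    return { found: true, alternatives: [] };
  }

  // Escape regex metacharacters in the typed name, then search case-insensitively.
  const escaped = typedName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
  const matches = versionText.match(new RegExp(escaped, 'gi')) || [];

  return { found: false, alternatives: [...new Set(matches)] };
}
```

For example, `suggestNameCasings('Welcome to PayPal.', 'Paypal')` would report the name as not found and offer `PayPal` as a casing alternative.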
At each step, the user might encounter a problem and want to bail out. We aim to make this an acceptable, non-frustrating possibility rather than a failure. The proposed path is to open an issue in the UI repository, or otherwise collect feedback, so as to progressively prioritise the features that are truly worth implementing.
User feedback
When using the contribution tool, I played around with it and created a PR without intent (I wanted to see/learn how it goes about it; I did not expect/know that a bot account would immediately create a PR).
I had seen "suggested declarations via tool" in the ticket, consequently assuming the tool would work on, or at least alternatively work on, tickets rather than PRs (no idea now if that is a different tool).
When using the contribution tool, I had opened a link from a PayPal ticket with suggested declarations, and was surprised that such a tool existed. I wondered whether I would not have to use my browser's DOM inspector anyway to identify the selectors, with the tool then only serving as a UI for the JSON and a preview of the selected content. The preview seemed useful to me, the JSON UI not — but I understand it could be useful for less tech-savvy or less JSON-experienced people.
CONTRIBUTING does not seem to mention the contribution tool: neither its existence nor its intended use.