What would you like to see added to BCD?
This is a proposal for a new, internal-only tagging field to help tools and contributors make sense of the data while reviewing and editing it.
I'm suggesting an `annotations` field that could be used in `__compat` and `status` objects and simple support statements. It would accept an array of (enumerated) strings. It would not be a free-form notes field. Instead, we'd have a file (or a lint script) declare which annotations are permitted and their meaning. At build time, these annotations would be stripped from the data.

Here are some use-cases:
Provenance
We could use this field to mark the origin of a piece of data. For example, when the collector runs and creates a new support statement, it could leave a breadcrumb that it produced that value (e.g., that it was not manually introduced).
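For illustration, a collector-produced support statement might carry a breadcrumb like this (the annotation name `collector-generated` and the version number are hypothetical):

```json
{
  "chrome": {
    "version_added": "100",
    "annotations": ["collector-generated"]
  }
}
```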
Meanwhile, a human contributor could mark a new data point (such as a behavioral feature) as being hand-written:
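A sketch of what that could look like (the `hand-written` annotation name is a placeholder, not a settled vocabulary):

```json
{
  "firefox": {
    "version_added": "115",
    "annotations": ["hand-written"]
  }
}
```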
In the course of other investigations, we might note when something was originally migrated from the wiki but was later confirmed:
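For example, a `__compat` object could carry both provenance tags at once (again, the annotation names here are illustrative):

```json
{
  "__compat": {
    "annotations": ["migrated-from-wiki", "confirmed"],
    "support": {}
  }
}
```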
FIXME comments
Suppose we discovered some issue with data, but had not been able to resolve it yet. We could mark that data, setting ourselves up for a burndown of data that needs investigation. To ensure that the flags are useful, we could require an issue URL:
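One possible shape, assuming the issue URL is folded into the annotation string itself (the `fixme:` prefix and the placeholder issue URL are hypothetical, not part of this proposal's settled format):

```json
{
  "safari": {
    "version_added": "16",
    "annotations": ["fixme: https://github.com/mdn/browser-compat-data/issues/NNNN"]
  }
}
```

A lint script could then reject any `fixme` annotation that lacks a valid issue URL, which keeps the burndown list actionable.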
How impactful do you think this enhancement will be?
I think it would help us mark up data to make it easier for our future selves to understand what we've done, without doing heavier and riskier alternatives, such as switching formats (e.g., JSONC, JSON5, YAML) and allowing traditional code comments.
Do you have anything more you want to share?
No response