design-tokens / community-group

This is the official DTCG repository for the design tokens specification.
https://tr.designtokens.org

Clarify if manually typing and reading token files is a primary concern #149

Open romainmenke opened 2 years ago

romainmenke commented 2 years ago

From the introduction:

While many tools now offer APIs to access design tokens or the ability to export design tokens as a file, these are all tool-specific. The burden is therefore on design system teams to create and maintain their own, bespoke "glue" code or workflows. Furthermore, if teams want to migrate to different tools, they will need to update those integrations.

This specification aims to facilitate better interoperability between tools and thus lower the work design system teams need to do to integrate them by defining a standard file format for expressing design token data.

This I interpret to mean that the design token file format is intended to be written and read by software, not by people.

However, in issues there is often feedback that certain choices were made to facilitate writing token files manually.


I would personally prefer a format that is more difficult to read and write by hand but is simple to consume and produce in code.

A simpler and more straightforward format will lead to tools with fewer bugs. Fewer bugs in tools will lead to better interop between tools. This, in turn, will drive adoption of the format.


None of this is made impossible by a format that is easy to read and write by hand, but it is often a conflicting concern and focus.

c1rrus commented 2 years ago

That's a very good question, thanks for bringing this up. The format editors will be discussing this and we aim to document a set of guidelines for things like this, so everyone (including us) is aware of how we should prioritise things when there are conflicting concerns.

kevinmpowell commented 2 years ago

@romainmenke can you elaborate on how:

A simpler and more straight forward format

is:

more difficult to read and write by hand

Perhaps with an example? Those two characterizations strike me as opposites.

romainmenke commented 2 years ago
{
  "value": "10px"
}

vs.

{
  "value": {
    "number": 10,
    "unit": "px"
  }
}

The first requires fewer keystrokes, so it is objectively simpler to write by hand. It is also easier to read: you do not even have to consciously think about 10px, you immediately understand it as 10 with a unit of px.

The second is easier for software: no need to format the value as a string before writing to a file, and no need to parse it when reading from a file.
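
For illustration, here is roughly what a consuming tool ends up writing for each form (the parseDimension helper, its regex, and the unit list are just an illustration, not anything from the spec):

type Dimension = { number: number; unit: "px" | "rem" };

// String form: every tool re-implements (and must agree on) this parsing step.
function parseDimension(value: string): Dimension {
  const match = /^(-?\d*\.?\d+)(px|rem)$/.exec(value.trim());
  if (match === null) throw new Error(`Invalid dimension: ${value}`);
  return { number: Number(match[1]), unit: match[2] as Dimension["unit"] };
}

const fromString = parseDimension("10px");                    // extra parse step
const fromStructured: Dimension = { number: 10, unit: "px" }; // used as-is

Writing has the mirror-image cost: the structured form is just JSON, while the string form needs a matching formatting step in every tool that emits tokens.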

rdlopes commented 2 years ago

@romainmenke cannot agree more. Everyone thinks that they need to be able to easily write portable formats.

The problem is that, given the number of tokens present in a design system, I doubt anyone will write them without the help of a tool. So I'm totally in favor of focusing on portability: calling for ISO standards and global formats, able not only to cover the current dimensions and units, but also to stay consistent with evolving practices.

I was reading through the spec and, for instance, came upon the notion of duration, which apparently is defined as the amount of time that an animation will take to execute. Why only think about animations? Because that's our current use case. Why only think in terms of milliseconds? Because that's our current dimensionality.

And we want to be able to write it like a CSS file...

The question you raised here is really cardinal. Thanks for bringing it up.

kevinmpowell commented 2 years ago

Related to #121

kevinmpowell commented 1 year ago

After discussing further with the editors we believe favoring “human authoring” over “ease-of-parsing” is a principle we wish to standardize. We believe human authoring and flexibility within the spec will play a large role in community adoption of the specification. We know that many design system practitioners author and maintain token files directly, without the aid of a tool, and we want to ensure the syntax of our spec is approachable and understandable.

At this time our decision making will favor putting more responsibility on a parser or tool to support a readable, human authorable format, instead of putting more responsibility on a human to support an easier-to-parse format. Code should do the heavy lifting, token authors should not.

We also understand the specification has to lead to token files that are parsable and portable. If part of the specification is vague, it can introduce errors in portability, and we will take steps to update the spec when those areas are identified. Unfortunately there’s not a decision tree or a concrete mechanism we can use to apply this principle to each decision, since context informs us on a case-by-case basis. To help illustrate this principle in practice here’s an example where we feel this principle has informed our decision making process:

Group level $type inheritance. $type is a required property for all design tokens. To make that requirement easier for humans to author $type can be inherited from the group level. Specifying it on each individual token would be easier for a tool to parse, but we’d rather provide flexibility for human authors by letting tools do the heavy lifting.

Here are two contrasting code examples to illustrate, followed by a sketch of the parsing side. In this particular case the definition of “easier for a human to author” means “fewer keystrokes required to author and maintain.”

Easier to parse

{
  "spacing": {
    "1x": {
      "$value": "0.25rem",
      "$type": "dimension"
    },
    "2x": {
      "$value": "0.5rem",
      "$type": "dimension"
    },
    "4x": {
      "$value": "1rem",
      "$type": "dimension"
    },
    "8x": {
      "$value": "2rem",
      "$type": "dimension"
    },
    "16x": {
      "$value": "4rem",
      "$type": "dimension"
    }
  }
}

Easier for a human to author

{
  "spacing": {
    "$type": "dimension",
    "1x": {
      "$value": "0.25rem"
    },
    "2x": {
      "$value": "0.5rem"
    },
    "4x": {
      "$value": "1rem"
    },
    "8x": {
      "$value": "2rem"
    },
    "16x": {
      "$value": "4rem"
    }
  }
}
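
For what it's worth, the heavy lifting we are asking of tools here is small. A rough, non-normative sketch of resolving the inherited $type while walking a group (the resolveTypes helper and its output shape are hypothetical, not part of the spec):

interface ResolvedToken { path: string[]; value: unknown; type?: string }

// Walk a token tree and resolve each token's $type, falling back to the
// nearest ancestor group's $type when the token itself omits it.
function resolveTypes(node: any, path: string[] = [], inheritedType?: string): ResolvedToken[] {
  const groupType = typeof node.$type === "string" ? node.$type : inheritedType;
  const tokens: ResolvedToken[] = [];
  for (const [key, child] of Object.entries(node)) {
    if (key.startsWith("$") || typeof child !== "object" || child === null) continue;
    if ("$value" in child) {
      tokens.push({ path: [...path, key], value: (child as any).$value, type: (child as any).$type ?? groupType });
    } else {
      tokens.push(...resolveTypes(child, [...path, key], groupType));
    }
  }
  return tokens;
}

Run against the second example above, every spacing token comes back with type "dimension", exactly as if it had been written out on each token.
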
romainmenke commented 1 year ago

Is it possible to provide and explore more examples?

The specific case of type inheritance wasn't problematic at all. To add support for this in my current implementation I had to write 4 lines of code. It didn't really complicate anything and I agree that it is a useful feature for human editing.


More relevant examples:

Currently it is a bit arbitrary which data has a micro syntax and which data is structured.

https://tr.designtokens.org/format/#border

Borders are largely structured but contain micro syntaxes:

{
  "border": {
    "heavy": {
      "$type": "border",
      "$value": {
        "color": "#36363600",
        "width": "3px",
        "style": "solid"
      }
    }
  }
}

Could be:

{
  "border": {
    "heavy": {
      "$type": "border",
      "$value": "#36363600 3px solid"
    }
  }
}

Or:

{
  "border": {
    "heavy": {
      "$type": "border",
      "$value": {
        "color": {
          "colorSpace": "srgb",
          "channels": [0.212, 0.212, 0.212],
          "alpha": 0
        },
        "width": {
          "value": 3,
          "unit": "px"
        },
        "style": "solid"
      }
    }
  }
}

This arbitrary in-between feels like the result of borrowing from past experience and other contexts without really considering the best option for this format.
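
To make the trade-off concrete: with the shorthand string every tool has to split and re-parse the value (and all tools have to agree on the rules for doing so), while the fully structured form can be consumed field by field. A rough sketch; the parseBorderShorthand helper and its splitting rules are my own guess, which is exactly the problem:

interface Border { color: string; width: string; style: string }

// Micro syntax: each tool has to define (and agree on) the splitting rules.
// Is the order fixed? Are styles or colors containing spaces allowed?
function parseBorderShorthand(value: string): Border {
  const [color, width, style, ...rest] = value.trim().split(/\s+/);
  if (!color || !width || !style || rest.length > 0) {
    throw new Error(`Unrecognised border shorthand: ${value}`);
  }
  return { color, width, style };
}

// Structured form: no extra parsing layer, the fields are already separate.
const structured: Border = { color: "#36363600", width: "3px", style: "solid" };
const parsed = parseBorderShorthand("#36363600 3px solid");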


We also understand the specification has to lead to token files that are parsable and portable. If part of the specification is vague, it can introduce errors in portability, and we will take steps to update the spec when those areas are identified. Unfortunately there’s not a decision tree or a concrete mechanism we can use to apply this principle to each decision, since context informs us on a case-by-case basis. To help illustrate this principle in practice here’s an example where we feel this principle has informed our decision making process:

I think the minimum requirement is detailed parsing steps in the specification.

Other things that can help:

romainmenke commented 1 year ago

We know that many design system practitioners author and maintain token files directly, without the aid of a tool, and we want to ensure the syntax of our spec is approachable and understandable.

Are we sure this is not done solely because a specific tool doesn't support design tokens? I am not convinced that a sufficiently large group wants to keep creating and maintaining token files manually if enough quality tools exist.

Would be interesting to collect some data on this. Maybe a survey?

kaelig commented 1 year ago

Thank you for raising this, and I can say this dilemma came up frequently in our meetings!

If we take SVG as an example, I've been saying things along the lines of: some of us have drawn icons using SVG by hand, but none of us really want to, especially for more advanced cases. As for readability/editability: unless it's a very simple shape, it's impossible to read an SVG file and draw a mental picture of its visual output. That's why we use tools to create pictures and then export them to SVG (in a lossy way) for interoperability.

On the other hand, CSS is quite different in that it's much more optimized for editability and readability. For now, we typically want humans to be able to author and edit CSS (that said, with the rise of AI in the creative process, I don't know that people will manually write much CSS in the future). Some exceptions apply to gradients and perhaps shadows, where the syntax can get so complex that tools help a lot.

When Jina, Garth, Danny and I started this group, we discussed what our principles should be from a technical and community perspective. At the time we were convinced that we should lean more toward the CSS model than the SVG model. I think it's still relevant today to meet people where they are, but I expect things might evolve once we have much better tooling available to everyone.

romainmenke commented 1 year ago

we should lean more toward the CSS model than the SVG model.

Why was JSON picked in that case? It isn't well known for being easy to read and write by hand. Was this purely because other formats for design tokens use JSON?

For me personally it feels weird to start a new format and immediately make these compromises when the benefits only exist in the short term.


When Jina, Garth, Danny and I started this group, we discussed what our principles should be from a technical and community perspective. At the time we were convinced that we should lean more toward the CSS model than the SVG model. I think it's still relevant today to meet people where they are, but I expect things might evolve once we have much better tooling available to everyone.

We also understand the specification has to lead to token files that are parsable and portable. If part of the specification is vague, it can introduce errors in portability, and we will take steps to update the spec when those areas are identified. Unfortunately there’s not a decision tree or a concrete mechanism we can use to apply this principle to each decision, since context informs us on a case-by-case basis. To help illustrate this principle in practice here’s an example where we feel this principle has informed our decision making process:

It's not only about ambiguity in the specification as it exists today; more important are all the future extensions to the format.

A format that is explicit about everything and uses structured data without requiring any additional parsing of micro syntax will be easier to extend.

Inversely, it also needs to be explored which extension points become impossible because of choices made in favor of easy editing by hand.
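
As a small illustration: a structured value can grow a new optional field that old tools simply ignore, while a shorthand string forces every parser to learn a new grammar before it can even split the value safely. (The lineCap field below is hypothetical, purely to show the extension point.)

// Structured: consumers that only know color/width/style can skip unknown keys.
const extendedBorder = {
  color: { colorSpace: "srgb", channels: [0.212, 0.212, 0.212], alpha: 0 },
  width: { value: 3, unit: "px" },
  style: "solid",
  lineCap: "round", // hypothetical new optional field, ignored by older tools
};

// Micro syntax: appending the same information to "#36363600 3px solid"
// breaks every existing parser that assumed exactly three space-separated parts.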

Relevant to this : https://github.com/design-tokens/community-group/issues/162