romainmenke opened this issue 2 years ago
You've raised a really interesting point that we've not really addressed at all.
In one of our earliest drafts (before it was even public) we were considering some kind of import mechanism, where one token file could import another. But we quickly realised there's quite a lot of complexity to resolve with an approach like that. For example, if token file A imports token file B, and a tool reads token file A...
In the interest of keeping our version 1 spec simple, we decided to drop the idea for the time being. I think there was a hope/assumption that tools would solve this somehow.
But, as shown in your example, that does raise an interesting question when it comes to references. Is a token that references another which does not exist in the same file valid? If you take the view that, since the spec says nothing about working with multiple token files, each token file must be self-contained, then I'd say that should not be valid. But overriding some tokens is desirable for use cases like theming. And being able to split very large sets of tokens over several files is also desirable. So there probably should be an official way for a token in one file to reference a token in another.
My personal preference would be to revisit the import idea. That would put the onus on the spec to clearly define what the behaviour should be, which will benefit interoperability between tools. I think it would also help make explicit the order in which files are included.
To encourage more discussion, here's a rough proposal of how this could work...
file1.tokens.json
(a self-contained tokens file, where all references must point to tokens in the same file):
```json
{
  "token-a": {
    "$value": "#123456",
    "$type": "color"
  },
  "group-b": {
    "token-b-1": {
      "$value": "1.5rem",
      "$type": "dimension"
    }
  },
  "alias-token-c": {
    "$value": "{group-b.token-b-1}"
  }
}
```
file2.tokens.json
(another self-contained tokens file, where all references must point to tokens in the same file):
```json
{
  "token-a": {
    "$value": "#abcdef",
    "$type": "color"
  },
  "group-b": {
    "token-b-2": {
      "$value": "320ms",
      "$type": "duration"
    }
  },
  "alias-token-d": {
    "$value": "{group-b.token-b-2}"
  }
}
```
file3.tokens.json
(which includes file1 & file2. Tokens in file3 are therefore allowed to reference tokens in file1 and file2):
```json
{
  "$includes": [
    "./path/to/file2.tokens.json",
    "https://design-system.example.com/tokens/file1.tokens.json"
  ],
  "alias-token-c": {
    "$value": "{token-a}"
  }
}
```
The behaviour I would suggest when parsing file3 is:

- All files in the `$includes` array are loaded, parsed and then deep merged into the current file
- Tokens and groups in the current file take precedence over included ones, and included files are merged following the `$includes` array in reverse order (so earlier entries take precedence over later ones)

So, in this example: tokens in file3 override tokens in file2, which in turn override tokens in file1.
Therefore, the end result is equivalent to a single file like this:
```jsonc
{
  // token-a in file2 overrides token-a in file1, so
  // the value is #abcdef
  "token-a": {
    "$value": "#abcdef",
    "$type": "color"
  },
  // Since group-b exists in both file1 and file2, a
  // merged version of those is added here:
  "group-b": {
    // this token comes from file1
    "token-b-1": {
      "$value": "1.5rem",
      "$type": "dimension"
    },
    // this token comes from file2
    "token-b-2": {
      "$value": "320ms",
      "$type": "duration"
    }
  },
  // alias-token-c in file3 overrides alias-token-c in file1,
  // so it references token-a.
  // Therefore, its resolved value is #abcdef
  "alias-token-c": {
    "$value": "{token-a}"
  },
  // this token comes from file2
  "alias-token-d": {
    "$value": "{group-b.token-b-2}"
  }
}
```
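To make the merge rules concrete, here's a rough sketch in Python. This is purely illustrative and not part of any spec or tool; the `load` callback and the plain-dict token files are made up for the example:

```python
def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge overlay into base; overlay wins on conflicts."""
    result = dict(base)
    for key, value in overlay.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result


def resolve_includes(token_file: dict, load) -> dict:
    """Merge files listed in $includes in reverse order (so earlier
    entries take precedence over later ones), then merge the current
    file's own tokens on top. `load` is a hypothetical callback that
    maps an include path to an already-parsed token file."""
    merged: dict = {}
    for path in reversed(token_file.get("$includes", [])):
        merged = deep_merge(merged, resolve_includes(load(path), load))
    own = {k: v for k, v in token_file.items() if k != "$includes"}
    return deep_merge(merged, own)


# The file1/file2/file3 example from above, as plain dicts:
file1 = {
    "token-a": {"$value": "#123456", "$type": "color"},
    "group-b": {"token-b-1": {"$value": "1.5rem", "$type": "dimension"}},
    "alias-token-c": {"$value": "{group-b.token-b-1}"},
}
file2 = {
    "token-a": {"$value": "#abcdef", "$type": "color"},
    "group-b": {"token-b-2": {"$value": "320ms", "$type": "duration"}},
    "alias-token-d": {"$value": "{group-b.token-b-2}"},
}
file3 = {
    "$includes": ["file2.tokens.json", "file1.tokens.json"],
    "alias-token-c": {"$value": "{token-a}"},
}

files = {"file1.tokens.json": file1, "file2.tokens.json": file2}
merged = resolve_includes(file3, files.__getitem__)
```

Running this produces the merged result shown above: `token-a` comes from file2, `group-b` contains both nested tokens, and file3's `alias-token-c` wins over file1's.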
Thoughts?
`$include` is definitely interesting as it allows the creation of a single tokens collection that is composed of multiple sources that can originate from multiple tools.
I would however not include a network protocol as a way to include design token files. This has obvious security concerns and doesn't enable anything that can't be achieved another way :)
`$include` as a feature does not eliminate the need to define the parsing and resolving steps in this specification.
My example above was also just to illustrate the need for a full definition of parsing and resolving.
This is something that other specifications also define for their syntaxes and helps to eliminate subtle interop issues.
> When a tool needs the actual value of a token it MUST resolve the reference - i.e. lookup the token being referenced and fetch its value. In the above example, the "alias name" token's value would resolve to 1234 because it references the token whose path is {group name.token name} which has the value 1234.
>
> Tools SHOULD preserve references and therefore only resolve them whenever the actual value needs to be retrieved. For instance, in a design tool, changes to the value of a token being referenced by aliases SHOULD be reflected wherever those aliases are being used.
Found this recently after re-reading the current draft.
I might be wrong but I think that the intention here is to define value invalidation, not the order or timing of dereferencing.
If this is the intention I think it might be fine to do early de-referencing as long as all relevant values are invalidated and updated in case of a change.
Is this correct?
My concern with late de-referencing is that it is undefined how this works when there are multiple token files.
Related to #166
Another possible way to process multiple files:
At the moment there doesn't seem to be a specific order in the specification for certain operations during parsing.
This leads to ambiguity when multiple token files are combined.
Example definition:
This seems more powerful as it allows publishing a "system" of tokens that end users can manipulate with a few overrides.
or
This might work more intuitively.
Examples of ambiguity:
file a, loaded first
file b, loaded second
What is the value of `font.family.base`? `sans`? `Helvetica`?
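As an illustration only, here's a hypothetical pair of files that would raise exactly this question (all names and values are guesses inferred from the question, not taken from the original comment):

file a, loaded first:

```json
{
  "font": {
    "family": {
      "sans": { "$value": "sans", "$type": "fontFamily" },
      "base": { "$value": "{font.family.sans}" }
    }
  }
}
```

file b, loaded second:

```json
{
  "font": {
    "family": {
      "sans": { "$value": "Helvetica", "$type": "fontFamily" }
    }
  }
}
```

If a tool dereferences `{font.family.sans}` as soon as file a is parsed, `font.family.base` ends up as `sans`; if it dereferences only after both files have been merged, it ends up as `Helvetica`.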
file a, loaded first
file b, loaded second
Does this give an error because `font.family.sans-serif` is not yet defined? Or does it lazily resolve?
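Again purely as an illustration, a hypothetical pair of files for the forward-reference case (names and values are guesses, not from the original comment):

file a, loaded first:

```json
{
  "font": {
    "family": {
      "base": { "$value": "{font.family.sans-serif}" }
    }
  }
}
```

file b, loaded second:

```json
{
  "font": {
    "family": {
      "sans-serif": { "$value": "Helvetica", "$type": "fontFamily" }
    }
  }
}
```

A tool that resolves references eagerly, file by file, would have to reject file a because `{font.family.sans-serif}` points at nothing yet; a tool that resolves lazily after merging would produce `Helvetica`.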