Hi folks,

I'm working on a large system of themes (collections of design-token: value pairs) which represent brands, modes, and experiments. In many cases, it would be beneficial to define a small set of tokens that informs the greater set. A visual example of this expectation is the Microsoft Fluent UI Theme Designer.
In that UI, the theme author selects a few colors which then inform the dozens of tokens applied across the UI. To do this, the UI transforms the given colors in different ways. The benefit is that the curation exercise becomes more accessible: it is easier for someone without much design-token experience to create an appropriate theme from a few input values. However, the computations are embedded in the UI itself, and the resulting values are written as-is into a tokens JSON file.
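To make the kind of derivation described above concrete, here is a minimal sketch (not the Theme Designer's actual code; all names are illustrative) of producing several tokens from one author-chosen base color by mixing its channels toward white or black:

```javascript
// Hypothetical helper: mix a hex color toward a target channel value (255 = white, 0 = black).
function mix(hex, target, amount) {
  const channels = hex.match(/\w\w/g).map((c) => parseInt(c, 16));
  const mixed = channels.map((c) =>
    Math.round(c + (target - c) * amount).toString(16).padStart(2, '0')
  );
  return `#${mixed.join('')}`;
}

// Derive a small set of tokens from a single input value.
function deriveTheme(baseHex) {
  return {
    'color-primary': baseHex,
    'color-primary-hover': mix(baseHex, 255, 0.2),  // 20% toward white
    'color-primary-active': mix(baseHex, 0, 0.2),   // 20% toward black
    'color-primary-subtle': mix(baseHex, 255, 0.8), // 80% toward white
  };
}
```

The point is that one input fans out into many values, but the fan-out logic lives in code the tokens file never sees.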
The problem I found with this approach is that it requires the spec to define what `alpha` means. Any new special keys would also need to be introduced, reviewed, and agreed upon before we'd potentially see platforms support them.
The Tokens Studio folks are solving this with generators and resolvers. I believe this is a valid solution, with a few drawbacks. At the moment, it is only available within the Tokens Studio ecosystem. And while it's possible that this direction could be included in the spec, that brings me to the second point: the resulting file which defines these relationships is no longer human readable. It must be managed within a UI and a larger ecosystem.[^1]
One of the principles I've held is that the tokens file should be human editable, even if a bit intimidating. So I believe a solution should attempt to be human readable while defining how a value might be transformed.

Enter token-operations.
I am not recommending that this project become the de facto library for shareable operations. It is a very succinct example in an otherwise plugin-like ecosystem.
I believe this direction could solve for a few other issues that I've seen in the community:

- Pre-defined composite type for referring color modifications (e.g. rgba)
- A token value with multiple aliases
- Conditional token values
- Remove REM/EM from specification?

And with a little bit of thought, it might even solve this if there's interest:

- Can alias tokens reference tokens in another file?
```jsonc
{
  "some-token": {
    "$operations": [
      // The number 1 below represents true; previous operations can determine the binary to pass in
      ["Import.operations", "token-operations/lib/binary-if-string", 1, "yellow", "purple"]
    ]
  }
}
```
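To show the idea behind evaluating such a list, here is a hedged sketch of how an engine *might* run an `$operations` array. This is not the token-operations implementation: the real project imports operations from module paths (`Import.operations`), while this sketch substitutes a local registry, and only the `binary-if-string` behavior is inferred from the example above.

```javascript
// Illustrative registry of named operations (the real project loads these from modules).
const registry = {
  // Returns one of two strings depending on a 0/1 input.
  'binary-if-string': (binary, ifTrue, ifFalse) => (binary === 1 ? ifTrue : ifFalse),
};

// Evaluate each operation in order; chaining one operation's output into the
// next (as the comment in the example hints at) is omitted here for brevity.
function applyOperations(token) {
  let value;
  for (const [name, ...args] of token.$operations) {
    value = registry[name](...args);
  }
  return value;
}

// A 1 selects the first string, mirroring the example's comment.
applyOperations({
  $operations: [['binary-if-string', 1, 'yellow', 'purple']],
}); // → 'yellow'
```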
```jsonc
// Does not work in the token-operations project yet, but could!
{
  "some-token": {
    "$operations": [
      ["Import.token", "./some/token/file.json", "{colors.base.primary}"]
    ]
  }
}
```
The project is available as an NPM install. I'd love to know what the community thinks of the idea. Please read the README before commenting; there's a lot more detail there.
[^1]: Visit resolver.dev.tokens.studio and click the "Save" button at the top right corner and review the resulting file.