design-tokens / community-group

This is the official DTCG repository for the design tokens specification.
https://tr.designtokens.org

The Designer’s Workflow #43

Closed · jina closed this 1 year ago

jina commented 3 years ago

Hi folks! There is one thing we’d love for you to participate in to get your perspective (if you have the time this week):

As a designer, using the design tool of your choice, what would your ideal workflow be to define/deliver tokens? What do you wish your tool had in place to do this? What would make your life easier?

Think about color and type as the main use cases.

The deliverable is open to what works for you and how much time you can give — doesn’t have to be a formal slide presentation (unless you want it to be). Can just be wireframe(s), or even just draw on a napkin and take a photo. Whatever you want to do to share your ideas.

Please share here in the comments by end of day Thursday so the Format team can review and discuss in the next Friday format meeting.

lukasoppermann commented 3 years ago

Hey @jina, here are my current thoughts:

Colors

[image: Design Tokens Colors]

Typography

Assumption: "Text Styles" are also considered tokens (otherwise they would be styles but not much would change)

[image: Type]

Export

dashouse commented 3 years ago

I might be the odd one out as a designer, but I don't believe the actual, real-world token definition should live within design software: there is not enough visibility of the impact a token change will have in production. As tokens don't usually change very often, design software should be a proving ground for a change, but not the actual trigger for it.

Therefore I would prefer design software to be more accepting of coded token packages. For example, we can provide colour tokens to Figma from the same source of truth as development through a custom plugin; however, tokens like "space" seem very hard to translate, as the software has a different concept of space.

As well as coded tokens mapping to relevant styles in Figma, it would be interesting if you were able to apply tokens to properties of objects. For example, I would like to apply space-xl to the height attribute of a rectangle to visually represent the space token.

The only other token-specific typography issue I've struggled with is the definition of line-height. As we would want to use relative, unitless line-heights in production, font-size and line-height only work as a pair: line-height: 1.5 is only 24px when paired with 16px text, so a token with the output 1.5 should be a named pair with the font-size. For example, font-size-m and line-height-m go together.

With this in mind, I would also like design software to be more accepting of line-height in this format so we don't have to create tokens for both Figma and production separately. Currently Figma will accept px and % for line-height, but 1.5 is recognised as 1.5px, not 150%.
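
One way to model that pairing, as a hedged TypeScript sketch (the names like type-m are illustrative, not from any spec):

```ts
// A sketch (not any official token format) of pairing font-size with a
// unitless line-height so the two can only be consumed together.
type TypePair = {
  fontSize: string;   // absolute value, e.g. for design tools
  lineHeight: number; // unitless, as used in production CSS
};

const typePairs: Record<string, TypePair> = {
  "type-m": { fontSize: "16px", lineHeight: 1.5 },
  "type-l": { fontSize: "20px", lineHeight: 1.6 },
};

// A tool that needs a resolved pixel value can derive it from the pair:
function resolvedLineHeightPx(pair: TypePair): number {
  return parseFloat(pair.fontSize) * pair.lineHeight;
}

console.log(resolvedLineHeightPx(typePairs["type-m"])); // 24
```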

lukasoppermann commented 3 years ago

I might be the odd one out as a designer, but I don't believe the actual, real-world token definition should live within design software: there is not enough visibility of the impact a token change will have in production. As tokens don't usually change very often, design software should be a proving ground for a change, but not the actual trigger for it.

Therefore I would prefer design software to be more accepting of coded token packages. For example, we can provide colour tokens to Figma from the same source of truth as development through a custom plugin; however, tokens like "space" seem very hard to translate, as the software has a different concept of space.

While I don't necessarily agree with the consequence of not allowing design tokens to be defined in design software, I share your concern. I would rather solve this with token-specific library files that only certain people who "own" the design system can update.

The other two points I totally agree with.

nesquarx commented 3 years ago

I know color and type are the main use cases; I'll just be a bit more generic in my description. I'm repeating some points from previous comments.

Here's my ideal design-tool token flow (something I've been trying to create for 30+ products across multiple families):

sebfriedrich commented 3 years ago

Hey @jina,

I've had a brief look at several issues in this project, and I appreciate that your request targets the design process itself; the discussion here quickly gets very technical in places.

As a designer, using the design tool of your choice, what would your ideal workflow be to define/deliver tokens?

tool chain along the design process

It is about tools (plural), not THE tool :-)

Figma in general is a good representation of what I expect these days from an interaction design tool. The controls offered are good enough to set up a representation of the design system's tokens with regard to what a designer produces in that tool (mainly prototypes). But as @dashouse already points out to some extent, Figma (or any competing tool) is not the source of truth here; it is one of the deployment targets of a design system, just as some (JS) code is. Token definition should also not be limited by a certain tool; we should accept the fact that usage of a token is projected into an application, but that projection is one-way, and potential loss in the conversion has to be acknowledged.

the role of the designer

It seems natural to expect a designer to curate colors and then share them for implementation, as a designer mostly owns this domain. But that is a typical designer's perspective on the world, and it doesn't allow developers to become stakeholders in the process. To ensure developers can apply color correctly, token names and structure must align with a shared vocabulary. Any suggestion that promotes writing up some JS files in a repo, or generating a file from a design tool and passing it to a repo, is biased thinking which I disagree with. These approaches create silos controlled by one role or the other.

color picking

I would actually wish for a tool a little more in the fashion of common color pickers on the web, like Adobe Kuler, Colormind, or Color Review. The perfect picker doesn't exist yet (in my opinion), as all these pickers leave you alone with the user flow you should follow to leverage them. At the moment I personally use a combination of Excel, to check that I meet expected levels of saturation, lightness (compensated for human perception), reading contrast, different eye-cone stimulus, gamut coverage, etc., while best-guessing with self-made PSD templates and trying to tweak one color dimension at a time. This means I control what I can visually compose, with some numeric indicators helping me judge the output reliably. It isn't perfect or quick to do, but it allows me to measure the outcome, keep personal preference out, and realize when it's done. Excel basically runs some kind of test cases; what I'm trying to achieve is, in essence, test-driven color design.

typography

The same test-driven approach applies to tokens representing typographic style. If you want to judge and optimize a line-height that best supports reading long copy, you actually need a test case with exactly such a text, in the right font and the right size, with the right line length, etc. Normally I use a design tool like Figma and build out samples of the use cases, which turns the generic design tool into a kind of token design environment, but it isn't out-of-the-box, and it is work-intensive to replicate as-is when starting from something given as legacy. Eventually I have a set of tokens to use for text. If I can categorize what the text is about, I can choose the right token. I am not allowed to pick the font-size here and the text-color from there; I pick a tuple instead, which offers a valid combination.

the technical side / exchange formats

For sure, we need some human and machine readable format that is well-documented and stays reliable even when extended over time. So far so good.

I hesitate to just make a call for some JSON notation simply because much of the token usage will happen in the space of web development, which effectively means ECMAScript to the widest extent. Design systems can also serve regular print, interior, or physical product design. First of all, a design system is just the logical extension of what was previously known as corporate identity or other design guidelines, with the difference that the rules are more and more shaped in a mathematical or logical sense, and computers assist in applying these rule sets, where before the engine was human only.

other pitfalls

I just want to briefly mention a few things to consider:

lukasoppermann commented 3 years ago

I think it would be good to keep in mind that the needs will change drastically depending on the project and its state.

In one scenario, a designer works in a big organisation, or otherwise on a project where the design tokens are defined by a different team. In this case, having your tool merely consume tokens and present them as options would be fine.

In another scenario, a designer creates a new product or for other reasons works on the design system itself. In this case, being able to define tokens from within the tools where you figure out their values is extremely helpful and will make the adoption of this methodology much more likely. From my personal experience it is rare that tokens are something you define only once. Especially in the beginning, adjusting colors is common; e.g. when adding a secondary color you may need to nudge your primary one in a specific direction, or add a tint to your neutrals.

@sebfriedrich concerning your second to last point:

When tokens can define other tokens (yes, in theory I wish for that too @lukasoppermann), we should be able to do more than just reference them. Blending them, or swapping between options by some threshold, and so on, are Methods taking tokens as input. Probably they are expected to work agnostically of the input and output format of a color; to run such Methods, managing color spaces again comes into play. I see assigning a new pointer to some otherwise unchanged token as the simplest form of such a Method.

Can you please elaborate? I don't quite get what the "pitfall" is here? 😄

sebfriedrich commented 3 years ago

Can you please elaborate?

Sure :-) You initially mentioned:

just like colors tokens should be used to create new tokens (e.g. button / primary / label / fontsize = typescale / xs)

This is the additional pointer I mean as the simplest scenario: make the fontsize the same value as typescale / xs. Slightly extending this case might be simple mathematical equations, e.g. fontsize = 2 * typescale / xs, or something like fontsize = min(typescale / xs, 0.5rem) … i.e. use the variable value unless it is larger than 0.5rem.

Takeaway: To evaluate the second example, one already needs to know what rem is in the actual environment, except in the case where, by coincidence, typescale / xs is also defined as a rem measure. In that case both measures rely on the same reference system and the decision is trivial.
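
To make the takeaway concrete, here is a hypothetical resolver sketch (the 12px value and the 16px root are my own assumptions): aliases and min() only become decidable once every operand is in one reference system, so the sketch pre-resolves 0.5rem to 8px.

```ts
// Hypothetical resolver for alias and min() tokens. All values here are
// assumed to share one reference system (px); resolving `0.5rem` first
// requires knowing what 1rem is in the target environment (16px assumed).
type Token =
  | { kind: "value"; px: number }
  | { kind: "alias"; ref: string }
  | { kind: "min"; of: [Token, Token] };

const tokens: Record<string, Token> = {
  "typescale/xs": { kind: "value", px: 12 },
  // fontsize = min(typescale/xs, 0.5rem), with 0.5rem pre-resolved to 8px
  "button/primary/label/fontsize": {
    kind: "min",
    of: [{ kind: "alias", ref: "typescale/xs" }, { kind: "value", px: 8 }],
  },
};

function resolve(t: Token): number {
  switch (t.kind) {
    case "value":
      return t.px;
    case "alias":
      return resolve(tokens[t.ref]);
    case "min":
      return Math.min(resolve(t.of[0]), resolve(t.of[1]));
  }
}

console.log(resolve(tokens["button/primary/label/fontsize"])); // 8
```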

Color tokens

For color tokens, such a reference system which gives context to the color coordinates is the color space.

Let's first explore the simplest scenario, which is totally common these days in web frontends, where everything is based on the sRGB color space (like an unwritten law). We define a color token awesome-red / middle = #cc3600. Alternatively, we could even define it as awesome-red / middle = HSL(16°, 100%, 40%). Conversion between HSL and RGB can easily happen, as both relate to the sRGB color space, so we don't leave the reference system called sRGB.

I can now imagine deriving new colors with methods common in CSS post-processors, which can for instance make a color lighter(x) by x percent. So I could do something like awesome-red / light = lighter(awesome-red / middle, 120%). In the background this would multiply each color component (R × 1.2, G × 1.2, B × 1.2). An HSL color would need to be converted into RGB, then made lighter, and then converted back. As they share the same color space, this isn't a problem and is almost lossless.
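
As a sketch of that proof of concept (illustrative TypeScript, not a recommendation):

```ts
// Naive lighter(): multiply each sRGB component, clamping at 255.
function lighter([r, g, b]: number[], factor: number): number[] {
  return [r, g, b].map((c) => Math.min(255, Math.round(c * factor)));
}

// awesome-red / middle = #cc3600 = rgb(204, 54, 0)
console.log(lighter([204, 54, 0], 1.2)); // [245, 65, 0]
```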

After this proof of concept seems to work, ideation often stops there. But we want to see whether and where the system breaks, don't we?

What happens when we define a color token based on AdobeRGB? First of all, if the token internally (in our exchange format and reference implementation) represents colors as sRGB, clipping or scaling of the gamut occurs when initializing the token. Just putting an AdobeRGB value in and reading the same color back would already alter it, as it would convert AdobeRGB > sRGB > AdobeRGB.

But let's assume the best in a very happy world and say the token can internally also just be AdobeRGB. If we apply our lighter(x) method, the math would still be the same: multiply each color component by a factor of 1.2, so it should produce the expected results.

And here is the pitfall with these methods, because it doesn't (and nobody, short of someone doing color management with pencil and paper as a hobby, would assume it): AdobeRGB has a gamma value of 1, which makes the numbers behave linearly. rgb(127, 127, 127) in AdobeRGB is a 50% black, the exact mid point. sRGB has a gamma value of ~2.4; an increment of 1 unit near 0 is a smaller step than an increment near the end of the scale (255). rgb(127, 127, 127) in sRGB is a 79% black. Through the gamma compression (and other properties of the color space), the effect of making something 120% lighter can be perceptually quite different.
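
To put numbers on that pitfall, a small sketch comparing the two interpretations of "multiply by 1.2" (using the standard sRGB transfer function; illustrative only):

```ts
// Same ×1.2 applied to a gamma-encoded sRGB byte vs. in linear light
// (decode, multiply, re-encode with the sRGB transfer function).
// Identical math, different reference systems, different results.
const toLinear = (c: number) => {
  const s = c / 255;
  return s <= 0.04045 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
};
const toSRGB = (l: number) => {
  const s = l <= 0.0031308 ? l * 12.92 : 1.055 * l ** (1 / 2.4) - 0.055;
  return Math.min(255, Math.round(s * 255));
};

console.log(Math.round(127 * 1.2));       // 152 (gamma-encoded math)
console.log(toSRGB(toLinear(127) * 1.2)); // 138 (linear-light math)
```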

And we haven't even started blending two colors defined in two different reference systems, or tried to manage the differences between additive and subtractive color models.

Conclusion: For this reason, it is important not just to define the color space and internal representation of color in a token; it is probably also desirable to standardize how methods that alter color (or, more broadly, any measure in tokens) work.

Fun fact: We cannot learn from Photoshop's well-known blend functions here, as PSD comes with the following restrictions:

mbrookes commented 3 years ago

An HSL color would need to be converted into RGB, then made lighter and then converted back.

Why is that?

sebfriedrich commented 3 years ago

@mbrookes

Why is that?

Because the lighter() method in my example is meant to work on RGB components. One might assume that multiplying the L component of the HSL representation also makes the color lighter in the same fashion. But this is not the case, even though the HSL color model seems to be a perfect match for this kind of problem.

Feel free to try it out: try making a red lighter, and also try making a vibrant yellow lighter. You will see the yellow is less affected by a change of just lightness, while the red changes probably close to expectation, but both give very different results to the RGB-based approach (colors will appear more washed out via HSL).
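
For anyone who wants to try it without opening a design tool, a quick sketch (standard HSL-to-RGB conversion; illustrative):

```ts
// Compare lighter() via RGB components vs. multiplying HSL lightness.
// hslToRgb: standard conversion; h in degrees, s and l in [0, 1].
function hslToRgb(h: number, s: number, l: number): number[] {
  const c = (1 - Math.abs(2 * l - 1)) * s;
  const x = c * (1 - Math.abs(((h / 60) % 2) - 1));
  const m = l - c / 2;
  const [r, g, b] =
    h < 60 ? [c, x, 0] : h < 120 ? [x, c, 0] : h < 180 ? [0, c, x]
    : h < 240 ? [0, x, c] : h < 300 ? [x, 0, c] : [c, 0, x];
  return [r, g, b].map((v) => Math.round((v + m) * 255));
}

// Vibrant yellow: #ffff00 = HSL(60, 100%, 50%)
const viaRgb = [255, 255, 0].map((c) => Math.min(255, Math.round(c * 1.2)));
const viaHsl = hslToRgb(60, 1, Math.min(1, 0.5 * 1.2));
console.log(viaRgb); // [255, 255, 0],  clamped: the yellow is unchanged
console.log(viaHsl); // [255, 255, 51], washed out toward white
```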

I can only assume that you thought along these lines. If so, it shows again how different the actual behavior of such modifier methods is from what one might expect when just using them ;-)

NateBaldwinDesign commented 3 years ago

These are all fantastic comments and I second a lot of what has been said above.

I firmly believe that the creation of tokens is fundamentally a different type of design — something that existing tools have not yet figured out for our space. So the creation of design tokens in any current UI design tool is not appropriate.

I love how @sebfriedrich put it:

“…rules are more and more shaped in a mathematical or logical sense and computers assist to apply these rule sets, where before the engine was all human only.”

Creating design tokens is done by designing a system. And designing a system is different from component design. A step further, I would add that in my opinion there are two types of tokens: system authoring tokens and consumption tokens.

Color and type are the best examples to amplify this difference, so I'm glad they are the perspectives requested here.

System authoring tokens

What I mean here are the minimal values used to define a system.

Consumption tokens

These are what most people identify as “tokens” today. They are the proxy used to deliver, consume, or use a system in order to design or build components, patterns, or applications (or other branded non-digital material). These are what others identify in the “export” line of thought.

However, on some platforms the authoring tokens would still be the more desirable form to consume. Line height is a great example: some tools need a resolved pixel value, while iOS and the web are much better suited to the unitless value (e.g. 1.3).

I will touch on each type of token for Typography and Color:

Typography (system authoring)

System authoring tokens for typography would require only these tokens: base size = 14, multiplier = 1.125.

Those values create all other tokens for type sizing. There would need to be additional configuration, of course, to define how many sizes are needed, which direction to increment (up or down), and a naming system. By that I mean that all type tokens are given a name, such as font-size, and the author can define a specific system or convention for naming. We do this pretty commonly:
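
A minimal sketch of such a generator, using the stated base and multiplier (the step names and rounding are hypothetical conventions, not from the comment):

```ts
// System authoring tokens (base size, multiplier) generating the
// consumption tokens for type sizing. Step names are hypothetical.
const base = 14;
const multiplier = 1.125;
const steps = ["xs", "s", "m", "l", "xl"]; // "m" is the base step

const fontSizes = Object.fromEntries(
  steps.map((name, i) => {
    const exponent = i - steps.indexOf("m"); // increment up and down
    return [`font-size-${name}`, +(base * multiplier ** exponent).toFixed(2)];
  })
);

console.log(fontSizes);
// { 'font-size-xs': 11.06, 'font-size-s': 12.44, 'font-size-m': 14,
//   'font-size-l': 15.75, 'font-size-xl': 17.72 }
```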

Typography (consumption)

This would be the output of some sort of export feature, which to some degree would also be "exporting" within the internal system-authoring tool, in order to allow users to assign aliases to the generated tokens.

System generated consumption tokens:

Manually created consumption tokens:

Color (system authoring)

Color systems are very complex. Most color tools out there are so lacking that multiple tools must be used, or one is left to build their own. Much like @sebfriedrich, we used to use Excel to help audit the many aspects of color, and have since built our own open-source tool, Leonardo.

The things that need to be systematic for color palettes are:

We use Leonardo for our system authoring tokens, which lets us define a single color (by hue family name) based on parameters that give us a full scale of color: colorKeys (a list of specific hue/sat/lightness values that we want to interpolate between) and colorspace (the color space/model we want to use for interpolation).

Then we can create each output color swatch by defining the contrast ratio we want it to have with the background, such as 3 or 4.5, etc.
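
To make that principle concrete without depending on any particular tool, here is a self-contained sketch (this is not Leonardo's API, just the underlying idea of solving for a value that meets a target contrast ratio):

```ts
// NOT Leonardo's API: a self-contained sketch of its core idea, solving
// for the swatch that hits a target WCAG contrast ratio.
const luminance = ([r, g, b]: number[]) => {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.04045 ? s / 12.92 : ((s + 0.055) / 1.055) ** 2.4;
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
};

// Binary-search the gray whose contrast against a white background
// (relative luminance 1.0) equals the target: (Lbg + 0.05) / (Lfg + 0.05).
function grayForContrast(target: number): number[] {
  let lo = 0;
  let hi = 255;
  for (let i = 0; i < 20; i++) {
    const mid = (lo + hi) / 2;
    const ratio = (1.0 + 0.05) / (luminance([mid, mid, mid]) + 0.05);
    if (ratio < target) hi = mid;
    else lo = mid;
  }
  return [lo, lo, lo].map(Math.round);
}

console.log(grayForContrast(4.5)); // ≈ [118, 118, 118], i.e. 4.5:1 on white
```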

We leverage the same naming conventions above as a system to generate the consumption tokens for each color theme we support (6 total).

Note regarding color format: I highly prefer that in an authoring environment, you can select specific values for each supported colorspace. One of the things I love about the Pantone system is that CMYK and RGB colors are not a mathematical 1:1 conversion in Pantone — they know that these are perceptually different, as one color will be perceived via reflected light, the other via direct. So the values defined for a specific color are done so to ensure color constancy to the best extent possible. Brands will want to do the same.

Color (consumption)

Along the lines of my note above, there may be specific colors or unique transformations of color needed to support the desired consumption format (such as CMYK); at a bare minimum, these transformations should be available by default. To that point, there's all sorts of nuance when you consider authoring a color system for consumers that will use CMYK or P3 colorspaces, especially when you make and preview all of these decisions within the sRGB colorspace.

General notes on Color: CSS preprocessors should never be used for color conversions. Period. Trying to use these functions to traverse colorspace is like drawing with an etch-a-sketch.

If you’re looking into color to the degree of having concerns about the interpolation of color (making lighter, darker, more saturated, less saturated, or shifting the hue, etc…) you need to use alternative tools to do so. CSS is a place for defining style attributes, not for negotiating color science :-)

Not to shamelessly plug this tool here, but we battled these same issues for Spectrum (Adobe), so we started throwing all sorts of features into our tool Leonardo to help with them. Color interpolation happens in the tool, so you define a "color" as a full color scale; now color is variable. You never need to lighten() or desaturate(), because the interpolation path is predefined and colors are output from this tool/system. You can visualize this with the 2D charts or the 3D model to see just how your color changes within each space. If you want to get crazy, you can see how interpolation in one space looks in another space. All great for subjectively evaluating color. Color spaces are offered that are outside the RGB gamut, but all colors are clamped to RGB since that's where they will be displayed (generally speaking).

@sebfriedrich, the issues you point out for color are 100% spot on. What I want to add is that linear interpolation is flawed in its own right. It produces unusual outcomes (e.g. "making something 120% lighter") because the interpolation happens in a non-perceptual color space (transforming each channel of an RGB color). Lab and LCh get closer to a perceptual space, and there are color appearance models that (when using a normalized baseline) appear to improve on these models (CIECAM02, for example). These models change the numeric representation of color to fit more closely with how humans perceive color properties. So in theory, and from a mathematical standpoint, linear interpolation in one of these spaces should give the best result; however, that is not always the case. Designers must still make informed, subjective decisions about the path a color should take as it becomes lighter or darker, because it's not only the color's identity that influences these paths, but also the subjective feeling or interpretation of the brand it represents; in some cases this means the color must change hue as well.
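
As a hedged sketch of comparing interpolation spaces, assuming the culori color library and its interpolate/formatHex helpers (the exact API is an assumption here; check its docs):

```ts
// Assumes the culori library: interpolate() is taken to return a function
// of t in [0, 1] for a ramp built in the given color space.
import { interpolate, formatHex } from "culori";

const rampRgb = interpolate(["#ffffff", "#0000ff"], "rgb"); // channel-wise
const rampLab = interpolate(["#ffffff", "#0000ff"], "lab"); // perceptual-ish

for (const t of [0.25, 0.5, 0.75]) {
  console.log(t, formatHex(rampRgb(t)), formatHex(rampLab(t)));
}
```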

Conclusion

Sorry for the rant about color. :-)

Tokens should be authored in a hybrid tool that is neither an existing design tool, nor an existing engineering tool. Fundamentally, systems design is a unique form of design that is a prerequisite to the creation of tokens. We need that tool.

lukasoppermann commented 3 years ago

Hey @NateBaldwinDesign, very cool insights.

Tokens should be authored in a hybrid tool that is neither an existing design tool, nor an existing engineering tool. Fundamentally, systems design is a unique form of design that is a prerequisite to the creation of tokens. We need that tool.

I fully agree that it would be awesome to have a specific tool to define tokens. But don't you think this scares off a lot of people who work on smaller projects or are new to the topic? It certainly makes the workflow more cumbersome, especially when you are still working your way into tokens.

I would rather see a tool like this as the optimal option, while the tools we use today should still be able to create tokens, be it a design tool or a text editor.

I probably got lost somewhere in the color theory, but I don't see why using existing tools is such a big no-no for the common case (not saying there aren't cases where you need something more specific). Can you explain, in simple terms, why I shouldn't be allowed to define tokens in Figma, Sketch, or VS Code?

joestrouth1 commented 3 years ago

Loving the discussion here so far.

Tokens should be authored in a hybrid tool that is neither an existing design tool, nor an existing engineering tool. Fundamentally, systems design is a unique form of design that is a prerequisite to the creation of tokens. We need that tool.

I think I agree with the spirit of this, but also with @lukasoppermann. There's every reason for design or engineering tools to be capable of authoring tokens, and they already are. I can open Figma or VS Code, record a specific value, preview it to some degree, and reuse it. If that's what I need to do, then great! That capability isn't likely to go anywhere. We can create and manipulate tokens without designing an entire system. If I handpick my favorite font families from around the web and use them on my personal site and business cards, those can still be tokens, right? Can't a system arise from the tokens chosen, rather than the tokens being an artifact of the system?

Those tools don't offer many ways to visualize or manipulate the relationships between tokens, as a set. Generative tools like Leonardo, Colorbox.io, and visual type scale calculators have their own place in my workflow. I use them at different stages and with different goals than when using 'systems of record' or applying the tokens in practice.

A new system-design tool that tries to own more than one aspect of token generation, management, storage, and application might be better than the disparate web of current tools; to what extent would vary by tool and team. Given enough time, a general-purpose implementation would likely grow quite complex to meet the needs of enterprise customers.

I'd prefer a common token interface upon which we could build token pipelines or services. Connecting all the great tools we already have would be a greater boon to my workflow than any new token-authoring program. Large orgs could build transformers that interpolate through CIELAB or re-implement lighten(), without baking complexity in for folks who just want to curate hex codes. The current landscape is diverse, flexible, and incrementally adoptable; hopefully we can keep those attributes while improving interoperability.
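
A toy sketch of what such a common interface with pluggable transformers might look like (entirely hypothetical, not an existing library):

```ts
// Hypothetical "common token interface" with pluggable transformers:
// a sketch of the pipeline idea, not a real package.
interface Token {
  name: string;
  value: string | number;
  type: "color" | "dimension" | "number";
}

type Transformer = (tokens: Token[]) => Token[];

const pipeline =
  (...transforms: Transformer[]) =>
  (tokens: Token[]): Token[] =>
    transforms.reduce((acc, t) => t(acc), tokens);

// A large org might plug in a CIELAB-based lighten here; a small team
// might pass hand-picked hex codes straight through untouched.
const uppercaseHex: Transformer = (tokens) =>
  tokens.map((t) =>
    t.type === "color" ? { ...t, value: String(t.value).toUpperCase() } : t
  );

const run = pipeline(uppercaseHex);
console.log(run([{ name: "brand-primary", value: "#cc3600", type: "color" }]));
// [ { name: 'brand-primary', value: '#CC3600', type: 'color' } ]
```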

kevinmpowell commented 3 years ago

Just an FYI for everyone who has contributed to this thread: the tokens working group is going through it today and discussing it. You're sparking great conversation, so thank you for your contributions.

No updates or outcomes to share yet, but wanted to let you know your ideas are being seen and considered.

sebfriedrich commented 3 years ago

Opinion on: Adobe DSP package format

Even if it might not reach the discussion anymore, I want to leave a remark on Adobe's DSP format here, as some might see it as a solution to what we have discussed so far. I post it here because I have no interest in supporting Adobe as a company, for free, with their pseudo-open-source efforts, which are full of hooks for establishing connections to their protected intellectual property later, after they have absorbed all the interesting feedback.

So what is so interesting about it? It has so many bad design decisions that it can serve as an example of how not to do it, seriously.

I am referring to these repos: https://github.com/AdobeXD/design-system-package-dsp https://github.com/demianborba/spectrum-dsp

Exchange format or storage format?

It seems to be meant to be hosted in some sort of git repository or other version-controlled storage. If that is the case, it doesn't need author and lastUpdated information on each item in the JSON files; that is meta information managed by the hosting system.

As a schema for data exchange, including such timestamps might be OK (though I miss the actual schema definition; only examples of its application are given). But then I wonder why there is a concept of folders, why a server application is missing, and why import paths seem to be allowed to be local and relative instead of forming URIs.

It seems the design goals and value propositions of the format are not clear at this point. This carries the risk of quickly messing up the structure.

Versions of a standard

It states some Version 1.0, while Adobe's own example Spectrum-DSP states 0.0.1, which is another issue. Is this version defining the parsing standard, or is it free to use as an indication of your own design system's maturity? What can be expected from version increments is not described at all.

What's in the box?

It is very fuzzy what Adobe aims to pack into this container.

Fun fact: Markdown isn't really a standard, and its inventor seems to have no interest in developing it into one. Using CommonMark would at least be a safer choice.

Vendor extension

The /ext directory is a Pandora's box. Instead of accepting the fact that a young standard might not cover all aspects right away, this allows any proprietary extension to the format to be built and established via a shortcut. Adobe could bring in any extension for the context of their own software and, by market dominance, force other vendors into compliance with it, charging for development documentation or even requiring reverse engineering. It bypasses discussion and agreement in an open forum, while DSP users, who don't look into code, will wonder about growing incompatibilities.

To give a good example of doing it better: I would prefer defined profiles, as MPEG LA has used since the h.264 codec. HTML/XML namespaces work in a similar fashion.

Tests for the Interface

Are there any tests or validators provided for the format? Maybe at least a reference implementation for writing it? Or am I not supposed to expect that from a vendor that earns millions a year and is so experienced in the fields of code and design?

Spectrum's VS Code integration

phun-ky commented 2 years ago

Oookay. A lot of great discussion here, and I must admit I had to skim through most of it, so sorry in advance if my comment seems redundant.

Again, I see a lot of bias here towards a design token being a design-only token upon creation. Design token definitions are data, regardless of the format they are provided in (JSON, YAML, CSV), and they need to be processed regardless of the end user: a designer in a design tool, a developer in an editor or IDE, or a reader of the documentation.

From my perspective, tokens are most likely created and found by devs in a DS, not designers. Designers (I am putting it to the extremes here, sorry) rarely know what can and can't be in a token. They don't have a full grasp of the whole DS, its capabilities and limitations, because that is set in code.

A DS team does not agree upon tokens; they agree upon parts of a design system that can be used as tokens and then processed into variables or documentation.
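
A toy sketch of that "processed into variables or documentation" idea (hypothetical names; the token itself is just a datum rendered into different targets):

```ts
// Hypothetical: the same token datum processed into a CSS custom property
// and into a documentation table row.
const token = { name: "color-brand-primary", value: "#cc3600" };

const toCssVariable = (t: typeof token) => `--${t.name}: ${t.value};`;
const toDocsRow = (t: typeof token) => `| ${t.name} | ${t.value} |`;

console.log(toCssVariable(token)); // --color-brand-primary: #cc3600;
console.log(toDocsRow(token));     // | color-brand-primary | #cc3600 |
```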

Design tokens, upon creation, need to be versioned; then, and only then, are they the source of truth.

I might seem cranky here, but I just want my concerns to be heard. Much love <3

NateBaldwinDesign commented 2 years ago

@phun-ky I think you illustrate a good point: there may be some teams where designers take no part in the creation of tokens, while there are others in which a designer manages the entire token repository 🙋‍♂️… so there may be no singular, definitive "designer experience". As long as the spec stays flexible, teams (or tools) should be able to surface the appropriate experience for each type of team/designer.

phun-ky commented 2 years ago

@NateBaldwinDesign thanks for the reply!

I think I also need to be a bit clearer with this statement:

A DS team does not agree upon tokens; they agree upon parts of a design system that can be used as tokens and then processed into variables or documentation.

Changing this to:

A DS team does not agree upon tokens; they agree upon parts of a design system, and then devs identify the smallest common denominators of those parts that can be used as tokens and processed into variables or documentation.

Part of the process is exporting the tokens to the consumers of the DS, whether they're used in a design tool, in code, or browsed in an IDE.

I'm happy to participate in any discussion related to the gap/interface between devs and design, or design tokens in general :)

jina commented 2 years ago

From my perspective, tokens are most likely created and found by devs in a DS, not designers.

A DS team does not agree upon tokens

These statements are definitely not true in my experience, including at Salesforce, where design tokens were conceived.

Designers (I am putting it to the extremes here, sorry) rarely know what can and can't be in a token. They don't have a full grasp of the whole DS, its capabilities and limitations, because that is set in code.

I find this very offensive, to be honest. I'm a designer, and I helped develop and architect several token systems, including how they manifest in code.

Please note, this issue was not created to make a statement that design tokens need to originate with designers. But in some organizations they do. So this thread was more about THAT particular workflow: what kinds of things would people like? Obviously, this wasn't addressing the dev-centric workflow, which some people are much more familiar with.

phun-ky commented 2 years ago

These statements are definitely not true in my experience, including at Salesforce, where design tokens were conceived.

Yes, I can understand and respect that, but different cultures and company/team situations apply here. In my current DS team (and with previous clients), the tokens are identified by the developers. But that is not the case for all teams/companies/cultures.

I find this very offensive, to be honest. I'm a designer, and I helped develop and architect several token systems, including how it manifests in code.

And I really do apologize; causing offense was never my intention, and I was very clear that I was "putting it to the extremes". The statement was from my experience, and as I said earlier in a different thread, I am very happy to see that you've experienced it differently, and I'm humbly envious of your situation.

I am not trying to be gatekeeping or non-inclusive here. If that is how I came across, that's my bad.

So this thread was more about THAT particular workflow: what kinds of things would people like? Obviously, this wasn't addressing the dev-centric workflow, which some people are much more familiar with.

So, I am happy to take on "The developer's workflow" as an issue here, if that is acceptable? Not trying to step on any toes, just eager to help <3

phun-ky commented 2 years ago

@jina We got off on the wrong foot here. Please feel free to get in touch with me so we can iron things out. I'm available 09:00 to 22:00 CET :)

kevinmpowell commented 1 year ago

Closing due to inactivity.