blue-build / cli

BlueBuild's command line program that builds custom Fedora Atomic images based on your recipe.yml
https://blue-build.org/
Apache License 2.0

feat: Add Jsonnet support for matrix based building #140

Open gmpinder opened 7 months ago

gmpinder commented 7 months ago

So there was some conversation in Discord and some in a discussion about trying to have support for a recipe building multiple versions of itself. For example, @qoijjj has mentioned that it would be much easier to support the transition of SecureBlue to Fedora 40 if all that needed to be done was to mark each recipe to build both a 39 version and a 40 version.

My proposal is to allow the existing image-version property in the top level to accept both a single value and an array of values. This would allow existing recipes to continue working and allow scaling up builds. So an example recipe could look something like this:

name: cli/test
description: This is my personal OS image.
base-image: ghcr.io/ublue-os/silverblue-surface
image-version:
  - 40
  - 39
modules:
  - from-file: akmods.yml

The CLI would then go through and build 2 separate images, one based on 39 and one on 40. All tags that are currently created contain the version of Fedora they were built from, except latest. In this instance, we would set latest to the highest version being built. This could also open the door to supporting a gts tag for the second-highest version.
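As a minimal sketch of the tagging rule described above (this is a hypothetical helper, not the CLI's actual implementation): each build keeps its version tag, the highest version also gets `latest`, and the second-highest could get `gts`.

```javascript
// Hypothetical sketch: given all versions in the recipe's matrix and the
// version of the current build, return the tags that build should receive.
const tagsForBuild = (versions, version) => {
  const sorted = [...versions].sort((a, b) => b - a); // highest first
  const tags = [String(version)];
  if (version === sorted[0]) tags.push("latest"); // highest version
  if (version === sorted[1]) tags.push("gts");    // second highest, if any
  return tags;
};

console.log(tagsForBuild([40, 39], 40)); // → ["40", "latest"]
console.log(tagsForBuild([40, 39], 39)); // → ["39", "gts"]
```

A recipe with a single `image-version` degenerates to the current behavior: the only version is also the highest, so it gets `latest`.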

Now there is a possibility that there will be recipes whose module definitions make them incompatible across versions. At that point, it would be up to the user to separate out the recipes and manage these changes. Or we could support a for-version (or similarly named) property for each module, so that certain modules run only on a specific version. Like:

modules:
  - type: rpm-ostree
    for-version: 40
    install:
      - binutils
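A minimal sketch of how that filtering could work, assuming the proposed `for-version` property (which is only a proposal in this thread, not an existing feature): a module is kept for a given build when it has no restriction, or when the restriction matches the version being built.

```javascript
// Hypothetical sketch: filter a recipe's modules down to the ones that
// apply to the version currently being built.
const modulesForVersion = (modules, version) =>
  modules.filter(
    (m) => m["for-version"] === undefined || m["for-version"] === version
  );

const modules = [
  { type: "rpm-ostree", "for-version": 40, install: ["binutils"] },
  { "from-file": "akmods.yml" }, // unrestricted, applies everywhere
];

console.log(modulesForVersion(modules, 40).length); // → 2
console.log(modulesForVersion(modules, 39).length); // → 1
```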
xynydev commented 7 months ago

The main issue here is that building multiple versions of an image with the same name is not possible with the current tagging system, which always assigns the latest tag to the currently building image.

I see two ways to add support for multi-version images. These are not mutually exclusive, but either can be used to achieve the same thing.

I think we should either do both A&B, or just B, but not A alone. I'm not sure if option A would be required to support this fully, or even desirable due to its drawbacks.

A.

(originally from @gmpinder)

# recipe.yml
image-version: [38, 39]

B.

(originally from @gerblesh)

# recipe-gts.yml
image-version: 38
image-tags: [gts] # replaces the 'latest' tag
# recipe-current.yml
image-version: 39
# 'latest' tag is applied by default
gmpinder commented 7 months ago

The main issue here is that building multiple versions of an image with the same name is not possible with the current tagging system, which always assigns the latest tag to the currently building image.

And latest should ALWAYS be the latest version of the image where the base image is the same (see: https://learn.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images#stable-tags). This isn't a good point. The CLI fully manages what tags are set, so it is not too much to ensure that the highest version number for that recipe is marked as latest. There is also precedent with Ublue where they don't mark their version 38 as latest when they are also building 39.

no way to have separate configuration for different versions, unless a clunky syntax like for-version as proposed above is used

That's unfair. Adding an optional property to a module isn't "clunky"; we already do that for other aspects like source or from-file. Option B would require you to create a completely new file in order to build a new version. This does NOT scale at all. There is so much boilerplate you would have to maintain in order to keep all the recipes up to date.

SecureBlue has 50 recipes. So with option B they would be required to create 50 more recipes to even think about trying to have a second version of each of the recipes. Whereas with an array of versions, the CLI would manage the tagging for the final images and only mark the most recent version as latest as is the convention with docker tags.

opens up the possibility to tag images in a multitude of ways, like gts

As would my proposed solution:

The CLI would then go through and build 2 separate images, one based on 39 and one on 40. All tags that are currently created contain the version of Fedora they were built from, except latest. In this instance, we would set latest to the highest version being built. This could also open the door to supporting a gts tag for the second-highest version.

break 1-1 correspondence with recipes to images

Only in the sense that it supports multiple versions of the same base image. I have no intent to have the base image follow the same pattern.

issue with template/generate command now generating two containerfiles

This can be handled with an arg and also default to templating the first version in the array. This can be figured out more later.

Now I want to be clear here. I'm not bashing on others' ideas. I'm simply stating facts about the scalability of the two solutions, and solution B completely falls short on that scalability factor.

xynydev commented 7 months ago

The main issue here is that building multiple versions of an image with the same name is not possible with the current tagging system, which always assigns the latest tag to the currently building image.

And latest should ALWAYS be the latest version of the image where the base image is the same (see: https://learn.microsoft.com/en-us/archive/blogs/stevelasker/docker-tagging-best-practices-for-tagging-and-versioning-docker-images#stable-tags). This isn't a good point. The CLI fully manages what tags are set, so it is not too much to ensure that the highest version number for that recipe is marked as latest. There is also precedent with Ublue where they don't mark their version 38 as latest when they are also building 39.

Yes, setting latest as the tag for the latest version is best practice. Yes, option A would allow for that to be done automatically. No, it wouldn't be such a big deal to make the user get the final say on this with option B.


no way to have separate configuration for different versions, unless a clunky syntax like for-version as proposed above is used

That's unfair. Adding an optional property to a module isn't "clunky", we already do that for other aspects like source or from-file.

Let me illustrate my point a bit further here; I never thought of the recipe syntax as a DSL. The vision (in my mind) is/was for a static configuration language that describes the steps to build and push an image in a sufficiently abstracted way. I think adding for-version would be adding control flow in a kind of clunky way, moving further from the static-configuration ideal. The equivalent in the GitHub Actions DSL would be if: ${{ IMAGE_VERSION=="38" }}. If a DSL with control flow & co is the right way to go, I think YAML should be abandoned.

Option B would state that you need to create a completely new file in order to make a new version. This does NOT scale at all. There is so much boilerplate that you would have to maintain in order to keep all the recipes up to date.

SecureBlue has 50 recipes. So with option B they would be required to create 50 more recipes to even think about trying to have a second version of each of the recipes. Whereas with an array of versions, the CLI would manage the tagging for the final images and only mark the most recent version as latest as is the convention with docker tags.

I agree with the multiple files being needed feeling unscalable. But @qoijjj seemingly disagreed: (link)

secureblue has large folders of recipes because we have almost 100 recipes. The existing structure is highly efficient because it permits us to factor out and reuse large chunks of yaml dozens of times. So I'm confused on multiple fronts 😄 One is what inefficiency you're referring to, and two how the recipe structure relates to this thread.

Also, @tulilirockz's Atomic Studio uses Jsonnet in a rather elegant way to solve the scalability issue. (link) I quite like her approach, and think supporting something like it officially would be a pretty good way to solve scalability for multiple purposes, not just the multi-version case.


opens up the possibility to tag images in a multitude of ways, like gts

As would my proposed solution:

It is my understanding that while option A would allow us to implement tagging for things like gts, option B would allow users to implement it without our intervention. I think option B would be better here in terms of future plans regarding other image-based operating systems.


break 1-1 correspondence with recipes to images

Only in the sense that it's supports multiple versions of the same base image. I have no intent to have the base image follow in the same pattern.

Ok, yeah. Option A would directly only do this in the sense of multiple versions. One might argue, though, that versions with different base images are also just different versions of the same image. This would set precedent for other properties following the same pattern. I don't find the precedent fully threatening, though. My earlier points have some more thoughts on this.

I just also realized, that GitHub Actions supports the matrix being generated by another action, so that could be leveraged in a case like this.
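As a rough sketch of that idea (job and step names here are illustrative, and the jsonnet/jq invocation assumes the generator from this thread lives in `studio.jsonnet`): a first job emits the generated filenames as a JSON array, and a second job turns them into a build matrix via `fromJSON`.

```yaml
# Illustrative sketch, not an official BlueBuild workflow.
jobs:
  generate:
    runs-on: ubuntu-latest
    outputs:
      recipes: ${{ steps.gen.outputs.recipes }}
    steps:
      - uses: actions/checkout@v4
      # Collect the generated filenames into a JSON array for the matrix.
      - id: gen
        run: echo "recipes=$(jsonnet -m ./ studio.jsonnet | jq -cnR '[inputs]')" >> "$GITHUB_OUTPUT"
  build:
    needs: generate
    runs-on: ubuntu-latest
    strategy:
      matrix:
        recipe: ${{ fromJSON(needs.generate.outputs.recipes) }}
    steps:
      # Each matrix job would then build one recipe file.
      - run: echo "Would build ${{ matrix.recipe }}"
```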


issue with template/generate command now generating two containerfiles

This can be handled with an arg and also default to templating the first version in the array. This can be figured out more later.

Ok, yeah. Maybe it could generate files like 38.Containerfile & 39.Containerfile, following some specific pattern of course.

tulilirockz commented 7 months ago

Honestly, I personally think that using a YAML/JSON generator language is the best approach for this kind of situation. As @xynydev said, I'm using Jsonnet, but we could easily use something like Pkl to manage the high-level aspects of image generation, which would make scaling a bunch easier: the user could just add "make images with xfce4" in some fancy way and the recipes would just be generated without any issue. Something like:

```pkl
// Bluebuild Library For Ublue Images (could also make an implementation for VanillaOS)

class Image {
  suffix: String
  base_url: String = "ghcr.io/ublue-os/"
  type: "main" | "nvidia" | "asus" | "surface"
  desktop: "silverblue" | "kinoite"
  ignore_type: Boolean = false // If someone wanna do something fancy
  modules: Listing<String>
  tags: Listing<String>
}

class Meta {
  name: String
  description: String
  images: Mapping<String, Image>
}

// This is what the user is gonna be using

test = new Meta {
  name = "atomic-studio"
  description = "Operating system based on Fedora Atomic meant for content creators and artists"
  images = new Mapping<String, Image> {
    ["gnome"] = new Image {
      suffix = "-gnome"
      type = "main"
      desktop = "silverblue"
      modules = new Listing<String> { "chungus" }
      tags = new Listing<String> {
        "latest"
        "gts"
      }
    }
  }
}
```

That way we can enforce rules for the recipes directly through the built-in type system, whitespace wouldn't matter, and stuff like that! We can make it both super flexible and strict when we want. It would be a bit more annoying to make everyone install Pkl to test their things, but it could be an interesting alternative to using raw YAML.

gmpinder commented 7 months ago

I agree with the multiple files being needed feeling unscalable. But @qoijjj seemingly disagreed: (link)

secureblue has large folders of recipes because we have almost 100 recipes. The existing structure is highly efficient because it permits us to factor out and reuse large chunks of yaml dozens of times. So I'm confused on multiple fronts 😄 One is what inefficiency you're referring to, and two how the recipe structure relates to this thread.

And yet, they changed their mind which is what spurred this issue in the first place.

(screenshot)

You know, I'm honestly getting tired of all this bikeshedding. I'm trying to implement features the users are asking for that are functional and can still maintain backwards compatibility. I'm then met with so much pushback and grandiose ideas that far exceed the scope of the problems at hand.

qoijjj commented 7 months ago

@gmpinder I apologize if you think this was a waste of time to discuss :pensive:

I'm not sure where I changed my mind, I still think being able to specify a matrix of base versions would be nice and I don't see where in that screenshot I contradicted myself.

But regardless if you don't want to implement this, that's okay.

qoijjj commented 7 months ago

I agree with the multiple files being needed feeling unscalable. But @qoijjj seemingly disagreed: (link)

I wasn't commenting on the number of recipes. I was saying that the "structure is highly efficient because it permits us to factor out and reuse large chunks of yaml dozens of times."

In other words, being able to factor out redundant config is a benefit. For the same reason I think being able to have multiple base image versions in the same config file would be a benefit: reduced redundant config.

qoijjj commented 7 months ago

But I will find a workaround, please feel free in the future to decline my asks if they're not of interest or out of scope :)

gmpinder commented 7 months ago

I'm sorry guys for the rude response. I'm dealing with things IRL and it spilled over here. @qoijjj I am interested in building out a feature like this. @xynydev I'm sorry for being overly defensive. I'm reopening the issue for further discussion. I'm taking a break though so I probably won't respond for a couple weeks.

xynydev commented 7 months ago

Aight, I've been consciously not thinking about this for the past few days, so here are some fresh thoughts outlined:


Jsonnet example
I took the Jsonnet configuration from Atomic Studio and did some pretty violent refactoring on it to showcase what a config file might look like for someone just looking to cleanly generate some recipes based on some rules.

```jsonnet
local project = {
  base_name: "atomic-studio",
  description: "Operating system based on Fedora Atomic meant for content creators and artists",
  base_images: "ghcr.io/ublue-os/",
};

local suffix(base_image, nvidia) = (
  (if (base_image == "silverblue") then "-gnome" else "")
  + (if (nvidia) then "-nvidia" else "")
);

local image(base_image, nvidia, image_version) = {
  "name": project.base_name + suffix(base_image, nvidia),
  "description": project.description,
  "base-image": project.base_images + base_image + (if (nvidia) then "-nvidia" else "-main"),
  "image-version": image_version,
  "modules": std.flattenArrays([
    [
      { "from-file": "common/shared/gui-apps.yml" },
      { "from-file": "common/shared/packages.yml" },
      { "from-file": "common/shared/files.yml" },
      { "from-file": "common/shared/scripts.yml" },
      { "from-file": "common/shared/bling.yml" },
      { "from-file": "common/shared/services.yml" },
    ],
    if (nvidia) then [
      { "from-file": "common/shared/nvidia/scripts.yml" },
    ] else [
      { "from-file": "common/shared/amd/packages.yml" },
      { "from-file": "common/shared/amd/scripts.yml" },
    ],
    if (base_image == "silverblue") then [
      { "from-file": "common/gnome/apps.yml" },
      { "from-file": "common/gnome/files.yml" },
      { "from-file": "common/gnome/scripts.yml" },
    ] else [
      { "from-file": "common/plasma/apps.yml" },
      { "from-file": "common/plasma/files.yml" },
      { "from-file": "common/plasma/scripts.yml" },
    ],
    [
      { "from-file": "common/audio/audinux.yml" },
      { "from-file": "common/audio/pipewire-packages.yml" },
      { "type": "yafti" },
      { "type": "signing" },
    ],
  ]),
};

local images() = {
  ["recipe" + suffix(base_image, nvidia) + "-" + std.toString(image_version) + ".yml"]:
    image(base_image, nvidia, image_version)
  for nvidia in [false, true]
  for base_image in ["kinoite", "silverblue"]
  for image_version in [38, 39]
};

images()
```

This file can then be turned into the separate `.yml` files:

```sh
❯ jsonnet -m ./ studio.jsonnet
./recipe-38.yml
./recipe-39.yml
./recipe-gnome-38.yml
./recipe-gnome-39.yml
./recipe-gnome-nvidia-38.yml
./recipe-gnome-nvidia-39.yml
./recipe-nvidia-38.yml
./recipe-nvidia-39.yml
```

The filenames are output by the program, so multi-stage GitHub Actions could easily be used to generate the build matrix for the BlueBuild Action. The files are `JSON`, but work perfectly, as `YAML` is a superset of `JSON`.
Lua example

I translated the Jsonnet example to Lua. This is my first time using Lua, so I might not be "doing it correctly", but I found that Lua is not that well suited for this purpose. The `json.lua` library has to be statically included by downloading the file, and the table creation is somewhat lacking for this purpose, though that could be helped by including all of the common/gnome/nvidia/etc. modules in one file instead of separate files.

```lua
-- studio.lua
json = require "json" -- https://github.com/rxi/json.lua

project = {
  base_name = "atomic-studio",
  description = "Operating system based on Fedora Atomic meant for content creators and artists",
  base_images = "ghcr.io/ublue-os/",
}

function suffix(base_image, nvidia)
  local suffix = ""
  if base_image == "silverblue" then
    suffix = suffix .. "-gnome"
  end
  if nvidia then
    suffix = suffix .. "-nvidia"
  end
  return suffix
end

for _, nvidia in ipairs({true, false}) do
  for _, base_image in ipairs({"kinoite", "silverblue"}) do
    for _, image_version in ipairs({38, 39}) do
      local config = {
        name = project.base_name .. suffix(base_image, nvidia),
        description = project.description,
        base_image = project.base_images .. base_image .. (nvidia and "-nvidia" or "-main"),
        image_version = image_version,
        modules = {
          { from_file = "common/shared/gui-apps.yml" },
          { from_file = "common/shared/packages.yml" },
          { from_file = "common/shared/files.yml" },
          { from_file = "common/shared/scripts.yml" },
          { from_file = "common/shared/bling.yml" },
          { from_file = "common/shared/services.yml" },
          -- yeah, lua makes this kinda clumsy...
          (nvidia and { from_file = "common/shared/nvidia/scripts.yml" }),
          (not nvidia and { from_file = "common/shared/amd/packages.yml" }),
          (not nvidia and { from_file = "common/shared/amd/scripts.yml" }),
          (base_image == "silverblue" and { from_file = "common/gnome/apps.yml" }),
          (base_image == "silverblue" and { from_file = "common/gnome/files.yml" }),
          (base_image == "silverblue" and { from_file = "common/gnome/scripts.yml" }),
          (base_image == "kinoite" and { from_file = "common/plasma/apps.yml" }),
          (base_image == "kinoite" and { from_file = "common/plasma/files.yml" }),
          (base_image == "kinoite" and { from_file = "common/plasma/scripts.yml" }),
          { from_file = "common/audio/audinux.yml" },
          { from_file = "common/audio/pipewire-packages.yml" },
          { type = "yafti" },
          { type = "signing" },
        }
      }
      local json_str = json.encode(config):gsub("_", "-")
      local file_path = "./recipe" .. suffix(base_image, nvidia) .. "-" .. image_version .. ".yml"
      print(file_path)
      f = io.open(file_path, "w")
      f:write(json_str)
      f:close()
    end
  end
end
```

This file can then be turned into the separate `.yml` files:

```sh
❯ lua studio.lua
./recipe-38.yml
./recipe-39.yml
./recipe-gnome-38.yml
./recipe-gnome-39.yml
./recipe-gnome-nvidia-38.yml
./recipe-gnome-nvidia-39.yml
./recipe-nvidia-38.yml
./recipe-nvidia-39.yml
```

The filenames are output by the program, so multi-stage GitHub Actions could easily be used to generate the build matrix for the BlueBuild Action. The files are `JSON`, but work perfectly, as `YAML` is a superset of `JSON`.
JS Example

I translated the Jsonnet example to JS. I'm such a webdev that this felt very natural and easy for me, though the line count is marginally bigger and the amount of boilerplate required marginally larger. This could become the most ergonomic way to write multi-recipe configs if I just quickly made a TS library to have the types and some ergonomic functions for the whole script.

```js
// studio.js
import * as fs from "node:fs";
import { join } from "node:path";

const outputDir = "./recipes";
try {
  if (!fs.existsSync(outputDir)) {
    fs.mkdirSync(outputDir);
  }
} catch (err) {
  console.error(err);
  throw new Error();
}

const project = {
  baseName: "atomic-studio",
  description:
    "Operating system based on Fedora Atomic meant for content creators and artists",
  baseImages: "ghcr.io/ublue-os/",
};

const suffix = (baseImage, nvidia) =>
  (baseImage == "silverblue" ? "-gnome" : "") + (nvidia ? "-nvidia" : "");

const files = [];
for (let nvidia of [true, false]) {
  for (let baseImage of ["kinoite", "silverblue"]) {
    for (let imageVersion of [38, 39]) {
      const config = {
        name: project.baseName + suffix(baseImage, nvidia),
        description: project.description,
        "base-image": project.baseImages + baseImage + (nvidia ? "-nvidia" : "-main"),
        "image-version": imageVersion,
        modules: [
          { "from-file": "common/shared/gui-apps.yml" },
          { "from-file": "common/shared/packages.yml" },
          { "from-file": "common/shared/files.yml" },
          { "from-file": "common/shared/scripts.yml" },
          { "from-file": "common/shared/bling.yml" },
          { "from-file": "common/shared/services.yml" },
          ...(nvidia
            ? [{ "from-file": "common/shared/nvidia/scripts.yml" }]
            : [
                { "from-file": "common/shared/amd/packages.yml" },
                { "from-file": "common/shared/amd/scripts.yml" },
              ]),
          ...(baseImage == "silverblue"
            ? [
                { "from-file": "common/gnome/apps.yml" },
                { "from-file": "common/gnome/files.yml" },
                { "from-file": "common/gnome/scripts.yml" },
              ]
            : [
                { "from-file": "common/plasma/apps.yml" },
                { "from-file": "common/plasma/files.yml" },
                { "from-file": "common/plasma/scripts.yml" },
              ]),
          { "from-file": "common/audio/audinux.yml" },
          { "from-file": "common/audio/pipewire-packages.yml" },
        ],
      };
      const json = JSON.stringify(config, null, 2);
      const filePath = join(
        outputDir,
        "recipe" + suffix(baseImage, nvidia) + "-" + imageVersion + ".yml"
      );
      try {
        fs.writeFileSync(filePath, json);
        files.push("./" + filePath);
      } catch (err) {
        console.error(err);
        throw new Error();
      }
    }
  }
}

// GitHub Actions needs JSON to generate a build matrix.
console.log(JSON.stringify(files));
```

This file can then be turned into the separate `.yml` files:

```sh
❯ node studio.js
["./recipes/recipe-nvidia-38.yml","./recipes/recipe-nvidia-39.yml","./recipes/recipe-gnome-nvidia-38.yml","./recipes/recipe-gnome-nvidia-39.yml","./recipes/recipe-38.yml","./recipes/recipe-39.yml","./recipes/recipe-gnome-38.yml","./recipes/recipe-gnome-39.yml"]
```

Can also be run with `bun` and `deno`. If the script uses `TS`, installation of some JS dependencies is required, and the run command becomes `npx tsc studio.ts && node studio.js` (`bun run studio.ts` and `deno run --allow-read=. --allow-write=. studio.ts` work without additional setup, though, so those would probably be the recommended options for CI). Furthermore, `deno` can be embedded into `rust`.

The filenames are output as a `JSON` string by the program, so multi-stage GitHub Actions could easily be used to generate the build matrix for the BlueBuild Action, as GitHub Actions (apparently) requires `JSON` for auto-generation of build matrices. The files are `JSON`, but work perfectly, as `YAML` is a superset of `JSON`. (JS could be used to generate `YAML` too, though.)

A TS library for this could also include the following function I just AI-generated ( :flushed: ).

```js
// some functional magic an AI wrote that i like 66.666...% understand
const generateMatrix = (matrix) =>
  Object.entries(matrix)
    .map(([key, values]) => values.map((value) => ({ [key]: value })))
    .reduce((a, b) => a.flatMap((d) => b.map((e) => ({ ...d, ...e }))));
```

As that would allow the for-loop mess to be transformed into this:

```js
const matrix = {
  baseImage: ["kinoite", "silverblue"],
  nvidia: [true, false],
  imageVersion: [38, 39],
};

for (let { baseImage, nvidia, imageVersion } of generateMatrix(matrix)) {
  // ...
}
```
gmpinder commented 6 months ago

I think the configuration languages suggested/used by @tulilirockz solve this problem pretty elegantly.

  • https://pkl-lang.org/
  • https://jsonnet.org/
  • An approach like this could offer well-structured configuration for advanced use cases that allows generation of (possibly multiple files of) the simple recipe format.
  • We could keep the recipe format minimal, describing one image in one file, keeping the entry level for understanding the whole loop lower.
  • There would be less need for custom implementation in the CLI to support these sorts of multi-image builds.

Alright, I think moving in this direction would probably be better. I've taken some time to think this over and look at what we could do. I think that out of all the options specified here, jsonnet would be the best path forward. There is a crate that would allow us to statically compile the libjsonnet library into the CLI tool so that users aren't required to have it installed to take advantage of this feature.

An advantage to this would be that we could use serde to take the output from the jsonnet file and convert it directly into recipe YAML. The output could also be stored in memory using existing structs, allowing the user to build all the images locally with one command, or letting an interactive prompt ask the user which image to build.

Like @xynydev said, this would help to keep the individual recipe files simple for less technical users while also opening up an avenue to give power users more options to better automate image building.

xynydev commented 6 months ago

I dislike jsonnet syntax, and I think supporting multiple options would be great. JS with Deno could also be integrated into Rust, and I think I could make a pretty nice library for it.

You are free to work on CLI integration, but that is not a priority for me, as this would need changes in the build.yml by the user anyways.

I think the course of action in order of importance regarding this issue would be to:

  1. Implement idea B from above to facilitate multi-version builds with different recipe files.
  2. Start an examples repo and catalogue, document, and improve on different ways of recipe generation.
  3. Work on documentation related to matrixed recipe generation for big repositories.
  4. Work on integrating the best ways to do this in CLI to streamline the builds.
gmpinder commented 6 months ago

As much as I like and am used to JS, I think including an entire JS (or Lua) runtime is really overkill for the requirement. Jsonnet or Pkl would be perfect, as they are designed specifically for dynamic configuration file generation. Unfortunately, Pkl is very new and does not have any Rust bindings, so that is out of the question, which really only leaves Jsonnet.

xynydev commented 6 months ago

Well, there is no reason to include an entire JS runtime, then. It doesn't have to be integrated into the Rust-based CLI. Integrating Jsonnet to give JSON command-line output would be great. I think documenting multiple options would be great, so that people can pick their favorites.

gmpinder commented 6 months ago

Integrating Jsonnet to give JSON commandline output would be great

So this would be something for creating matrices in GHA? It also sounds like something I could make for GitLab CI. There's a way to generate another CI YAML file with more jobs that contain the artifacts of the previous job (in this case, the newly generated recipes).

Would there be a way to pass the recipe files to the new jobs in GitHub? Cause we could move recipes from the jsonnet generate job and pass the paths to the recipes to the new jobs.
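For reference, GitLab CI's dynamic child pipelines can do what's described here. A sketch under the assumption that the Jsonnet generator is `studio.jsonnet` and that a hypothetical `generate-child-pipeline.sh` script writes one build job per generated recipe:

```yaml
# Illustrative sketch of a GitLab CI dynamic child pipeline.
stages: [generate, build]

generate-recipes:
  stage: generate
  script:
    - jsonnet -m ./recipes studio.jsonnet
    # Hypothetical helper that emits a pipeline with one job per recipe.
    - ./generate-child-pipeline.sh > child-pipeline.yml
  artifacts:
    paths:
      - recipes/
      - child-pipeline.yml

build-images:
  stage: build
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-recipes
```

The generated recipes are published as artifacts of the first job, so the triggered child jobs can pick them up by path.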

xynydev commented 6 months ago

So this would be something for creating matrices in GHA?

Yup!

Also sounds like something that I could make for GitLab CI. There's a way to generate another CI YAML file with more jobs that contain the artifacts of the previous job (in this case, the newly generated recipes).

Yeah, we can figure this out for as many CI systems as we please.

Would there be a way to pass the recipe files to the new jobs in GitHub? Cause we could move recipes from the jsonnet generate job and pass the paths to the recipes to the new jobs.

Like, keeping the recipe files between jobs? Cause I tried looking for options when making this Atomic Studio PR, but found that the least complicated option would just be to regenerate the recipes in the second job.

xynydev commented 6 months ago

I made a short TS library file for the JS/TS configuration, and I think that made it quite nice. The recipe config is also typed, so using a compatible editor makes the experience very nice. I'll hold off on making this anything official, though, until #138 is done, and we establish an examples repo.

example.ts

```ts
import { Recipe, generateMatrix, saveRecipes } from "./bluebuild";

const project = {
  baseName: "atomic-studio",
  description:
    "Operating system based on Fedora Atomic meant for content creators and artists",
  baseImages: "ghcr.io/ublue-os/",
};

const suffix = (baseImage, nvidia) =>
  (baseImage == "silverblue" ? "-gnome" : "") + (nvidia ? "-nvidia" : "");

const matrix = {
  baseImage: ["kinoite", "silverblue"],
  nvidia: [true, false],
  imageVersion: [38, 39],
};

const recipes = generateMatrix(matrix).map(
  ({ baseImage, nvidia, imageVersion }): Recipe => {
    return {
      name: project.baseName + suffix(baseImage, nvidia),
      description: project.description,
      "base-image": project.baseImages + baseImage + (nvidia ? "-nvidia" : "-main"),
      "image-version": imageVersion,
      modules: [
        { "from-file": "common/shared/gui-apps.yml" },
        { "from-file": "common/shared/packages.yml" },
        { "from-file": "common/shared/files.yml" },
        { "from-file": "common/shared/scripts.yml" },
        { "from-file": "common/shared/bling.yml" },
        { "from-file": "common/shared/services.yml" },
        ...(nvidia
          ? [{ "from-file": "common/shared/nvidia/scripts.yml" }]
          : [
              { "from-file": "common/shared/amd/packages.yml" },
              { "from-file": "common/shared/amd/scripts.yml" },
            ]),
        ...(baseImage == "silverblue"
          ? [
              { "from-file": "common/gnome/apps.yml" },
              { "from-file": "common/gnome/files.yml" },
              { "from-file": "common/gnome/scripts.yml" },
            ]
          : [
              { "from-file": "common/plasma/apps.yml" },
              { "from-file": "common/plasma/files.yml" },
              { "from-file": "common/plasma/scripts.yml" },
            ]),
        { "from-file": "common/audio/audinux.yml" },
        { "from-file": "common/audio/pipewire-packages.yml" },
        { type: "signing" },
      ],
    };
  }
);

saveRecipes(recipes, "./recipes");
```

```sh
❯ bun run example.ts
["./recipes/recipe-atomic-studio-nvidia-38.json","./recipes/recipe-atomic-studio-nvidia-39.json","./recipes/recipe-atomic-studio-38.json","./recipes/recipe-atomic-studio-39.json","./recipes/recipe-atomic-studio-gnome-nvidia-38.json","./recipes/recipe-atomic-studio-gnome-nvidia-39.json","./recipes/recipe-atomic-studio-gnome-38.json","./recipes/recipe-atomic-studio-gnome-39.json"]
```

The recipe filenames use the `.json` file extension, as they are proper JSON files. The filenames are autogenerated from the `name` and `image-version` properties.