MattDodsonEnglish opened 2 years ago
For single sourcing blocks of text, I think it turns out to be pretty easy to do with components. Components can call each other, too.
This commit probably says everything that one needs to do: https://github.com/grafana/k6-docs/pull/836/commits/836ca34b061019a19d30c940b6d62477b863e3d6
The docs have several places where content could be single-sourced, templated, or both. Finding ways to programmatically present content could make the docs less error-prone, easier to standardize, and easier to reformat.
Currently I see two ways to reuse content:
Both of these solutions could already be used in multiple places across the docs. I realize this could be two separate issues, but I think they have the same goal. I'm happy to separate them, but for now I'll use a todo list:

- [ ] Single sourcing with variables
- [ ] Generate data from a YAML file
## Single sourcing with variables
By single sourcing, I mean being able to write text in one place and reuse it in other places programmatically (no copy and pasting). Doing this would make the writing process easier and prevent content drift.
The solution that I imagine would involve a "variable". For example, a writer could say:
And that would render as:
Of course, it doesn't need to be a variable; maybe it'd make more sense to use MDX files. However, the idea of a variable (or function) is nice, especially if it could take arguments and change output depending on the page's frontmatter.
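To make the idea concrete, here's a minimal sketch of such a substitution, assuming a hypothetical `renderVariables` helper and `{{ name }}` token syntax (neither exists in the k6-docs codebase today):

```javascript
// Hypothetical sketch: expand {{ name }} tokens in markdown source with
// values that could come from a page's frontmatter. The helper name and
// the token syntax are invented for illustration.
function renderVariables(markdown, vars) {
  return markdown.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match // leave unknown tokens untouched
  );
}

console.log(
  renderVariables("k6 scripts run on {{ runtime }}.", { runtime: "Goja" })
);
// → k6 scripts run on Goja.
```

A real implementation would more likely hook into the MDX/remark pipeline, but the contract is the same: write the text once, render it in many places.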
Here are some places where content is duplicated:
- In code snippets that import from JSLib, for example.
- In admonitions. The same admonition appears in multiple places. Well, nearly the same: sometimes one gets edited before its twins do. Some examples:
  - `using-k6/environment-variables`, Option reference
  - The "Inspect" pages of the cloud docs, e.g., https://k6.io/docs/cloud/analyzing-results/thresholds/
- The example UUID generator is almost exactly the same as its API reference: https://github.com/grafana/k6-docs/pull/869
- Each one of these pages has the same structure. Being able to call data from the page metadata would mean we could reuse headings. (This one is less important, for now.)
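As a sketch of what "calling data from the page metadata" could look like, here's a hypothetical helper that builds the same heading skeleton for every structured reference page. The frontmatter shape and function name are assumptions, not the docs' actual schema:

```javascript
// Hypothetical sketch: every reference page gets the same heading
// structure, derived from its frontmatter rather than hand-copied.
function referenceSkeleton(frontmatter) {
  const sections = ["Description", "Usage", "Example"];
  return [`# ${frontmatter.title}`, ...sections.map((s) => `## ${s}`)].join("\n");
}

console.log(referenceSkeleton({ title: "thresholds" }));
```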
## Generate data from a YAML file
I say YAML here because it's more comfortable to write in. Since it's a superset of JSON, the two are nearly interchangeable in this conversation.
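For instance, the same glossary record in YAML and in JSON (this entry appears in the data file further down) carries identical data; the YAML form just sheds the quote-and-brace noise:

```yaml
- term: Concurrent sessions
  definition: The number of simultaneous VU requests in a test run.
```

```json
[
  {
    "term": "Concurrent sessions",
    "definition": "The number of simultaneous VU requests in a test run."
  }
]
```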
If I'm not mistaken, the Explore extensions page is generated from a JSON file, and I think there were a few others. I'd like to use this functionality more.
Templating is especially useful for highly structured documents like references. Readers who scan can find information more easily if it's structured identically across pages. Generating a page from data "guarantees" a uniform structure and makes structural changes easier to implement globally.
Writers may also prefer using YAML for structured templating:
Besides that, having a data file would make it easier to reference subsections. For example, if the glossary had a data page that could be called from any other page, then we could use it not only to make a long list, but also to call individual terms as tool-tips.
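As a sketch of that tool-tip idea: the lookup below reuses the field names from `gloss.json`, but the `defineTerm` helper itself is hypothetical. The point is that one data source can feed both the long list and an inline tool-tip component:

```javascript
// `gloss` stands in for the parsed gloss.json data file.
const gloss = [
  { term: "Iteration", definition: "A single run in the execution of the `default function`." },
  { term: "Metric", definition: "A measure of how the system performs during a test run." },
];

// A hypothetical <Glossary term="..."/> tool-tip component could call this.
function defineTerm(term) {
  const entry = gloss.find((e) => e.term.toLowerCase() === term.toLowerCase());
  return entry ? entry.definition : undefined;
}

console.log(defineTerm("metric"));
```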
### Examples of local use
Some writers are already using templates to make docs. I used this approach for the Glossary.
gloss.json
```json
[
  {
    "term": "Application performance monitoring",
    "definition": "*(Or APM)*. The practice of monitoring the performance, availability, and reliability of a system. You can export k6 OSS and k6 Cloud results to an APM to analyze system metrics alongside k6 metrics.",
    "see_also": [
      "[k6 OSS APM integrations](/getting-started/results-output/#external-outputs)",
      "[k6 Cloud APM integrations](/cloud/integrations/cloud-apm/)"
    ]
  },
  {
    "term": "Concurrent sessions",
    "definition": "The number of simultaneous VU requests in a test run."
  },
  {
    "term": "Checks",
    "definition": "Checks are true/false conditions that evaluate the content of some value in the JavaScript runtime.",
    "see_also": ["[Checks reference](/using-k6/checks)"],
    "usage_note": "Grafana also uses checks as a concept in their synthetic monitoring service."
  },
  {
    "term": "Data correlation",
    "definition": "The process of taking [dynamic data](#dynamic-data) received from the system under test and reusing the data in a subsequent request.",
    "see_also": [
      "[Correlation and dynamic data example](/examples/correlation-and-dynamic-data/)",
      "[Correlation in testing APIs](/testing-guides/api-load-testing/#correlation-and-data-parameterization)"
    ],
    "usage_note": "Avoid using \"correlation\" in the statistical sense, unless the usage is precise and necessary"
  },
  {
    "term": "Data parameterization",
    "definition": "The process of turning test values into reusable parameters, e.g. through variables and shared arrays.",
    "see_also": ["[Data parameterization examples](/examples/data-parameterization/)"]
  },
  {
    "term": "Dynamic data",
    "definition": "Data that might change or will change during test runs or across test runs. Common examples are order IDs, session tokens, or timestamps.",
    "see_also": ["[Correlation and dynamic data example](/examples/correlation-and-dynamic-data/)"],
    "usage_note": null
  },
  {
    "term": "Endurance testing",
    "definition": "A synonym for [soak testing](#soak-test).",
    "see_also": null,
    "usage_note": "Prefer soak testing"
  },
  {
    "term": "Goja",
    "definition": "A JavaScript engine written in Go. k6 binaries are embedded with Goja, enabling test scripting in JavaScript.",
    "see_also": ["[Goja repository](https://github.com/dop251/goja)"],
    "proper_name": true,
    "usage_note": null
  },
  {
    "term": "Graceful stop",
    "definition": "A period that lets VUs finish an iteration at the end of a load test. Graceful stops prevent abrupt halts in execution.",
    "see_also": ["[Graceful stop reference](/using-k6/scenarios/graceful-stop/)"]
  },
  {
    "term": "HTTP archive",
    "definition": "*(Or HAR file)*. A file containing logs of browser interactions with the system under test. All included transactions are stored as JSON-formatted text. You can use these archives to generate test scripts (for example, with the har-to-k6 Converter).",
    "see_also": [
      "[HAR 1.2 Specification](http://www.softwareishard.com/blog/har-12-spec/)",
      "[HAR converter](/test-authoring/recording-a-session/har-converter/)"
    ],
    "usage_note": null
  },
  {
    "term": "Iteration",
    "definition": "A single run in the execution of the `default function`, or scenario `exec` function. You can set iterations across all VUs, or per VU.",
    "see_also": [
      "The [test life cycle](/using-k6/test-life-cycle/) document breaks down each stage of a k6 script, including iterations in VU code."
    ],
    "usage_note": "Applies only to code in VU context."
  },
  {
    "term": "k6 Cloud",
    "definition": "The proper name for the entire cloud product, comprising both k6 Cloud Execution and k6 Cloud Test Results.",
    "see_also": ["[k6 Cloud docs](/cloud)"],
    "proper_name": true,
    "usage_note": null
  },
  {
    "term": "k6 options",
    "definition": "Values that configure a k6 test run. You can set options with command-line flags, environment variables, and in the script.",
    "see_also": ["[k6 Options](/using-k6/k6-options)"]
  },
  {
    "term": "Load test",
    "definition": "A test that assesses the performance of the system under test in terms of concurrent users or requests per second.",
    "see_also": ["[Load Testing](/test-types/load-testing)"],
    "usage_note": null
  },
  {
    "term": "Load zone",
    "definition": "The geographical instance from which a test runs.",
    "see_also": [
      "[Private load zones](/cloud/creating-and-running-a-test/private-load-zones/)",
      "[Declare load zones from the CLI](/cloud/creating-and-running-a-test/cloud-tests-from-the-cli/#load-zones)"
    ]
  },
  {
    "term": "Metric",
    "definition": "A measure of how the system performs during a test run. `http_req_duration` is an example of a built-in k6 metric. Besides built-ins, you can also create custom metrics.",
    "see_also": ["[Metrics](/using-k6/metrics)"]
  },
  {
    "term": "Metric sample",
    "definition": "A single value for a metric in a test run. For example, the value of `http_req_duration` from a single VU request.",
    "usage_note": null
  },
  {
    "term": "Reliability",
    "definition": "The probability that a system under test performs as intended.",
    "see_also": null
  },
  {
    "term": "Requests per second",
    "definition": "The rate at which a test sends requests to the system under test.",
    "see_also": null
  },
  {
    "term": "Saturation",
    "definition": "A condition when a system reaches full resource utilization and can handle no additional requests.",
    "see_also": null,
    "usage_note": null
  },
  {
    "term": "Scenario",
    "definition": "An object in a test script that makes in-depth configurations to how VUs and iterations are scheduled. With scenarios, your test runs can model diverse traffic patterns.",
    "see_also": ["[Scenarios reference](/using-k6/scenarios)"]
  },
  {
    "term": "Scenario executor",
    "definition": "A property of a [scenario](#scenario) that configures VU behavior.\n: You can use executors to configure whether iterations are shared between VUs or run per VU, and whether the VU concurrency is constant or changing.",
    "see_also": ["[Executor reference](/using-k6/scenarios/executors/)"]
  },
  {
    "term": "Smoke test",
    "definition": "A regular load test configured for minimum load. Smoke tests verify that the script has no errors and that the system under test can handle a minimal amount of load.",
    "see_also": ["[Smoke Testing](/test-types/smoke-testing)"]
  },
  {
    "term": "Soak test",
    "definition": "A test that tries to uncover performance and reliability issues stemming from a system being under pressure for an extended duration.",
    "see_also": ["[Soak Testing](/test-types/soak-testing)"]
  },
  {
    "term": "Stability",
    "definition": "A system under test’s ability to withstand failures and errors."
  },
  {
    "term": "Stress test",
    "definition": "A test that assesses the availability and stability of the system under heavy load.",
    "see_also": ["[Stress Testing](/test-types/stress-testing)"]
  },
  {
    "term": "System under test",
    "definition": "The software that the load test tests. This could be an API, a website, infrastructure, or any combination of these."
  },
  {
    "term": "Test run",
    "definition": "An individual execution of a test script over all configured iterations.",
    "see_also": ["[Running k6](/getting-started/running-k6)"],
    "usage_note": "Prefer *run* over *execution*."
  },
  {
    "term": "Test concurrency",
    "definition": "In k6 Cloud, the number of tests running at the same time."
  },
  {
    "term": "Test duration",
    "definition": "The length of time that a test runs. When duration is set as an option, VU code runs for as many iterations as possible in the length of time specified.",
    "see_also": ["[Duration option reference](/using-k6/k6-options/reference/#duration)"]
  },
  {
    "term": "Test script",
    "definition": "The actual code that defines how the test behaves and what requests it makes, along with all (or at least most) configuration needed to run the test.",
    "see_also": ["[Single Request example](/examples/single-request)."]
  },
  {
    "term": "Threshold",
    "definition": "A pass/fail criterion that evaluates whether a metric reaches a certain value. Testers often use thresholds to codify SLOs.",
    "see_also": ["[Threshold reference](k6.io/docs/using-k6/thresholds)"],
    "usage_note": "When using in conjunction with Grafana, be careful to distinguish k6 Thresholds from Grafana thresholds, which configure colors in a graph."
  },
  {
    "term": "Throughput",
    "definition": "The rate of successful message delivery. In k6, throughput is measured in requests per second."
  },
  {
    "term": "Virtual user",
    "definition": "*(Or VU)*. The simulated users that run separate and concurrent iterations of your test script.",
    "see_also": ["[The VU option](/using-k6/k6-options/reference#vus)"]
  }
]
```

This JSON file gets auto-turned into a Gatsby page with a description list by the code below. It isn't beautiful, but I've used it at least 10 times:
make-gloss.js
```javascript
"use strict";
const fs = require("fs");

const rawdata = fs.readFileSync("gloss.json");
const words = JSON.parse(rawdata);

const frontmatter = `---
title: Glossary
excerpt: 'A list of technical terms commonly used when discussing k6, with definitions.'
---

What we talk about when we talk about k6.

In discussion about k6, some terms have a precise, technical meaning. If a certain term in these docs confuses you, consult this list for a definition.
`;

let gloss = ""; // linked index of all terms
let list = ""; // the description list itself

function makeGloss(words) {
  for (let i = 0; i < words.length; i++) {
    // Anchor hashes like #data-correlation
    const hash = words[i].term.replaceAll(" ", "-").toLowerCase();
    gloss += `- [${words[i].term}](#${hash})\n`;

    list += `${words[i].term}\n: ${words[i].definition}`;
    if (words[i].see_also) {
      list += ` ${words[i].see_also.join(", ")}\n\n`;
    } else {
      list += `\n\n`;
    }
  }
}

makeGloss(words);
// The output path is a guess; the original snippet was truncated here.
fs.writeFileSync("glossary.md", frontmatter + gloss + "\n" + list);
```
If I'm not mistaken, the recent redis docs were also generated programmatically on the writer's local machine (cc @oleiade).
If we were to implement this, it'd probably be best to start out small with the Glossary. Then perhaps we could think about extending it to the API docs. That would be a long effort, but it would perhaps make programmers happier to write references, and it would remove all discrepancies in structure (of which there are currently many).
- [ ] Reading about MDX and YAML templates