push-based / user-flow

📦 Combine Chrome tooling like Lighthouse user flows and DevTools Recorder scripts in your CI

Set up sandbox to check flakiness and test self-hosted GitHub Action #201

Closed ChristopherPHolder closed 1 year ago

ChristopherPHolder commented 1 year ago

Motivation

Document the variance in performance scores between Lighthouse User Flow runs on GitHub runners.

To improve a user's ability to draw conclusions about a performance-impacting change:

Lighthouse has documented that the performance score can differ between audits of the same page. They have also identified and documented the main sources of variance and how to mitigate them.

| Source | Impact | Typical End User | PageSpeed Insights | Controlled Lab |
| --- | --- | --- | --- | --- |
| Page nondeterminism | High | LIKELY | LIKELY | LIKELY |
| Local network variability | High | LIKELY | UNLIKELY | UNLIKELY |
| Tier-1 network variability | Medium | POSSIBLE | POSSIBLE | POSSIBLE |
| Web server variability | Low | LIKELY | LIKELY | LIKELY |
| Client hardware variability | High | LIKELY | UNLIKELY | UNLIKELY |
| Client resource contention | High | LIKELY | POSSIBLE | UNLIKELY |
| Browser nondeterminism | Medium | CERTAIN | CERTAIN | CERTAIN |

Implementation

TODO - Improve this description

Using a sandbox environment, we will run a user-flow audit repeatedly on a schedule in GitHub Actions. The results will be compiled and used to generate charts. These will help users understand how much results can vary with a similar setup.

Sandbox Environment Setup

Questions & Answers

  1. What source of variance do we want to document?
  2. How are we mitigating the impact of other sources of variance?
  3. How will this sandbox environment be generated?

Note: Regarding the choice of target application

  4. Where will the sandbox be located?

GitHub Workflow Configuration

Implementation details:
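A minimal sketch of what this workflow could look like, assuming a scheduled trigger; the file name, cron interval, Node version, and commit message are assumptions for illustration, not decisions made in this issue:

```yaml
# .github/workflows/variance-sandbox.yml — file name is a placeholder
name: variance-sandbox

on:
  schedule:
    - cron: '*/15 * * * *' # assumed 15-minute interval, matching the sample data below
  workflow_dispatch: {}

permissions:
  contents: write # needed to commit the compiled report back to the repo

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm ci
      # Run the user-flow audit with the .user-flowrc.json shown in the next section
      - run: npx user-flow
      # Compile and clean up the raw reports (see the script described below)
      - run: node ./scripts/compile-and-cleanup.js
      # Commit the updated compiled report
      - run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add data/sample-1.json
          git commit -m "chore: update variance sample" || echo "nothing to commit"
          git push
```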

Data Collection & Compilation

The data will be collected by running `npx user-flow` in a GitHub Action with the following configuration in `.user-flowrc.json`:

```json
{
  "collect": {
    "url": "http://localhost:4173/",
    "ufPath": "./user-flows",
    "serveCommand": "npm run serve",
    "awaitServeStdout": "4173",
    "config": {
      "settings": {
        "onlyCategories": ["performance"]
      }
    }
  },
  "persist": { "outPath": "./measures", "format": ["json"] },
  "assert": {}
}
```

After the user-flow audit completes, a script will be run to compile and clean up the data. The script will perform the following actions:

The script will be located at `~/scripts/compile-and-cleanup.js`

The compiled report will be stored in a file at `~/data/sample-1.json`. The file will contain an array of the `fetchTime` and performance score of all the audit runs.

Example compiled results object

```json
[
  {
    "fetchTime": "2022-04-21T21:00:00.000Z",
    "performance": 0.93
  },
  {
    "fetchTime": "2022-04-21T21:15:00.000Z",
    "performance": 0.94
  },
  {
    "fetchTime": "2022-04-21T21:30:00.000Z",
    "performance": 0.92
  }
]
```
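A minimal sketch of what `compile-and-cleanup.js` could do, assuming the raw user-flow reports are written as JSON files to `./measures` (the `persist.outPath` above) and expose `fetchTime` and `categories.performance.score` per step, as Lighthouse reports do; the exact report shape is an assumption:

```js
// scripts/compile-and-cleanup.js (sketch)
const { readdirSync, readFileSync, writeFileSync, rmSync, existsSync, mkdirSync } = require('fs');
const { join } = require('path');

const MEASURES_DIR = './measures';  // matches "persist.outPath" in .user-flowrc.json
const DATA_FILE = './data/sample-1.json';

// Load the previously compiled samples, if any
const compiled = existsSync(DATA_FILE) ? JSON.parse(readFileSync(DATA_FILE, 'utf8')) : [];

// Read every raw report and keep only fetchTime and the performance score
for (const file of readdirSync(MEASURES_DIR).filter((f) => f.endsWith('.json'))) {
  const report = JSON.parse(readFileSync(join(MEASURES_DIR, file), 'utf8'));
  // User-flow reports contain one Lighthouse result per step; only the first step
  // is sampled here, and the shape is an assumption
  const result = Array.isArray(report.steps) ? report.steps[0].lhr : report;
  compiled.push({
    fetchTime: result.fetchTime,
    performance: result.categories.performance.score,
  });
}

// Persist the compiled report and clean up the raw measures
if (!existsSync('./data')) mkdirSync('./data');
writeFileSync(DATA_FILE, JSON.stringify(compiled, null, 2));
rmSync(MEASURES_DIR, { recursive: true, force: true });
```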

The results will be saved in the repo by committing the updated compiled report file as a step in the CI job.

Results Documentation

TODO

  • Explain what the final outcome will be
  • What charts will be displayed
    • How the charts will be generated

TODOS

TODO - Add more steps based on information above

push-based-bot commented 1 year ago

We also have to reference the outcome, including a cool image, in the action repo.