SUPERSEDED!!! shinycoreci-apps

:triangular_flag_on_post::triangular_flag_on_post:
The functionality of rstudio/shinycoreci-apps is being merged into rstudio/shinycoreci
Please do not use this repo
:triangular_flag_on_post::triangular_flag_on_post:


Tools for manual and automated testing of Shiny apps.

Running manual tests

First, clone the shinycoreci-apps repo. Next, install the shinycoreci package with remotes::install_github("rstudio/shinycoreci"). You may need to add your GITHUB_PAT to your .Renviron file (see ?usethis::edit_r_environ and ?usethis::browse_github_pat).
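A minimal sketch of that GITHUB_PAT setup (the token value is a placeholder you generate on GitHub yourself):

# Open ~/.Renviron in an editor and add a line such as:
#   GITHUB_PAT=<your personal access token>
usethis::edit_r_environ()

# Restart R, then confirm the token is visible to the session
nchar(Sys.getenv("GITHUB_PAT")) > 0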

All testing functions may be run from within the RStudio IDE (except for the R Terminal / R Gui test targets).

IDE Example

# From the root folder of the rstudio/shinycoreci-apps repo
remotes::install_github("rstudio/shinycoreci")
shinycoreci::test_in_ide()

View and manage automated test results

To view and manage test results, first make sure your working directory is the shinycoreci-apps repo.

Use shinycoreci::view_test_results() to obtain an overview of the most recent test runs (it launches a Shiny app that summarizes the results).

If you see failures that indicate a difference in shinytest baselines, you may just need to view and approve the differences.

To obtain and correct the shinytest differences, use shinycoreci::fix_all_gha_branches(). This function walks you through the steps needed to update all shinytest failures and merge in the latest information from each gha- branch. To approve the differences, click the "Update & click" button; to reject them, click the "Quit" button.

If you receive the error "No information found for sha: ABC1234. Do you have a valid sha?", you may have to provide the git sha value directly: shinycoreci::fix_all_gha_branches(sha = "XYZ5678").
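Put together, a typical fix-up session might look like the following sketch (the sha argument is only needed as the fallback just described):

# Review the most recent automated runs
shinycoreci::view_test_results()

# Walk through the shinytest baseline updates for each gha- branch
shinycoreci::fix_all_gha_branches()

# Fallback: target a specific run when the sha lookup fails
shinycoreci::fix_all_gha_branches(sha = "XYZ5678")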

In the event that the testing failures cannot all be addressed by updating shinytest baselines, have a look at the GitHub Actions build log and keep the following troubleshooting tips in mind:

Troubleshooting test failures

  1. Failures on old versions of R

If a testing app passes on recent version(s) of R, but fails in a surprising way on old R version(s), it may be due to an old R package version. In that case, modify the tests to run only if a sufficient version of the relevant package is available (for example).
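As a rough sketch of that guard in a shinytest script (the package, version, path, and inputs below are illustrative):

# Only exercise the app when the installed package is new enough
# to have the behavior being snapshotted
if (packageVersion("shiny") >= "1.5.0") {
  app <- shinytest::ShinyDriver$new("../../")
  app$setInputs(n = 5)
  app$snapshot()
  app$stop()
}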

  2. Other failures that can't be replicated locally

Other surprising failures are often the result of timing issues (which can be difficult, if not impossible, to replicate locally). If your testing app uses dynamic UI and/or doesn't have proper input/output bindings, shinytest probably needs to know how long to wait for value(s) to update (in this case, use waitForValue(), for example). Somewhat similarly, when checking DOM values with shinyjster, you may need to wait for an update to DOM element(s) before checking value(s), in which case you can write a recursive function that keeps calling itself until the DOM is ready (for example).
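For the shinytest case, the waiting pattern might look roughly like this (the input/output names are made up for illustration):

app <- shinytest::ShinyDriver$new("../../")

# Trigger the dynamic UI, then wait for the output to actually update
app$setInputs(dataset = "mtcars")
app$waitForValue("summary", iotype = "output")

app$snapshot()
app$stop()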

  3. All of the Windows shinytest plots have failed

When Windows virtual images update on GitHub Actions, the graphics device may not behave exactly like the prior graphics device. Check to see if your Windows Image Version has updated. (To view this, inspect the top lines in ./apps/sys-info-win-XX.txt for a change.) You should accept the updated shinytest output for the build with the higher Image Version.

Contribute a testing app

When contributing a testing app, try to do the following:

Note that shinycoreci supports 3 different testing frameworks and provides helper functions to generate the file scaffolding for each case (that is, use_tests_shinytest(), use_tests_shinyjster(), and use_tests_testthat()):

  1. shinytest: primarily useful for taking screenshots of shiny output binding(s) (before or after interacting with shiny input bindings). See here for an example (note that shinytest::recordTest() can be used to generate shinytest testing scripts).

  2. shinyjster: primarily useful for asserting certain expectations about the DOM (in JavaScript). See here for an example (note that shinyjster::shinyjster_js() needs to be placed in the UI and shinyjster::shinyjster_server(input, output) needs to be placed in the server).

  3. testthat: primarily useful in combination with shiny::testServer() to test server-side reactive logic of the application.

    • See here for an example.
    • Call shinycoreci::use_tests_testthat(app_dir) to provide the file scaffolding necessary to run the testthat tests (a minimal sketch of such a test follows below).
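As a rough illustration of the shiny::testServer() approach, a test along these lines could live in the scaffolded tests/testthat/ directory (the server function and input/output names here are made up):

library(shiny)
library(testthat)

# A toy server function: doubles a numeric input
server <- function(input, output, session) {
  doubled <- reactive(input$x * 2)
  output$result <- renderText(doubled())
}

testServer(server, {
  session$setInputs(x = 21)
  expect_equal(doubled(), 42)
  expect_equal(output$result, "42")
})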

Pruning old git branches

To help us store and manage the test results, git branches are automatically created for each test run. These branches are automatically removed on GitHub after 1 week of no activity, but you may want to periodically remove them on your local machine as well:

git fetch --prune

What workflows are available?

This repo contains several GitHub Actions workflows:

The runTests workflow runs automatically on every code change to shinycoreci-apps as well as every night (around midnight UTC). The other workflows may be triggered via shinycoreci::trigger_docker() and shinycoreci::trigger_deploy().
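For example, from an R session (a sketch; this assumes the default arguments to each trigger function are sufficient):

# Trigger the Docker workflow
shinycoreci::trigger_docker()

# Trigger the Deploy workflow
shinycoreci::trigger_deploy()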

Managing R package dependencies

"Core" shinycoreci-apps R package dependencies come from shinycoreci's DESCRIPTION file; and so, that file may be modified to test different versions of different packages in the shinyverse.

Application-specific R package dependencies are automatically inferred (and installed at run-time) using renv::dependencies().
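For instance, to see what renv would discover for a single app (the app path here is purely illustrative):

# List the packages referenced by one app's code
deps <- renv::dependencies("apps/001-hello")
unique(deps$Package)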

Note: packages discovered by renv::dependencies() are installed from CRAN, not from GitHub remotes.