`doc-pipeline` converts DocFX YAML to HTML. You can run it locally, but it is set up to run periodically to generate new docs in production or dev.

`doc-pipeline` uses docker-ci-helper to facilitate running Docker. See the instructions below for how to test and run locally.

`doc-pipeline` also depends on docuploader to compress and upload a directory to Google Cloud Storage.

`doc-pipeline` is only for converting DocFX YAML to HTML suitable for cloud.google.com. You can generate DocFX YAML using language-specific generators.
Here is how to use `doc-pipeline`. All of the steps except the credential setup should be automated/scripted as part of the release process.

1. Set up credentials for uploading to the staging bucket. If you're running on Kokoro, fetch the docuploader service account key with the following configuration:

   ```
   before_action {
     fetch_keystore {
       keystore_resource {
         keystore_config_id: 73713
         keyname: "docuploader_service_account"
       }
     }
   }
   ```
2. Generate the DocFX YAML. Usually, this is done as part of the library release process. Include a `toc.yml` file. However, do not include a `docfx.json` file because `doc-pipeline` generates one for you.
3. Put the `.yml` files for the package in a single directory.
4. Create a `docs.metadata` file in the same directory as the YAML:

   ```
   docuploader create-metadata
   ```

   Add flags to specify the language, package, version, etc. See docuploader for the full list of flags (a sketch invocation follows this list).
5. Upload the YAML with the `docfx` prefix:

   ```
   docuploader upload --staging-bucket docs-staging-v2-dev --destination-prefix docfx .
   ```

   The buckets are `docs-staging-v2` (production) and `docs-staging-v2-dev` (development). Use `-dev` until your HTML format is confirmed to be correct.
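For step 4, the invocation below is a minimal sketch of `create-metadata` with flags filled in. The flag names (`--name`, `--version`, `--language`) and values are illustrative assumptions; run `docuploader create-metadata --help` for the authoritative flag list.

```
# Sketch only: flag names and values are examples, not a canonical invocation.
# Run from the directory that contains the DocFX YAML.
docuploader create-metadata \
  --name my-pkg \
  --version 1.0.0 \
  --language dotnet
```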
`doc-pipeline` periodically runs, generates the HTML for new `docfx-*` tarballs, and uploads the resulting HTML to the same bucket. The HTML tarball has the same name as the DocFX tarball, except it doesn't have the `docfx` prefix. For example, `docfx-nodejs-scheduler-2.1.1.tar.gz` becomes `nodejs-scheduler-2.1.1.tar.gz`.

DocFX supports cross references using xrefmap files. Each file maps a UID to the URL for that object. The xref map files are automatically generated when DocFX generates docs. One generation job can refer to other xref map files to be able to link to those objects.
Here's how it works in `doc-pipeline`:

- Generated xref maps are automatically uploaded to the `xrefs` directory of the bucket. You can see them all using `gsutil ls gs://docs-staging-v2/xrefs`.
- There is an `xref-services` argument for `docuploader create-metadata` to refer to cross reference services.
- If your package depends on another `doc-pipeline` package, you need to configure it. Use the `xrefs` argument of `docuploader create-metadata` to specify the xref map files you need. Use the following format: `devsite://lang/library[@version]`. If no version is given, the SemVer latest is used. For example, `devsite://dotnet/my-pkg@1.0.0` would lead to the xref map at `gs://docs-staging-v2/xrefs/dotnet-my-pkg-1.0.0.tar.gz.yml`, and `devsite://dotnet/my-pkg` would get the latest version of `my-pkg` (see the sketch after this list).
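For example, a package that depends on another package's xref map might declare it when creating its metadata. This is a sketch: the `--xrefs` flag corresponds to the xrefs argument mentioned above, and the `devsite://` value is a made-up package.

```
# Sketch only: declare an xref dependency while creating docs.metadata.
# The devsite:// entry is illustrative; check
# `docuploader create-metadata --help` for how to pass multiple values.
docuploader create-metadata \
  --name my-pkg \
  --version 1.0.0 \
  --language dotnet \
  --xrefs devsite://dotnet/another-pkg@2.3.0
```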
`doc-pipeline` will then download and use the specified xref maps. If an xref map cannot be found, a warning is logged, but the build does not fail. Because of this, you can generate docs that depend on each other in any order. If the dependency doesn't exist yet, that's OK; the next regeneration will pick it up.

You can regenerate all HTML by setting `FORCE_GENERATE_ALL=true` when triggering the job.

You can regenerate the HTML for a single blob by setting `SOURCE_BLOB=docfx-lang-pkg-version.tgz` when triggering the job.

If you want to use a different bucket than the default, set `SOURCE_BUCKET`.
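As a sketch (assuming `generate.sh` reads these variables the same way the scheduled job does), a single-blob regeneration triggered from a local checkout might look like this; the bucket and blob names are placeholders:

```
# Sketch only: regenerate one blob from a specific bucket using the
# trampoline invocation shown in the local-run instructions below.
SOURCE_BUCKET=my-bucket \
SOURCE_BLOB=docfx-nodejs-scheduler-2.1.1.tar.gz \
TRAMPOLINE_BUILD_FILE=./generate.sh \
TRAMPOLINE_IMAGE=gcr.io/cloud-devrel-kokoro-resources/docfx \
TRAMPOLINE_DOCKERFILE=docfx/Dockerfile \
ci/trampoline_v2.sh
```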
You can delete old tarballs using the delete-blob job. Trigger the job with the `BLOB_TO_DELETE` environment variable set to the full name of the blob you want to delete, for example `gs://my-bucket/lang-library-1.0.0.tar.gz`. Be sure to delete both the `docfx-` and non-`docfx-` tarballs! Also, after deleting the tarball, be sure to delete the content on the site; it is not automatically deleted.
See `.trampolinerc` for the canonical list of relevant environment variables.

- `TESTING_BUCKET`: Set when running tests. See the Testing section.
- `SOURCE_BUCKET`: The bucket to use for regeneration. See Running locally.
- `SOURCE_BLOB`: A single blob to regenerate. Only the blob name; do not include `gs://` or the bucket.
- `LANGUAGE`: Regenerates all docs under the specified language. For example: `LANGUAGE=dotnet`.
- `FORCE_GENERATE_ALL`: Set to `true` to regenerate all docs.
- `FORCE_GENERATE_LATEST`: Set to `true` to regenerate all latest versions of docs.
- `BLOB_TO_DELETE`: Blob to delete from storage. Include the full bucket and object name. For example: `gs://my-bucket/docfx-python-test-tarball.tar.gz`.
Formatting is done with black and style is verified with flake8. You can check everything is correct by running:

```
black --check docpipeline tests
flake8 docpipeline tests
```

If a file is not properly formatted, you can fix it with:

```
black docpipeline tests
```
To run the tests:

1. Create a GCS bucket for testing (for example, `my-bucket`).
2. Copy a service account key with access to `my-bucket` to `/dev/shm/73713_docuploader_service_account`.
3. Run the tests, replacing `my-bucket` with your development bucket:

   ```
   TEST_BUCKET=my-bucket TRAMPOLINE_BUILD_FILE=./ci/run_tests.sh TRAMPOLINE_IMAGE=gcr.io/cloud-devrel-kokoro-resources/docfx TRAMPOLINE_DOCKERFILE=docfx/Dockerfile ci/trampoline_v2.sh
   ```

To update the goldens, set the `UPDATE_GOLDENS=1` environment variable.

You can also run the tests directly:

```
pip install -e .
pip install pytest
black --check tests
flake8 tests
GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json TEST_BUCKET=my-bucket pytest tests
```

To update the goldens, use the `--update-goldens` flag:

```
pytest --update-goldens tests
```
To run `doc-pipeline` locally against a directory of DocFX YAML:

1. Create a directory under `doc-pipeline` (for example, `my-dir`).
2. Add a `docs.metadata` file in `my-dir`. You can copy one from here.
3. Copy the `.yml` files for one package to `my-dir`.
4. Run the following command, replacing `my-dir` with your directory name:

   ```
   INPUT=my-dir TRAMPOLINE_BUILD_FILE=./generate.sh TRAMPOLINE_IMAGE=gcr.io/cloud-devrel-kokoro-resources/docfx TRAMPOLINE_DOCKERFILE=docfx/Dockerfile ci/trampoline_v2.sh
   ```

   The command runs `docfx build` over the package in `my-dir`, and places the resulting HTML inside a subdirectory in `my-dir`. The subdirectory is named after the package name found in the metadata.

You can also regenerate a docfx tarball from a bucket:

1. Copy a `docfx-*.tgz` file to your bucket. For example:

   ```
   gsutil cp gs://docs-staging-v2-dev/docfx-nodejs-scheduler-2.1.1.tar.gz gs://my-bucket
   ```

2. Copy a service account key with access to `my-bucket` to `/dev/shm/73713_docuploader_service_account`.
3. Run the following command, replacing `my-bucket` with your development bucket:

   ```
   SOURCE_BUCKET=my-bucket TRAMPOLINE_BUILD_FILE=./generate.sh TRAMPOLINE_IMAGE=gcr.io/cloud-devrel-kokoro-resources/docfx TRAMPOLINE_DOCKERFILE=docfx/Dockerfile ci/trampoline_v2.sh
   ```

   The command downloads the tarball, runs `docfx build`, and uploads the result.

4. Download the resulting `.tgz` file, unpack it, inspect a few files (you should see `<html devsite="">` at the top), and try staging it to confirm it looks OK, as sketched below.
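A quick way to do that inspection, assuming a bucket of `my-bucket` and the example tarball name from above (both placeholders):

```
# Sketch only: download the generated HTML tarball and spot-check a few files.
# Replace the bucket and tarball name with your own.
gsutil cp gs://my-bucket/nodejs-scheduler-2.1.1.tar.gz .
mkdir out && tar -xzf nodejs-scheduler-2.1.1.tar.gz -C out
# Each generated page should start with <html devsite="">.
grep -rl 'html devsite' out | head
```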