Codename: "Bridge"
quay.io/openshift/origin-console
The console is a more friendly kubectl in the form of a single-page webapp. It also integrates with other services like monitoring, chargeback, and OLM. Among other things that go on behind the scenes, it proxies the Kubernetes API under /api/kubernetes.
This project uses Go modules, so you should clone the project outside of your GOPATH. To build both the frontend and backend, run:
./build.sh
Backend binaries are output to ./bin.
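If you only need one side, the two halves that ./build.sh ties together can be built on their own. A sketch, assuming the backend script referenced in the builder-image notes later in this README and the standard frontend tasks from the frontend section:

```
# Backend only (outputs to ./bin):
./build-backend.sh

# Frontend only:
cd frontend && yarn install && yarn run build
```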
The following instructions assume you have an existing cluster you can connect to. OpenShift 4.x clusters can be installed using the OpenShift Installer. More information about installing OpenShift can be found at https://try.openshift.com/. You can also use CodeReady Containers for local installs, or native Kubernetes clusters.
For local development, you can disable OAuth and run bridge with an OpenShift user's access token. If you've installed OpenShift 4.x, run the following commands to log in as the kubeadmin user and start a local console for development. Make sure to replace /path/to/install-dir with the directory you used to install OpenShift.
oc login -u kubeadmin -p $(cat /path/to/install-dir/auth/kubeadmin-password)
source ./contrib/oc-environment.sh
./bin/bridge
The console will be running at localhost:9000.
If you don't have kubeadmin
access, you can use any user's API token,
although you will be limited to that user's access and might not be able to run
the full integration test suite.
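For example, a minimal sketch of running bridge with a non-kubeadmin token (developer is a hypothetical user; contrib/oc-environment.sh is assumed to pick up whichever user is currently logged in):

```
oc login -u developer                 # any user's credentials work
source ./contrib/oc-environment.sh    # assumed to read the current login's token and endpoints
./bin/bridge
```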
If you need to work on the backend code for authentication or you need to test
different users, you can set up authentication in your development environment.
Registering an OpenShift OAuth client requires administrative privileges for
the entire cluster, not just a local project. You must be logged in as a
cluster admin such as system:admin or kubeadmin.
To run bridge locally connected to an OpenShift cluster, create an
OAuthClient
resource with a generated secret and read that secret:
oc process -f examples/console-oauth-client.yaml | oc apply -f -
oc get oauthclient console-oauth-client -o jsonpath='{.secret}' > examples/console-client-secret
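If the OAuth flow misbehaves, it can help to sanity-check the client you just created; this is plain oc usage, not a required step:

```
# Confirm the OAuthClient exists and inspect its redirectURIs and secret:
oc get oauthclient console-oauth-client -o yaml
```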
If the CA bundle of the OpenShift API server is unavailable, fetch the CA
certificates from a service account secret. Due to upstream changes,
these service account secrets need to be created manually.
Otherwise, copy the CA bundle to examples/ca.crt:
oc apply -f examples/sa-secrets.yaml
oc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \
jq '.items[0].data."ca.crt"' -r | python -m base64 -d > examples/ca.crt
# Note: use "openssl base64" because the "base64" tool is different between mac and linux
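Following the note above, a sketch of the same pipeline decoding with openssl instead; the -A flag is needed because jq emits the base64 data on a single line:

```
oc get secrets -n default --field-selector type=kubernetes.io/service-account-token -o json | \
  jq '.items[0].data."ca.crt"' -r | openssl base64 -d -A > examples/ca.crt
```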
Finally run the console and visit localhost:9000:
./examples/run-bridge.sh
In order to enable the monitoring UI and see the "Observe" navigation item while running locally, you'll need to run the OpenShift Monitoring dynamic plugin alongside Bridge. To do so, follow these steps:

1. cd to the monitoring-plugin root dir
2. make install && make start-frontend
3. In another terminal, export BRIDGE_PLUGINS="monitoring-plugin=http://localhost:9001" and run Bridge (see the sketch after this list)
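A sketch of the Bridge side, assuming the local-development setup from earlier in this README (run it in a separate terminal while the plugin's dev server is up):

```
export BRIDGE_PLUGINS="monitoring-plugin=http://localhost:9001"
source ./contrib/oc-environment.sh
./bin/bridge
```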
Updating the tectonic-console-builder image is needed whenever there is a change in the build-time dependencies and/or Go versions. In order to update tectonic-console-builder to a new version, i.e. v27, follow these steps:

1. Update the tectonic-console-builder image tag (i.e. tectonic-console-builder:v27) in the files that reference it.
2. Run the ./push-builder.sh script to build and push the updated builder image to quay.io. Note: You can test the image using ./builder-run.sh ./build-backend.sh. To update the image on quay.io, you need edit permission to the quay.io/coreos/tectonic-console-builder repo.
3. Update the tectonic-console-builder image tag in the [openshift/release](https://github.com/openshift/release/blob/master/core-services/image-mirroring/supplemental-ci-images/mapping_supplemental_ci_images_ci) repository. Note: There could be a scenario where you have to add the new image reference in the "mapping_supplemental_ci_images_ci" file, i.e. to avoid CI downtime for an upcoming release cycle. Optional: Request an update of the rhel-8-base-nodejs-openshift-4.15 node builder if it doesn't match the node version in tectonic-console-builder.

If you want to use CodeReady Containers for local development, first make sure it is set up and the OpenShift cluster is started.
To log in to the cluster's API server, you can use the following command:
oc login -u kubeadmin -p $(cat ~/.crc/machines/crc/kubeadmin-password) https://api.crc.testing:6443
… or, alternatively, use the CRC daemon UI (Copy OC Login Command --> kubeadmin) to get the cluster-specific command.
Finally, prepare the environment, and run the console:
source ./contrib/environment.sh
./bin/bridge
If you have a working kubectl
on your path, you can run the application with:
export KUBECONFIG=/path/to/kubeconfig
source ./contrib/environment.sh
./bin/bridge
The script in contrib/environment.sh
sets sensible defaults in the environment, and uses kubectl
to query your cluster for endpoint and authentication information.
To configure the application to run by hand (or if environment.sh doesn't work for some reason), you can manually provide a Kubernetes bearer token with the following steps.
First get the secret ID that has a type of kubernetes.io/service-account-token
by running:
kubectl get secrets
then get the secret contents:
kubectl describe secrets/<secret-id-obtained-previously>
Use this token value to set the BRIDGE_K8S_AUTH_BEARER_TOKEN
environment variable when running Bridge.
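For example, a sketch that strings the two kubectl steps above into one shell session; it simply grabs the first service-account token secret in the current namespace, which may not be the one you want:

```
# Pick a service-account token secret (same field selector used elsewhere in this README):
SECRET=$(kubectl get secrets --field-selector type=kubernetes.io/service-account-token -o name | head -n 1)

# Decode its token and hand it to bridge:
export BRIDGE_K8S_AUTH_BEARER_TOKEN=$(kubectl get "$SECRET" -o jsonpath='{.data.token}' | python -m base64 -d)
./bin/bridge
```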
In OpenShift 4.x, the console is installed and managed by the console operator.
See CONTRIBUTING for workflow & convention details.
See STYLEGUIDE for file format and coding style guide.
Dependencies for frontend development: go 1.18+, nodejs/yarn, kubectl
All frontend code lives in the frontend/
directory. The frontend uses node, yarn, and webpack to compile dependencies into self-contained bundles which are loaded dynamically at run time in the browser. These bundles are not committed to git. Tasks are defined in package.json
in the scripts
section and are aliased to yarn run <cmd>
(in the frontend directory).
To install the build tools and dependencies:
cd frontend
yarn install
You must run this command once, and every time the dependencies change. node_modules
are not committed to git.
The following build task will watch the source code for changes and compile automatically.
If you would like to disable hot reloading, set the environment variable HOT_RELOAD to false.
yarn run dev
If changes aren't detected, you might need to increase fs.inotify.max_user_watches. See https://webpack.js.org/configuration/watch/#not-enough-watchers. If you need to increase your watchers, it's common to see multiple errors beginning with Error from chokidar.
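On Linux, one common way to raise the watcher limit looks like the following; it is not specific to this project, and 524288 is just a frequently used value:

```
# Check the current limit:
cat /proc/sys/fs/inotify/max_user_watches

# Raise it now and persist it across reboots:
sudo sysctl fs.inotify.max_user_watches=524288
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf
```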
Note: ensure yarn run dev
has finished its initial build before visiting http://localhost:9000, otherwise ./bin/bridge
will stop running.
Run all unit tests:
./test.sh
Run backend tests:
./test-backend.sh
Run frontend tests:
./test-frontend.sh
To debug unit tests:

1. cd frontend; yarn run build
2. Add debugger; statements to any unit test
3. yarn debug-test route-pages
4. Execution will break on any debugger; statements

Cypress integration tests are implemented in Cypress.io.
To install Cypress:
cd frontend
yarn run cypress install
Launch Cypress test runner:
cd frontend
oc login ...
yarn run test-cypress-console
This will launch the Cypress Test Runner UI in the console
package, where you can run one or all Cypress tests.
Important: when testing with authentication, set the BRIDGE_KUBEADMIN_PASSWORD environment variable in your shell.
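For example, reusing the install directory placeholder from the login step earlier in this README:

```
export BRIDGE_KUBEADMIN_PASSWORD=$(cat /path/to/install-dir/auth/kubeadmin-password)
```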
An alternate way to execute Cypress tests is via frontend/integration-tests/test-cypress.sh, which takes a -p <package> parameter to allow execution in different packages. It can also run Cypress tests in the Test Runner UI or in headless mode:
console/frontend > ./integration-tests/test-cypress.sh
Runs Cypress tests in Test Runner or headless mode
Usage: test-cypress [-p] <package> [-s] <filemask> [-h true]
'-p <package>' may be 'console', 'olm' or 'devconsole'
'-s <specmask>' is a file mask for spec test files, such as 'tests/monitoring/*'. Used only in headless mode when '-p' is specified.
'-h true' runs Cypress in headless mode. When omitted, launches Cypress Test Runner
Examples:
./integration-tests/test-cypress.sh // displays this help text
./integration-tests/test-cypress.sh -p console // opens Cypress Test Runner for console tests
./integration-tests/test-cypress.sh -p olm // opens Cypress Test Runner for OLM tests
./integration-tests/test-cypress.sh -h true // runs all packages in headless mode
./integration-tests/test-cypress.sh -p olm -h true // runs OLM tests in headless mode
./integration-tests/test-cypress.sh -p console -s 'tests/crud/*' -h true // runs console CRUD tests in headless mode
When running in headless mode, Cypress will test using its integrated Electron browser. If you want to use Chrome or Firefox instead, set the BRIDGE_E2E_BROWSER_NAME environment variable in your shell to chrome or firefox.
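For example, to run the console package headless in Firefox:

```
export BRIDGE_E2E_BROWSER_NAME=firefox
./integration-tests/test-cypress.sh -p console -h true
```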
More information on Console's Cypress usage
More information on DevConsole's Cypress usage
The end-to-end tests run against pull requests using ci-operator. The tests are defined in this manifest in the openshift/release repo and were generated with ci-operator-prowgen.
CI runs the test-prow-e2e.sh script, which runs frontend/integration-tests/test-cypress.sh.
test-cypress.sh runs all Cypress tests, in all 'packages' (console, olm, and devconsole), in headless mode via:

test-cypress.sh -h true

For more information on test-cypress.sh usage, please see Execute Cypress in different packages.
See INTERNATIONALIZATION for information on our internationalization tools and guidelines.
Once you have made changes locally, these instructions will allow you to push changes to an OpenShift cluster for others to review. This involves building a local image, pushing the image to an image registry, then updating the OpenShift cluster to pull the new image.
1. Build a local image: docker build -t <your-image-name> <path-to-repository | url>. For example:
   docker build -t quay.io/myaccount/console:latest .
2. Push the image to an image registry: docker push <your-image-name>. Make sure docker is logged into your image registry! For example:
   docker push quay.io/myaccount/console:latest
3. Put the console operator in an unmanaged state:
   oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Unmanaged" } }' --type=merge
4. Update the console Deployment with the new image:
   oc set image deploy console console=quay.io/myaccount/console:latest -n openshift-console
5. Wait for the rollout to finish:
   oc rollout status -w deploy/console -n openshift-console
You should now be able to see your development changes on the remote OpenShift cluster!
When done, you can put the console operator back in a managed state to remove the custom image:
oc patch consoles.operator.openshift.io cluster --patch '{ "spec": { "managementState": "Managed" } }' --type=merge
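Either way, you can double-check which image the console Deployment is currently running; this is plain oc usage, shown only as a sanity check:

```
oc get deploy console -n openshift-console \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```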
Dependencies should be pinned to an exact semver, sha, or git tag (eg, no ^).
Whenever making vendor changes:

1. Commit everything except vendor/ (eg, server: add x feature)
2. Make a second commit with only vendor/ (eg, vendor: revendor)

Adding new or updating existing backend dependencies (a worked sketch follows this list):

1. Edit the go.mod file to the desired version (most likely a git hash)
2. Run go mod tidy && go mod vendor
3. Verify the update was successful: go.sum will have been updated to reflect the changes to go.mod, and the package will have been updated in vendor/.
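For example, a sketch of bumping a single backend dependency to a specific commit; the module path and hash are placeholders:

```
# Pin the module in go.mod to the desired commit:
go get github.com/example/somelib@abc1234

# Re-vendor and confirm go.sum and vendor/ picked up the change:
go mod tidy && go mod vendor
git status go.sum vendor/
```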
Add new frontend dependencies:
yarn add <package@version>
Update existing frontend dependencies:
yarn upgrade <package@version>
To upgrade yarn itself, download a new yarn release from https://github.com/yarnpkg/yarn/releases, replace the release in frontend/.yarn/releases with the new version, and update yarn-path in frontend/.yarnrc.
Note that when upgrading @patternfly packages, we've seen in the past that it can cause the JavaScript heap to run out of memory, or the bundle to become too large if multiple versions of the same @patternfly package are pulled in. To increase efficiency, run the following after updating packages:
npx yarn-deduplicate --scopes @patternfly
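If the build still exhausts the JavaScript heap after deduplicating, a general Node workaround (not something this repo prescribes) is to give the process more headroom before building:

```
export NODE_OPTIONS=--max-old-space-size=4096
yarn run build
```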
We support the latest versions of the following browsers:
IE 11 and earlier are not supported.
The server provides oc
binaries from the quay.io/repository/openshift/origin-cli-artifacts image.
To build the downloads server, run:

./build-downloads.sh
After building, the server can be run directly with:
./bin/downloads --config-path=cmd/downloads/config/defaultArtifactsConfig.yaml
Alternatively, you can use the provided Dockerfile.downloads to build an image containing the server. Use the following command to build the Docker image:
docker build -f Dockerfile.downloads -t downloadsserver:latest .
Note: If you are running on macOS, you might need to pass the --platform linux/amd64
flag to the Docker build command. The origin-cli-artifacts image is not supported on macOS.
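Putting the note above together, the macOS variant of the build command would look like:

```
docker build --platform linux/amd64 -f Dockerfile.downloads -t downloadsserver:latest .
```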
To launch the server using the built image, you can run:
docker run -p 8081:8081 downloadsserver:latest
The console application automatically reports CSP violations to telemetry. This detection and
reporting logic attempts to parse a dynamic plugin name from the securitypolicyviolation event to
include in the data reported to telemetry. If a plugin name is not determined in
this way, then 'none' will be used. Additionally, violation reporting is throttled to prevent
spamming the telemetry service with repetitive data. Identical violations will not be
reported more than once a day.