Terarium is the client application for the ASKEM program, providing capabilities to create, modify, simulate, and publish machine-extracted models.
The Terarium client is built with TypeScript and Vue 3. The Terarium server is built with Java and Spring Boot. To run and develop Terarium, you will need the following prerequisites:
There are many ways (and package managers) to install these dependencies. We recommend using Homebrew on macOS:
```shell
brew install openjdk@17
brew install gradle
brew install node
brew install yarn
brew install ansible
```
You will need to have the Ansible ASKEM vault password in your home directory, in a file named `askem-vault-id.txt`. You can find this file in the ASKEM TERArium (Shared External) drive on Google Drive. This file is not included in the repository for security reasons; please contact the team for access.
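Once you have the password, you can place it in the expected location like so (the placeholder below stands in for the real password):

```shell
# Save the vault password (obtained from the team) to the expected location.
# Replace <vault-password> with the actual password.
echo "<vault-password>" > ~/askem-vault-id.txt
```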
There is a companion project to Terarium which handles spinning up the required external services. Depending on what you're doing, this can be configured to run all or some of the related services. If this is necessary, you will need to start the orchestration project before continuing (see that project's documentation).
To install client package dependencies, run the following command in the root directory:
```shell
yarn install
```
Running the client in dev mode requires two processes: the local client dev server and the TypeScript model generation.
To run both processes with a single command:
```shell
yarn dev
```
To run as individual processes:
```shell
yarn workspace hmi-client run dev                       # client development server
yarn workspace @uncharted/server-type-generator run dev # typescript model generator
```
To generate the TypeScript models with a single command:
```shell
yarn workspace @uncharted/server-type-generator run generateTypes
```
When running with `yarn dev`, the client connects to the server in the dev environment, enabling client-side development without the need to spin up the server locally.
To run the client while connecting to the server running locally, use the following command:
```shell
yarn local
```
If you don't intend to run the backend with a debugger, you can simply kick off the backend process via the `hmiServerDev.sh` script located in the root of this directory. It will handle decrypting secrets, starting the server, and re-encrypting secrets once you shut the server down. If you do intend to debug the backend, skip this step and see the debug instructions below.

```shell
./hmiServerDev.sh start local run
```
Note: to run everything locally you need to update your `/etc/hosts` file with the entry `127.0.0.1 minio`:

```shell
sudo sh -c 'grep -qF "127.0.0.1 minio" /etc/hosts || echo "127.0.0.1 minio" >> /etc/hosts'
```
If you are going to run the server using the IntelliJ / VSCode debugger, you can run just the required containers and handle decryption with the following command:

```shell
./hmiServerDev.sh start local
```
If you're looking to just decrypt or encrypt secrets, you can run:

```shell
./hmiServerDev.sh decrypt
```

or

```shell
./hmiServerDev.sh encrypt
```
If running `decrypt`, you'll see the contents of `application-secrets.properties.encrypted` decrypted to plain text; there should now be an `application-secrets.properties` file in the `packages/server/src/main/resources` directory. If running `encrypt`, the content of `application-secrets.properties` will be encrypted into the `*.encrypted` file.
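A typical editing session, using only the commands above, looks like this:

```shell
./hmiServerDev.sh decrypt
# ... edit packages/server/src/main/resources/application-secrets.properties ...
./hmiServerDev.sh encrypt
```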
To debug task runners locally, follow these steps to modify specific Docker Compose and configuration files:

1. Update `docker-compose-local.yml`.
2. Configure `scripts/docker-compose-taskrunner.yml`.
3. Edit `application-local.properties`: set the `addresses`, `username`, and `password` properties for the task runner you want to run locally (a hypothetical sketch follows this list).

Following these steps will enable you to debug the task runners in a local environment.
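As a rough sketch only: the property keys below are hypothetical stand-ins (consult `application-local.properties` for the real ones), and the values assume the task runners talk to a RabbitMQ broker on its default local port. It also assumes `application-local.properties` sits alongside `application-secrets.properties` in `packages/server/src/main/resources`; you may prefer to edit the file's existing entries directly rather than appending:

```shell
# Hypothetical key names; consult application-local.properties for the real ones.
# Values assume a local RabbitMQ broker on the default port 5672.
cat >> packages/server/src/main/resources/application-local.properties <<'EOF'
taskrunner.addresses=amqp://localhost:5672
taskrunner.username=guest
taskrunner.password=guest
EOF
```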
A functional `docker-compose-lean.yml` with all services necessary to run the Terarium backend can be spun up with the following:

```shell
docker compose --file containers/docker-compose-lean.yml pull
docker compose --file containers/docker-compose-lean.yml up --detach --wait
```
This will stand up a local Terarium server on port `3000` supporting all data service endpoints.
The Terarium backend uses OAuth 2.0 via Keycloak for user authentication. To make calls to the data services simpler, a `service-user` can be used instead by providing a basic auth credential. Please use the following basic auth credential when running `docker-compose-lean.yml`:

```shell
'Authorization: Basic YWRhbTphc2RmMUFTREY='
```
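For instance, a quick smoke test with `curl` might look like the following; the `/projects` path here is an assumption, so substitute any data service endpoint listed in the Swagger UI:

```shell
# Hypothetical endpoint; replace /projects with any data service endpoint.
curl -H 'Authorization: Basic YWRhbTphc2RmMUFTREY=' http://localhost:3000/projects
```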
If you prefer the JSON request/response keys to be `snake_case` rather than `camelCase`, include the following header in any data service request:

```shell
'X-Enable-Snake-Case'
```
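Combining the two headers, a sketch against the same hypothetical `/projects` endpoint (the source only specifies the header name, so it is sent without a value; the trailing semicolon is curl's syntax for an empty-valued header):

```shell
curl -H 'Authorization: Basic YWRhbTphc2RmMUFTREY=' \
     -H 'X-Enable-Snake-Case;' \
     http://localhost:3000/projects
```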
If integrating the `docker-compose-lean.yml` into another repo, the following files and directory structure are expected:

```
- scripts
  - init.sql                // initialize the postgres databases
  - realm
    - Terarium-realm.json   // keycloak realm definition
    - Terarium-users-0.json // keycloak user definitions
- docker-compose-lean.yml
```
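A sketch of scaffolding that layout in a fresh repo; the actual file contents (the realm definitions, `init.sql`, and the compose file itself) should be copied from this repository:

```shell
mkdir -p scripts/realm
# Placeholders only: copy the real contents of each file from this repository.
touch scripts/init.sql
touch scripts/realm/Terarium-realm.json scripts/realm/Terarium-users-0.json
touch docker-compose-lean.yml
```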
For convenience, a Swagger UI is provided to experiment with the API. With the server running locally (e.g., not via Docker), it can be accessed at http://localhost:3000/swagger-ui/index.html.
To authorize requests, click the `Authorize` button, then click `Authorize` in the modal that appears; there you can enter the credentials of the user you want to use to make requests.
Note: In order to "log out" from Swagger, you will need to clear your browser's cookies.
A Postman collection can be imported via the OpenAPI specification at http://localhost:3000/v3/api-docs. In Postman:

1. Click the `Import` button at the top left of the Postman window.
2. Click `Continue`, then `Import`; you should now have a new collection named `Terarium APIs`.
3. In the collection's `Authorization` tab, ensure the `Client ID` is `app` and the `Authorize using browser` checkbox is checked.

To run the client tests:

```shell
yarn workspace hmi-client run test
```

To run the server tests:

```shell
./gradlew test
```
Please see further documentation in the Terarium Contributing Guide.
This repository follows the Conventional Commits Specification, using CommitLint to validate the commit message on the PR. If the message does not conform to the specification, the PR will not be allowed to be merged.
This automatic check is done through the use of CI workflows on GitHub defined in commitlint.yaml. It uses the configuration from the Commitlint Configuration File.
Currently, the CI configuration is set to check only the PR message, as the commits are being squashed. If this ever changes and all commits need to be validated, the appropriate changes (as commented) in commitlint.yaml should be made.
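For reference, Conventional Commits messages follow the `type(scope): description` shape. Assuming the default commitlint rules, a PR title such as the following would pass the check; the exact allowed types and scopes depend on the Commitlint Configuration File:

```
feat(client): add model comparison view
```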