
AnimeCon Volunteer Manager

This repository contains the AnimeCon Volunteer Manager, through which the Crew, Crew Care, Festival Host and Steward teams will be managed.

The Volunteer Manager integrates with AnPlan, Google Cloud and the Google Vertex AI API for sourcing event information and implementing certain functionality. Data is stored in a MySQL (or MariaDB) database, which we access through the excellent ts-sql-query library.
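To give an impression of what database access looks like, here is a minimal ts-sql-query sketch; the connection class, table and query below are made up for illustration and do not reflect the Volunteer Manager's actual schema:

// Illustrative only: the connection, table and columns are hypothetical.
import { MariaDBConnection } from 'ts-sql-query/connections/MariaDBConnection';
import { Table } from 'ts-sql-query/Table';

class DBConnection extends MariaDBConnection<'DBConnection'> { }

const tUsers = new class TUsers extends Table<DBConnection, 'TUsers'> {
    userId = this.autogeneratedPrimaryKey('user_id', 'int');
    name = this.column('name', 'string');
    constructor() { super('users'); }
}();

// Queries are composed through a fully typed, fluent builder:
async function getUserName(db: DBConnection, userId: number) {
    return db.selectFrom(tUsers)
        .where(tUsers.userId.equals(userId))
        .selectOneColumn(tUsers.name)
        .executeSelectNoneOrOne();
}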

A purpose-built scheduler is included to execute one-off and repeating tasks outside of user requests, enabling the Volunteer Manager to optimise for immediate responsiveness.
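As a purely hypothetical sketch of the concept (the repository's actual task interface will differ), a task is a unit of work the scheduler invokes outside of the request path, either once or at a fixed interval:

// Hypothetical illustration; not the Volunteer Manager's actual API.
interface Task {
    // Invoked by the scheduler; repeating tasks are invoked again after
    // their configured interval, whereas one-off tasks run exactly once.
    execute(): Promise<void>;
}

// A made-up example: refreshing event information from AnPlan in the
// background, so that user-facing requests never wait on the import.
class ImportProgrammeTask implements Task {
    async execute(): Promise<void> {
        // ...fetch the latest programme and persist it to the database...
    }
}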

Building and deploying

Checking out the repository

We depend on a private volunteer-manager-timeline component, availability of which is restricted; talk to one of the maintainers of this repository if you need access. Releases of this package are published on GitHub's NPM registry (also see StackOverflow).

You will have to log in to GitHub's NPM registry to access the private package, using a personal access token that has the read:packages scope:

npm login --scope=@beverloo --registry=https://npm.pkg.github.com

Building a developer environment

Developing the AnimeCon Volunteer Manager follows Next.js best practices. The following commands are available and actively supported:

$ npm run build
$ npm run serve

It is recommended to run the build and test commands prior to committing a change.

The serve command spawns a local server that features live reload and advanced debugging capabilities. This is the recommended environment for development. In order for this to work well, you will need to copy .env.development.sample to .env.development and fill in the details of a MySQL database, as well as various encryption passwords. Each of those passwords needs to be at least 32 characters in length.

Building a production environment

The AnimeCon Volunteer Manager is deployed as a Docker image. One can be created by running the following command, which builds the image according to our Dockerfile:

$ npm run build-prod

Once the image has been created, you can run it locally through npm run serve-prod, provided Docker has been installed on your system. The production environment needs a completed .env.production file based on .env.production.sample.

Deploying to production

Deployment to the actual server is done through a GitHub Action that mimics these steps remotely. This action is accessible through the GitHub user interface.

Testing

Jest and unit tests

We use Jest for unit tests in the project. They primarily focus on server-side logic, as using them for client-side components is awkward at best; consider Playwright for those instead. Adding unit tests is easy and cheap, so they should be the default for anything involving non-trivial logic.

$ npm run test
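As an illustration, a unit test looks as follows; the function under test is a stand-in written for this example, not actual project code:

// Jest provides describe/it/expect as globals through the project's test setup.
function isValidEncryptionPassword(password: string): boolean {
    return password.length >= 32;  // mirrors the 32-character minimum mentioned above
}

describe('isValidEncryptionPassword', () => {
    it('rejects passwords shorter than 32 characters', () => {
        expect(isValidEncryptionPassword('too short')).toBe(false);
    });

    it('accepts passwords of at least 32 characters', () => {
        expect(isValidEncryptionPassword('a'.repeat(32))).toBe(true);
    });
});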

Playwright end-to-end tests

We use Playwright to enable end-to-end testing of the Volunteer Manager's critical user journeys. The full suite can be found in e2e/, where the cases are grouped by use case.

$ npm run test:e2e

Not everything is expected to be covered by end-to-end tests; their primary purpose is to act as a smoke test for important user journeys that we don't manually verify frequently. An example would be creating an account, as it's safe to assume everyone working on this project already has one.
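For illustration, an end-to-end test could look as follows; the URL, route and link text are assumptions rather than the Volunteer Manager's actual pages:

import { test, expect } from '@playwright/test';

test('visitor can reach the registration page', async ({ page }) => {
    // Assumes a local development server; adjust the URL to your setup.
    await page.goto('http://localhost:3000/');
    await page.getByRole('link', { name: 'Join the team' }).click();
    await expect(page).toHaveURL(/registration/);
});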

Debugging

TypeScript performance tracing

Our project relies heavily on TypeScript for typing, and the "linting and checking validity of types" step is the slowest step in our build process. Use of recursive types and branches with far-reaching consequences has repeatedly doubled, if not tripled, build times.
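As an illustration (not taken from this codebase), a recursive conditional type such as the following forces the compiler to walk every nested property of whatever object type it is applied to, which adds up quickly for large inputs:

// Computes dotted key paths ('a', 'a.b', ...) for an object type; cheap for
// small types, expensive when instantiated against large, deeply nested ones.
type Paths<T> = T extends object
    ? { [K in keyof T]: K extends string ? K | `${K}.${Paths<T[K]>}` : never }[keyof T]
    : never;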

The TypeScript Performance page contains a wealth of information on how to debug this. We've had success in identifying issues using tracing; a trace can be generated as follows:

$ npx tsc -p ./tsconfig.json --generateTrace trace

The generated trace.json file, in the trace/ directory, can then be inspected using the Perfetto tool. Any file or type that takes more than 100ms to check should be considered a concern.

Another way to analyse the generated trace is to use the @typescript/analyze-trace tool, which can be done as follows:

$ npm install --no-save @typescript/analyze-trace
$ npx analyze-trace trace

This tool will highlight any file that takes more than 500ms of compilation time. Note that its output is a little harder to read, as there is a compounding effect where, for example, all of MUI may be attributed to an otherwise unrelated file.