Minimal API for the ACRFD (formerly: PRC) application.
| Technology | Version | Website | Description |
| --- | --- | --- | --- |
| node | 10.x.x | https://nodejs.org/en/ | JavaScript Runtime |
| npm | 6.x.x | https://www.npmjs.com/ | Node Package Manager |
| yarn | latest | https://yarnpkg.com/en/ | Package Manager (more efficient than npm) |
| mongodb | 3.2 | https://docs.mongodb.com/v3.2/installation/ | NoSQL database |
Note: Windows users can use NVM for Windows to install and manage multiple versions of Node and npm.
```
npm install -g yarn
yarn install
npm start
```
Go to http://localhost:3000/api/docs to verify that the application is running.
Note: To change the default port, edit `swagger.yaml`.
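For reference, in a Swagger 2.0 spec the port is typically part of the `host` field. The fragment below is hypothetical; the actual keys and values in this project's `swagger.yaml` may differ:

```yaml
# Hypothetical fragment - check the actual swagger.yaml for the real values.
swagger: "2.0"
host: localhost:3000
basePath: /api
```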
Linting and formatting is handled by a combination of TSLint and Prettier. The reason for this is that you get the best of both worlds: TSLint's larger selection of linting rules combined with Prettier's robust formatting rules. These two linters do have overlapping rules. To avoid weird rule interactions, TSLint has been configured to defer any overlapping rules to Prettier, via the use of `tslint-config-prettier` in `tslint.json`.
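As a sketch of what that deferral looks like: a `tslint.json` typically lists `tslint-config-prettier` last in its `extends` array so it can disable the overlapping rules. This fragment is illustrative; the exact contents of this project's `tslint.json` may differ:

```json
{
  "extends": ["tslint:recommended", "tslint-config-prettier"],
  "rules": {}
}
```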
Technologies used: TSLint, Prettier, Stylelint, husky, lint-staged
`package.json` has been configured to use husky/lint-staged to run the `lint-fix` (linting + formatting) commands against the files staged to be committed, whenever you perform a commit. This ensures that all committed code has been linted and formatted correctly.

If the linters or formatters find issues that cannot be automatically fixed, an error will be thrown with output describing what is wrong. Fix the issues and commit again.
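A typical husky/lint-staged wiring in `package.json` looks roughly like the following. The globs and commands here are illustrative, not necessarily this project's exact configuration:

```json
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": ["npm run lint-fix", "git add"]
  }
}
```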
Lints the `*.js` files using ESLint:

```
npm run lint
```
Note: In the worst case scenario, where linting/formatting has been neglected, these `lint-fix` commands have the potential to create hundreds of file changes. In this case, it is recommended to only run these commands as part of a separate commit.

Note: Not all linting/formatting errors can be automatically fixed; some will require human intervention.
Lints and formats the `*.js` files using ESLint + Prettier:

```
npm run lint-fix
```
The API is defined in `swagger.yaml`.

If this project is running locally, you can view the api docs at: http://localhost:3000/api/docs/

This project uses the npm package swagger-tools, via `./app.js`, to automatically generate the express server and its routes, based on the contents of `swagger.yaml`.

Useful Note: The handler function for each route is specified by the `operationId` field.

Recommend reviewing the Open API Specification before making any changes to the `swagger.yaml` file.
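To illustrate, a route in `swagger.yaml` names its handler via `operationId`, and swagger-tools dispatches to the controller function of that name. This fragment is hypothetical, not copied from this project's spec:

```yaml
paths:
  /feature/{featureId}:
    get:
      # swagger-tools resolves operationId to a controller function of the same name
      operationId: protectedGet
      parameters:
        - name: featureId
          in: path
          required: true
          type: string
```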
A centralized logger has been created (see `api/helpers/logger.js`).

The logger's log level can be configured via an environment variable: `LOG_LEVEL`

Set this variable to one of: `error`, `warn`, `info`, `debug`

Default value: `info`
```
const log = require('./logger')('a meaningful label, typically the class name');

log.error('Used when logging unexpected errors. Generally these will only exist in catch() blocks');
log.warn('Used when logging soft errors. For example, if your request finished but returned a 404 not found');
log.info('General log messages about the state of the application');
log.debug('Useful for logging objects and other developer data', JSON.stringify(myObject));
```
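For intuition, a label-aware logger factory gated by `LOG_LEVEL` can be sketched as below. This is a simplified stand-in, not the actual implementation in `api/helpers/logger.js`:

```javascript
// Simplified sketch of a LOG_LEVEL-gated logger factory.
// The real logger lives in api/helpers/logger.js and may differ.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function createLogger(label) {
  const envLevel = process.env.LOG_LEVEL;
  const threshold = LEVELS[envLevel] !== undefined ? LEVELS[envLevel] : LEVELS.info;

  const write = (level, ...args) => {
    // Only emit messages at or above the configured severity threshold.
    if (LEVELS[level] <= threshold) {
      console.log('[' + level + '] [' + label + ']', ...args);
    }
  };

  return {
    error: (...args) => write('error', ...args),
    warn: (...args) => write('warn', ...args),
    info: (...args) => write('info', ...args),
    debug: (...args) => write('debug', ...args)
  };
}

module.exports = createLogger;
```

With the default level of `info`, `log.debug(...)` calls are silently dropped while `log.info(...)` and above are printed.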
This project contains two kinds of unit tests. Regular unit tests and API unit tests, which require some special considerations and setup, as detailed in the API Testing section below.
Technologies used: Jest, SuperTest, Nock, Mongodb-Memory-Server
Note: the `package.json` `tests` command sets the `UPLOAD_DIRECTORY` environment variable; the way it does so may be OS specific and may need adjusting depending on your machine's OS.

```
npm run tests
```
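For example, a `tests` script that sets the variable might look like this on a POSIX shell; on Windows the `UPLOAD_DIRECTORY=...` prefix syntax differs (e.g. `set`, or the `cross-env` package). The path shown is a placeholder, not this project's actual value:

```json
{
  "scripts": {
    "tests": "UPLOAD_DIRECTORY=/tmp/uploads jest"
  }
}
```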
This project uses jest as the testing framework. You can run tests with `yarn test` or `jest`. Running either command with the `--watch` flag will re-run the tests every time a file is changed.

To run the tests in one file, simply pass the path of the file, e.g. `jest api/test/search.test.js --watch`. To run only one test in that file, chain the `.only` method, e.g. `test.only("Search returns results", () => {})`.
The MOST IMPORTANT thing to know about this project's test environment is the router setup. At the time of writing, it wasn't possible to get the swagger-tools router working in the test environment. As a result, all tests COMPLETELY bypass the real swagger-tools router. Instead, a plain express app (exercised via supertest) is used to map routes to controller actions. In each controller test, you will need to add code like the following:
```
const request = require('supertest');
const test_helper = require('./test_helper');
const app = test_helper.app;
const featureController = require('../controllers/feature.js');
const fieldNames = ['tags', 'properties', 'applicationID'];

app.get('/api/feature/:id', function(req, res) {
  let params = test_helper.buildParams({ 'featureId': req.params.id });
  let paramsWithFeatureId = test_helper.createPublicSwaggerParams(fieldNames, params);
  return featureController.protectedGet(paramsWithFeatureId, res);
});

test("GET /api/feature/:id returns 200", done => {
  request(app)
    .get('/api/feature/AAABBB')
    .expect(200)
    .then(done);
});
```
This code will stand in for the swagger-tools router, and helps build the objects that swagger-tools magically generates when HTTP calls go through its router. The above code will send an object like the one below to the `api/controllers/feature.js` controller's `protectedGet` function as the first parameter (typically called `args`).
```
{
  swagger: {
    params: {
      auth_payload: {
        scopes: ['sysadmin', 'public'],
        userID: null
      },
      fields: {
        value: ['tags', 'properties', 'applicationID']
      },
      featureId: {
        value: 'AAABBB'
      }
    }
  }
}
```
Unfortunately, this results in a lot of boilerplate code in each of the controller tests. There are some helpers to reduce the amount you need to write, but you will still need to check that the parameter field names sent by your middleware router match what the controller (and swagger router) expect. However, this method results in pretty effective integration tests, as they exercise the controller code and save objects in the database.
The tests run on an in-memory MongoDB server, using the mongodb-memory-server package. The setup can be viewed in `test_helper.js`, with additional config in `config/mongoose_options.js`. It is currently configured to wipe out the database after each test run to prevent database pollution.
Factory-Girl is used to easily create models (persisted to the db) for testing purposes.
External HTTP calls (such as GETs to BCGW) are mocked with a tool called nock. Currently, sample JSON responses are stored in the `test/fixtures` directory. This allows you to intercept a call to an external service such as BCGW, and respond with your own sample data.
```
const nock = require('nock');

const bcgwDomain = 'https://openmaps.gov.bc.ca';
const searchPath = '/geo/pub/FOOO';
const crownlandsResponse = require('./fixtures/crownlands_response.json');
var bcgw = nock(bcgwDomain);
let dispositionId = 666666;
// Assumed definition: the url-encoded id appended to the mocked search path.
let urlEncodedDispositionId = encodeURIComponent(dispositionId);

beforeEach(() => {
  bcgw.get(searchPath + urlEncodedDispositionId)
    .reply(200, crownlandsResponse);
});

test('returns the features data from bcgw', done => {
  request(app)
    .get('/api/public/search/bcgw/dispositionTransactionId/' + dispositionId)
    .expect(200)
    .then(response => {
      let firstFeature = response.body.features[0];
      expect(firstFeature).toHaveProperty('properties');
      expect(firstFeature.properties).toHaveProperty('DISPOSITION_TRANSACTION_SID');
      done();
    });
});
```
This project uses Keycloak to handle authentication and manage user roles.
Required environment variables:

```
TTLS_API_ENDPOINT="<see OpenShift api deployment variables>"
WEBADE_AUTH_ENDPOINT="<see OpenShift api deployment variables>"
WEBADE_USERNAME="<see OpenShift api deployment variables>"
WEBADE_PASSWORD="<see OpenShift ttls-api-test secret>"
```
A list of recommended/helpful VS Code extensions.