This system is an Apollo Server that supports GraphQL interactions. Its primary purpose is to serve as the backend for the corresponding DMP Tool's Next.js-based user interface.

It has been decoupled from that system, though, in order to facilitate its use beyond the DMP Tool. For example, if a university wants to develop an in-house integration, it can use the authentication endpoints to authenticate its users and interact with both templates and data management plans directly. GraphQL makes these types of integrations easier than a standard REST API.
Our Apollo server consists of:
CORS: The system uses CORS to ensure that traffic only comes from approved callers.

CSRF: The system uses the `X-CSRF-Token` header to convey CSRF tokens. CSRF tokens are generated on each request. Each token is hashed and stored in the Redis cache to ensure that it has not been tampered with. When the caller submits any request other than a GET, OPTIONS, or HEAD, they must include the CSRF token.
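As an illustration of the hash-and-cache approach described above, here is a minimal sketch. The function names are hypothetical and an in-memory map stands in for the Redis cache the real server uses:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Hypothetical stand-in for the Redis cache used by the real server.
const cache = new Map<string, string>();

// Generate a CSRF token, cache its hash, and return the raw token
// so it can be sent back in the X-CSRF-Token response header.
export function generateCsrfToken(): string {
  const token = randomBytes(32).toString("hex");
  const hashed = createHash("sha256").update(token).digest("hex");
  cache.set(`csrf:${hashed}`, hashed);
  return token;
}

// Verify a token submitted on a non-GET/OPTIONS/HEAD request by
// re-hashing it and checking that the hash is present in the cache.
export function verifyCsrfToken(token: string): boolean {
  const hashed = createHash("sha256").update(token).digest("hex");
  return cache.has(`csrf:${hashed}`);
}
```

Because only the hash is cached, a tampered token re-hashes to a value that is not in the cache and is rejected.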
Access/Refresh Tokens: Once a user signs in, the system generates an access token and a refresh token. The refresh token is hashed and stored in the Redis cache. The system then stores these tokens in HTTP-only cookies: the access token in `dmspt` and the refresh token in `dmspr`.
The system provides a few endpoints that exist outside the Apollo Server GraphQL context. These endpoints allow a user to authenticate and acquire JSON Web Tokens, which are placed into HTTP-only cookies. Those cookies are then used to perform authorization checks when running GraphQL queries.

The system generates a short-lived (10 minute) access token, `dmspt`, that should be used on all requests to the GraphQL endpoint. A longer-lived (24 hour) refresh token, `dmspr`, is also generated. The refresh token can be used to refresh an expired access token; doing so generates completely new access AND refresh tokens.

When the user signs out, their access token is added to the black list in the Redis cache to ensure that it cannot be used afterward.
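The token lifecycle above can be sketched roughly as follows, assuming an HMAC-signed JWT. The secret, helper names, and in-memory black list are placeholders for illustration only, not the actual implementation:

```typescript
import { createHmac, createHash } from "node:crypto";

const SECRET = "dev-only-secret";    // assumption: the real secret comes from the environment
const blackList = new Set<string>(); // stand-in for the Redis black list

const b64url = (s: string) => Buffer.from(s).toString("base64url");
const hashToken = (t: string) => createHash("sha256").update(t).digest("hex");

// Sign a minimal HMAC-SHA256 JWT with a ttl in seconds (e.g. 600 for dmspt).
export function signToken(payload: object, ttlSeconds: number): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const exp = Math.floor(Date.now() / 1000) + ttlSeconds;
  const body = b64url(JSON.stringify({ ...payload, exp }));
  const sig = createHmac("sha256", SECRET).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Verify the signature, the expiry, and that the token was not revoked at sign-out.
export function verifyToken(token: string): boolean {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", SECRET).update(`${header}.${body}`).digest("base64url");
  if (sig !== expected || blackList.has(hashToken(token))) return false;
  const { exp } = JSON.parse(Buffer.from(body, "base64url").toString());
  return exp > Math.floor(Date.now() / 1000);
}

// On sign-out, black-list the (hashed) access token.
export function revokeToken(token: string): void {
  blackList.add(hashToken(token));
}
```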
The signup controller responds to `POST` requests to `/apollo-signup`. It will validate the posted user data in the body of the request and attempt to create a new User record in the database.
```mermaid
flowchart LR;
  a[User data]-->b[Register];
  b-->c{User has errors?};
  c-->|no| d[generateTokens];
  c-->|yes| e[return 400];
  d-->|yes| f[add refresh token to cache];
  f-->g[set HTTP-only cookies];
  d-->|no| h[return 500];
```
If successful, a new User record is created and an access token and refresh token are generated. The refresh token is stored in the Redis cache and then both tokens are added to the response as HTTP-only cookies.
The signin controller responds to `POST` requests to `/apollo-signin`. It will validate the posted user email and password in the body of the request and attempt to locate the User record in the database. If the user is found, the system will validate the password.
```mermaid
flowchart LR;
  a[User credentials]-->b[login];
  b-->c{Success?};
  c-->|yes| d[generateTokens];
  c-->|no| e[return 401];
  d-->|yes| f[add refresh token to cache];
  f-->g[set HTTP-only cookies];
  d-->|no| h[return 500];
```
If successful, an access token and refresh token are generated. The refresh token is stored in the Redis cache and then both tokens are added to the response as HTTP-only cookies.
The signout controller responds to `POST` requests to `/apollo-signout`. It will retrieve the access token from the request's HTTP-only cookies and attempt to verify the token. If the token is still valid it will proceed with the signout.
```mermaid
flowchart LR;
  a[Access token]-->b[verify token];
  b-->c{Success?};
  c-->|yes| d[remove refresh token from cache];
  c-->|no| e[return 400];
  d-->f[add access token to black list];
  f-->g[delete HTTP-only cookies];
```
If successful, the refresh token will be removed from the Redis cache, the access token will be added to the black list in the Redis cache, and both HTTP-only cookies will be deleted.
The token refresh controller responds to `POST` requests to `/apollo-refresh`. It will retrieve the refresh token from the request's HTTP-only cookies and attempt to verify the token. If the token is still valid, it will proceed with the refresh.
```mermaid
flowchart LR;
  a[Refresh token]-->b[verify token & token not revoked];
  b-->c{Success?};
  c-->|yes| d[verify user];
  c-->|no| e[return 401];
  d-->f{Success?};
  f-->|yes| g[generate access token];
  f-->|no| h[return 401];
  g-->|yes| i[set HTTP-only cookies];
  g-->|no| j[return 500];
```
If successful, a new access token will be created and replace the existing one in the response's HTTP-only cookies.
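A rough sketch of the rotation described above, with an in-memory set standing in for the Redis cache of hashed refresh tokens (all names here are hypothetical, and opaque random strings stand in for the real JWTs):

```typescript
import { createHash, randomBytes } from "node:crypto";

// In-memory stand-in for the Redis cache of hashed refresh tokens.
const refreshCache = new Set<string>();

const hash = (t: string) => createHash("sha256").update(t).digest("hex");

// Issue a refresh token and cache its hash (done at sign-in/sign-up).
export function issueRefreshToken(): string {
  const token = randomBytes(32).toString("hex");
  refreshCache.add(hash(token));
  return token;
}

// Rotate: if the presented refresh token is known and unrevoked, drop it
// and issue a brand-new access/refresh pair, as /apollo-refresh does.
export function rotate(refreshToken: string): { access: string; refresh: string } | null {
  if (!refreshCache.has(hash(refreshToken))) return null; // unknown/revoked -> 401
  refreshCache.delete(hash(refreshToken));                // old token cannot be replayed
  return { access: randomBytes(32).toString("hex"), refresh: issueRefreshToken() };
}
```

Deleting the old hash before issuing the new pair is what makes a replayed refresh token fail.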
The access token received from the authentication endpoints above will then be used by the system to determine whether or not a user is authorized to access certain data.
```mermaid
flowchart LR;
  a[Access token]-->b[verify token & token not revoked];
  b-->c{Success?};
  c-->|yes| d[Authorization check];
  c-->|no| e[return 401];
  d-->f{Authorized?};
  f-->|yes| g[perform query/mutation];
  f-->|no| h[return 403];
```
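The flow above boils down to a guard along these lines. This is a hypothetical sketch, with the status codes mirroring the chart and the role names assumed for illustration:

```typescript
// Simplified view of a verified-and-decoded access token.
type Token = { valid: boolean; revoked: boolean; role: "RESEARCHER" | "ADMIN" };

// Return 401 when verification fails, 403 when the caller is
// authenticated but lacks the required role, 200 to proceed.
export function authorize(token: Token | null, requiredRole: "RESEARCHER" | "ADMIN"): number {
  if (!token || !token.valid || token.revoked) return 401; // fails verification
  if (requiredRole === "ADMIN" && token.role !== "ADMIN") return 403; // not authorized
  return 200; // proceed with the query/mutation
}
```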
To install the application:

1. Run `cp ./.env.example ./.env` and adjust the `.env` file if necessary (make sure it has no references to MYSQL unless you want to override the docker-compose db settings).
2. Run `docker-compose build` to build the container.
3. Run `docker-compose up` to start the container.
4. Run `docker-compose exec apollo bash ./data-migrations/database-init.sh` to create the database and build the `dataMigrations` table, which will be used to track which data migrations have been run.
5. Run `docker-compose exec apollo bash ./data-migrations/process.sh` to build out the remaining database tables and seed them with sample data.
6. Visit `http://localhost:4000/graphql` to load the Apollo server explorer and verify that the system is running.

Once the application is installed and the database has been initialized, you can start the Apollo server with `docker-compose up`.
This will start up a Docker container that consists of a local Redis cache, a MariaDB database, and the Apollo server Node.js application.
Once the container is up and running you can visit `http://localhost:4000/graphql` to load the Apollo server explorer and verify that the system is running. For an overview of how the explorer works, please refer to the official docs for the GraphOS Studio Explorer.
You can manually build the application for your production environment by running `npm run build` and then `npm run start` to start the application.
When deploying manually, you will need to ensure that all of the environment variables defined in the `.env.example` and the `docker-compose.yaml` are available to the application. This can be done via a `.env` file or environment variables.
If you plan on deploying to the AWS cloud, you can refer to the corresponding AWS infrastructure repository for the CloudFormation templates needed to build this application using CodePipeline and then host it within an ECS cluster.
To run the data migrations:

- Local Docker container: run `docker-compose exec apollo bash ./data-migrations/process.sh` in a separate terminal window.
- AWS Cloud9 Bastion Host: run `./data-migrations/process.sh [env]`
In the event that you want to delete all of the tables and data from your database and rebuild a clean database, you can run the following.

Local Docker container (note that the container must be running!):

```shell
docker-compose exec apollo bash ./data-migrations/nuke-db.sh
docker-compose exec apollo bash ./data-migrations/process.sh
```

You may find that you receive an error that the `dataMigrations` table already exists when running the `process.sh` script. If so, restart the container and try again.

AWS Cloud9 Bastion Host (NEVER EVER do this in production! You will lose ALL data.):

```shell
./data-migrations/nuke-db.sh
./data-migrations/process.sh
```
The local development environment is encapsulated within a Docker container. To build and run the development Docker containers:

```shell
docker-compose build
docker-compose up
```

To stop the docker container, run:

```shell
docker-compose down
```

Run the following to check that your container is up:

```shell
docker container ls -a
```

To run bash commands within the container (e.g. to run DB migrations):

```shell
docker-compose exec apollo bash path/to/script
```
If you need to add additional Queries and/or Mutations, you will typically need to update 3 distinct sections of the Apollo server framework.
Our context is defined in `src/context.ts` and consists of several items that are instantiated when the server starts up or as part of processing the incoming request.
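A context of this kind might look something like the following sketch. The field names here are assumptions for illustration, not the actual contents of `src/context.ts`:

```typescript
// Hypothetical shape of a decoded token carried on the context.
interface JWTToken { id: number; email: string; role: string; }

export interface MyContext {
  token: JWTToken | null;          // null for unauthenticated requests
  logger: (msg: string) => void;   // instantiated once at server start
  requestId: string;               // created per incoming request
}

// Built once per request, combining server-wide and per-request items.
export function buildContext(token: JWTToken | null, requestId: string): MyContext {
  return { token, requestId, logger: (msg) => console.log(`[${requestId}] ${msg}`) };
}
```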
Official Apollo Server docs for schemas
To add Queries or Mutations you should locate the appropriate schema in the `src/schemas` directory. If you have a completely new entity you want to add, then create a new schema file in that directory (use an existing one as reference) and then be sure to import it into `src/schemas.ts` and make sure it is getting passed to the Apollo server.
Once a schema has been added/modified, you will need to run `npm run generate`. This kicks off a script that builds out TypeScript types for the new schema and queries.
Official Apollo Server docs for resolvers
Resolvers can be found in the `src/resolvers/` directory. You should have a corresponding Query and Mutation for each one defined within the GraphQL schema.
A resolver receives the following for each request: the parent object, the query arguments, the shared context, and information about the query itself.
Resolvers are responsible for doing basic input validation and performing authorization checks (e.g. is the person an ADMIN?). They then hand off the request to a Model.
GraphQL uses a concept called chaining to resolve complex queries that request access to multiple object types. When you define a relationship between objects within the GraphQL schema, resolvers will be called when appropriate to retrieve each object.
For example, assume the following schema:

```graphql
extend type Query {
  user(userId: Int!): User
}

type User {
  id: Int
  email: String!
  affiliation: Affiliation!
}
```
In this schema, we have a query to fetch a user record. The User object exposes a reference to an associated affiliation.
GraphQL allows the caller to dictate what data they want to receive back from a query request. So, if the caller requests:

```graphql
query user($userId: Int!) {
  user(userId: $userId) {
    email
  }
}
```

with variables:

```json
{
  "userId": 1
}
```
Apollo server will call the resolver for the user to fetch the email from the database but will ignore the associated affiliation because the caller did not request it.
If on the other hand the caller asked for the affiliation in the request:
```graphql
query user($userId: Int!) {
  user(userId: $userId) {
    email
    affiliation {
      id
      name
    }
  }
}
```

with variables:

```json
{
  "userId": 1
}
```
Apollo server will call the resolver to get the email and affiliationId for the user from the database. Once it has retrieved the affiliationId, it will make a subsequent call to the DMPHub API to fetch the ROR id and the name of the affiliation.
To define chaining in the resolver you would do something like this:

```typescript
Query: {
  // Resolver exposed by GraphQL
  user: async (_, { userId }, context: MyContext): Promise<User> => {
    return await User.findById('user resolver', context, userId);
  },
},

User: {
  // Chained resolver to fetch the Affiliation info for a user
  affiliation: async (parent: User, _, context: MyContext): Promise<Affiliation> => {
    return Affiliation.findById('Chained User.affiliation', context, parent.affiliationId);
  },
},
```
If you added a new resolver, be sure to import it into the `src/resolvers.ts` file and include it for Apollo Server.
Models can be found in the `src/models` directory. They typically contain all of the business logic for accessing and modifying data. They are called by Resolvers and interact with Data Sources.
They are also used to normalize data from the data sources before returning it to the caller. For example:

- Receiving a `funder_id` from the data source but wanting to send a boolean flag called `isFunder` to the caller: I perform the logic in a Model.
- Receiving an `identifier` but needing to send a `DMPId` to the caller.

There are abstract base classes available to help offload some of the redundant code. For example, the MySqlModel provides standardized fields common to every DB record as well as `query`, `insert`, `update` and `delete` functions that handle calls to the DB.
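The `funder_id`/`isFunder` normalization described above could be sketched like this. The row shape and class are hypothetical, not the actual Model in `src/models`:

```typescript
// Hypothetical shape of a row returned by the data source.
interface AffiliationRow { id: number; name: string; funder_id: string | null; }

// The Model derives caller-facing fields from the raw row.
export class Affiliation {
  id: number;
  name: string;
  isFunder: boolean;

  constructor(row: AffiliationRow) {
    this.id = row.id;
    this.name = row.name;
    this.isFunder = row.funder_id !== null; // derive the boolean flag in the Model
  }
}
```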
In some situations, the data source will not be ready. In this scenario you can create a mock for use during development. Mocks live in `src/mocks/`, and there is an example for affiliations there.

To use a mock, simply import it into your resolver and then set up your Query and Mutation handlers to interact with the canned mock data. Note that mocks will refresh each time the server is restarted!
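A mock module along these lines might look like the following sketch. The record shape, ids, and names are made up for illustration, not the actual affiliations mock:

```typescript
// Hypothetical shape of a canned record in src/mocks/.
export interface MockAffiliation { id: string; name: string; }

// Re-created on every server restart, so edits do not persist.
export const mockAffiliations: MockAffiliation[] = [
  { id: "https://ror.org/01abc9999", name: "Example University" },
  { id: "https://ror.org/02def8888", name: "Example Funder" },
];

// A finder the resolver can call in place of a real data source.
export const findAffiliationById = (id: string): MockAffiliation | undefined =>
  mockAffiliations.find((a) => a.id === id);
```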
You MUST add unit tests if you added or modified a Model! To do so, find the corresponding file (or add a new one) in the `src/models/__tests__/` directory. We appreciate unit tests everywhere else too!
Resolver tests are not yet particularly useful. We will be updating this to add these integration tests in the near future.
To run the unit tests: `npm run test`

To run the functional tests: `npm run mocha`
See the `.env.example` file and the `docker-compose.yaml` for the list of environment variables currently required by the system.

If you are running locally and using the Docker container, you simply need to make a copy of the `.env.example` file and update its variables where appropriate: `cp .env.example .env`.

If you are running elsewhere, you will need to either make a copy of the `.env.example` as described above and add all of the variables from the `docker-compose.yaml` to it, OR set these environment variables up individually.
- `/up` - a simple healthcheck endpoint which can be used by load balancers
- `/apollo-signin` - authenticate a user and receive an access token
- `/apollo-signup` - register a new user and receive an access token
- `/apollo-signout` - delete an access token
- `/apollo-refresh` - regenerate an expired access token
- `/graphql` - perform a GraphQL query or mutation
- `/apollo-authenticate` - OAuth2 endpoint to authenticate an external system and receive an access token
- `/apollo-authorize` - OAuth2 endpoint to allow a user to authorize the external system to access their data; returns a short-lived authorization code
- `/apollo-token` - OAuth2 endpoint to exchange an authorization code for a long-lived access token
To contribute:

1. Create a new branch prefixed with `bug`, `chore` or `feature` based on the type of update: `git checkout -b feature/your-feature`
2. Stage and commit your changes: `git add .; git commit -m "added new feature"`. A pre-commit hook runs with the commit, which checks to make sure linting and test coverage pass before the commit goes through.
3. Push your branch: `git push --set-upstream origin feature/your-feature`
This project is licensed under the MIT License - see the LICENSE file for details.