The backend is currently being refactored from a monolith into a set of microservices to improve stability and scalability. Previously, the backend ran a single Koa server with the various endpoints, but this doesn't scale well, and the backend would often restart due to various issues, causing downtime for the API.
The mesh of services ideally would look something like this:
The Koa server has been separated out into the `gateway` package, whose sole responsibility should be to listen for client requests (currently only via REST, but in the future possibly also via GraphQL or WebSockets), query MongoDB, and return the results. Multiple instances of the gateway can then be run in parallel and load balanced, scaling up to handle as many requests as needed.
Currently the main backend running in production (referred to from now on as `core`) just imports the `Server` class from the `gateway` package and runs it if the `enable` flag is set in the config. Once there are gateway deployments running independently of `core`, this flag can be set to `false`.
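As a rough sketch of that wiring (the import path, config shape, and field names here are assumptions for illustration, not the actual repo layout), `core` does something like the following:

```ts
// Hypothetical sketch of how core conditionally starts the embedded gateway.
// The package name, loadConfig helper, and config fields are illustrative only.
import { Server } from "@1kv/gateway";
import { loadConfig } from "./config";

const main = async (): Promise<void> => {
  const config = await loadConfig();

  // Run the embedded gateway only while no standalone gateway deployments exist.
  if (config.server.enable) {
    const server = new Server(config);
    await server.start();
  }

  // ...the rest of core's responsibilities continue here.
};

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```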
In terms of containerizing the different services and packages, there is now a single Dockerfile used to build them all; the only difference is the `PACKAGE` arg passed into it, which corresponds to the package name.
For example, to build the `core` Docker image you would run something like the following from the root of the repo:
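```sh
docker build -f Dockerfile . --build-arg PACKAGE=core
```

And to build the `gateway` image you would run:

```sh
docker build -f Dockerfile . --build-arg PACKAGE=gateway
```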
These will run the `start:js:<PACKAGE_NAME>` script in each individual package, which basically runs `node index.js` on the built JS.
The CI build and CI publish scripts for these should already be ready, but an `otv-gateway` Docker repo needs to be created so that images can be pushed to it.
Afterwards, an `otv-gateway` set of charts should be made, with some of the following considerations:
- the config file will need the `db` and `server` sections filled in; it can also use the same config file as the main 1kv-be
- resource limits and requests can probably be much lower than those of the current 1kv-backend
- the gateway should have multiple replicas that are autoscaled and load balanced
- the current endpoint (i.e. https://kusama.w3f.community/) should be the load balancer, which as an intermediary step could point to both the gateway pods and the current backend/core
- once it looks like the gateway pods are working, the backend/core can be removed from the load balancer, its `server` config can be set to `false`, and its Koa server can be replaced with a simpler endpoint that only serves things like healthchecks (see the sketch below)
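A minimal sketch of that healthcheck-only Koa server might look like the following (the route path, response body, and port are assumptions, not settled decisions):

```ts
// Hypothetical healthcheck-only Koa server for core once the gateway
// handles all client-facing traffic. Route path and port are illustrative.
import Koa from "koa";
import Router from "@koa/router";

const app = new Koa();
const router = new Router();

// Liveness/readiness probe endpoint for the load balancer and Kubernetes.
router.get("/healthcheck", (ctx) => {
  ctx.status = 200;
  ctx.body = "Good!";
});

app.use(router.routes()).use(router.allowedMethods());
app.listen(3300);
```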
tl;dr
- [x] create `otv-gateway` docker repo
- [x] enable `build` and `publish` of the gateway in the CI pipelines
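- [ ] `otv-gateway` charts and helmfile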