Open nitekon1 opened 2 years ago
After a discussion with the development team, we have decided to go with a microservice architecture for the backend.
I have attached the initial architecture diagram as discussed.
We have identified a handful of backend services for implementation. Each of these may or may not require its own database, but for now the diagram shows each service with its own database for its data.
The API gateway will be used to proxy requests to each of the services. We plan to do initial development and testing with Kong, an open source API gateway that we can run in a container for local development and in a Kubernetes/OpenShift cluster later on. It would also be possible to swap in another API gateway product down the line if needed.
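As a rough illustration of the proxying idea (not part of our code yet), here is a minimal sketch that registers a hypothetical Incidents service and route through Kong's Admin API. It assumes Kong is running locally with the Admin API on port 8001 and the service reachable at http://incidents-service:9080; the service name, URL, and path are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterIncidentsRoute {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Register the upstream Incidents service with Kong (names/URLs are assumptions).
        HttpRequest createService = HttpRequest.newBuilder(URI.create("http://localhost:8001/services"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"name\": \"incidents\", \"url\": \"http://incidents-service:9080\"}"))
                .build();
        System.out.println(client.send(createService, HttpResponse.BodyHandlers.ofString()).body());

        // Add a route so requests to /incidents on the gateway are proxied to that service.
        HttpRequest createRoute = HttpRequest.newBuilder(
                        URI.create("http://localhost:8001/services/incidents/routes"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"paths\": [\"/incidents\"]}"))
                .build();
        System.out.println(client.send(createRoute, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```

Clients would then call the gateway's proxy port (8000 by default) rather than the service directly, which is what would let us swap the gateway product later without touching the services.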
For communication between the services themselves, we have identified RabbitMQ as our tool of choice. It is also open source, fast, and widely used for this purpose. It can be run in a container for local development, and it has a Kubernetes Operator for creating a cluster for high-scale event processing.
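To make the intended pattern concrete, here is a minimal sketch of one service publishing an event and another consuming it. It assumes RabbitMQ running locally in a container, the com.rabbitmq:amqp-client Java library, and a hypothetical incident.created queue; in practice the producer and consumer would live in separate services:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;
import java.nio.charset.StandardCharsets;

public class IncidentEventsDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local RabbitMQ container

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            // Durable queue shared between the producing and consuming services (name is a placeholder).
            String queue = "incident.created";
            channel.queueDeclare(queue, true, false, false, null);

            // Producer side: the Incidents service publishes an event.
            String event = "{\"incidentId\": 123, \"status\": \"NEW\"}";
            channel.basicPublish("", queue, null, event.getBytes(StandardCharsets.UTF_8));

            // Consumer side: another service reacts to the event.
            DeliverCallback onDeliver = (consumerTag, delivery) ->
                    System.out.println("Received: " + new String(delivery.getBody(), StandardCharsets.UTF_8));
            channel.basicConsume(queue, true, onDeliver, consumerTag -> { });

            Thread.sleep(1000); // give the consumer a moment before the connection closes
        }
    }
}
```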
Any questions or comments are very welcome. This diagram will probably change over time.
After some discussion, the team realized we may be better served by a different event bus for the microservice communication. We are now opting to use Kafka.
There are a couple of immediate reasons for this. The first is that our primary focus at this point is to use OpenLiberty for the microservices (at least the Incidents service), and the OpenLiberty starter guides make use of Kafka for service communication. The second relates to cloud-based deployments: IBM Cloud and other cloud providers offer a managed event streaming service built on Kafka, which again makes that particular layer in our diagram easy to swap out if a deployment chooses to do so.
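For illustration only, here is a rough sketch of what the Kafka-based communication could look like in an OpenLiberty service using MicroProfile Reactive Messaging. The channel names, payload format, and jakarta.* packages are assumptions (older releases use javax.*), and the channel-to-Kafka-topic mapping would live in microprofile-config.properties (e.g. via Liberty's Kafka connector). In practice the producing and consuming methods would sit in different services:

```java
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class IncidentEvents {

    // Producer side (e.g. in the Incidents service): forwards messages from an internal
    // channel onto the "incident-created" channel. Channel names are placeholders that
    // would be mapped to Kafka topics in microprofile-config.properties.
    @Incoming("new-incidents")
    @Outgoing("incident-created")
    public String publish(String incidentJson) {
        return incidentJson;
    }

    // Consumer side (e.g. in a notifications service): reacts to the same topic.
    @Incoming("incident-created")
    public void onIncidentCreated(String incidentJson) {
        System.out.println("Received incident event: " + incidentJson);
    }
}
```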
I have attached the updated diagram.
A brief description of the feature or enhancement you'd like to see:
We will use this feature request to plan updates to, or a replacement of, the server-side backend.
How will this feature be used?
The current backend is written in NodeJS. Here we will discuss the scalability of this platform, whether a replacement is necessary, and other platforms that offer better options.
What is the impact of this feature/enhancement?
This decision will shape the project's stability, performance, and future scalability.
Acceptance Criteria
No immediate impacts. From a long-term perspective, performance issues may surface that would likely require a rewrite to resolve, which we want to avoid.