crusttech / crust-server

Apache License 2.0

Feature/monolithic build #93

Closed titpetric closed 5 years ago

titpetric commented 5 years ago

Changes introduced:

  1. cmd/crust, which binds together system, messaging and crm,
  2. some internal changes across all services:
    • moved the route printer into internal/routes,
    • split up Routes() to provide MountRoutes(r chi.Router),
  3. updated internal/config/Database to support multiple configurations in one app.

With the changes, the API endpoints become:

titpetric commented 5 years ago

The latest commits add loading from a public_html folder. For now, I've added the existing service rules:

This should take care of hot-loading any complete webapp build.

Also included: global middleware (CORS, logging, ...) and a middleware pkg with system routes (pprof, metrics, version). Most services now just have a thin routing layer, unlike before.

PTAL

darh commented 5 years ago

Didn't go through the code changes yet; just a few questions that ran through my mind:

a) Any reason now to keep 3 separate api/backend binaries/containers?
b) What's the plan with the system-api cli? Bundled in as well?
c) What effect will this have on the config (*_DB_DSN, ...)? Will we merge the variables?

titpetric commented 5 years ago

Hi @darh, first off, thank you for some insightful questions.

a) the reason is mostly pre-emptive: scaling. While the monolithic build in particular is motivated by Plesk, it effectively just replaces an orchestration-like deployment (cmd/crust is essentially nginx glue, but using the Go APIs we already have). There are bonuses on both sides, but the two main ones are:

b) currently no (as discussed). Non-service cli scripts are easy to deliver, since we don't need orchestration for them, and if we need to invoke them, it's just an exec() call away.

c) due to some coupling that's hard to detect (well, "hard" is a relative term), we could be using things like sys_user from messaging directly via SQL joins. As a result, we're effectively using the same database credentials for all three microservices. The table prefixes stay separated.

Additionally, there are two more coupling issues which I suspect we're creating:

  1. the DB object crossing from messaging into system,
  2. transactional system/service APIs which shouldn't be available to messaging/, crm/, ...

These are solvable by introducing [app]/internal and moving the repository and service/ layers there outright (they can still be used from rest, websocket, ...). That just leaves us to provide and enforce a read-only API for system, which would basically resolve a set of user IDs into a []User slice, so we can remove the joins. It doesn't really need to live under system/ either.
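The read-only API described above could look something like this sketch. The interface name, the in-memory implementation, and the trimmed User struct are all illustrative; the real version would sit behind system's service layer and read from sys_user:

```go
package main

import "fmt"

// User is a trimmed stand-in for the system user record.
type User struct {
	ID   uint64
	Name string
}

// UserResolver is the read-only API other services would call
// instead of joining sys_user directly.
type UserResolver interface {
	ResolveUsers(ids []uint64) ([]User, error)
}

// memResolver is an in-memory implementation for illustration.
type memResolver struct{ users map[uint64]User }

func (m memResolver) ResolveUsers(ids []uint64) ([]User, error) {
	out := make([]User, 0, len(ids))
	for _, id := range ids {
		if u, ok := m.users[id]; ok {
			out = append(out, u)
		}
	}
	return out, nil
}

func main() {
	var r UserResolver = memResolver{users: map[uint64]User{
		1: {ID: 1, Name: "alice"},
		2: {ID: 2, Name: "bob"},
	}}
	// Messaging resolves IDs through the API; no cross-service join.
	users, _ := r.ResolveUsers([]uint64{2, 1})
	fmt.Println(len(users), users[0].Name)
}
```

Since the interface only resolves IDs to records, messaging and crm get the data they need without any way to modify system's tables.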

Similar restrictions will need to be imposed in the long term on anything that uses the DB from the /internal package. Effectively, we might move some of this into system/service and system/internal/service. The main restrictions are that 1) the individual services must use the correct DB, and 2) they should provide a read-only API that plays nicely with transactions (no modifying users from messaging; anything modified successfully there would not be rolled back if the surrounding transaction fails). User status (online, do not disturb, etc.) in particular is one such case.


On a more specific note, particularly because of your question c), this needs some more changes. Currently the database factory is still bound to the "default" key, so there will be some commits to separate the connections. I'm inclined to merge them if we encounter issues in testing, and then take some time to fix the causes mentioned above (and possibly others we haven't considered yet).
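The per-service lookup with a "default" fallback could be shaped roughly like this. The type, the service keys, and the DSN strings are all made up for illustration; only the "default" key and the *_DB_DSN variables come from the discussion:

```go
package main

import "fmt"

// dbConfig maps a service name to its DSN; the *_DB_DSN
// environment variables would feed this map.
type dbConfig struct{ dsns map[string]string }

// DSN returns the connection string for a service, falling back
// to "default" when no per-service entry exists — mirroring the
// factory currently being bound to the "default" key.
func (c dbConfig) DSN(service string) string {
	if dsn, ok := c.dsns[service]; ok {
		return dsn
	}
	return c.dsns["default"]
}

func main() {
	cfg := dbConfig{dsns: map[string]string{
		"default":   "crust:secret@tcp(db)/crust",
		"messaging": "crust:secret@tcp(db)/crust_messaging",
	}}
	fmt.Println(cfg.DSN("messaging"))
	fmt.Println(cfg.DSN("crm")) // no entry, falls back to default
}
```

With this shape, all three services can share one DSN today, and splitting credentials later is just a matter of adding keys.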

tl;dr

a) you gotta keep 'em separated,
b) no bundling needed,
c) stays separated, fixing coupling where we detect issues