ModaOperandi / agora

Engineering standards

microservices migration #45

Closed mchimirev closed 4 years ago

mchimirev commented 4 years ago

Let's have a conversation about microservices migration. There seems to be a consensus (though maybe up for debate) that microservices should:

  1. own their own data
  2. communicate with each other either via HTTP calls or via streams (Kinesis / Kafka / SNS, etc.)
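As a concrete illustration of the second style, here is a minimal Python sketch of a catalog service publishing a change event to a stream. The `InMemoryStream` stand-in and the event shape are assumptions for illustration, not an existing API in this codebase:

```python
import json
from datetime import datetime, timezone

class InMemoryStream:
    """Stand-in for Kinesis/Kafka/SNS: consumers read events, never the DB."""
    def __init__(self):
        self.records = []

    def publish(self, event: dict) -> None:
        # Real streams carry serialized payloads; serializing here keeps
        # the contract between services explicit.
        self.records.append(json.dumps(event))

def publish_product_updated(stream: InMemoryStream, product_id: str, price_cents: int) -> None:
    # The event is the public contract; the service's own tables stay private.
    stream.publish({
        "type": "product.updated",
        "product_id": product_id,
        "price_cents": price_cents,
        "at": datetime.now(timezone.utc).isoformat(),
    })

stream = InMemoryStream()
publish_product_updated(stream, "sku-123", 4200)
```

The point of the sketch is that other services consume `product.updated` events rather than querying the catalog's tables, which is what "own their own data" buys you.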

A couple of proposed ideas:

  1. Do not let microservices talk to MySQL even during the transition

microservices-migration-1

  2. Let microservices talk to MySQL during the transition

microservices-migration-2

semenodm commented 4 years ago

I understand that the main reason we need to write catalog service data into the Mojo Monolith DB is that other services communicate with the catalog through direct MySQL queries. If that is true, then I suggest a slightly different approach.

  1. We should not use the Monolith DB. If we start doing so, the microservices are worth nothing, because we still have a huge monolithic service called the Mojo MySQL database. It defeats the whole idea of microservices.
  2. There are many ways to expose microservice (catalog) data/events: an API, events, and also, rarely used but still valid, a read-only (for other services) database instance (separate from the Mojo Monolith) where the product catalog stores a projection of its internal state, suitable for other services that have legacy integrations through direct database queries.

The catalog service will have its internal state persisted, say in Dynamo, plus an additional MySQL database holding a projection of the catalog data suitable for legacy consumers. The catalog service will also expose an API. All new consumers will use the API, but we will give old services the ability to communicate with the catalog through the MySQL interface. If we replicate the schema, the only thing we need to change in each legacy catalog consumer is the JDBC connection URL, plus a slight code change to use the new connection; the rest remains the same.
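The dual-store shape described above can be sketched as follows. This is a minimal illustration under assumptions: plain dicts stand in for the Dynamo table and the replicated-schema MySQL database, and all names (`CatalogService`, `save_product`) are hypothetical:

```python
class CatalogService:
    """Sketch: internal state in a Dynamo-like store, plus a MySQL-shaped
    projection maintained for legacy consumers. Dicts stand in for both
    stores; the real service would write to DynamoDB and MySQL."""

    def __init__(self):
        self.internal_store = {}      # stand-in for the Dynamo table
        self.legacy_projection = {}   # stand-in for the replicated-schema MySQL DB

    def save_product(self, product_id: str, attrs: dict, legacy_row: dict) -> None:
        # Internal representation: whatever shape suits the Catalog squad.
        self.internal_store[product_id] = attrs
        # Projection: rows matching the old Mojo schema, so legacy consumers
        # only need to repoint their JDBC connection URL at this database.
        self.legacy_projection[product_id] = legacy_row

svc = CatalogService()
svc.save_product(
    "sku-1",
    {"title": "Silk Dress", "tags": ["new-season"]},
    {"id": "sku-1", "name": "Silk Dress"},
)
```

The design choice the sketch highlights: the internal store can evolve freely, while the projection is a frozen, schema-compatible contract that exists only for the duration of the migration.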

I see multiple advantages in this approach:

  1. The catalog service is completely decoupled from the monolith, which means it has its own release cycle and roadmap.
  2. Legacy services may migrate to the new catalog API at their own pace.
  3. The catalog service's data is encapsulated in the microservice, and it is the sole responsibility of the Catalog squad how and when it changes.
  4. We do not create a precedent of giving microservices access to the monolith. People will abuse that hole and eventually create a distributed monolith, where things are even worse than what we have right now with Mojo.

rlmartin commented 4 years ago

I agree with @semenodm that exposing the monolithic DB to microservices feels like an anti-pattern.

An alternative I would suggest is to set up a replicator that copies data into a read-only table in the monolithic MySQL DB. On top of this we could put a view that combines that data with the legacy table and presents it in a way that lets ActiveRecord read the data without code changes. Migration would look like: identify all writers to the legacy table and migrate them to a write API in the new microservice. Over time the view would come to reflect entirely microservice-driven data. Eventually we would want to move all reads to the microservice as well, but that could be tackled separately.
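The merge semantics such a view would need can be sketched like this; Python dicts simulate the view's behavior, where a row replicated from the microservice shadows the legacy row with the same key. Table and column names are hypothetical:

```python
def combined_view(legacy_rows: dict, replicated_rows: dict) -> dict:
    """Simulates the proposed MySQL view: for each primary key, prefer the
    row replicated from the microservice and fall back to the legacy row.
    Keys are primary keys; values are row dicts (names hypothetical)."""
    merged = dict(legacy_rows)       # start with everything the monolith has
    merged.update(replicated_rows)   # migrated rows shadow their legacy copies
    return merged

legacy = {1: {"id": 1, "name": "old title"}, 2: {"id": 2, "name": "unmigrated"}}
replicated = {1: {"id": 1, "name": "new title"}}
view = combined_view(legacy, replicated)
```

As writers migrate to the microservice's write API, `replicated_rows` grows and `legacy_rows` shrinks, until the view is backed entirely by replicated data.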

rlmartin commented 4 years ago

Also, I want to lightly question including Dynamo as the core/default in this architecture. I understand the benefits of being schemaless - which definitely has its uses - but sometimes relational is useful. Maybe Dynamo is always there as a dumb landing spot (though so too could be S3, or a JSON column in a DB), with transformers moving data from there to a structured format. But I think it is dangerous to always assume that Dynamo is the right tool - the danger being that some people may not question the default and treat it as the "always use" option. Of course, once schema matters, versioning of the data also matters.
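The landing-spot-plus-transformer idea, including the versioning concern at the end, could be sketched like this. A plain list stands in for the dumb landing spot (Dynamo, S3, or a JSON column), and the v1 record shape and function names are assumptions for illustration:

```python
def land_raw(landing: list, payload: dict) -> None:
    """Dumb landing spot: accept any shape, enforce nothing.
    The list stands in for Dynamo / S3 / a JSON column."""
    landing.append(payload)

def transform(payload: dict) -> dict:
    """Transformer from the landing spot into a structured record.
    Once schema matters, the schema version matters too: unknown
    versions fail loudly instead of being silently misread."""
    version = payload.get("schema_version", 1)
    if version != 1:
        raise ValueError(f"unknown schema_version: {version}")
    # v1 structured shape (hypothetical): keep only the fields we model.
    return {"id": payload["id"], "title": payload["title"], "schema_version": 1}

landing = []
land_raw(landing, {"id": "p1", "title": "Coat", "extra": "ignored by v1"})
record = transform(landing[0])
```

The split keeps the cheap "accept anything" write path separate from the place where relational structure - and therefore versioning - is actually enforced.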