linagora / tmail-backend

GNU Affero General Public License v3.0

S3 reliability: write across 2 AZ #1166

Open chibenwa opened 3 months ago

chibenwa commented 3 months ago

Description

A customer really wishes that data loss never happens again and is paranoid about it.

We wish to offer them a Twake Mail feature to write across 2 availability zones, synchronously.

(disclaimer: I personally advocate against this feature...)

Thus in case of failure:

Configuration changes

In blob.properties

objectstorage.s3.secondary.enabled=true
objectstorage.s3.secondary.endPoint=${env:TMAIL_S3_ENDPOINT}
objectstorage.s3.secondary.region=${env:TMAIL_S3_REGION}
objectstorage.s3.secondary.accessKeyId=${env:TMAIL_S3_ACCESS_KEY}
objectstorage.s3.secondary.secretKey=${env:TMAIL_S3_SECRET_KEY}

Plugged into the TMail backend module chooser.

Code & location

Maven module: tmail-backend/blob/secondary-blob-store

Write a SecondaryBlobStoreDAO class that takes 2 blob store DAOs (see the sketch below)

Plug this into the TMail blob module chooser
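A minimal skeleton of what that could look like, assuming a simplified BlobStoreDAO shape (the real James/TMail interface has more methods and reactive return types); the names below are illustrative only:

// Illustrative sketch only: the real BlobStoreDAO interface is larger.
public class SecondaryBlobStoreDAO implements BlobStoreDAO {
    private final BlobStoreDAO primary;   // S3 bucket in the first availability zone
    private final BlobStoreDAO secondary; // S3 bucket in the second availability zone

    public SecondaryBlobStoreDAO(BlobStoreDAO primary, BlobStoreDAO secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    // Write operations go to both underlying DAOs; the exact failure handling
    // is discussed later in this thread.
}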

Definition of done:

vttranlina commented 3 months ago

My personal view: this does not look like good practice. It should be the responsibility of S3.

I recently received a claim from the Ops team regarding an S3 bucket. @ducnm0711 do you have any better idea?

Looks like the S3 replication feature: https://aws.amazon.com/s3/features/replication/#:~:text=Amazon%20S3%20CRR%20automatically%20replicates,access%20in%20different%20geographic%20regions.

chibenwa commented 3 months ago

My personal view: this does not look like good practice. It should be the responsibility of S3.

Agreed, but it is not supported by OVH: we do not have a choice here.

tk-nguyen commented 3 months ago

It just got added: https://github.com/ovh/public-cloud-roadmap/issues/179#issuecomment-2203498282 Docs: https://help.ovhcloud.com/csm/asia-public-cloud-storage-s3-asynchronous-replication-buckets?id=kb_article_view&sysparm_article=KB0062424

quantranhong1999 commented 3 months ago

It just got added: https://github.com/ovh/public-cloud-roadmap/issues/179#issuecomment-2203498282

I have just seen that too. Likely we can use that publicly now.

tk-nguyen commented 3 months ago

Just tested, it seems to work OK. I followed this tutorial: https://help.ovhcloud.com/csm/asia-public-cloud-storage-s3-asynchronous-replication-buckets?id=kb_article_view&sysparm_article=KB0062424#using-the-cli

Note: it only works on objects uploaded after the replication rule is applied. See https://help.ovhcloud.com/csm/asia-public-cloud-storage-s3-asynchronous-replication-buckets?id=kb_article_view&sysparm_article=KB0062424#what-is-replicated-and-what-is-not

chibenwa commented 3 months ago

Let's validate if S3 asynchronous replication is acceptable by the customer first.

chibenwa commented 2 months ago

Edit: @PatrickPereiraLinagora will further check with the customer whether async replication is acceptable to them.

TL;DR: following the March incident, our margin for maneuver is not great. We might be forced to swallow our hats, but at least we will try!

ALSO, it turns out I misunderstood the ticket: we would also want to maintain automatic write availability.

Namely:

Nominal case

GIVEN we write to blobStoreA and blobStoreB in parallel
WHEN both operations succeed
THEN we return a storage success

Partial failure

GIVEN we write to blobStoreA and blobStoreB in parallel
WHEN write on blobStoreA succeeds and write on blobStoreB fails (or the reverse)
THEN we publish a message on RabbitMQ to retry the write operation later
AND the write succeeds

This means we need to set up a RabbitMQ queue to retry failed writes. The listener of the queue would then asynchronously read blobStoreA to complete the write on blobStoreB.

Total failure

GIVEN we write to blobStoreA and blobStoreB in parallel
WHEN write on blobStoreA fails and write on blobStoreB fails
THEN the write fails
AND no message is published on RabbitMQ
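A rough Reactor-based sketch covering the three write scenarios above, continuing the SecondaryBlobStoreDAO skeleton from the issue description; the save signature, the exception type and the retryQueue publisher are simplified assumptions, not the actual TMail API:

// Simplified sketch; real code would use BucketName/BlobId types and a dedicated exception.
Mono<Void> save(String bucket, String blobId, byte[] data) {
    Mono<Boolean> primaryWrite = Mono.from(primary.save(bucket, blobId, data))
        .thenReturn(true)
        .onErrorResume(e -> Mono.just(false)); // swallow the error, remember the outcome
    Mono<Boolean> secondaryWrite = Mono.from(secondary.save(bucket, blobId, data))
        .thenReturn(true)
        .onErrorResume(e -> Mono.just(false));

    return Mono.zip(primaryWrite, secondaryWrite)
        .flatMap(results -> {
            boolean primaryOk = results.getT1();
            boolean secondaryOk = results.getT2();
            if (primaryOk && secondaryOk) {
                return Mono.empty(); // nominal case
            }
            if (primaryOk || secondaryOk) {
                // partial failure: the write succeeds, but the missing side is retried later via RabbitMQ
                return retryQueue.publish(bucket, blobId, primaryOk ? "secondary" : "primary");
            }
            return Mono.error(new RuntimeException("Write failed on both blob stores")); // total failure
        });
}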

Read path

Read operations are performed on A, and fall back to B in case of error, or if the object is not found in A.
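A matching sketch of that read path, with the same simplified signatures; since a missing object in A surfaces as an error, the same fallback covers both cases:

// Simplified sketch: try the primary first, fall back to the secondary
// on any error, including "object not found".
Mono<byte[]> readBytes(String bucket, String blobId) {
    return Mono.from(primary.readBytes(bucket, blobId))
        .onErrorResume(e -> Mono.from(secondary.readBytes(bucket, blobId)));
}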

Arsnael commented 2 months ago

TODO: write tickets

Arsnael commented 2 months ago

Remark: we can plug it into the blob module chooser so we only encrypt with AES once for both S3 blob stores (see the wiring sketch below).
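A wiring sketch of that remark; AESBlobStoreDAO and its constructor arguments are assumptions about the existing encryption layer, only the composition order matters here:

// Hypothetical wiring: the AES layer wraps the two-AZ DAO once,
// instead of wrapping each underlying S3 DAO separately.
BlobStoreDAO twoAzDao = new SecondaryBlobStoreDAO(s3DaoAzA, s3DaoAzB);
BlobStoreDAO encryptedDao = new AESBlobStoreDAO(twoAzDao, cryptoConfig);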

hungphan227 commented 2 months ago

Should we have a cron job to ensure consistency between the 2 AZs?

vttranlina commented 2 months ago

Should we have a cron job to ensure consistency between the 2 AZs?

A cron job to trigger what?

Ah, a cron job to trigger webadmin to rerun tasks from the dead letter queue.

hungphan227 commented 2 months ago

Should we have a cron job to ensure consistency between the 2 AZs?

A cron job to trigger what?

Maybe checking for any mismatch between the 2 AZs, or executing the event dead letter queue in case a retry fails.

vttranlina commented 2 months ago

Maybe checking for any mismatch between the 2 AZs, or executing the event dead letter queue in case a retry fails.

As I understand it, we always trust objectstorage.s3.primary. When the write to blobStoreA fails -> total failure (even if the save to blobStoreB succeeds).

hungphan227 commented 2 months ago

"When write blobStoreA fail -> total fail (even save to blobStoreB success)" -----------> isn't this partial failure?

Arsnael commented 2 months ago

When the write to blobStoreA fails -> total failure (even if the save to blobStoreB succeeds)

No. The reverse is true too, read again https://github.com/linagora/tmail-backend/issues/1166#issuecomment-2334065790

vttranlina commented 2 months ago

When the write to blobStoreA fails -> total failure (even if the save to blobStoreB succeeds)

No. The reverse is true too, read again #1166 (comment)

I tried to read it again, but I do not see what is wrong. We have 2 blob stores:

Benoit gave 3 examples; they do not contain the case "A fails, B succeeds", but I think it is close to the Total failure case.

Where did I go wrong?

Arsnael commented 2 months ago

From Benoit:

Partial failure

GIVEN we write to blobStoreA and blobStoreB in parallel
WHEN write on blobStoreA succeeds and write on blobStoreB fails (or the reverse)
THEN we publish a message on RabbitMQ to retry the write operation later
AND the write succeeds

This means we need to set up a RabbitMQ queue to retry failed writes. The listener of the queue would then asynchronously read blobStoreA to complete the write on blobStoreB.

I think the step WHEN write on blobStoreA succeeds and write on blobStoreB fails (or the reverse) is clear :)

Arsnael commented 2 months ago

The reverse means: WHEN write on blobStoreA fails and write on blobStoreB succeeds

vttranlina commented 2 months ago

I see "(or the reverse)" With this logic, we have 2 "primary" blobStore. With this logic, common issues may arise, such as misunderstanding it as adding new data or deleting when the data on both sides is out of sync...and the event ordering problem...blabla

I propose: WHEN write on blobStoreA fails and write on blobStoreB succeeds -> TOTAL FAIL

WDYT?

Arsnael commented 2 months ago

Isn't it the job of the RabbitMQ queue to retry the failed item on one of the blob stores? Read blobStoreA to get the blob. Missing? Then read it on blobStoreB and write it on A.

Not sure about your concern here

Arsnael commented 2 months ago

@chibenwa thoughts on @vttranlina concern above?

chibenwa commented 2 months ago

We are dealing with immutable data. Not a concern as long as we rely on RabbitMQ for resiliency.

We would only get residual data on failure, the same way we get it with our current architecture anyway.

Not a concern

Though I will be short on time to provide you with a formal demonstration.

chibenwa commented 1 month ago

This needs to be deployed at least on CNB preprod for me to consider this done!

Arsnael commented 1 month ago

Sorry my bad