apache / servicecomb-pack

Apache ServiceComb Pack is an eventual data consistency solution for microservice applications. ServiceComb Pack currently provides TCC and Saga distributed transaction coordination solutions, using Alpha as the transaction coordinator and Omega as the transaction agent.
https://servicecomb.apache.org/
Apache License 2.0

State machine storage: support redis-cluster #689

Closed githubcheng2978 closed 3 years ago

githubcheng2978 commented 3 years ago

State machine storage supports Redis, but not redis-cluster.

coolbeevip commented 3 years ago

akka-persistence-redis officially supports only Redis standalone and sentinel modes:

https://index.scala-lang.org/safety-data/akka-persistence-redis/akka-persistence-redis/0.4.2?target=_2.13

githubcheng2978 commented 3 years ago

I read the akka-persistence-redis source code; the relevant part is the following:

```scala
def asyncWriteMessages(messages: Seq[AtomicWrite]): Future[Seq[Try[Unit]]] =
  Future.sequence(messages.map(asyncWriteBatch))

var redisClient: RedisClient = _

private def asyncWriteBatch(a: AtomicWrite): Future[Try[Unit]] = {

  val batchOperations = Future
    .sequence(a.payload.map(asyncWriteOperation(redis, _)))
    .zip(redis.set(highestSequenceNrKey(a.persistenceId), a.highestSequenceNr))
    .zip(redis.sadd(identifiersKey, a.persistenceId))
    .flatMap {
      case ((_, _), n) =>
        // notify about new persistence identifier if needed
        if (n > 0)
          redis.publish(identifiersChannel, a.persistenceId).map(_ => ())
        else
          Future.successful(())
    }

  batchOperations
    .map(Success(_))
    .recover {
      case ex => Failure(ex)
    }
}
```

When persisting state machine events, this code relies on Redis's transaction mechanism, but Redis Cluster does not support transactions by default. However, the official Scala Redis client (scala-redis) supports KeyTags, a mechanism that makes transactions possible in cluster mode: https://index.scala-lang.org/debasishg/scala-redis/redisclient/3.30?target=_2.13
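To illustrate the KeyTag (hash tag) mechanism mentioned above: Redis Cluster hashes only the substring between the first `{` and the following `}` of a key, so keys sharing the same tag map to the same hash slot and can participate in one `MULTI`/`EXEC`. A minimal sketch, with hypothetical key names (not the actual ServiceComb Pack layout):

```java
public class HashTagKeys {

    // Returns the hash tag of a key, or null if the key has none.
    // Mirrors the slot-selection rule from the Redis Cluster spec:
    // only the text between the first '{' and the next '}' is hashed.
    static String extractHashTag(String key) {
        int open = key.indexOf('{');
        if (open < 0) return null;
        int close = key.indexOf('}', open + 1);
        if (close <= open + 1) return null; // empty tag "{}" hashes the whole key
        return key.substring(open + 1, close);
    }

    // Hypothetical key layout: every key for one state-machine instance
    // carries the same {tag}, so a transaction touching all of them
    // stays on a single cluster slot.
    static String journalKey(String persistenceId) {
        return "journal:{" + persistenceId + "}";
    }

    static String highestSeqNrKey(String persistenceId) {
        return "highestSequenceNr:{" + persistenceId + "}";
    }
}
```

With this layout, `journal:{tx-1}` and `highestSequenceNr:{tx-1}` both hash on `tx-1`, so the `set`/`sadd` pair in `asyncWriteBatch` could run in one transaction even on a cluster.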

WillemJiang commented 3 years ago

For Alpha, if the distributed transaction routing rule is unique, can the state machine's requirement for transactions be relaxed?

githubcheng2978 commented 3 years ago

I think so. The persisted state machine data is only used when the cluster performs recovery, and once a globalTxId finishes, the corresponding persisted data is cleaned up anyway. Besides, Redis's transaction mechanism is also very reliable

githubcheng2978 commented 3 years ago

Correction: not very reliable.
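The recovery-only role of persistence described above can be sketched as follows. This is a toy in-memory model, not the real Alpha implementation: each event append is an independent write (no multi-key transaction), recovery simply replays whatever was durably written, and completion of a globalTxId purges its data.

```java
import java.util.*;

// Toy sketch of relaxed (non-transactional) state-machine persistence:
// events for a globalTxId are appended independently, replayed only on
// recovery, and deleted once the global transaction completes.
public class SagaEventStore {
    private final Map<String, List<String>> eventsByTxId = new HashMap<>();

    // Each append is a single independent write; no transaction needed.
    public void append(String globalTxId, String event) {
        eventsByTxId.computeIfAbsent(globalTxId, k -> new ArrayList<>()).add(event);
    }

    // Recovery replays whatever events were persisted before the crash.
    public List<String> recover(String globalTxId) {
        return eventsByTxId.getOrDefault(globalTxId, Collections.emptyList());
    }

    // After the global transaction ends, its persisted data is cleaned up.
    public void complete(String globalTxId) {
        eventsByTxId.remove(globalTxId);
    }
}
```

The trade-off: a crash between appends may leave a partial event sequence, which recovery code would have to tolerate; that is what "relaxing" the transaction requirement means in practice.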

coolbeevip commented 3 years ago

Redis Cluster is not able to guarantee strong consistency.

In the document https://redis.io/topics/cluster-tutorial, it is described as follows:

Redis Cluster is not able to guarantee strong consistency. In practical terms this means that under certain conditions it is possible that Redis Cluster will lose writes that were acknowledged by the system to the client.

The first reason why Redis Cluster can lose writes is because it uses asynchronous replication. This means that during writes the following happens:

- Your client writes to the master B.
- The master B replies OK to your client.
- The master B propagates the write to its replicas B1, B2 and B3.

As you can see, B does not wait for an acknowledgement from B1, B2, B3 before replying to the client, since this would be a prohibitive latency penalty for Redis, so if your client writes something, B acknowledges the write, but crashes before being able to send the write to its slaves, one of the slaves (that did not receive the write) can be promoted to master, losing the write forever.
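The write-loss scenario quoted above can be made concrete with a toy in-memory simulation (not real Redis): the master acknowledges the write before replicating it asynchronously, so a crash before replication means a promoted replica never sees the write.

```java
import java.util.*;

// Toy simulation of asynchronous replication losing an acknowledged write.
public class AsyncReplicationDemo {
    static class Node {
        final Map<String, String> data = new HashMap<>();
    }

    static class Master extends Node {
        boolean crashed = false;
        final List<Node> replicas;

        Master(List<Node> replicas) { this.replicas = replicas; }

        // The write is acknowledged to the client immediately,
        // before any replica has received it.
        String write(String key, String value) {
            data.put(key, value);
            return "OK";
        }

        // Replication runs later, asynchronously; a crash before this
        // point means the write never reaches the replicas.
        void replicate() {
            if (!crashed) {
                for (Node r : replicas) r.data.putAll(data);
            }
        }
    }
}
```

If the master crashes after `write` returns "OK" but before `replicate` runs, a replica promoted to master is missing the key, which is exactly the acknowledged-then-lost write the documentation warns about.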