Description:
We are using the Copper Engine in a cloud environment and leveraging MariaDB (DBaaS) for persistent workflows. In our workflows, we set savepoints. To meet disaster recovery requirements, we now run our pods in a second cloud location in parallel using the same database.
However, we observe that parts of a started workflow are executed twice. This contradicts the "High Availability / Load Distribution" section of the documentation, which suggests that such duplicate execution should not occur.
Environment:
Copper Engine Version: 5.4.2
Database: MariaDB (DBaaS)
Cloud Environment: private cloud based on Kubernetes
Possible Workaround:
We have found a workaround that appears to work in our tests: we assigned each instance a distinct processor pool ID and configured each instance to process only its own pool.
Unfortunately, I cannot share the specific code as it is not public.
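In outline, though, the approach resembles the following sketch (an illustration only, not our production code; the class names `PersistentPriorityProcessorPool` and `WorkflowInstanceDescr` come from the copper-engine Java API, but the exact constructor signatures, the `txnController`/`engine` wiring, and the pool ID values are assumptions):

```java
// Sketch only: each pod is started with a site-specific pool ID,
// e.g. from an environment variable, so the two cloud locations
// never dequeue each other's workflow instances from the shared database.
String poolId = System.getenv("COPPER_POOL_ID"); // e.g. "P#SITE_A" or "P#SITE_B"

// The engine in each pod is set up with a processor pool for this
// site's ID only (the exact wiring depends on how the engine is built).
PersistentPriorityProcessorPool pool =
        new PersistentPriorityProcessorPool(poolId, txnController);

// New workflow instances are explicitly bound to the local pool.
WorkflowInstanceDescr<String> descr = new WorkflowInstanceDescr<>("MyWorkflowClass");
descr.setProcessorPoolId(poolId);
descr.setData(payload);
engine.run(descr);
```

With this setup, each site only processes instances it started itself, which avoids the duplicate execution but also gives up cross-site load balancing and failover of in-flight instances.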
Is it supported to use the same database instance across different containers for persistent workflows?