Open acravenho opened 6 years ago
As we concluded in the POC (https://github.com/poanetwork/blockscout/issues/827, https://github.com/poanetwork/blockscout/pull/842), this issue will be about separating the user data from the blockchain data so we don't lose the user info every time we need to re-index the database.
The communication between the explorers will be handled in another card.
- scheduler: module to run from time to time;
- export: the database data;
- S3 bucket;
- app.spec.yml.example at https://github.com/poanetwork/blockscout-terraform;
- ApplicationStart: include a new step to restore_data.

We can also consider solving this on the DevOps side, using the terraform scripts.
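The scheduled export step above could look roughly like the following sketch. The table names (`smart_contracts`, `address_names`), the bucket name, and the use of `pg_dump` plus the AWS CLI are all assumptions for illustration, not the project's actual setup.

```python
# Sketch of a periodic user-data export job (hypothetical table and
# bucket names; the real schema may differ).
import subprocess
import time

USER_DATA_TABLES = ["smart_contracts", "address_names"]  # assumed names
BUCKET = "s3://example-blockscout-dumps"  # hypothetical bucket


def dump_command(database_url, tables):
    """Build a pg_dump command restricted to the user-data tables."""
    cmd = ["pg_dump", "--data-only", database_url]
    for table in tables:
        cmd += ["--table", table]
    return cmd


def export_user_data(database_url):
    """Dump the user-data tables and upload the result to S3."""
    dump = subprocess.run(
        dump_command(database_url, USER_DATA_TABLES),
        capture_output=True, check=True,
    ).stdout
    # Timestamped filename so consumers can pick the newest dump.
    filename = "user_data_%d.sql" % int(time.time())
    # Upload via the AWS CLI; an SDK such as boto3 would work equally well.
    subprocess.run(
        ["aws", "s3", "cp", "-", "%s/%s" % (BUCKET, filename)],
        input=dump, check=True,
    )
```

A scheduler (cron, or an in-app periodic job) would call `export_user_data` at the agreed interval.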
@amandasposito
Not all deploys should dump data, only the explorers that we deploy. There is a possibility of an outside blockscout instance giving us false data. However, all of our own deployed explorers should dump their data.
The ideal time should be every 12-24 hours.
I think 1 file is fine for now. Remember, once they are communicating, this data can be extracted from all instances.
Before the deploy is fine for now.
Since this issue is not a priority right now, we are closing the PR and keeping it for future reference.
The Problem
When an RDS instance is dropped from production or staging, user data such as contract source code, address names, and other contract data is lost.
The Solution
We should dump this type of data to a public repo, an S3 bucket, or IPFS so that other explorers can index it immediately upon deploy and update it periodically. Having this type of communication between the explorers will provide better decentralization of user data.
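The restore-on-deploy half could be a small script run from a deploy hook (e.g. CodeDeploy's ApplicationStart): list the dumps in the bucket, pick the newest, and pipe it into the database. The bucket name, filename scheme, and use of the AWS CLI and `psql` are assumptions for illustration.

```python
# Sketch of the restore step run at deploy time (hypothetical bucket
# and filename scheme matching a timestamped "user_data_<epoch>.sql").
import subprocess

BUCKET = "s3://example-blockscout-dumps"  # hypothetical bucket


def latest_dump_key(listing):
    """Pick the newest dump from an `aws s3 ls`-style listing.

    Dump filenames embed an epoch timestamp, so lexicographic order on
    equal-length names matches chronological order.
    """
    keys = [line.split()[-1] for line in listing.splitlines() if line.strip()]
    return max(keys) if keys else None


def restore_user_data(database_url):
    """Download the newest dump and load it into the database."""
    listing = subprocess.run(
        ["aws", "s3", "ls", BUCKET + "/"],
        capture_output=True, check=True, text=True,
    ).stdout
    key = latest_dump_key(listing)
    if key is None:
        return  # fresh bucket, nothing to restore
    subprocess.run(
        ["sh", "-c", "aws s3 cp %s/%s - | psql %s" % (BUCKET, key, database_url)],
        check=True,
    )
```

With this in place, a re-indexed explorer comes back up with the user data already populated instead of waiting for users to resubmit it.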