Open · nikooo777 opened this issue 5 years ago
This is a good issue. We need to solve it for internal-apis as well, since both use the same API server.
This is a potential problem. If we lose the database connection, our dataset could be left in a corrupt state. We need something similar to a transaction for chainquery. Since we have multiple routines processing Inputs, Outputs, and Transactions within a block concurrently (soon blocks too), any loss of the db connection puts the application in a bad state. The atomic state is currently at the block level: we save things at different points in time across goroutines, and if any routine reports an error, the block is rolled back. This significantly improves our error handling. However, if the db connection is lost, the block currently cannot be rolled back. So we need a way to determine whether a block finished processing successfully, beyond its mere existence.
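A minimal sketch (in Go, not chainquery's actual code) of that pattern, with the processor callbacks and the `rollbackBlock` helper as hypothetical placeholders: each routine processes its piece of the block, and if any of them fails the whole block is rolled back, which is exactly the step that cannot run once the connection is gone.

```go
package blocks

import (
	"context"
	"database/sql"

	"golang.org/x/sync/errgroup"
)

// rollbackBlock is a hypothetical stand-in for chainquery's block rollback:
// it deletes whatever partial data was written for the block.
func rollbackBlock(db *sql.DB, height uint64) error {
	_, err := db.Exec(`DELETE FROM block WHERE height = ?`, height)
	return err
}

// processBlock runs each processor (e.g. transactions, inputs, outputs) in
// its own goroutine. If any of them fails, the block is rolled back, which
// itself requires a working db connection.
func processBlock(ctx context.Context, db *sql.DB, height uint64,
	processors ...func(context.Context, *sql.DB, uint64) error) error {

	g, gctx := errgroup.WithContext(ctx)
	for _, p := range processors {
		p := p // capture loop variable for Go < 1.22
		g.Go(func() error { return p(gctx, db, height) })
	}
	if err := g.Wait(); err != nil {
		_ = rollbackBlock(db, height) // also fails if the connection is gone
		return err
	}
	return nil
}
```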
An improvement that would make this possible is a column on `block` called `is_processed` or something similar. When all goroutines finish successfully, the last db statement for the block flips this column to true. That way, if we lose the db connection, then on a successful reconnect the next block to process is the highest height where `is_processed` is true, plus 1.
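For illustration, a sketch of what that could look like against MySQL, assuming a `block` table with an `is_processed` flag as suggested above (column and function names here are placeholders, not chainquery's actual schema):

```go
package blocks

import "database/sql"

// markBlockProcessed is the last db statement for a block: it only runs
// after every goroutine has finished successfully.
func markBlockProcessed(db *sql.DB, height uint64) error {
	_, err := db.Exec(`UPDATE block SET is_processed = 1 WHERE height = ?`, height)
	return err
}

// nextHeightToProcess is used after a reconnect: resume at the highest
// fully processed height, plus one.
func nextHeightToProcess(db *sql.DB) (uint64, error) {
	var max sql.NullInt64
	err := db.QueryRow(`SELECT MAX(height) FROM block WHERE is_processed = 1`).Scan(&max)
	if err != nil {
		return 0, err
	}
	if !max.Valid {
		return 0, nil // nothing processed yet, start at the first block
	}
	return uint64(max.Int64) + 1, nil
}
```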
This will be required for parallel block processing. I will add it to the 2.0 list.
While testing I accidentally restarted MySQL and, understandably, chainquery crashed.
The stack trace is as follows:
Would it be possible to handle this case better, e.g. by re-attempting the connection every 10 seconds or so?
This would ensure that chainquery maintains continuity even across a database restart.
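A rough sketch of that behaviour using only database/sql; `waitForDB` is a hypothetical helper that the processing loop could call whenever a query fails with a connection error, instead of exiting:

```go
package dbretry

import (
	"context"
	"database/sql"
	"log"
	"time"
)

// waitForDB blocks until the database answers a ping again, retrying every
// 10 seconds instead of letting the process crash.
func waitForDB(ctx context.Context, db *sql.DB) error {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		err := db.PingContext(ctx)
		if err == nil {
			return nil
		}
		log.Printf("database unavailable, retrying in 10s: %v", err)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
			// try again
		}
	}
}
```

Once the database is reachable again, processing could then resume from the highest fully processed height as discussed above.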