As part of https://github.com/filecoin-station/roadmap/issues/97 and https://github.com/filecoin-station/roadmap/issues/106, we need to build a new service that listens for Meridian events and updates the spark-stats database. The current design, where spark-evaluate manages and updates the spark-stats database, no longer serves us well.

Going forward, we want to implement the following design:
- A single repository holding the `spark-stats` database schema migration scripts, the code writing the stats (the smart-contract event observer), and the code reading the stats (the REST API server).
- A monorepo-like structure that makes it easy to install dependencies, run tests, and perform other actions across all components with a single command (see the sketch after this list).
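For illustration, a root `package.json` using npm workspaces could provide such one-command actions. This is a minimal sketch, assuming npm workspaces; the workspace names `api`, `observer`, and `migrations` are placeholders, not the final layout:

```json
{
  "name": "spark-stats",
  "private": true,
  "workspaces": ["api", "observer", "migrations"],
  "scripts": {
    "test": "npm test --workspaces",
    "lint": "npm run lint --workspaces --if-present"
  }
}
```

With this layout, `npm install` in the repository root installs dependencies for all components, and `npm test` runs every component's test suite.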
### Tasks
- [ ] https://github.com/filecoin-station/spark-impact-evaluator/issues/12
- [x] Migrate the `spark-stats` database to the `spark-evaluate` database. Update the `spark-evaluate` and `spark-stats` database connection strings in Fly secrets to use the new database.
- [x] Implement a new monorepo structure in the `spark-api` repository.
- [x] Rework the `spark-stats` repository to use this new monorepo structure too.
- [x] Add a new component, `spark-observer`, to the `spark-stats` repository and set up the Fly.io deployment.
- [x] Add database migration scripts to the `spark-stats` repository.
- [x] Modify `spark-stats` to read from two databases: `spark-evaluate` for existing data and `spark-stats` for future data created by `spark-observer` (see the sketch after this list).
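A rough sketch of the dual-database read path, assuming `pg` for database access; the environment variable names and table names below are hypothetical:

```js
import pg from 'pg'

// Existing data lives in the spark-evaluate database; new data
// written by spark-observer lives in the spark-stats database.
const evaluatePool = new pg.Pool({ connectionString: process.env.EVALUATE_DB_URL })
const statsPool = new pg.Pool({ connectionString: process.env.STATS_DB_URL })

// Route each query to the database that owns the table.
// (Table names are placeholders for illustration only.)
export const getRetrievalStats = (day) =>
  evaluatePool.query('SELECT * FROM retrieval_stats WHERE day = $1', [day])

export const getObserverEvents = (day) =>
  statsPool.query('SELECT * FROM observer_events WHERE day = $1', [day])
```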
### Nice-to-have improvements
- [ ] Add npm scripts like `test:api` and `lint:publish` to `spark-api`
- [ ] Add npm scripts like `test:stats` and `lint:observer` to `spark-stats` (see the sketch below)
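For example, per-component scripts in the root `package.json` could delegate to individual workspaces. This is a hypothetical sketch; the actual workspace names may differ:

```json
{
  "scripts": {
    "test:api": "npm test --workspace api",
    "test:stats": "npm test --workspace stats",
    "lint:observer": "npm run lint --workspace observer"
  }
}
```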