Current link: https://app-backend-production-676d.up.railway.app/docs
This service is designed to handle two primary tasks:

- Data Aggregation:
  - Event Listener: The service listens for `RewardsConvertedToUsdc` events from the fee manager contract and stores them in the database for future APR calculations.
  - Daily Aggregation: The service runs a daily job that fetches and stores historical vault data, such as TVL and Token0/Token1 prices.
- API Endpoint: Exposes the aggregated vault data, including APR calculations. For the APR calculation, the `startingBlock` env property is taken as `startTimestamp`, and the `rate` parameter fetched from the fee manager contract is multiplied with the fees per day (see the sketch below).
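As a rough illustration of that APR calculation, here is a minimal sketch. The function name, field names and the simple 365-day annualization are assumptions, not the service's actual implementation:

```typescript
// Hypothetical sketch: daily rewards = fees per day * rate (read from the fee manager
// contract), annualized against the vault's TVL. All names and the exact formula are
// assumptions for illustration only.
export function estimateApr(params: {
  feesPerDayUsd: number; // average fees accrued per day, in USD
  rate: number;          // rate parameter fetched from the fee manager contract
  tvlUsd: number;        // current vault TVL, in USD
}): number {
  const { feesPerDayUsd, rate, tvlUsd } = params;
  if (tvlUsd <= 0) return 0;
  const dailyRewardsUsd = feesPerDayUsd * rate;    // "rate multiplied with the fees per day"
  const yearlyRewardsUsd = dailyRewardsUsd * 365;  // simple, non-compounding annualization
  return (yearlyRewardsUsd / tvlUsd) * 100;        // APR as a percentage
}
```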
Project structure:

```
src/
├── aggregators/
│   ├── vault-aggregator/      # Handles daily vault data aggregation
│   └── events-aggregator/     # Handles listening to "RewardsConvertedToUsdc" events
├── api/
│   └── vaults/                # API module for vault-related data, including APR calculations
├── blockchain-connectors/     # Manages blockchain connectivity
├── contract-connectors/       # Services for interacting with contracts (e.g. Arrakis, ERC-20 and Fee Manager)
├── database/                  # MongoDB schemas and database interaction services
├── price-oracles/             # Price-Oracle service (Coingecko API)
├── config/                    # Configuration settings for the service
├── shared/                    # Shared models, types, classes, enums and utility functions
└── utils/                     # Various utility functions
```
Key components:

- `src/aggregators/vault-aggregator/vault-aggregator.service.ts`: Runs the daily job that aggregates and stores historical vault data.
- `src/aggregators/events-aggregator/events-aggregator.service.ts`: Syncs `RewardsConvertedToUsdc` events from `startingBlock`, then schedules an hourly sync job (see the sketch below).
- `src/api/vault/vault.service.ts`: Exposes vault-related data via the API, including APR calculations.
- `src/database/`: MongoDB schemas and database interaction services (e.g. `RewardsConvertedToUsdc.schema.ts`, `VaultHistoricalData.schema.ts`).
- `src/blockchain-connectors/`: Manages blockchain connectivity.
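The hourly event sync could be wired up roughly as sketched below. This is an illustration only: the connector and repository interfaces, their method names and the `@nestjs/schedule` cron wiring are assumptions rather than the actual implementation.

```typescript
import { Injectable, Logger, OnModuleInit } from "@nestjs/common";
import { ConfigService } from "@nestjs/config";
import { Cron, CronExpression } from "@nestjs/schedule";

// Placeholder shapes for the project's contract-connector and database services.
// Names and method signatures here are assumptions for illustration only.
interface FeeManagerConnector {
  getLatestBlockNumber(): Promise<bigint>;
  getRewardsConvertedToUsdcEvents(fromBlock: bigint, toBlock: bigint): Promise<unknown[]>;
}
interface RewardsConvertedToUsdcRepository {
  saveMany(events: unknown[]): Promise<void>;
}

@Injectable()
export class EventsAggregatorService implements OnModuleInit {
  private readonly logger = new Logger(EventsAggregatorService.name);
  private lastSyncedBlock!: bigint;

  constructor(
    private readonly feeManager: FeeManagerConnector,
    private readonly rewardsRepo: RewardsConvertedToUsdcRepository,
    private readonly config: ConfigService,
  ) {}

  // Initial sync starts from the configured startingBlock.
  async onModuleInit(): Promise<void> {
    this.lastSyncedBlock = BigInt(this.config.get<string>("startingBlock") ?? "0");
    await this.syncEvents();
  }

  // Hourly incremental sync of RewardsConvertedToUsdc events.
  @Cron(CronExpression.EVERY_HOUR)
  async syncEvents(): Promise<void> {
    const latestBlock = await this.feeManager.getLatestBlockNumber();
    const events = await this.feeManager.getRewardsConvertedToUsdcEvents(
      this.lastSyncedBlock,
      latestBlock,
    );
    await this.rewardsRepo.saveMany(events); // stored for later APR calculations
    this.lastSyncedBlock = latestBlock + 1n;
    this.logger.log(`Synced ${events.length} events up to block ${latestBlock}`);
  }
}
```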
To run the service locally:

1. Clone the Repository:

   ```bash
   git clone <repository-url>
   cd <project-directory>
   ```

2. Install Dependencies:

   ```bash
   npm install
   ```

3. Environment Variables: Create a `.env` file with the required configuration. At minimum you'll need the values referenced in this README, such as `MONGODB_URI` and `startingBlock` (see the sketch after this list).

4. Run the Application:

   ```bash
   npm run start
   ```

5. Running Tests:

   ```bash
   npm run test
   ```
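A minimal `.env` sketch covering only the variables mentioned in this README (the values shown are placeholders, and an actual deployment will likely need additional settings such as RPC URLs):

```
# .env (illustrative only)
MONGODB_URI=mongodb://localhost:27017/yourdbname
startingBlock=12345678   # block from which event syncing and APR history start
```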
Once the application is running, the vault aggregation logic starts automatically: the service listens for `RewardsConvertedToUsdc` events and stores them in the database, and the daily vault aggregation job runs on schedule. The vault information is exposed via an API; see the Swagger docs linked at the top for the available endpoints.
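As a hypothetical usage example (the actual route may differ; check the Swagger docs at `/docs` for the real endpoints):

```typescript
// Hypothetical request to the vaults API; the real path and response shape may differ.
const res = await fetch("https://app-backend-production-676d.up.railway.app/vaults");
console.log(await res.json());
```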
## Importing Whitelists

To manage multiple whitelists for different projects, follow the steps below to import your JSON files into MongoDB. This process needs to be done only once for each whitelist file, as the data will be stored and accessible in the database thereafter.
1. Add the Whitelist JSON Files
• Place your JSON files in the /src/static directory:

```
src/
├── static/
│   ├── fomo.json
│   ├── projectA.json
│   ├── projectB.json
│   └── … other JSON files
└── … other directories
```

• Each file contains an array of entries, where `value` holds the address and amount and `proof` holds the proof entries:

```json
[
  {
    "value": [
      "0xAddress1",
      "384992472620497583"
    ],
    "proof": [
      "0xfc13c899b6516cf2dac5e27ecb0752e46e0ee419ad13d8b6c556d94ee8752ae2",
      "0x3b86523d566ffbd123f49de172f6b82cb9df34900acd7a2f8f4d2a913d24c0f9",
      // ... more proof entries
    ]
  },
  // ... more entries
]
```
2. Note on the /src/static Directory
• The /src/static directory is included in .gitignore due to the large size of JSON files. This means these files will not be tracked by Git and must be managed manually or through another method (e.g., deployment scripts).
3. Running the Import Script
The import process is designed to read all JSON files within the /src/static directory and populate the MongoDB database accordingly.
a. Ensure Your MongoDB Connection
• Verify that your MongoDB connection URI is correctly set in your .env file. Example:
```
MONGODB_URI=mongodb://localhost:27017/yourdbname
```
b. Execute the Import Command
• Run the following command to start the import process:
```bash
npm run import
```
Note: Ensure that the import script is defined in your package.json. If not, you can add it as follows:
```jsonc
// package.json
{
  "scripts": {
    // ... other scripts
    "import": "ts-node src/import.ts" // Adjust the path and command as needed
  }
}
```
• Example Output:
```
[ImportService] Reading data directory: /path/to/src/static
[ImportService] Reading data from /path/to/src/static/fomo.json...
[ImportService] Starting import of 1000 records from fomo.json...
[ImportService] Import from fomo.json completed. Inserted: 1000, Modified: 0
[ImportService] Reading data from /path/to/src/static/projectA.json...
[ImportService] Starting import of 1500 records from projectA.json...
[ImportService] Import from projectA.json completed. Inserted: 1500, Modified: 0
[ImportService] All data imports completed.
```
c. Verify the Import
• After running the import script, verify that the data has been successfully inserted into MongoDB.
• Using MongoDB Compass or the Mongo Shell:
```
// Example using Mongo shell
use yourdbname
db.whitelists.find({ project: "fomo" }).limit(5).pretty()
```
Expected Document Structure:
```js
{
  "_id": ObjectId("..."),
  "address": "0x7bcd8185b7f4171017397993345726e15457b1ee",
  "proof": [
    "0xfc13c899b6516cf2dac5e27ecb0752e46e0ee419ad13d8b6c556d94ee8752ae2",
    "0x3b86523d566ffbd123f49de172f6b82cb9df34900acd7a2f8f4d2a913d24c0f9",
    // ... more proof entries
  ],
  "project": "fomo"
}
```
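For reference, a minimal sketch of what a matching Mongoose schema could look like, based purely on the document shape above (the schema, model and collection names are assumptions):

```typescript
import { Schema, model } from "mongoose";

// Hypothetical schema matching the whitelist documents shown above.
const whitelistSchema = new Schema({
  address: { type: String, required: true, lowercase: true },
  proof: { type: [String], required: true },
  project: { type: String, required: true, index: true },
});

// One document per whitelisted address per project, stored in the "whitelists" collection.
export const WhitelistModel = model("Whitelist", whitelistSchema, "whitelists");
```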
4. Managing Future Whitelist Files
For future projects, simply add the new JSON files to the /src/static directory and run the import script again. Since each project is identified by the filename (e.g., projectB.json), the import script will handle them appropriately.
Example:
1. Add projectC.json to /src/static.
2. Run the import script:

   ```bash
   npm run import
   ```
3. Verify the import in MongoDB.
5. Import Script Overview
Here’s a brief overview of how the import script works:
• File Location: The import script reads all `.json` files located in `/src/static`.
• Project Identification: Each JSON file's name (e.g., `fomo.json`) is used as the project identifier in the database.
• Data Mapping: For each entry in the JSON file:
  • `value[0]` is mapped to the `address` field.
  • The `proof` array is mapped to the `proof` field.
  • The filename (without extension) is mapped to the `project` field.
• Database Operation: Utilizes bulk write operations to efficiently insert or update records in MongoDB (see the sketch below).
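Putting those pieces together, the import logic could look roughly like the sketch below. It is illustrative only: the file paths, the `WhitelistModel` from the schema sketch above and the upsert criteria are assumptions rather than the project's actual `ImportService`.

```typescript
import { promises as fs } from "fs";
import * as path from "path";
import { WhitelistModel } from "./database/whitelist.schema"; // hypothetical import path

type WhitelistEntry = { value: [string, string]; proof: string[] };

// Reads every .json file in /src/static and bulk-upserts its entries into MongoDB.
export async function importWhitelists(
  staticDir: string = path.join(__dirname, "static"),
): Promise<void> {
  const files = (await fs.readdir(staticDir)).filter((f) => f.endsWith(".json"));

  for (const file of files) {
    const project = path.basename(file, ".json"); // filename (without extension) -> project field
    const raw = await fs.readFile(path.join(staticDir, file), "utf-8");
    const entries: WhitelistEntry[] = JSON.parse(raw);

    // One upsert per entry, keyed by (address, project), executed as a single bulk write.
    const ops = entries.map((entry) => ({
      updateOne: {
        filter: { address: entry.value[0].toLowerCase(), project },
        update: { $set: { proof: entry.proof } },
        upsert: true,
      },
    }));

    const result = await WhitelistModel.bulkWrite(ops);
    console.log(
      `Import from ${file} completed. Inserted: ${result.upsertedCount}, Modified: ${result.modifiedCount}`,
    );
  }
}
```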
## Further improvements
- Split up the data-aggregator & api into separate services if API request volume grows
- Share common contract calls between the data-aggregator & api
- `/contract-connectors` & `/api/lp/lp-data-provider` have a lot of overlap and could be consolidated