This PR implements the first version of the API design, along with some small ETL data processing scripts:
Add summary response with mocked data: Initial setup to return mocked data in the summary response.
Add data ingestion: Implemented the data ingestion logic that the subsequent API changes build on.
Adapt API to read data: Modified the API to use the new data ingestion logic, improving data retrieval performance.
Fix and clean up H3 generation functionality: Improved the H3 index generation logic for better accuracy and performance.
Add tests on API: Introduced initial tests for API endpoints to ensure reliability and correctness.
Adapt API for storing data within PostgreSQL: Transitioned to PostgreSQL for data storage, added functionality to load sample data for NYC, and created a visualization notebook with Lonboard for quick quality assurance.
Add configuration variable for table name: Made the table name configurable through environment variables for flexibility.
Add unit tests for db_utils and update existing API tests: Refactored API logic to use utility functions from db_utils.py, updated existing tests, and added new unit tests for database utility functions using pytest and unittest.mock.
Fix environment variable definition error: Resolved an issue with the order in which environment variables are defined, ensuring they are loaded correctly. Also added a notebook visualization that covers the available-fields endpoint.
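The initial mocked summary response can be pictured roughly as follows. This is a sketch only; the field names (`cells`, `fields`, `status`) and the function name `get_summary` are placeholders, not the actual schema from the PR:

```python
# Placeholder payload served before real data ingestion was wired in.
MOCKED_SUMMARY = {
    "cells": 0,
    "fields": [],
    "status": "mocked",
}

def get_summary() -> dict:
    """Return a static summary payload (mocked data)."""
    # Copy so callers cannot mutate the shared mock.
    return dict(MOCKED_SUMMARY)
```

Returning a copy keeps the mock safe to hand out to multiple callers while the real data path is still under construction.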
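The configurable table name from the environment-variable commit might look like the sketch below. The variable name `SUMMARY_TABLE_NAME` and the default `"summaries"` are assumptions for illustration, not taken from the PR:

```python
import os

# Hypothetical default; the actual default used in the PR may differ.
DEFAULT_TABLE_NAME = "summaries"

def get_table_name() -> str:
    """Read the target PostgreSQL table name from the environment,
    falling back to a default when the variable is unset."""
    return os.environ.get("SUMMARY_TABLE_NAME", DEFAULT_TABLE_NAME)
```

Reading the variable through a helper (rather than at import time) also sidesteps ordering issues like the one fixed in the last commit, since the environment is consulted on each call.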
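The db_utils unit tests with `unittest.mock` follow a pattern like this sketch. The helper names (`fetch_summary`, `summary_endpoint`) are hypothetical stand-ins for the real functions in `db_utils.py` and the API module:

```python
from unittest.mock import patch

def fetch_summary(table_name: str) -> dict:
    """Hypothetical db_utils helper; the real one queries PostgreSQL."""
    raise RuntimeError("would hit PostgreSQL in production")

def summary_endpoint() -> dict:
    """Hypothetical API handler that delegates to the db_utils helper."""
    return fetch_summary("summaries")

def test_summary_endpoint_uses_db_utils():
    # Patch the database helper so the test never touches PostgreSQL.
    with patch(f"{__name__}.fetch_summary", return_value={"count": 3}) as mock_fetch:
        assert summary_endpoint() == {"count": 3}
        mock_fetch.assert_called_once_with("summaries")
```

Patching at the point of use keeps the endpoint tests fast and deterministic, while the database helpers themselves get their own unit tests against a real or containerized database.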