**Closed** — mmcdermott closed this pull request 1 month ago.
> [!CAUTION]
> Review failed: the pull request is closed.
The changes introduce a new benchmarking workflow in the GitHub Actions configuration, enhancing the project's performance testing capabilities. This includes the implementation of a memory tracking framework and various dataset generation tools. Additionally, several files related to performance tests have been removed or modified to streamline the testing process, while new scripts and configurations have been added to facilitate dataset creation.
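The memory-tracking framework mentioned above could be approximated with Python's standard `tracemalloc` module; the `track_peak_memory` helper below is a hypothetical sketch, not the PR's actual implementation:

```python
import tracemalloc
from contextlib import contextmanager


@contextmanager
def track_peak_memory(results: dict):
    """Record peak Python heap allocation (in bytes) for the enclosed block."""
    tracemalloc.start()
    try:
        yield
    finally:
        _, peak = tracemalloc.get_traced_memory()
        results["peak_bytes"] = peak
        tracemalloc.stop()


# Hypothetical usage: measure memory while materializing some data.
stats: dict = {}
with track_peak_memory(stats):
    data = [list(range(1000)) for _ in range(100)]
```

A context manager like this can wrap each benchmarked dataset operation so that timing and memory figures are collected uniformly.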
| Files | Change Summary |
|---|---|
| `.github/workflows/benchmark.yaml` | Added a new workflow for performance benchmarking, including job configurations for running benchmarks and storing results. |
| `.github/workflows/tests.yaml` | Updated pytest command to ignore additional directories during test execution. |
| `.gitignore` | Updated to ignore logs, profiling statistics, and output directories related to performance tests and benchmarks. |
| `.pre-commit-config.yaml` | Modified hook configurations to exclude specific patterns and enforce removal of unused imports. |
| `README.md` | Updated performance testing instructions, emphasizing external documentation for performance metrics. |
| `benchmark/README.md` | Added a new README for performance benchmarking, detailing the benchmarking process and dataset generation. |
| `benchmark/benchmarkable_dataset.py` | Introduced a class for benchmarking datasets, including methods for memory tracking and performance evaluation. |
| `benchmark/nrt_dataset.py` | Added a new dataset class for handling nested ragged tensors, with methods for data processing and retrieval. |
| `benchmark/run.py` | Implemented a benchmarking framework for evaluating dataset processing performance, including functions for summarizing output metrics. |
| `performance_tests/configs/config.yaml` | Removed obsolete configuration for performance testing. |
| `performance_tests/configs/dataset_spec/default.yaml` | Removed `max_events_per_item` configuration, allowing for more flexible dataset structures. |
| `sample_dataset_builder/LICENSE` | Added an MIT License file to specify usage terms. |
| `sample_dataset_builder/README.md` | Introduced documentation for the Sample Dataset Builder, detailing installation and usage instructions. |
| `sample_dataset_builder/pyproject.toml` | Defined project metadata and dependencies for the Sample Dataset Builder package. |
| `sample_dataset_builder/src/sample_dataset_builder/__main__.py` | Created a main script for generating datasets based on configurations. |
| `sample_dataset_builder/src/sample_dataset_builder/dataset_generator.py` | Introduced classes for generating synthetic datasets with validation and error handling. |
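The metric summarization described for `benchmark/run.py` could be sketched as below; the function name and the sample values are hypothetical, illustrating only the general shape of reducing repeated measurements to summary statistics:

```python
import statistics


def summarize_metrics(samples: list[float]) -> dict[str, float]:
    """Reduce per-run measurements (e.g. seconds or bytes) to summary statistics."""
    return {
        "mean": statistics.mean(samples),
        "min": min(samples),
        "max": max(samples),
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
    }


# Hypothetical timings (seconds) from four repeated benchmark runs.
summary = summarize_metrics([0.12, 0.15, 0.11, 0.14])
```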
| Objective | Addressed | Explanation |
|---|---|---|
| Incorporating memory tracking into the benchmark code. | ✅ | |
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 92.77%. Comparing base (`d239733`) to head (`0f1dbb5`).
:white_check_mark: All tests successful. No failed tests found.
Sets up the benchmarking code to track memory usage, run only on NRT data, use a committed dataset, and run on every PR via GitHub Actions.
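A minimal GitHub Actions workflow along these lines might look like the following; the trigger, setup steps, and entry-point script are illustrative assumptions, not the PR's actual `benchmark.yaml`:

```yaml
name: benchmark
on:
  pull_request:  # run on every PR, as described above

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -e .
      - run: python benchmark/run.py  # hypothetical entry point
```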
Summary by CodeRabbit

- **New Features**
- **Documentation**
- **Chores**
  - Updated `.gitignore` to exclude additional temporary files.
- **Bug Fixes**