#### Reference Issues/PRs

From PR #1995 (all comments addressed)

#### What does this implement or fix?

Adds a sample benchmark test that uses an open-source BI CSV data source.

The logic of the test is as follows (a rough sketch is shown after the list):

- download the source in .bz2 format if the Parquet file does not already exist
- convert it to Parquet format
- prepare a library containing several symbols constructed from this DataFrame
- for each query we want to benchmark, pre-check that the query produces the same result on Pandas and on ArcticDB
- run the benchmark tests
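To make these steps concrete, here is a minimal sketch of how such a fixture could be wired together. The download URL, file names, library name, symbol names, and the `some_column` filter are all hypothetical placeholders, not the actual code in this PR:

```python
import os
import urllib.request

import pandas as pd
from pandas.testing import assert_frame_equal

from arcticdb import Arctic, QueryBuilder

# Placeholder locations -- the real benchmark uses its own dataset and paths.
SOURCE_URL = "https://example.com/bi_dataset.csv.bz2"
CSV_BZ2_PATH = "bi_dataset.csv.bz2"
PARQUET_PATH = "bi_dataset.parquet"


def ensure_parquet() -> pd.DataFrame:
    """Download the .bz2 CSV once and cache it as Parquet for later runs."""
    if not os.path.exists(PARQUET_PATH):
        if not os.path.exists(CSV_BZ2_PATH):
            urllib.request.urlretrieve(SOURCE_URL, CSV_BZ2_PATH)
        df = pd.read_csv(CSV_BZ2_PATH, compression="bz2")
        df.to_parquet(PARQUET_PATH)
    return pd.read_parquet(PARQUET_PATH)


def setup_library(df: pd.DataFrame):
    """Write several symbols derived from the source DataFrame into a test library."""
    ac = Arctic("lmdb://./benchmark_data")
    lib = ac.get_library("bi_benchmark", create_if_missing=True)
    lib.write("full", df)
    lib.write("first_half", df.iloc[: len(df) // 2])  # example of a derived symbol
    return lib


def precheck_query(lib, df: pd.DataFrame) -> None:
    """Verify the benchmarked query gives the same result on Pandas and ArcticDB."""
    q = QueryBuilder()
    q = q[q["some_column"] > 0]  # placeholder for a real benchmarked query
    arctic_result = lib.read("full", query_builder=q).data
    pandas_result = df[df["some_column"] > 0]
    assert_frame_equal(
        pandas_result.reset_index(drop=True),
        arctic_result.reset_index(drop=True),
    )
```

In an ASV- or pytest-style benchmark, `ensure_parquet` and `setup_library` would run once during setup, and `precheck_query` would guard each benchmarked query before it is timed.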
#### Any other comments?
#### Checklist

Checklist for code changes...
- [ ] Have you updated the relevant docstrings, documentation and copyright notice?
- [ ] Is this contribution tested against [all ArcticDB's features](../docs/mkdocs/docs/technical/contributing.md)?
- [ ] Do all exceptions introduced raise appropriate [error messages](https://docs.arcticdb.io/error_messages/)?
- [ ] Are API changes highlighted in the PR description?
- [ ] Is the PR labelled as enhancement or bug so it appears in autogenerated release notes?