
Benchmark suite for large-scale socio-technical datasets in open collaboration

OpenPerf

OpenPerf is a benchmarking suite tailored for the sustainable management of open-source projects. It assesses key metrics and standards vital for the successful development of open-source ecosystems.

Features

- Data science benchmarks, such as bot detection in project management
- Standard benchmarks for companies, developers, and projects
- Index benchmarks for activity and influence
- A CLI for running benchmarks, with an extensible module layout for adding new ones

Installation

To get started with OpenPerf, clone the repository to your local machine:

git clone https://github.com/X-lab2017/open-perf.git
cd open-perf

Install the required dependencies:

pip install -r requirements.txt

Usage

OpenPerf is equipped with a CLI for easy execution of benchmarks. Here’s how you can run different types of benchmarks:

Running Data Science Benchmarks

To run the bot detection benchmark, which helps understand automated interactions in project management:

openperf data-science bot-detection
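As a rough illustration of the kind of task this benchmark covers (not OpenPerf's actual method), a bot-detection heuristic over GitHub-style event logs might combine a name-based signal with an event-rate signal. The marker strings and rate threshold below are illustrative assumptions:

```python
from collections import Counter

def detect_bots(events, name_markers=("bot", "[bot]"), max_events_per_hour=30):
    """Flag likely bot accounts in a list of (actor, hour) events.

    Two simple signals: a bot-like account name, or an event rate far
    above what a human typically produces. Thresholds are illustrative.
    """
    per_hour = Counter(events)  # (actor, hour) -> event count
    flagged = set()
    for (actor, _hour), count in per_hour.items():
        if any(marker in actor.lower() for marker in name_markers):
            flagged.add(actor)
        elif count > max_events_per_hour:
            flagged.add(actor)
    return flagged
```

Real benchmarks typically score such a detector against labeled ground truth; this sketch only shows the shape of the classification step.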

Running Standard Benchmarks

Evaluate the impact of companies, developers, and projects on open-source sustainability:

openperf standard company
openperf standard developer
openperf standard project
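Purely as an illustration of the kind of measurement involved (not OpenPerf's actual metric), a company-level impact score might aggregate contribution counts across affiliated developers. Both input mappings here are hypothetical:

```python
def company_impact(contributions, affiliation):
    """Sum each company's developers' contribution counts.

    contributions: {developer: count}; affiliation: {developer: company}.
    Developers with no known affiliation are grouped as "independent".
    """
    impact = {}
    for dev, count in contributions.items():
        company = affiliation.get(dev, "independent")
        impact[company] = impact.get(company, 0) + count
    return impact
```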

Running Index Benchmarks

To assess activity and influence indices, crucial for understanding leadership and contributions in open-source projects:

openperf index activity
openperf index influence
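As an illustrative sketch (not the formula OpenPerf implements), activity indices are often computed as a weighted sum of collaboration events. Assuming hypothetical event types and weights:

```python
# Hypothetical weights for common collaboration event types; the real
# benchmark's definition may differ.
EVENT_WEIGHTS = {
    "issue_comment": 1.0,
    "issue_opened": 2.0,
    "pull_request_opened": 3.0,
    "pull_request_merged": 4.0,
}

def activity_index(event_counts, weights=EVENT_WEIGHTS):
    """Weighted sum of per-type event counts for one repository."""
    return sum(weights.get(event, 0.0) * count
               for event, count in event_counts.items())
```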

Extending OpenPerf

To add a new benchmark, create a new module under the appropriate benchmark directory and register it in main.py so it is exposed through the CLI.
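A minimal sketch of what such a module and its CLI registration could look like, assuming an argparse-based main.py (the project's actual CLI plumbing, module paths, and command names may differ):

```python
import argparse

# benchmarks/my_benchmark.py (hypothetical module)
def run_my_benchmark():
    """Entry point for the new benchmark; replace with real logic."""
    return {"score": 1.0}

# main.py: register the benchmark as a subcommand
def build_parser():
    parser = argparse.ArgumentParser(prog="openperf")
    sub = parser.add_subparsers(dest="command", required=True)
    sub.add_parser("my-benchmark", help="Run the new benchmark")
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    if args.command == "my-benchmark":
        return run_my_benchmark()

if __name__ == "__main__":
    print(main())
```

The subcommand pattern keeps each benchmark self-contained: adding one means writing a module and one `add_parser` call.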

License

This project is licensed under the MIT License - see the LICENSE.md file for details.

Acknowledgments

Thanks to all the contributors who have helped to expand and maintain OpenPerf. Special thanks to the community for the continuous feedback that enriches the project's scope and functionality.