This pull request includes several changes to improve the scheduling and tuning capabilities in the bitblas module, along with some code refactoring and cleanup. The most important changes include updating the ThreadPoolExecutor usage, adding hardware-aware configuration methods, introducing a new fine-grained matrix multiplication scheduler, and making various code style improvements.
Enhancements to Scheduling and Tuning:
bitblas/base/utils.py: Changed the ThreadPoolExecutor to use a dynamic number of workers based on the max_workers parameter.
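The dynamic worker sizing could be sketched like this (`parallel_compile` is a hypothetical helper for illustration, not the actual bitblas function):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def parallel_compile(configs, compile_fn, max_workers=None):
    # Hypothetical sketch: size the pool from max_workers, falling back
    # to the CPU count, and never spawn more workers than there are tasks.
    if max_workers is None:
        max_workers = os.cpu_count() or 1
    num_workers = max(1, min(max_workers, len(configs)))
    with ThreadPoolExecutor(max_workers=num_workers) as executor:
        return list(executor.map(compile_fn, configs))
```

Capping the pool at the number of tasks avoids spawning idle threads when only a few candidate configs need compiling.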
bitblas/ops/base_scheduler.py: Added a method to get hardware-aware configurations for matrix multiplication schedulers.
bitblas/ops/general_matmul/tilelang/dense/matmul_tensorcore.py: Added methods to get hardware-aware configurations for CUDA architectures.

New Scheduler Introduction:
bitblas/ops/general_matmul/tilelang/dense/matmul_simt.py: Introduced MatmulFineGrainSIMTScheduler, a new fine-grained matrix multiplication scheduler.

Code Refactoring and Cleanup:
bitblas/ops/operator.py: Refactored multiple methods for better readability and maintainability, including apply_fast_tuning, hardware_aware_finetune, and _build_default_module. [1] [2] [3]
bitblas/ops/general_matmul/tilelang/dense/matmul.py to matmul_tensorcore.py: Renamed the file for better clarity and organization.
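To illustrate how hardware-aware configuration selection and a SIMT fallback might fit together, here is a minimal sketch; apart from the MatmulFineGrainSIMTScheduler name itself, all classes, methods, fields, and the selection policy below are hypothetical and not the actual bitblas API:

```python
from dataclasses import dataclass

@dataclass
class MatmulConfig:
    # Illustrative tile sizes for one candidate matmul schedule.
    block_m: int
    block_n: int
    block_k: int

class MatmulFineGrainSIMTScheduler:
    """Fallback scheduler targeting plain CUDA cores (SIMT)."""

    def get_hardware_aware_configs(self, arch):
        # Conservative tiles that do not assume tensor cores.
        return [MatmulConfig(64, 64, 16), MatmulConfig(32, 32, 32)]

class MatmulFineGrainScheduler:
    """Tensor-core scheduler for recent CUDA architectures."""

    def get_hardware_aware_configs(self, arch):
        # Hypothetical policy: larger tiles on sm_80-and-newer GPUs.
        sm = int(arch.split("_")[-1])
        if sm >= 80:
            return [MatmulConfig(128, 128, 32), MatmulConfig(128, 64, 32)]
        return [MatmulConfig(64, 64, 32)]

def select_scheduler(arch, dtype):
    # Tensor cores need a recent compute capability and a supported dtype;
    # otherwise fall back to the SIMT scheduler.
    sm = int(arch.split("_")[-1])
    if sm >= 70 and dtype in ("float16", "bfloat16", "int8"):
        return MatmulFineGrainScheduler()
    return MatmulFineGrainSIMTScheduler()
```

Under this sketch, an older target such as "sm_60" with float32 inputs would get the SIMT scheduler, while "sm_80" with float16 would get tensor-core tile configurations.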