Closed: merrymercy closed this pull request 6 years ago.
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| mobula/glue/th.py | 0 | 4 | 0.0% |
| Total: | 0 | 4 | 0.0% |

| Totals | |
|---|---|
| Change from base Build 198: | 0.8% |
| Covered Lines: | 990 |
| Relevant Lines: | 1246 |
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| mobula/glue/th.py | 0 | 4 | 0.0% |
| Total: | 0 | 4 | 0.0% |

| Files with Coverage Reduction | New Missed Lines | % |
|---|---|---|
| mobula/test_utils.py | 1 | 65.85% |
| mobula/glue/common.py | 1 | 92.78% |
| mobula/op/load_module.py | 4 | 94.57% |
| Total: | 6 | |

| Totals | |
|---|---|
| Change from base Build 205: | -0.1% |
| Covered Lines: | 985 |
| Relevant Lines: | 1249 |
TVM is great :-) Thank you for your contribution!
I like the idea of a cross-platform operator. TVM also supports DLPack, so no memory copy happens during integration with MXNet and PyTorch (see our blog). In this project, TVM's role is similar to `mobula.func`; the part I can reuse here is mainly `mobula.backend`.
Interesting future work includes supporting automatic differentiation (related to the Halide autodiff paper) and auto-tuning (necessary to get the best performance) for these operators. A concrete sketch of the zero-copy DLPack path follows below.
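To make the zero-copy point concrete, here is a minimal sketch, not code from this PR: it assumes a TVM build that exposes the `te` schedule API and the `tvm.contrib.dlpack.to_pytorch_func` helper (older TVM versions spelled these differently). A TVM-compiled kernel is wrapped so it runs directly on the memory of PyTorch tensors via DLPack.

```python
# Minimal sketch (assumptions noted above): calling a TVM-compiled kernel
# on PyTorch tensors through DLPack, so no memory copy happens at the
# framework boundary.
import torch
import tvm
from tvm import te
from tvm.contrib.dlpack import to_pytorch_func

# Define and compile a trivial elementwise kernel with TVM.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
addone = tvm.build(s, [A, B], target="llvm")

# Wrap the TVM function so it accepts torch tensors directly
# (the wrapper converts arguments with DLPack, zero copy).
torch_addone = to_pytorch_func(addone)

x = torch.rand(8)
y = torch.empty(8)
torch_addone(x, y)  # writes x + 1 into y's existing buffer
```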
I know TVM and DLPack; they are great. It would be better to get higher performance with TVM. Thank you!
TVM is useful for generating high-performance kernels for CPUs, GPUs, and accelerators.
This PR adds a simple example of how to use TVM-generated kernels with MXNet and PyTorch on CPU and GPU.
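For the GPU side, here is another hedged sketch (again an illustration, not the PR's actual example, and it assumes a CUDA-enabled TVM build with the `te` API): the same kernel definition can be compiled for CUDA by binding its loop to GPU blocks and threads.

```python
# Sketch: scheduling and compiling the same elementwise kernel for GPU.
# Assumes a CUDA-enabled TVM build; API names follow the `te` interface.
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
# Split the loop and bind the pieces to CUDA blocks and threads.
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, te.thread_axis("blockIdx.x"))
s[B].bind(tx, te.thread_axis("threadIdx.x"))

addone_gpu = tvm.build(s, [A, B], target="cuda")
```

The compiled function can then be handed to PyTorch (or MXNet) tensors through the same DLPack wrappers as in the earlier sketch, keeping the data on the GPU the whole time.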