mshr-h closed this pull request 1 year ago
Merging #677 (f863b9c) into main (36ebab4) will decrease coverage by 0.12%. The diff coverage is 83.75%.
```diff
@@            Coverage Diff             @@
##             main     #677      +/-   ##
==========================================
- Coverage   90.25%   90.14%   -0.12%
==========================================
  Files          78       80       +2
  Lines        4545     4625      +80
  Branches      840      848       +8
==========================================
+ Hits         4102     4169      +67
- Misses        252      259       +7
- Partials      191      197       +6
```
| Flag | Coverage Δ | |
|---|---|---|
| unittests | 90.14% <83.75%> (-0.12%) | :arrow_down: |
Flags with carried forward coverage won't be shown.
| Impacted Files | Coverage Δ | |
|---|---|---|
| ...ml/operator_converters/_mixture_implementations.py | 79.68% <79.68%> (ø) | |
| hummingbird/ml/operator_converters/__init__.py | 100.00% <100.00%> (ø) | |
| ...mingbird/ml/operator_converters/sklearn/mixture.py | 100.00% <100.00%> (ø) | |
| hummingbird/ml/supported.py | 93.04% <100.00%> (+0.06%) | :arrow_up: |
@ksaur @interesaaat Can you review it? Thanks!
LGTM. @mshr-h thanks for adding this! I am curious, how fast is this on GPU compared to sklearn?
@interesaaat We haven't benchmarked on GPU, since we use HB to convert SKL models for microcontrollers through microTVM. The BGMM->microTVM conversion is not available yet because some operators are missing from TVM's PyTorch frontend.
But on the CPU (Core i9-10885H), HB is 3x faster than SKL.
Did you reach out to the TVM folks regarding the missing ops? I am surprised that on CPU we are already 3x faster than SKL!
> Did you reach out to the TVM folks regarding the missing ops?

Not yet.
Tagging @masahi, maybe he can help.