Closed: najeeb-kazmi closed this issue 5 years ago.
Edit: Adding prediction runtimes for the legacy `LearningPipeline` API.
In the ML.NET 0.6 release, we made two performance improvements to making single predictions from a trained model. The first comes from moving from the legacy `LearningPipeline` API to the new `Estimators` API. The second comes from optimizing the performance of `PredictionFunction` in the new API.
Here is a comparison of single-prediction runtimes between the old `LearningPipeline` API, the new `Estimators` API with the original `PredictionFunction`, and the new `Estimators` API with the improved `PredictionFunction`. Each benchmark makes a single prediction 10,000 times on one of three models and is run 20 times; the tables below report the average runtimes along with standard deviations. Comparing the `LearningPipeline` API with the improved `PredictionFunction` in the new `Estimators` API, we see the following speedups in average runtime:
- Iris: 3272x overall speedup (29.2x from moving to the `Estimators` API, with a further 112x speedup with improvements to `PredictionFunction`).
- Sentiment: 198x overall speedup (22.8x from moving to the `Estimators` API, with a further 8.68x speedup with improvements to `PredictionFunction`). This model contains a text featurizer, so it is not surprising that we see a smaller gain.
- Breast cancer: 6541x overall speedup (59.7x from moving to the `Estimators` API, with a further 109x speedup with improvements to `PredictionFunction`).

Runtimes with the legacy `LearningPipeline` API:

Method | Mean | Error | StdDev |
---|---|---|---|
MakeIrisPredictions | 9.879 s | 0.1461 s | 0.1295 s |
MakeSentimentPredictions | 10.225 s | 0.0915 s | 0.0856 s |
MakeBreastCancerPredictions | 8.850 s | 0.1622 s | 0.1518 s |
Runtimes with the new `Estimators` API and the old `PredictionFunction`:

Method | Mean | Error | StdDev |
---|---|---|---|
MakeIrisPredictions | 338.0 ms | 4.951 ms | 4.389 ms |
MakeSentimentPredictions | 447.7 ms | 7.453 ms | 6.607 ms |
MakeBreastCancerPredictions | 148.2 ms | 2.317 ms | 2.054 ms |
Runtimes with the new `Estimators` API and the improved `PredictionFunction`:

Method | Mean | Error | StdDev |
---|---|---|---|
MakeIrisPredictions | 3.019 ms | 0.0388 ms | 0.0363 ms |
MakeSentimentPredictions | 51.575 ms | 0.3298 ms | 0.3085 ms |
MakeBreastCancerPredictions | 1.353 ms | 0.0059 ms | 0.0055 ms |
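The speedup figures quoted above follow directly from the mean runtimes in these tables. A quick sketch of the arithmetic (Python; the values are the table means, in milliseconds):

```python
# Mean runtimes in ms for 10,000 predictions, copied from the tables above.
legacy = {"Iris": 9879.0, "Sentiment": 10225.0, "BreastCancer": 8850.0}
old_pf = {"Iris": 338.0,  "Sentiment": 447.7,   "BreastCancer": 148.2}
new_pf = {"Iris": 3.019,  "Sentiment": 51.575,  "BreastCancer": 1.353}

for model in legacy:
    to_estimators = legacy[model] / old_pf[model]  # LearningPipeline -> Estimators API
    to_new_pf     = old_pf[model] / new_pf[model]  # old -> improved PredictionFunction
    overall       = legacy[model] / new_pf[model]
    print(f"{model}: {to_estimators:.1f}x, then {to_new_pf:.2f}x, {overall:.0f}x overall")
```

The 112x, 8.68x, and 109x figures quoted earlier are the old-to-improved `PredictionFunction` ratios.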
cc: @GalOshri @TomFinley @Zruty0 @justinormont
@najeeb-kazmi, is this `LearningPipeline` vs. `PredictionFunction`, or is it old `PredictionFunction` vs. new?
This is old `PredictionFunction` vs. new.
@Zruty0 @GalOshri @shauheen @TomFinley @justinormont I've updated the numbers with prediction runtimes for the `LearningPipeline` API. The speedups are even more impressive when compared to how slow things were in 0.5.
Um wow.
Still, 1.353 ms per 10k on BC means 135 nanoseconds per prediction, which on a typical machine corresponds to around a few hundred CPU cycles. A few hundred CPU cycles for 9 multiplies, followed by 9 multiply-adds, followed by the application of a logistic function is at least somewhat understandable, but suggests to me that there may be some additional speedups here and there to be had.
But at least the situation is not ridiculous.
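The arithmetic behind that estimate, as a sketch (the 3 GHz clock is an assumed typical figure, not taken from the benchmark machine):

```python
total_ms    = 1.353  # mean runtime for 10,000 breast-cancer predictions (table above)
predictions = 10_000

per_prediction_ns = total_ms * 1e6 / predictions  # 1 ms = 1e6 ns
print(per_prediction_ns)                          # ~135 ns per prediction

clock_ghz = 3.0                                   # assumed clock: ~3 cycles per ns
print(round(per_prediction_ns * clock_ghz))       # ~400 CPU cycles per prediction
```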
@adamsitnik curious whether the traces you shared earlier are of the fast case above.
@TomFinley do we have profile evidence that the time spent is dominated by "9 multiplies, followed by 9 multiply-adds, followed by the application of a logistic function"? If so, that seems like something that may be amenable to, e.g., @tannergooding looking at such factors as code gen and whether there is possibly another intrinsic we could use.
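For reference, the per-prediction work being described, 9 multiplies (plausibly feature scaling), 9 multiply-adds (the dot product), and a logistic, can be sketched as follows. The scaling/weights split and all values here are assumptions for illustration, not the actual model:

```python
import math

def score(scales, weights, bias, features):
    """Linear score over the features, then logistic calibration."""
    s = bias
    for c, w, x in zip(scales, weights, features):
        s += w * (c * x)  # one multiply (scaling) + one multiply-add per feature
    return 1.0 / (1.0 + math.exp(-s))  # logistic function

# Placeholder 9-feature model.
p = score([1.0] * 9, [0.1] * 9, 0.0, [1.0] * 9)
```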
Add a benchmark test to measure the performance of single predictions made by `PredictionEngine`.