dotnet / machinelearning

ML.NET is an open source and cross-platform machine learning framework for .NET.
https://dot.net/ml
MIT License

PredictionEngine benchmark test #1013

Closed najeeb-kazmi closed 5 years ago

najeeb-kazmi commented 5 years ago

Add benchmark test to measure performance of single predictions made by PredictionEngine.

najeeb-kazmi commented 5 years ago

Edit: added prediction runtimes for the legacy LearningPipeline API.

Benchmarks

In the ML.NET 0.6 release, we made a couple of performance improvements in making single predictions from a trained model. The first improvement comes from moving from the legacy LearningPipeline API to the new Estimators API. The second improvement comes from optimizing the performance of PredictionFunction in the new API.

Here is a comparison of runtimes for single predictions across the old LearningPipeline API, the new Estimators API with the old PredictionFunction, and the new Estimators API with the improved PredictionFunction. Each benchmark makes a single prediction 10,000 times on one of three models, and each benchmark is run 20 times; the tables below report the average runtimes along with their standard deviations. Comparing the LearningPipeline with the improved PredictionFunction in the new Estimators API, the average runtimes improve by roughly 3,270x for Iris, 200x for sentiment, and 6,540x for breast cancer.
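As a rough sketch of the measurement scheme just described (this is not the actual BenchmarkDotNet harness, and `predict` is a stand-in for a `PredictionEngine.Predict` call): each benchmark times 10,000 single predictions, repeats that 20 times, and reports the mean and standard deviation of the per-iteration runtimes.

```python
import statistics
import time

def predict(features):
    # Stand-in for a single model prediction (e.g. a small linear model).
    return sum(f * 0.1 for f in features)

def run_benchmark(iterations=20, predictions_per_iteration=10_000):
    """Time `predictions_per_iteration` single predictions, `iterations`
    times, and return the mean and stdev of the per-iteration runtimes."""
    features = [1.0] * 9  # e.g. the 9 features of the breast cancer model
    runtimes = []
    for _ in range(iterations):
        start = time.perf_counter()
        for _ in range(predictions_per_iteration):
            predict(features)
        runtimes.append(time.perf_counter() - start)
    return statistics.mean(runtimes), statistics.stdev(runtimes)

mean, stdev = run_benchmark()
```

BenchmarkDotNet additionally reports an Error column (half of a confidence interval around the mean), which is why the tables below have three statistics per method.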

Predictions with LearningPipeline API:

| Method | Mean | Error | StdDev |
| --- | --- | --- | --- |
| MakeIrisPredictions | 9.879 s | 0.1461 s | 0.1295 s |
| MakeSentimentPredictions | 10.225 s | 0.0915 s | 0.0856 s |
| MakeBreastCancerPredictions | 8.850 s | 0.1622 s | 0.1518 s |

Predictions with Estimators API, old PredictionFunction:

| Method | Mean | Error | StdDev |
| --- | --- | --- | --- |
| MakeIrisPredictions | 338.0 ms | 4.951 ms | 4.389 ms |
| MakeSentimentPredictions | 447.7 ms | 7.453 ms | 6.607 ms |
| MakeBreastCancerPredictions | 148.2 ms | 2.317 ms | 2.054 ms |

Predictions with Estimators API, new improved PredictionFunction:

| Method | Mean | Error | StdDev |
| --- | --- | --- | --- |
| MakeIrisPredictions | 3.019 ms | 0.0388 ms | 0.0363 ms |
| MakeSentimentPredictions | 51.575 ms | 0.3298 ms | 0.3085 ms |
| MakeBreastCancerPredictions | 1.353 ms | 0.0059 ms | 0.0055 ms |
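The speedup factors from LearningPipeline to the improved PredictionFunction follow directly from the mean runtimes in the tables above:

```python
# Mean runtimes taken from the tables above.
legacy_s = {"Iris": 9.879, "Sentiment": 10.225, "BreastCancer": 8.850}
improved_ms = {"Iris": 3.019, "Sentiment": 51.575, "BreastCancer": 1.353}

for model, seconds in legacy_s.items():
    speedup = seconds / (improved_ms[model] / 1000.0)
    print(f"{model}: {speedup:,.0f}x faster")
# Iris comes out around 3,270x, Sentiment around 200x,
# and BreastCancer around 6,540x.
```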

cc: @GalOshri @TomFinley @Zruty0 @justinormont

Zruty0 commented 5 years ago

@najeeb-kazmi , is this LearningPipeline vs. PredictionFunction, or is it old PredictionFunction vs. new?

najeeb-kazmi commented 5 years ago

This is old PredictionFunction vs new.

najeeb-kazmi commented 5 years ago

@Zruty0 @GalOshri @shauheen @TomFinley @justinormont I've updated the numbers with prediction runtimes with the LearningPipeline API. The speedups are even more impressive when compared to how slow things were in 0.5.

TomFinley commented 5 years ago

Um wow.

Still, 1.353 ms per 10k predictions on breast cancer means 135 nanoseconds per prediction, which on a typical machine corresponds to a few hundred CPU cycles. A few hundred CPU cycles for 9 multiplies, followed by 9 multiply-adds, followed by the application of a logistic function, is at least somewhat understandable, but it suggests to me that there may be some additional speedups to be had here and there.

But at least the situation is not ridiculous.

danmoseley commented 5 years ago

@adamsitnik curious whether the traces you shared earlier are of the fast case above.

@TomFinley do we have profiling evidence that the time spent is dominated by the "9 multiplies, followed by 9 multiply-adds, followed by the application of a logistic function"? If so, that seems like something that may be amenable to, e.g., @tannergooding looking at factors such as code gen and whether there is possibly another intrinsic we could use.