
Code repository for the O'Reilly publication "Building Machine Learning Pipelines" by Hannes Hapke & Catherine Nelson

Beam Pipeline Evaluator Does Not Generate Overall Metrics - Different Results from Interactive Run #27

Closed mshearer0 closed 2 years ago

mshearer0 commented 4 years ago

The Beam pipeline Evaluator produces sliced metrics but no overall model scores, including 'auc'. As a result, the model is not blessed.

For the same training and evaluation steps, the sliced metrics from the Beam pipeline and the interactive run also differ.
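
For anyone debugging the same symptom, here is a minimal sketch of inspecting what the Beam run actually wrote; the output path is hypothetical and depends on your pipeline root and run:

```python
import tensorflow_model_analysis as tfma

# Hypothetical path to the Evaluator component's output artifact;
# adjust it to your pipeline root and run.
EVAL_OUTPUT_DIR = "pipelines/consumer_complaints/Evaluator/evaluation/1"

# Load the evaluation result written by the Beam pipeline run.
eval_result = tfma.load_eval_result(EVAL_OUTPUT_DIR)

# Each entry is a (slice_key, metrics) pair. Overall metrics show up under
# the empty slice key () only if an empty tfma.SlicingSpec() was configured.
for slice_key, metrics in eval_result.slicing_metrics:
    print(slice_key, metrics)

# The blessing decision is stored as a validation result next to the metrics.
validation_result = tfma.load_validation_result(EVAL_OUTPUT_DIR)
print("Model blessed:", validation_result.validation_ok)
```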

mshearer0 commented 4 years ago

Changing the slicing specs in base_pipeline.py to only

slicing_specs=[tfma.SlicingSpec()],

(i.e. removing the product slice) produces overall model scores and allows the model to be blessed.

The metric threshold also has to be keyed on 'auc', as described in issue #22.
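
For reference, a sketch of the adjusted evaluation config; the label key, metric list, and threshold value follow the book's consumer-complaints example and may differ in your setup:

```python
import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key="consumer_disputed")],
    # Only the empty SlicingSpec(): overall metrics are computed, which the
    # Evaluator needs in order to validate and bless the model.
    slicing_specs=[tfma.SlicingSpec()],
    metrics_specs=[
        tfma.MetricsSpec(
            metrics=[
                tfma.MetricConfig(class_name="ExampleCount"),
                tfma.MetricConfig(class_name="BinaryAccuracy"),
                tfma.MetricConfig(class_name="AUC"),
            ],
            # The threshold key must match the metric name 'auc'
            # (see issue #22), otherwise the threshold is never applied.
            thresholds={
                "auc": tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={"value": 0.65}
                    )
                )
            },
        )
    ],
)
```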

hanneshapke commented 2 years ago

Hi @mshearer0,

Thank you for reporting this issue. Check out the latest updates to the example code: https://github.com/Building-ML-Pipelines/building-machine-learning-pipelines/releases/tag/examples_based_on_tfx_1.4

The example still contains the product-specific slice. Please reopen if you run into trouble. Thank you again for reporting the issue.
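
For readers following along: keeping the product slice does not conflict with the fix above, as long as an empty tfma.SlicingSpec() is listed alongside it so that overall metrics (and the blessing decision) are still produced. A sketch of such a slicing configuration, with the feature key 'product' taken from the original report:

```python
slicing_specs=[
    tfma.SlicingSpec(),                          # overall metrics
    tfma.SlicingSpec(feature_keys=["product"]),  # per-product slices
],
```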