Closed simonndiritu477 closed 2 years ago
👋 @simonndiritu477 Good afternoon and thank you for submitting your topic suggestion. Your topic form has been entered into our queue and should be reviewed (for approval) as soon as a content moderator is finished reviewing the ones in the queue before it.
Sounds like a helpful topic - let's please be sure it adds value beyond what is in the official docs and/or what is covered on other blog sites. (Articles should go beyond a basic explanation - and it is always best to reference any related EngEd article and build upon it.) @simonndiritu477
Please be attentive to grammar/readability and make sure that you put your article through a thorough editing review prior to submitting it for final approval. (There are some great free tools that we reference in EngEd resources.) ANY ARTICLE SUBMITTED WITH GLARING ERRORS WILL BE IMMEDIATELY CLOSED.
Please be sure to double-check that it does not overlap with any existing EngEd articles, articles on other blog sites, or any incoming EngEd topic suggestions (if you haven't already) to avoid any potential article closure. Please reference any relevant EngEd articles in yours. - Approved
Proposal Submission
Proposed title of article
[Machine Learning] Model monitoring and drift detection in machine learning models using Deepchecks
Proposed article introduction
Model monitoring is an operational stage in the machine learning lifecycle that comes after model deployment. It entails watching your ML models for changes such as model degradation, data drift, and concept drift, and ensuring that the model maintains an acceptable level of performance. In other words, it is the close tracking of the performance of ML models in production, so that production and AI teams can identify potential issues before they impact the business.
Validation results during development will seldom fully justify your model's performance in production. This is a key reason why you have to monitor your models after deployment to make sure they keep performing as well as they're supposed to. A robust MLOps infrastructure should be able to proactively monitor for service health, data relevance, model performance, and business impact.
In predictive analytics and machine learning, we distinguish between model/concept drift and data drift. Model/concept drift means that the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because predictions become less accurate as time passes. Model drift results from the degradation of model performance due to changes in the data and in the relationships between input and output variables. It is relatively common for model drift to impact an organization negatively, either gradually over time or sometimes suddenly.
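As a brief illustrative sketch (not part of the proposal itself), model drift is often surfaced in practice by tracking a performance metric over successive time windows and flagging windows that fall below a baseline; the baseline, tolerance, and weekly figures below are arbitrary example values:

```python
import numpy as np

def flag_drift(accuracies, baseline=0.90, tolerance=0.05):
    """Return the indices of evaluation windows whose accuracy has
    dropped more than `tolerance` below the expected baseline."""
    accuracies = np.asarray(accuracies)
    return np.where(accuracies < baseline - tolerance)[0]

# Simulated per-week accuracy of a deployed model: stable, then degrading.
weekly_accuracy = [0.91, 0.92, 0.90, 0.89, 0.83, 0.79]
drifted_weeks = flag_drift(weekly_accuracy)
print(drifted_weeks)  # indices of the weeks where performance suggests drift
```

In a real monitoring pipeline the same idea applies, with the metric computed from labeled production data as it arrives.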
Data drift is defined as a variation in the production data from the data that was used to test and validate the model before deploying it to production.
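To make the definition concrete, one common way to detect data drift (an illustrative sketch, not a method prescribed by this proposal) is a two-sample statistical test comparing a feature's production distribution against its training distribution, for example the Kolmogorov-Smirnov test from SciPy; the synthetic data below deliberately shifts the mean:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Reference data the model was trained on, and drifted production data.
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
prod_feature = rng.normal(loc=0.5, scale=1.0, size=1000)  # mean has shifted

statistic, p_value = ks_2samp(train_feature, prod_feature)

# A small p-value means the two distributions differ: likely data drift.
if p_value < 0.05:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.3g})")
```

Deepchecks bundles similar distribution-comparison checks, so you rarely need to hand-roll them per feature.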
Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. This includes checks related to various types of issues, such as model performance, data integrity, and distribution mismatches. It checks ML models for issues like errors, crashes, and latency, but most importantly, it ensures that your model maintains a predetermined desired level of performance.
Key takeaways
Article quality
In this tutorial, we will cover the concept of model monitoring and why it is important to monitor a model in production. We will discuss what you should track in production to detect changes that may affect model performance. We will then implement a machine learning model in Python and monitor it using Deepchecks, detecting concept and data drift by running a Deepchecks full suite. We will explain all the Deepchecks concepts and implement them practically in Google Colab.
The tutorial will be detailed, giving the reader everything needed to get started with model monitoring and handling drift in their models.
References
Please list links to any published content/research that you intend to use to support/guide this article.
Conclusion
Finally, remove the Pre-Submission advice section and all our blockquoted notes as you fill in the form before you submit. We look forward to reviewing your topic suggestion.
Templates to use as guides