There is nothing magic about magic. The magician merely understands something simple which doesn’t appear to be simple or natural to the untrained audience. Once you learn how to hold a card while making your hand look empty, you only need practice before you, too, can “do magic.” – Jeffrey Friedl in the book Mastering Regular Expressions
Note: Please raise an issue for any suggestions, corrections, and feedback.
The goal of the series is to understand the basics of MLOps: model building, monitoring, configurations, testing, packaging, deployment, CI/CD, etc.
Refer to the Blog Post here
The project I have implemented is a simple classification problem. The scope of this week is to understand the following topics (a minimal end-to-end sketch follows the list):
How to get the data?
How to process the data?
How to define dataloaders?
How to declare the model?
How to train the model?
How to do the inference?
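To make these steps concrete, here is a minimal, self-contained sketch of the build-train-infer flow with PyTorch Lightning. The model, data, and hyper-parameters below are illustrative placeholders, not the ones the project actually uses:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

# A stand-in LightningModule; the real project defines its own model.
class Classifier(pl.LightningModule):
    def __init__(self, in_dim=16, n_classes=2, lr=1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.net = torch.nn.Linear(in_dim, n_classes)

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self(x), y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)

# Dummy tensors standing in for the processed dataset and dataloaders.
x, y = torch.randn(128, 16), torch.randint(0, 2, (128,))
train_loader = DataLoader(TensorDataset(x, y), batch_size=32)

model = Classifier()
pl.Trainer(max_epochs=1).fit(model, train_loader)

# Inference: switch to eval mode and run a forward pass without gradients.
model.eval()
with torch.no_grad():
    preds = model(torch.randn(4, 16)).argmax(dim=-1)
```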
The following tech stack is used:
Refer to the Blog Post here
Tracking all the experiments, such as tweaking hyper-parameters or trying different models to compare their performance, and seeing the connection between the model and the input data will help in developing a better model.
The scope of this week is to understand the following topics (a short logging example follows the list):
How to configure basic logging with W&B?
How to compute metrics and log them in W&B?
How to add plots in W&B?
How to add data samples to W&B?
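Here is a minimal sketch of how these pieces fit together with PyTorch Lightning's WandbLogger; the project name and the sample data are illustrative:

```python
import numpy as np
import wandb
import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# Basic logging: attach a W&B logger to the trainer (project name is illustrative).
wandb_logger = WandbLogger(project="mlops-basics", offline=True)
trainer = pl.Trainer(logger=wandb_logger, max_epochs=1)

# Inside a LightningModule, metrics passed to self.log() are sent to W&B:
#   self.log("train/loss", loss, on_epoch=True)
#   self.log("valid/acc", acc, prog_bar=True)

# Plots and data samples go through the underlying wandb run object.
run = wandb_logger.experiment
run.log({"confusion_matrix": wandb.plot.confusion_matrix(
    y_true=[0, 1, 1, 0], preds=[0, 1, 0, 0], class_names=["neg", "pos"])})
run.log({"samples": [wandb.Image(np.random.rand(28, 28), caption="a sample")]})
```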
The following tech stack is used:
References
Refer to the Blog Post here
Configuration management is necessary for managing complex software systems. Lack of configuration management can cause serious problems with reliability, uptime, and the ability to scale a system.
The scope of this week is to understand the following topics:
Basics of Hydra
Overriding configurations
Splitting configuration across multiple files
Variable Interpolation
How to run the model with different parameter combinations? (see the sketch below)
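A minimal Hydra sketch covering these topics; the config layout and file names are illustrative, and the version_base argument applies to newer Hydra releases:

```python
# config.yaml (illustrative):
#   model:
#     name: resnet
#     lr: 0.001
#   data:
#     dir: ./data
#     cache: ${data.dir}/cache   # variable interpolation

import hydra
from omegaconf import DictConfig, OmegaConf

@hydra.main(config_path=".", config_name="config", version_base=None)
def main(cfg: DictConfig):
    print(OmegaConf.to_yaml(cfg))   # the fully resolved configuration
    print(cfg.model.lr)

if __name__ == "__main__":
    main()

# Override configurations from the command line:
#   python train.py model.lr=0.01
# Run the model with different parameter combinations via multirun:
#   python train.py -m model.lr=0.01,0.001 model.name=resnet,vgg
```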
The following tech stack is used:
References
Refer to the Blog Post here
Classical code version control systems are not designed to handle large files, which makes cloning and storing the history impractical. Large files, however, are very common in machine learning.
The scope of this week is to understand the following topics (a brief workflow sketch follows the list):
Basics of DVC
Initialising DVC
Configuring Remote Storage
Saving Model to the Remote Storage
Versioning the models
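Here is a sketch of that workflow, assuming a Git repo with a registered DVC remote and a tagged model version; the paths, remote URL, and tag are illustrative:

```python
# Typical DVC workflow on the command line:
#   dvc init
#   dvc remote add -d storage gdrive://<folder-id>
#   dvc add models/model.onnx
#   git add models/model.onnx.dvc .dvc/config
#   git commit -m "track model with DVC"
#   dvc push
#   git tag v1.0        # version the model via a git tag

# Fetching a specific model version back with DVC's Python API:
import dvc.api

with dvc.api.open(
    "models/model.onnx",   # DVC-tracked path (illustrative)
    repo=".",              # local repo or a git URL
    rev="v1.0",            # tag/commit identifying the model version
    mode="rb",
) as f:
    model_bytes = f.read()
```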
The following tech stack is used:
References
Refer to the Blog Post here
Why do we need model packaging? Models can be built using any machine learning framework available out there (sklearn, tensorflow, pytorch, etc.). We might want to deploy the models in different environments (mobile, web, Raspberry Pi) or run them in a different framework (trained in pytorch, inference in tensorflow). A common file format that enables AI developers to use models with a variety of frameworks, tools, runtimes, and compilers will help a lot.
This is achieved by the community project ONNX.
The scope of this week is to understand the following topics (an export-and-run example follows the list):
What is ONNX?
How to convert a trained model to ONNX format?
What is ONNX Runtime?
How to run ONNX converted model in ONNX Runtime?
Comparisons
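A minimal export-and-run sketch; the tiny linear model and tensor shapes are stand-ins for the real trained model:

```python
import numpy as np
import torch
import onnxruntime as ort

# A trivial stand-in for the trained PyTorch model.
model = torch.nn.Linear(16, 2)
model.eval()

# Convert the trained model to ONNX format.
dummy_input = torch.randn(1, 16)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)

# Run the converted model in ONNX Runtime.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(4, 16).astype(np.float32)})
print(outputs[0].shape)  # (4, 2)
```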
The following tech stack is used:
References
Refer to the Blog Post here
Why do we need packaging? We might have to share our application with others, and when they try to run it, most of the time it doesn’t run due to dependency issues or OS-related issues. For that, we have the famous quote among engineers: It works on my laptop/system.
So for others to run the application, they have to set up the same environment it was run in on the host side, which means a lot of manual configuration and installation of components.
The solution to these limitations is a technology called Containers.
By containerizing/packaging the application, we can run it on any cloud platform and take advantage of managed services, autoscaling, reliability, and much more.
The most prominent tool for packaging applications is Docker 🛳
The scope of this week is to understand the following topics (a minimal FastAPI sketch follows the list):
FastAPI wrapper
Basics of Docker
Building Docker Container
Docker Compose
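A minimal FastAPI wrapper sketch; the endpoint, file names, and the Dockerfile in the comments are illustrative, and the placeholder response stands in for a real model call:

```python
# app.py
from fastapi import FastAPI

app = FastAPI(title="MLOps Basics App")

@app.get("/")
def home():
    return {"message": "Model inference service is up"}

@app.get("/predict")
def predict(text: str):
    # The real service would run the (ONNX) model here;
    # a placeholder prediction is returned instead.
    return {"input": text, "prediction": "positive"}

# Run locally:
#   uvicorn app:app --host 0.0.0.0 --port 8000
# A typical Dockerfile to containerize the app (illustrative):
#   FROM python:3.9-slim
#   WORKDIR /app
#   COPY . .
#   RUN pip install -r requirements.txt
#   CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```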
References
Refer to the Blog Post here
CI/CD is a coding philosophy and set of practices with which you can continuously build, test, and deploy iterative code changes.
This iterative process helps reduce the chance that you develop new code based on buggy or failed previous versions. With this method, you strive for less human intervention, or even no intervention at all, from the development of new code until its deployment.
In this post, I will be going through the following topics:
References
Refer to the Blog Post here
A container registry is a place to store container images. A container image is a file composed of multiple layers which can execute applications in a single instance. Hosting all the images in one stored location allows users to commit, identify, and pull images when needed.
Amazon Simple Storage Service (S3) is storage for the internet. It is designed for large-capacity, low-cost storage provision across multiple geographical regions.
This week, I will be going through the following topics (a short boto3 example follows the list):
Basics of S3
Programmatic access to S3
Configuring AWS S3 as remote storage in DVC
Basics of ECR
Configuring GitHub Actions to use S3, ECR
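A sketch of programmatic S3 access with boto3, assuming AWS credentials are already configured; the bucket and key names are illustrative:

```python
import boto3

s3 = boto3.client("s3")

# Upload a trained model artifact (bucket/key names are illustrative).
s3.upload_file("models/model.onnx", "my-model-bucket", "models/model.onnx")

# Download it back.
s3.download_file("my-model-bucket", "models/model.onnx", "/tmp/model.onnx")

# List objects under a prefix.
resp = s3.list_objects_v2(Bucket="my-model-bucket", Prefix="models/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# The same bucket can back DVC as remote storage:
#   dvc remote add -d storage s3://my-model-bucket/dvc
```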
Refer to the Blog Post here
A serverless architecture is a way to build and run applications and services without having to manage infrastructure. The application still runs on servers, but all the server management is done by third party service (AWS). We no longer have to provision, scale, and maintain servers to run the applications. By using a serverless architecture, developers can focus on their core product instead of worrying about managing and operating servers or runtimes, either in the cloud or on-premises.
This week, I will be going through the following topics (a minimal Lambda handler sketch follows the list):
Basics of Serverless
Basics of AWS Lambda
Triggering Lambda with API Gateway
Deploying Container using Lambda
Automating deployment to Lambda using Github Actions
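A minimal Lambda handler sketch for an API Gateway (proxy integration) trigger; the request/response field names are illustrative and the prediction is a placeholder:

```python
import json

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the request body as a JSON string.
    body = json.loads(event.get("body") or "{}")
    text = body.get("sentence", "")

    # The real container would run the model here.
    prediction = {"label": "positive", "score": 0.99}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"input": text, "prediction": prediction}),
    }
```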
Refer to the Blog Post here
Monitoring systems can help give us confidence that our systems are running smoothly and, in the event of a system failure, can quickly provide appropriate context when diagnosing the root cause.
The things we want to monitor during training and during inference are different. During training, we are concerned with whether the loss is decreasing, whether the model is overfitting, and so on. During inference, we want to have confidence that our model is making correct predictions.
There are many reasons why a model can fail to make useful predictions:
The underlying data distribution has shifted over time and the model has gone stale, i.e., the characteristics of the inference data differ from those of the data used to train the model.
The inference data stream contains edge cases not seen during model training. In these scenarios, the model might perform poorly or even produce errors.
The model was misconfigured in its production deployment. (Configuration issues are common)
In all of these scenarios, the model could still make a successful prediction from a service perspective, but the predictions will likely not be useful. Monitoring machine learning models can help us detect such scenarios and intervene (e.g. trigger a model retraining/deployment pipeline).
This week, I will be going through the following topics (a small logging snippet follows the list):
Basics of Cloudwatch Logs
Creating Elastic Search Cluster
Configuring Cloudwatch Logs with Elastic Search
Creating Index Patterns in Kibana
Creating Kibana Visualisations
Creating Kibana Dashboard
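As a small illustration of the logging side: inside Lambda, anything emitted through Python's logging module (or stdout) is shipped to CloudWatch Logs automatically, and logging predictions as structured JSON makes them easy to index in Elasticsearch and visualize in Kibana. The field names below are illustrative:

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def log_prediction(text: str, label: str, score: float) -> None:
    # One JSON document per prediction; CloudWatch forwards these lines,
    # and Elasticsearch can index the fields for Kibana dashboards.
    logger.info(json.dumps({
        "event": "prediction",
        "input": text,
        "label": label,
        "score": score,
    }))

log_prediction("this movie was great", "positive", 0.97)
```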