Tracking, comparing, explaining, and optimizing machine learning models and experiments. You can use it with any machine learning library, such as scikit-learn, PyTorch, TensorFlow, and Hugging Face.
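This description reads like the tagline of an experiment-tracking platform such as Comet ML (listed under Tracking and Metadata Management below). As a hedged illustration only, here is a minimal sketch of logging hyperparameters and metrics with the `comet_ml` SDK; the project name, parameter values, and loss values are made up.

```python
from comet_ml import Experiment

# Create an experiment; assumes the API key is available in the environment
# (e.g. via COMET_API_KEY). The project name is illustrative.
experiment = Experiment(project_name="demo-project")

# Log the hyperparameters and metrics of a hypothetical training run
experiment.log_parameters({"learning_rate": 0.01, "epochs": 5})
for epoch in range(5):
    experiment.log_metric("loss", 1.0 / (epoch + 1), step=epoch)

experiment.end()
```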
Experiment tracking, data and model versioning, hyperparameter optimization, and model management. Furthermore, you can use it to log artifacts (datasets, models, dependencies, pipelines, and results) and to visualize datasets (audio, image, text, and tabular).
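These capabilities map closely to Weights & Biases (listed below), so the sketch below uses the `wandb` SDK as an assumed illustration of logging metrics and versioning a dataset artifact; the project name, config values, and file path are hypothetical.

```python
import wandb

# Start a tracked run; the project name and config values are hypothetical.
run = wandb.init(project="demo-project", config={"learning_rate": 0.01, "epochs": 5})

# Log training metrics step by step
for epoch in range(run.config["epochs"]):
    wandb.log({"epoch": epoch, "loss": 1.0 / (epoch + 1)})

# Version a dataset file as an artifact (assumes data.csv exists locally)
dataset = wandb.Artifact("training-data", type="dataset")
dataset.add_file("data.csv")
run.log_artifact(dataset)

run.finish()
```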
Creating reproducible, maintainable, and modular data science projects. It applies software engineering concepts such as modularity, separation of concerns, and versioning to machine learning code.
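Modularity and separation of concerns of this kind are what a pipeline framework such as Kedro (listed under Orchestration and MLOps below) enforces. A hedged sketch, assuming Kedro's `node`/`pipeline` API; the functions and dataset names are made up and would normally be backed by entries in a data catalog.

```python
from kedro.pipeline import node, pipeline

def clean(raw_df):
    # Hypothetical preprocessing step: drop incomplete rows.
    return raw_df.dropna()

def train(clean_df):
    # Hypothetical training step; returns a stand-in "model" object.
    return {"rows_used": len(clean_df)}

# Each node declares its named inputs and outputs, so the steps stay
# modular, individually testable, and reusable across pipelines.
data_pipeline = pipeline(
    [
        node(func=clean, inputs="raw_data", outputs="clean_data", name="clean_node"),
        node(func=train, inputs="clean_data", outputs="model", name="train_node"),
    ]
)
```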
Configure, organize, log, and reproduce computational experiments. It is designed to introduce only minimal overhead while encouraging modularity and configurability of experiments (a minimal sketch follows the feature list below).
Features
keep track of all the parameters of your experiment
easily run your experiment for different settings
save configurations for individual runs in files or a database
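These features correspond to the Sacred library (listed below). A minimal sketch, assuming Sacred's decorator-based API: parameters defined in the config function are captured for every run, can be overridden from the command line (e.g. `python train.py with C=10.0 kernel=linear`), and can be persisted by attaching an observer such as `FileStorageObserver`; the experiment name and parameters are illustrative.

```python
from sacred import Experiment
from sacred.observers import FileStorageObserver

ex = Experiment("svm_demo")
# Save the configuration (and results) of every run to ./runs
ex.observers.append(FileStorageObserver("runs"))

@ex.config
def config():
    # Every variable defined here is tracked as a parameter of the run
    C = 1.0
    kernel = "rbf"

@ex.automain
def run(C, kernel):
    # Parameters are injected by name; hypothetical "training" step
    print(f"training SVM with C={C}, kernel={kernel}")
    return C  # the returned value is recorded as the run's result
```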
Tracking and Metadata Management
MLflow
Comet ML
Weights & Biases
ZenML
Orchestration and MLOps
Prefect
Kedro
[[#[Kubeflow](https://www.kubeflow.org/docs/)|Kubeflow]]
[[#[Data Version Control](https://dvc.org/)|DVC]]
Data and Pipeline Versioning Tools
Pachyderm
Data Version Control
Sacred
Model Monitoring
Evidently AI
End-to-End
Kubeflow
References