ModelDB

ModelDB is an open-source system for machine learning model versioning, metadata, and experiment management.

Features:
- Works on Docker and Kubernetes
- Clients in Python and Scala
- Beautiful dashboards for model performance and reporting
- Git-like operations on any model
- Flexible metadata logging, including metrics, artifacts, tags, and user information
- Pluggable storage systems
- Integration with state-of-the-art frameworks like TensorFlow and PyTorch
- Battle-tested in production environments

Official […]

MLflow

MLflow is an open-source platform for managing the ML lifecycle, including experimentation, reproducibility, deployment, and a central model registry.

Features:
- MLflow Tracking: Automatically log parameters, code versions, metrics, and artifacts for each run using the Python, REST, R, and Java APIs.
- MLflow Tracking Server: Get started quickly with a built-in tracking server to log […]

Keepsake

The Keepsake Python library is used to create experiments and checkpoints in your training script. It also provides functions for programmatically analyzing the experiments. These two modes are comprehensively described below in the Experiment tracking and Analyze and plot experiments sections.

Features:
- Track experiments: Automatically track code, hyperparameters, training data, weights, metrics, Python dependencies […]

Guild AI

Guild AI brings systematic control to machine learning to help you build better models faster. It is freely available under the Apache 2.0 open-source license.

Features:
- Run unmodified training scripts, capturing each run result as a unique experiment.
- Automate trials using grid search, random search, and Bayesian optimization.
- Compare and analyze runs to understand and improve […]
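Because Guild runs unmodified scripts, the training script itself needs no Guild-specific code: Guild detects module-level globals as flags and scrapes `name: value` lines from output as scalar metrics. The script and values below are an illustrative stand-in.

```python
# train.py -- a plain script with no Guild imports
lr = 0.01   # Guild detects module-level globals like this as run flags
epochs = 5

loss = lr * epochs  # stand-in computation for a real training loop
print(f"loss: {loss}")  # Guild captures "name: value" output lines as scalars
```

It would then be run and compared from the CLI, e.g. `guild run train.py lr=0.1` to launch a tracked run and `guild compare` to analyze results.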

Comet

Comet enables data scientists and teams to track, compare, explain, and optimize experiments and models across the model's entire lifecycle, from training to production. With just two lines of code, you can start building better models today.

Features:
- Just one line of code
- Collaborate
- Reproducible research
- Track and compare
- Visualize anything
- Debug your models
- Notebooks or scripts
- Model registry

Official website […]

Aim

Aim is an open-source comparison tool for AI experiments. With more resources and more complex models, more experiments are run than ever. Use Aim to deeply inspect thousands of hyperparameter-sensitive training runs at once.

Features:
- Use multiple sessions in one training script to store multiple runs at once. When not initialized explicitly, Aim creates a […]

Weights & Biases

Weights & Biases is the machine learning platform for developers to build better models faster. Use W&B's lightweight, interoperable tools to quickly track experiments, version and iterate on datasets, evaluate model performance, reproduce models, visualize results and spot regressions, and share findings with colleagues.

Features:
- Integrate quickly: Track, compare, and visualize ML experiments with […]

Sacred

Sacred is a tool to configure, organize, log, and reproduce computational experiments. It is designed to introduce only minimal overhead while encouraging modularity and configurability of experiments. The ability to conveniently make experiments configurable is at the heart of Sacred. If the parameters of an experiment are exposed in this way, it will help […]

Neptune AI

Neptune is a metadata store for MLOps, built for teams that run a lot of experiments. It gives you a single place to log, store, display, organize, compare, and query all your model-building metadata.

Neptune is used for:
- Experiment tracking: Log, display, organize, and compare ML experiments in a single place.
- Model registry: Version, store, manage, […]
