Neptune is a metadata store for MLOps, built for teams that run a lot of experiments. It gives you a single place to log, store, display, organize, compare, and query all your model-building metadata.
Neptune is used for:
Experiment tracking: Log, display, organize, and compare ML experiments in a single place.
Model registry: Version, store, manage, and query trained models and model-building metadata.
Monitoring ML runs live: Record and monitor model training, evaluation, or production runs as they happen.
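The experiment-tracking use case above comes down to two operations on a run object: assigning one-off fields and appending metric series. A minimal sketch, assuming the Neptune Python client's `run["field"] = value` / `run["metric"].append(value)` API; the `OfflineRun` stand-in and all field names are our illustration, not part of Neptune, so the structure can be read (and run) without an account:

```python
class OfflineRun(dict):
    """Tiny stand-in mimicking Neptune's run["field"] access, for offline reading."""
    def __getitem__(self, key):
        # First access to a field creates an empty series (a plain list here).
        return self.setdefault(key, [])
    def stop(self):
        pass  # a real Neptune run flushes buffers and closes the connection here

def log_training(run, params, losses):
    run["parameters"] = params           # one-off values: plain assignment
    for loss in losses:
        run["train/loss"].append(loss)   # per-step values: append to a series
    run.stop()

run = OfflineRun()
log_training(run, {"lr": 0.01, "batch_size": 32}, [0.9, 0.5, 0.3])
```

With the real client, `run = neptune.init_run(...)` would replace the stand-in and the logged fields would appear in the Neptune UI.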
1. A fast, well-designed UI with extensive options to organize runs into groups, save custom dashboard views, and share them with the team
2. Version, store, organize, and query models and model-development metadata, including dataset, code, and environment-configuration versions, parameters, evaluation metrics, model binaries, descriptions, and other details
3. Filter, sort, and group model training runs in a dashboard to better organize your work
4. Compare metrics and parameters in a table that automatically surfaces what changed between runs and flags anomalies
5. Automatically record the code, environment, parameters, model binaries, and evaluation metrics every time you run an experiment
6. Track experiments executed in scripts (Python, R, and others) or notebooks (local, Google Colab, AWS SageMaker), on any infrastructure (cloud, laptop, cluster)
7. Extensive experiment tracking and visualization capabilities, such as monitoring resource consumption and scrolling through lists of logged images
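The run-comparison table described above rests on diffing run metadata to find what changed between two experiments. A minimal sketch of that diff logic in plain Python, as our illustration of the idea rather than Neptune's implementation:

```python
def diff_params(params_a, params_b):
    """Return {name: (value_a, value_b)} for every parameter that differs
    between two runs; a parameter missing on one side shows up as None."""
    keys = set(params_a) | set(params_b)
    return {
        k: (params_a.get(k), params_b.get(k))
        for k in sorted(keys)
        if params_a.get(k) != params_b.get(k)
    }

run_a = {"lr": 0.01, "batch_size": 32, "optimizer": "adam"}
run_b = {"lr": 0.001, "batch_size": 32, "optimizer": "adam", "dropout": 0.1}
print(diff_params(run_a, run_b))  # {'dropout': (None, 0.1), 'lr': (0.01, 0.001)}
```

Unchanged parameters (here `batch_size` and `optimizer`) drop out, which is exactly what makes a comparison table readable when runs log dozens of parameters.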