WhyLogs 

whylogs is an open source standard for data and ML logging. The whylogs logging agent is the easiest way to enable logging, testing, and monitoring in an ML/AI application. The lightweight agent profiles data in real time, collecting thousands of metrics from structured data, unstructured data, and ML model predictions with zero configuration. whylogs can […]

Vespa 

Vespa provides metrics integration with CloudWatch, Datadog, and Prometheus/Grafana, as well as a JSON HTTP API. See the monitoring with Grafana quick start if you just want to get started monitoring your system. There are two main approaches to transferring metrics to an external system: have the external system pull metrics from Vespa, or make […]

Triton Inference Server 

Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports HTTP/REST and gRPC protocols that allow remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton is available as a shared library with a C API that allows […]
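To make the HTTP/REST protocol concrete, here is a sketch of a request body in Triton's v2 inference format. The model name `densenet_onnx`, input name, shape, and values are assumptions for illustration; substitute your deployed model's actual input signature:

```python
import json

# Assumed model and input signature, for illustration only
model_name = "densenet_onnx"
payload = {
    "inputs": [
        {
            "name": "data_0",
            "shape": [1, 3],
            "datatype": "FP32",
            "data": [0.1, 0.2, 0.3],  # flattened row-major tensor values
        }
    ]
}
body = json.dumps(payload)
url = f"http://localhost:8000/v2/models/{model_name}/infer"
# An actual call would POST the body, e.g.:
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   response = urllib.request.urlopen(req).read()
print(url)
```

The same request shape works for any model the server manages, which is what lets one Triton instance front many frameworks.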

TorchServe

TorchServe is a flexible and easy-to-use tool for serving PyTorch models.

Features

Serving Quick Start – Basic server usage tutorial
Model Archive Quick Start – Tutorial that shows you how to package a model archive file.
Installation – Installation procedures
Serving Models – Explains how to use TorchServe
REST API – Specification on the API endpoint […]
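For a sense of the REST API referenced above: TorchServe serves predictions at `POST /predictions/{model_name}` on its inference port (8080 by default) and exposes a separate management API (port 8081 by default). The model name `mnist` and input file below are illustrative assumptions:

```python
# Assumed model name for illustration
model_name = "mnist"
inference_url = f"http://localhost:8080/predictions/{model_name}"
management_url = "http://localhost:8081/models"  # list registered models

# A real call would POST the raw input, e.g. an image file:
#   import urllib.request
#   with open("digit.png", "rb") as f:  # hypothetical input file
#       req = urllib.request.Request(inference_url, data=f.read())
#       prediction = urllib.request.urlopen(req).read()
print(inference_url)
```

The split between inference and management ports means model registration and scaling operations never contend with prediction traffic.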

TensorFlow Serving 

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models […]
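As a sketch of the stable API surface mentioned above, TensorFlow Serving's REST endpoint takes the form `POST /v1/models/{model}:predict` (default REST port 8501). The model name `half_plus_two` and instances below are illustrative assumptions:

```python
import json

# Assumed model name for illustration
model = "half_plus_two"
url = f"http://localhost:8501/v1/models/{model}:predict"
payload = {"instances": [1.0, 2.0, 5.0]}  # one prediction per instance
body = json.dumps(payload)

# An actual call would POST the body, e.g.:
#   import urllib.request
#   req = urllib.request.Request(url, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   response = urllib.request.urlopen(req).read()  # {"predictions": [...]}
print(url)
```

Because the request shape stays the same across model versions, clients need no changes when a new version is deployed behind the same endpoint.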

Tempo 

Tempo is a Python SDK for data scientists to help them move their models to production. It has four core goals:

Data science friendly.
Pluggable runtimes.
Custom Python inference components.
Powerful orchestration logic.

Features

Package your trained model artifacts to optimized server runtimes (TensorFlow, PyTorch, Sklearn, XGBoost, etc.)
Package custom business logic to production […]

Streamlit 

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In just a few minutes you can build and deploy powerful data apps – so let’s get started! Features At Streamlit, we like to move quickly while keeping things stable. […]

Seldon

Seldon Core converts your ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production REST/gRPC microservices. Seldon handles scaling to thousands of production machine learning models and provides advanced machine learning capabilities out of the box, including Advanced Metrics, Request Logging, Explainers, Outlier Detectors, A/B Tests, Canaries and more. Features […]
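To illustrate the REST microservice interface, here is a sketch of Seldon Core's v1 prediction payload, which carries inputs as a named ndarray. The feature names, values, and endpoint path placeholders are illustrative assumptions:

```python
import json

# Illustrative features; a real deployment defines its own input schema
payload = {
    "data": {
        "names": ["feature_1", "feature_2"],
        "ndarray": [[5.1, 3.5]],  # one row per prediction request
    }
}
body = json.dumps(payload)

# Typically POSTed to the deployment's ingress path, e.g.:
#   http://<ingress>/seldon/<namespace>/<deployment>/api/v1.0/predictions
print(body)
```

The response mirrors the same `data` envelope, which is what lets Seldon chain models, A/B tests, and explainers behind a single endpoint.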

Redis-AI 

RedisAI is a Redis module for executing Deep Learning/Machine Learning models and managing their data. Its purpose is to be a “workhorse” for model serving, providing out-of-the-box support for popular DL/ML frameworks and unparalleled performance. RedisAI both simplifies the deployment and serving of graphs by leveraging Redis’ production-proven infrastructure, and maximizes […]
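As a sketch of the serving flow, RedisAI's commands store a model, write an input tensor into the keyspace, execute, and read the output back. The key names, tensor values, and the `<model_bytes>` placeholder are illustrative assumptions; a real client (e.g. redis-py) would send these over a live connection:

```python
# Command sequence only; nothing here connects to a Redis server
commands = [
    # Load a serialized model onto the CPU backend (blob elided)
    ["AI.MODELSTORE", "mymodel", "TORCH", "CPU", "BLOB", "<model_bytes>"],
    # Write a 1x2 float input tensor into the keyspace
    ["AI.TENSORSET", "in", "FLOAT", "1", "2", "VALUES", "0.5", "0.7"],
    # Run the model on the stored tensor, writing the result to "out"
    ["AI.MODELEXECUTE", "mymodel", "INPUTS", "1", "in", "OUTPUTS", "1", "out"],
    # Read the result values back
    ["AI.TENSORGET", "out", "VALUES"],
]
for cmd in commands:
    print(" ".join(cmd))
```

Keeping tensors in the Redis keyspace is what lets inputs, outputs, and models share the same replication and persistence machinery as ordinary Redis data.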
