whylogs

whylogs is an open-source standard for data and ML logging. The whylogs logging agent is the easiest way to enable logging, testing, and monitoring in an ML/AI application. The lightweight agent profiles data in real time, collecting thousands of metrics from structured data, unstructured data, and ML model predictions with zero …
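To make the idea concrete, here is a minimal stdlib sketch (not the whylogs API) of the kind of per-column summary metrics such a profiler collects; the column data below is made up.

```python
def profile_column(values):
    """Collect lightweight summary metrics for one column, profiler-style."""
    nums = [v for v in values if isinstance(v, (int, float))]
    return {
        "count": len(values),
        "null_count": sum(1 for v in values if v is None),
        "min": min(nums) if nums else None,
        "max": max(nums) if nums else None,
        "mean": sum(nums) / len(nums) if nums else None,
        "distinct_estimate": len(set(values) - {None}),
    }

# Hypothetical data: one numeric column from a prediction log.
profile = profile_column([0.2, 0.9, None, 0.4, 0.9])
```

A real profiler computes such statistics with sketch data structures so the cost stays constant regardless of data volume.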

Vespa 

Vespa provides metrics integration with CloudWatch, Datadog, and Prometheus/Grafana, as well as a JSON HTTP API. See the monitoring with Grafana quick start if you just want to get started monitoring your system. There are two main approaches to transferring metrics to an external system: have the external system pull …
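In the pull approach, the external system fetches a JSON metrics snapshot and flattens it into rows it can ingest. A minimal sketch, assuming a simplified snapshot shape (the real Vespa metrics API returns a richer nested document):

```python
import json

# Illustrative snapshot only -- the field layout here is an assumption,
# not Vespa's exact metrics schema.
snapshot = json.loads("""
{
  "nodes": [
    {"hostname": "node0", "metrics": {"queries.rate": 120.5, "memory.usage": 0.42}},
    {"hostname": "node1", "metrics": {"queries.rate": 98.0, "memory.usage": 0.37}}
  ]
}
""")

def flatten(snap):
    """Turn the nested snapshot into (host, metric, value) rows for an external system."""
    return [(n["hostname"], k, v)
            for n in snap["nodes"]
            for k, v in n["metrics"].items()]

rows = flatten(snapshot)
```

In practice a collector such as Prometheus would poll this endpoint on a schedule rather than parse a static string.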

Triton Inference Server 

Triton Inference Server provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Triton supports HTTP/REST and gRPC protocols that allow remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton is available as a shared library with a C …
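The HTTP/REST protocol follows the KServe v2 inference format. A minimal sketch of building a request body with the stdlib; the input name and data below are hypothetical:

```python
import json

def make_infer_request(input_name, data, dtype="FP32"):
    """Build an HTTP/REST inference request body in the KServe v2 style Triton speaks."""
    return {
        "inputs": [{
            "name": input_name,       # must match the name in the model's config
            "shape": [1, len(data)],  # batch of one row
            "datatype": dtype,
            "data": data,
        }]
    }

# Hypothetical model input; a client would POST this to
# http://<host>:8000/v2/models/<model>/infer on a running server.
body = json.dumps(make_infer_request("input__0", [0.1, 0.2, 0.3]))
```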

TorchServe

TorchServe is a flexible and easy-to-use tool for serving PyTorch models.

Features:
- Serving Quick Start – basic server usage tutorial
- Model Archive Quick Start – a tutorial that shows you how to package a model archive file
- Installation – installation procedures
- Serving Models – explains how to use the TorchServe REST …
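Once a model archive is served, clients hit TorchServe's inference API at a per-model path. A tiny sketch, assuming the default local inference port (8080) and a hypothetical model name:

```python
# TorchServe serves predictions at /predictions/<model_name> on its
# inference API. Host, port, and model name here are assumptions for a
# default local setup.
def prediction_url(model_name, host="localhost", port=8080):
    return f"http://{host}:{port}/predictions/{model_name}"

url = prediction_url("densenet161")
```

A client would then POST an image or tensor payload to that URL with any HTTP library.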

TensorFlow Serving 

TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other …

Tempo 

Tempo is a Python SDK that helps data scientists move their models to production. It has four core goals: data-science friendliness, pluggable runtimes, custom Python inference components, and powerful orchestration logic. Features: package your trained model artifacts for optimized server runtimes (TensorFlow, PyTorch, Scikit-learn, XGBoost, etc.); package custom business …

Streamlit 

Streamlit is an open-source Python library that makes it easy to create and share beautiful, custom web apps for machine learning and data science. In just a few minutes you can build and deploy powerful data apps – so let's get started! Features: At Streamlit, we like to move quickly while …

Seldon

Seldon Core converts your ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production REST/gRPC microservices. Seldon handles scaling to thousands of production machine learning models and provides advanced machine-learning capabilities out of the box, including advanced metrics, request logging, explainers, outlier detectors, A/B tests, canaries …
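A deployed Seldon REST microservice accepts requests in Seldon's data format, where rows of features are sent under an "ndarray" key. A minimal sketch with a hypothetical two-feature input row:

```python
import json

def seldon_payload(rows):
    """Request body in Seldon's ndarray data format for a deployed REST microservice."""
    return {"data": {"ndarray": rows}}

# Hypothetical input; a client would POST this to the deployment's
# predictions endpoint on the cluster's ingress.
body = json.dumps(seldon_payload([[5.1, 3.5]]))
```

The response comes back in the same data format, which is what lets Seldon chain models, explainers, and outlier detectors into one inference graph.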

RedisAI

RedisAI is a Redis module for executing deep learning/machine learning models and managing their data. Its purpose is to be a "workhorse" for model serving, providing out-of-the-box support for popular DL/ML frameworks and unparalleled performance. RedisAI simplifies the deployment and serving of graphs by leveraging Redis' production-proven infrastructure, as …
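RedisAI is driven by Redis commands such as AI.TENSORSET, AI.MODELEXECUTE, and AI.TENSORGET. A stdlib-only sketch of assembling those command argument lists; the key names and shapes are hypothetical, and actually sending them requires a Redis client connected to a server with the module loaded:

```python
def tensorset(key, shape, values):
    """Arguments for AI.TENSORSET: store a float tensor under a Redis key."""
    return ["AI.TENSORSET", key, "FLOAT", *map(str, shape), "VALUES", *map(str, values)]

def modelexecute(model_key, inputs, outputs):
    """Arguments for AI.MODELEXECUTE: run a stored model on named input/output tensors."""
    return ["AI.MODELEXECUTE", model_key,
            "INPUTS", str(len(inputs)), *inputs,
            "OUTPUTS", str(len(outputs)), *outputs]

# Hypothetical keys: load a 1x2 input tensor, then execute a stored model on it.
cmd = tensorset("in:0", [1, 2], [0.5, 1.5])
run = modelexecute("mymodel", ["in:0"], ["out:0"])
```

Because tensors live in ordinary Redis keys, results can be read back with AI.TENSORGET from any connected client.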
