Multi Model Server (MMS)

Multi Model Server (MMS) is a flexible and easy-to-use tool for serving deep learning models trained using any ML/DL framework. Use the MMS server CLI, or the pre-configured Docker images, to start a service that sets up HTTP endpoints to handle model inference requests. Features: Serving Quick Start – basic server usage […]

Merlin 

Merlin is a platform for deploying and serving machine learning models. The project was born of the belief that model deployment should be: easy and self-serve – humans should not become the bottleneck for deploying models into production; scalable – deployed models should be able to handle Gojek scale and beyond; fast – the framework should be able […]

PredictionIO 

Apache PredictionIO® is an open source Machine Learning Server built on top of a state-of-the-art open source stack for developers and data scientists to create predictive engines for any machine learning task. Features: quickly build and deploy an engine as a web service in production with customizable templates; respond to dynamic queries in real-time once […]

m2cgen 

m2cgen (Model 2 Code Generator) is a lightweight library which provides an easy way to transpile trained statistical models into native code (Python, C, Java, Go, JavaScript, Visual Basic, C#, PowerShell, R, PHP, Dart, Haskell, Ruby, F#, Rust). Features: Linear, SVM, Tree, Random Forest, Boosting. Official website: Link. Tutorial and documentation: Click here […]

KFServing

The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run […]
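As an illustration, a minimal KFServing InferenceService manifest resembling the project's scikit-learn sample looks roughly like this (the model `storageUri` is illustrative):

```yaml
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    sklearn:
      # Illustrative model location; point this at your own model artifact
      storageUri: "gs://kfserving-examples/models/sklearn/iris"
```

Applied with `kubectl apply -f`, this declarative spec has KFServing stand up a serving pod and an HTTP prediction endpoint for the model, without writing any server code.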

Jina  

Jina is a neural search framework that empowers anyone to build SOTA, scalable deep learning search applications in minutes. 🌌 All data types – scalable indexing, querying, and understanding of any data: video, image, long/short text, music, source code, PDF, etc. ⏱️ Save time – the design pattern of neural search systems, from zero […]

Opyrator

Instantly turn your Python functions into production-ready microservices. Deploy and access your services via HTTP API or interactive UI. Seamlessly export your services into portable, shareable, and executable files or Docker images. Opyrator builds on open standards – OpenAPI, JSON Schema, and Python type hints – and is powered by FastAPI, Streamlit, and Pydantic. […]
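A minimal sketch of the pattern (the function and model names here are invented for illustration; Opyrator's documented convention is a typed function whose input and output are Pydantic models):

```python
# A plain Python function with Pydantic-typed input and output,
# which Opyrator can expose as an HTTP API or an interactive UI.
from pydantic import BaseModel

class TextInput(BaseModel):
    text: str

class TextOutput(BaseModel):
    n_chars: int

def count_chars(input: TextInput) -> TextOutput:
    """Count the characters in the input text."""
    return TextOutput(n_chars=len(input.text))

# Per the Opyrator README, this would be launched with, e.g.:
#   opyrator launch-api my_module:count_chars
#   opyrator launch-ui  my_module:count_chars
```

The type hints are what let Opyrator derive the OpenAPI schema and UI form automatically, so no additional server or schema code is needed.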

OpenScoring 

REST web service for scoring PMML models. Openscoring is a Java service that provides a JSON REST interface to the JPMML Predictive Model Markup Language (PMML) evaluator. Features: full support for PMML specification versions 3.0 through 4.4, with evaluation handled by the JPMML-Evaluator library; a simple and powerful REST API: model deployment and undeployment, model evaluation […]

Hydrosphere 

Hydrosphere is a platform for deploying, versioning, and monitoring your machine learning models in production. It is language-agnostic and framework-agnostic, with support for all major programming languages and frameworks – Python, Java, TensorFlow, PyTorch, etc. Features: Model Registry; Inference Pipelines; A/B Model Version Deployment; Traffic Shadowing; Language-Agnostic Deployment. Official website: Link. Tutorial and documentation: Click here to view […]

GraphPipe 

GraphPipe is a protocol and collection of software designed to simplify machine learning model deployment and decouple it from framework-specific model implementations. Model serving network protocols are tied to underlying model implementations: if you have a TensorFlow model, for example, you need to use TensorFlow's protocol buffer server (tensorflow-serving) to perform remote inference. PyTorch and […]
