Deploy and scale AI/ML models in production

Serve open-standard and open-source models on Kubernetes, on-premises or in the cloud: PMML, Scikit-learn, XGBoost, LightGBM, Spark ML; ONNX, TensorFlow, Keras, PyTorch, MXNet, and even custom models can be productionized in minutes.

View AI-DaaS

Design Philosophy

Leverage Function as a Service

Deploy pipelines, not just models

Function as a Service (FaaS) provides maximum flexibility and extensibility, making it easy and straightforward to deploy AI pipelines as functions:
[feature engineering -> model -> prediction transformations]
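A minimal sketch of this idea in plain Python (not the platform's own API): one function wraps feature engineering, the model, and the prediction transformation, so the whole pipeline can be deployed as a single FaaS entry point.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Feature engineering -> model, packaged as one scikit-learn pipeline.
pipeline = Pipeline([
    ("features", StandardScaler()),
    ("model", LogisticRegression()),
])
pipeline.fit(np.random.rand(100, 4), np.random.randint(0, 2, 100))

def score(records):
    """FaaS-style entry point: raw records in, transformed predictions out."""
    X = np.asarray(records, dtype=float)
    proba = pipeline.predict_proba(X)[:, 1]
    # Prediction transformation: map probabilities to business labels.
    return [{"probability": float(p), "label": "accept" if p >= 0.5 else "reject"}
            for p in proba]

print(score([[0.1, 0.2, 0.3, 0.4]]))
```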

Open-Standard & Open-Source

Automatic deployment of popular AI/ML models

Focus on PMML and ONNX

Support PMML, Scikit-learn, XGBoost, LightGBM, Spark ML; ONNX, TensorFlow, Keras, PyTorch, MXNet, and even custom models.
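As an example of the open-standard focus, a trained scikit-learn model can be exported to ONNX with the skl2onnx package before being uploaded for deployment (a minimal sketch; the upload and deployment steps themselves are environment-specific):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

# Train a simple classifier.
X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(n_estimators=10).fit(X, y)

# Convert to the open ONNX format; the declared input is 4 float features.
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)

# Serialize the ONNX model to a file ready for deployment.
with open("iris_rf.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```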

Deployment Lifecycle

Manage and monitor Deployment Lifecycle

Analytic assets and collaborations

Manage analytic assets: models, versions, scripts, Jupyter Notebooks, and datasets in one system. Monitor the performance and health of deployments, along with web service metrics, all shown in the visualization dashboard.

Web Service API

Provide REST API for different deployments

Online real-time and offline batch scoring

REST APIs for real-time scoring, batch scoring, and model evaluation in both Dev and Production modes. Support connections to a wide variety of data sources: CSV, HDFS, and various relational databases.
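For illustration, a real-time scoring request could be sent with Python's requests library as below; the URL, payload schema, and auth header are hypothetical placeholders, since the actual contract is defined by each deployment's generated API documentation.

```python
import requests

# Hypothetical endpoint and payload; the real URL, auth scheme, and request
# schema come from the deployed web service's API documentation.
SCORING_URL = "https://daas.example.com/api/v1/svc/iris-classifier/predict"
payload = {
    "args": {
        "X": [{"sepal_length": 5.1, "sepal_width": 3.5,
               "petal_length": 1.4, "petal_width": 0.2}]
    }
}

resp = requests.post(
    SCORING_URL,
    json=payload,
    headers={"Authorization": "Bearer <token>"},  # placeholder token
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```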