Online / 6 & 7 February 2021

Production Machine Learning Monitoring: Outliers, Drift, Explainers & Statistical Performance


The lifecycle of a machine learning model only begins once it's in production. In this talk we provide a practical deep dive into the best practices, principles, patterns and techniques for production monitoring of machine learning models. We will cover standard microservice monitoring techniques applied to deployed machine learning models, as well as more advanced paradigms that monitor models through concept drift detection, outlier detection and explainability.
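
The talk itself is hands-on; as a purely illustrative sketch of the statistical side of drift monitoring (not the talk's actual implementation), the snippet below applies a two-sample Kolmogorov-Smirnov test per feature to compare a reference batch against live traffic. The arrays `reference` and `live` are hypothetical stand-ins for training-time and production-time feature batches.

```python
# Illustrative sketch only: per-feature drift check with a two-sample
# Kolmogorov-Smirnov test. `reference` and `live` are hypothetical
# arrays of shape (n_samples, n_features).
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, p_val: float = 0.05):
    """Return a boolean per feature: True where the live distribution
    differs significantly from the reference distribution."""
    drifted = []
    for i in range(reference.shape[1]):
        statistic, p = ks_2samp(reference[:, i], live[:, i])
        drifted.append(p < p_val)
    return np.array(drifted)

# Synthetic usage example: the second feature is shifted in production.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1000, 2))
live = np.column_stack([rng.normal(size=1000),
                        rng.normal(loc=1.0, size=1000)])
print(detect_drift(reference, live))  # e.g. [False  True]
```

In practice, running one test per feature inflates the false-positive rate, so a production detector would typically correct the significance level (e.g. Bonferroni) across features.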

We'll then dive into a hands-on example, where we will train an image classification model from scratch, deploy it as a microservice in Kubernetes, and introduce advanced monitoring components as architectural patterns, again with hands-on examples. These monitoring techniques include AI explainers, outlier detectors, concept drift detectors and adversarial detectors. We will also examine high-level architectural patterns that abstract these complex and advanced monitoring techniques into infrastructure components that enable scale, introducing the standardised interfaces required to monitor hundreds or thousands of heterogeneous machine learning models (a sketch of one such component follows below).
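
As a sketch of the outlier-detection component described above (an assumed approach for illustration, not the talk's implementation), one simple detector scores incoming instances by squared Mahalanobis distance from the training distribution and flags scores above a threshold. In the architectural pattern described, such a scorer would run as its own service behind a standardised monitoring interface, alongside the model.

```python
# Illustrative sketch only: a Mahalanobis-distance outlier scorer of the
# kind that could sit behind a standardised monitoring interface.
# `x_train` is a hypothetical training-set feature matrix.
import numpy as np

class MahalanobisOutlierDetector:
    def __init__(self, x_train: np.ndarray, threshold: float):
        self.mean = x_train.mean(axis=0)
        # Inverse covariance, with a small ridge for numerical stability.
        cov = np.cov(x_train, rowvar=False)
        self.inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        self.threshold = threshold

    def score(self, x: np.ndarray) -> np.ndarray:
        """Squared Mahalanobis distance of each row of x from the
        training mean; larger means more anomalous."""
        diff = x - self.mean
        return np.einsum("ij,jk,ik->i", diff, self.inv_cov, diff)

    def predict(self, x: np.ndarray) -> np.ndarray:
        """True where an instance is flagged as an outlier."""
        return self.score(x) > self.threshold
```

Keeping the `score`/`predict` interface uniform across detector types is what lets the same infrastructure component serve many heterogeneous models.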

Speakers

Alejandro Saucedo