Kairos: Building Cost-Efficient Machine Learning Inference Systems with Heterogeneous Cloud Resources

Abstract

Online inference is becoming a key service product for many businesses, deployed on cloud platforms to meet customer demands. Despite their revenue-generating capability, these services must operate under tight Quality-of-Service (QoS) and cost budget constraints. This paper introduces Kairos, a novel runtime framework that maximizes query throughput while meeting a QoS target and a cost budget. Kairos designs and implements novel techniques to build a pool of heterogeneous compute hardware without online exploration overhead and to distribute inference queries optimally at runtime. Our evaluation using industry-grade machine learning (ML) models shows that Kairos yields up to 2× the throughput of an optimal homogeneous solution and outperforms state-of-the-art schemes by up to 70%, even when the competing schemes are implemented advantageously so that their exploration overhead is ignored.
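To illustrate why a heterogeneous pool can beat the best homogeneous one under a cost budget, here is a minimal sketch (not the paper's actual algorithm) that greedily provisions instance types by throughput-per-dollar. The instance names, prices, and throughput numbers are hypothetical, and per-instance QoS compliance is assumed.

```python
# Illustrative sketch only; Kairos's real techniques are more sophisticated.
# Hypothetical pool: each entry is (hourly cost, queries/sec meeting QoS).

def max_throughput_mix(instances, budget):
    """Greedy provisioning by throughput-per-dollar.

    Returns (total_qps, counts), where counts maps instance name to the
    number of instances purchased within the cost budget.
    """
    ranked = sorted(instances, key=lambda i: i["qps"] / i["cost"], reverse=True)
    counts, total_qps, spent = {}, 0.0, 0.0
    for inst in ranked:
        n = int((budget - spent) // inst["cost"])  # how many still fit
        if n > 0:
            counts[inst["name"]] = n
            total_qps += n * inst["qps"]
            spent += n * inst["cost"]
    return total_qps, counts

pool = [
    {"name": "gpu-a", "cost": 3.0, "qps": 900},   # hypothetical accelerator
    {"name": "gpu-b", "cost": 1.0, "qps": 350},   # cheaper, best qps/$
    {"name": "cpu-c", "cost": 0.4, "qps": 90},    # fills leftover budget
]
qps, mix = max_throughput_mix(pool, budget=10.5)
```

With a $10.50/hour budget, the greedy mix buys ten `gpu-b` instances and then uses the leftover $0.50 on one `cpu-c`, a combination no single-type (homogeneous) deployment can match at that budget.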

Publication
In Proceedings of the 2023 ACM International Symposium on High-Performance Parallel and Distributed Computing (HPDC)
Baolin Li
Ph.D.

My research interests include high performance computing, cloud computing, and machine learning.