Baolin Li

Ph.D. candidate

Northeastern University

Biography

I am a Ph.D. candidate in Computer Engineering at Northeastern University, working under the guidance of Professor Devesh Tiwari. My research focuses on optimizing High Performance Computing (HPC) and Cloud Computing systems for Machine Learning (ML) applications. Throughout my doctoral studies, I have led several research projects addressing the cost-effectiveness, resource sharing, and sustainability of HPC and cloud systems for machine learning. I am dedicated to solving real-world challenges faced by modern computing systems and developing solutions that are both efficient and sustainable.

For more information and publications, please refer to my full resumé.

Interests
  • HPC
  • Cloud Computing
  • Systems for Machine Learning
Education
  • Ph.D. in Computer Engineering, 2024

    Northeastern University

  • M.S. in Electrical and Computer Engineering, 2017

    The University of Texas at Austin

  • B.Eng. (honours) in Electrical and Electronic Engineering, 2015

    The University of Manchester

Experience

Research Assistant
Northeastern University
Jul 2019 – Present Boston, MA
Currently researching efficient ML systems, Multi-Instance GPU, and sustainable HPC for AI.
Machine Learning Intern, Research
Netflix
Jun 2023 – Sep 2023 Los Gatos, CA
Scalable machine learning platform for foundation model training
Research Intern
Bosch Research
Jun 2022 – Sep 2022 Sunnyvale, CA
Cloud optimization for ML inference applications
System Engineer
Silicon Labs
Jun 2017 – Jun 2019 Austin, TX
Test software development for ultra-low-power IoT microcontrollers.

Publications

(2023). Clover: Toward Sustainable AI with Carbon-Aware Machine Learning Inference Service. In SC '23.

(2023). Toward Sustainable HPC: Carbon Footprint Estimation and Environmental Implications of HPC Systems. In SC '23.

(2023). Sustainable Supercomputing for AI: GPU Power Capping at HPC Scale. In SoCC '23.

(2022). MISO: Exploiting Multi-Instance GPU Capability on Multi-Tenant GPU Clusters. In SoCC '22.

(2022). AI-Enabling Workloads on Large-Scale GPU-Accelerated System: Characterization, Opportunities, and Implications. In HPCA '22.

(2022). Great Power, Great Responsibility: Recommendations for Reducing Energy for Training Language Models. In NAACL '22 Findings.

(2022). Do Temperature and Humidity Exposures Hurt or Benefit Your SSDs? In DATE '22 (Best Paper Finalist).

(2021). RIBBON: Cost-Effective and QoS-Aware Deep Learning Model Inference using a Diverse Pool of Cloud Computing Instances. In SC '21.

(2021). Serving Machine Learning Inference Using Heterogeneous Hardware. In HPEC '21 (Outstanding Student Paper Award).

(2020). Experimental Evaluation of NISQ Quantum Computers: Error Measurement, Characterization, and Implications. In SC '20 (Best Paper and Best Student Paper Finalist).

Contact