Chaoyang He is Co-founder and CTO of FedML, Inc., a company building an open and collaborative AI platform that can train, deploy, monitor, and improve machine learning models leveraging combined data, models, and computing resources. Previously, he worked closely with researchers/engineers at Google, Facebook, and Amazon. He was an R&D Team Manager and Principal Software Engineer at Tencent (2014-2018), a Team Leader and Senior Software Engineer at Baidu (2012-2014), and a Software Engineer at Huawei (2011-2012). He has received a number of awards in academia and industry, including the Amazon ML Fellowship (2021-2022), Qualcomm Innovation Fellowship (2021-2022), Tencent Outstanding Staff Award (2015-2016), WeChat Special Award for Innovation (2016), Baidu LBS Group Star Awards (2013), and Huawei Golden Network Award (2012). His research focuses on machine learning, distributed systems, blockchain, and edge/cloud computing, primarily distributed/federated machine learning and efficient distributed training of large foundation models (LLMs, Vision Transformers). On these topics, he has published papers at ICML, NeurIPS, CVPR, ICLR, AAAI, MLSys, and VLDB, among others. Beyond pure research, he has experience with Internet-scale products and businesses such as Tencent Cloud, Tencent WeChat Automotive / AI in Car, Tencent Games, Tencent Maps, Baidu Maps, and Huawei Smartphone. He received his Ph.D. in Computer Science from the University of Southern California, Los Angeles, USA, advised by Professor Salman Avestimehr (USC), Professor Mahdi Soltanolkotabi (USC), Professor Murali Annavaram (USC), and Professor Tong Zhang (HKUST). More details are available at his homepage: https://ChaoyangHe.com
Industrial-grade machine learning algorithms and systems at scale, lying at the intersection of machine learning, distributed systems, edge/cloud/sky computing, blockchain, and security/privacy. My recent focus:
Machine Learning System and Algorithm Co-design:
1) End-to-end MLOps platform for data preparation, training, inference, deployment, collaboration, monitoring, observability, and security/privacy;
2) Device-edge-cloud federated/distributed/collaborative machine learning;
3) Security/privacy for ML: MPC (secure multi-party computation, e.g., secure aggregation), DP (differential privacy), FHE (fully homomorphic encryption), and TEE (Trusted Execution Environment);
4) Integrated edge ML engine for training and inference;
5) MLOps with strong observability and monitoring capabilities to mitigate issues from data drift over time, skewed data distributions between training and deployment, and system heterogeneity;
6) Training efficiency on resource-constrained devices: reducing memory/energy/computation/communication costs of training large models on resource-constrained commodity devices;
7) Systems that decouple storage and computing resources for model training;
8) Serverless computing for cloud-based distributed training and multi-tenant training systems;
9) Multi-cloud machine learning and data analytics.
Responsible and Trustworthy Data Economy: blockchain-empowered machine learning and data analytics, including zero-knowledge proofs (ZKP), verification, proof of contribution, privacy protection, and robustness to malicious/cheating users.
Trustworthy Federated Learning: achieving high model performance on decentralized data under constraints of security, privacy, label deficiency, and system resources in a lifelong manner, via operational and practical system/ML co-design.
Machine Learning Algorithms with Strong Real-World Demand: computer vision, natural language processing, graph learning (graph neural networks), recommendation systems, and time-series forecasting.
General Computer Systems: distributed/cloud/edge systems, mobile systems, open-source libraries, and product design.
04/01/2022: Quantitative summary of my Ph.D. (as of March 2022): (1) Publications: 30; h-index: 12; citations: 2,153; US patents: 1. (2) Professional reviews: 97 (52 for conferences, 28 for workshops, and 17 for journals). (3) Open-source Slack users: 988. (4) Invited talks: > 5 (Facebook, Amazon, Stanford, USC ISI, Sony, etc.). (5) Funding raised with my advisors: several grants and startup funding (> $3M USD).
03/25/2022: I passed my Ph.D. defense. See my acknowledgment here. The thesis manuscript is maintained here.
11/21/2021: Our FL4NLP Workshop proposal (Federated Learning for Natural Language Processing) has been accepted to ACL 2022. We welcome submissions of your excellent work! Check this PDF for more details (topics, speakers, organizers, etc.). You can also read our FedNLP paper to find interesting topics. We will announce next steps soon.